Dataset schema:
  content             string (85 to 101k characters)
  title               string (0 to 150 characters)
  question            string (15 to 48k characters)
  answers             list
  answers_scores      list
  non_answers         list
  non_answers_scores  list
  tags                list
  name                string (35 to 137 characters)
Q: KindError on setting a ReferenceProperty value This seemingly perfect Google App Engine code fails with a KindError. # in a django project 'stars' from google.appengine.ext import db class User(db.Model): pass class Picture(db.Model): user = db.ReferenceProperty(User) user = User() user.put() picture = Picture() picture.user = user # ===> KindError: Property user must be an instance of stars_user The exception is raised in google.appengine.ext.db.ReferenceProperty.validate: def validate(self, value): ... if value is not None and not isinstance(value, self.reference_class): raise KindError('Property %s must be an instance of %s' % (self.name, self.reference_class.kind())) ... A: Turns out that I was importing the model in admin.py as from frontend.stars.models import Star This line had contaminated the module namespace of Star and the isinstance query was failing. >>> user.__class__ <class 'frontend.stars.models.User'> >>> Picture.user.reference_class <class 'stars.models.User'>
KindError on setting a ReferenceProperty value
This seemingly perfect Google App Engine code fails with a KindError. # in a django project 'stars' from google.appengine.ext import db class User(db.Model): pass class Picture(db.Model): user = db.ReferenceProperty(User) user = User() user.put() picture = Picture() picture.user = user # ===> KindError: Property user must be an instance of stars_user The exception is raised in google.appengine.ext.db.ReferenceProperty.validate: def validate(self, value): ... if value is not None and not isinstance(value, self.reference_class): raise KindError('Property %s must be an instance of %s' % (self.name, self.reference_class.kind())) ...
[ "Turns out that I was importing the model in admin.py as\nfrom frontend.stars.models import Star\n\nThis line had contaminated the module namespace of Star and the isinstance query was failing.\n>>> user.__class__\n<class 'frontend.stars.models.User'>\n>>> Picture.user.reference_class\n<class 'stars.models.User'>\n\n" ]
[ 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001460105_google_app_engine_python.txt
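A minimal sketch of the failure mode described in the record above (file and module names here are invented, Python 3): loading the same source file under two different dotted module names produces two distinct class objects, so isinstance() between them fails even though both classes came from identical code; this is exactly what ReferenceProperty.validate trips over.

import importlib.util

with open("stars_models.py", "w") as f:          # stand-in for stars/models.py
    f.write("class User(object):\n    pass\n")

def load_as(dotted_name):
    spec = importlib.util.spec_from_file_location(dotted_name, "stars_models.py")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

plain = load_as("stars.models")                  # one import path
prefixed = load_as("frontend.stars.models")      # the contaminated import path

user = prefixed.User()
print(isinstance(user, plain.User))              # False -- two separate class objects
print(plain.User is prefixed.User)               # False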
Q: Modify address in Django middleware I don't know if it's possible but I'd like to add few parameters at the end of the URL using middleware. Can it be done without redirect after modyfing requested URL? ie. user clicks: .../some_link and middleware rewrites it to: .../some_link?par1=1&par2=2 Other way is to modify reponse and replace every HTML link but it's not something I'd like to do. Thanks A: class YourRedirectMiddleware: def process_request(self, request): redirect_url = request.path+'?par1=1&par2=2' return HttpResponsePermanentRedirect(redirect_url) what are you trying to accomplish and why this way? A: I think this really depends on your problem and what exactly you are trying to do. You cannot change the URL without redirecting the user, as you cannot modify the URL on a page without a reload. Basically a redirect is a response telling the user to move on, there is no way to actually change the URL. Note that even if you do it in something like JavaScript you basically do the same as a redirect, so it can't be done client or server side. I think it might help if you explain to us why you need to pass this information via the URL. Why not store data in the session? I guess you could add the data to the request object but that doesn't add it to the URL. A: You can do whatever you like in the middleware. You have access to the request object, you can get the URL and redirect to a new one if you want. My question would be, why do you want to do this? If you need to keep information about the request, the proper place to do this is in the session.
Modify address in Django middleware
I don't know if it's possible, but I'd like to add a few parameters at the end of the URL using middleware. Can it be done without a redirect after modifying the requested URL? i.e. the user clicks: .../some_link and middleware rewrites it to: .../some_link?par1=1&par2=2 Another way would be to modify the response and replace every HTML link, but that's not something I'd like to do. Thanks
[ "class YourRedirectMiddleware:\n\n def process_request(self, request):\n redirect_url = request.path+'?par1=1&par2=2'\n return HttpResponsePermanentRedirect(redirect_url)\n\nwhat are you trying to accomplish and why this way?\n", "I think this really depends on your problem and what exactly you are trying to do.\nYou cannot change the URL without redirecting the user, as you cannot modify the URL on a page without a reload. Basically a redirect is a response telling the user to move on, there is no way to actually change the URL. Note that even if you do it in something like JavaScript you basically do the same as a redirect, so it can't be done client or server side.\nI think it might help if you explain to us why you need to pass this information via the URL. Why not store data in the session?\nI guess you could add the data to the request object but that doesn't add it to the URL.\n", "You can do whatever you like in the middleware. You have access to the request object, you can get the URL and redirect to a new one if you want.\nMy question would be, why do you want to do this? If you need to keep information about the request, the proper place to do this is in the session.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "django", "django_middleware", "django_urls", "python" ]
stackoverflow_0001458829_django_django_middleware_django_urls_python.txt
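If the goal in the record above is only to make extra parameters visible to later view code rather than to change the visible address, one hedged alternative to redirecting is to attach defaults to the request inside middleware. A sketch in the same old-style middleware form used by the answers (the parameter names are made up):

class DefaultParamsMiddleware(object):
    def process_request(self, request):
        params = request.GET.copy()        # request.GET is immutable; edit a copy
        params.setdefault('par1', '1')
        params.setdefault('par2', '2')
        request.GET = params
        return None                        # continue normal request processing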
Q: Why does Python keep a reference count on False and True? I was looking at the source code to the hasattr built-in function and noticed a couple of lines that piqued my interest: Py_INCREF(Py_False); return Py_False; ... Py_INCREF(Py_True); return Py_True; Aren't Py_False and Py_True global values? Just out of sheer curiosity, why is Python keeping a reference count for these variables? A: It's to make all object handling uniform. If I'm writing C code that handles a return value from a function, I have to increment and decrement the reference count on that object. If the function returns me True, I don't want to have to check to see if it's one of those special objects to know whether to manipulate its reference count. I can treat all objects identically. By treating True and False (and None, btw) the same as all other objects, the C code is much simpler throughout.
Why does Python keep a reference count on False and True?
I was looking at the source code to the hasattr built-in function and noticed a couple of lines that piqued my interest: Py_INCREF(Py_False); return Py_False; ... Py_INCREF(Py_True); return Py_True; Aren't Py_False and Py_True global values? Just out of sheer curiosity, why is Python keeping a reference count for these variables?
[ "It's to make all object handling uniform. If I'm writing C code that handles a return value from a function, I have to increment and decrement the reference count on that object. If the function returns me True, I don't want to have to check to see if it's one of those special objects to know whether to manipulate its reference count. I can treat all objects identically.\nBy treating True and False (and None, btw) the same as all other objects, the C code is much simpler throughout.\n" ]
[ 23 ]
[]
[]
[ "python", "python_c_api", "reference_counting" ]
stackoverflow_0001460454_python_python_c_api_reference_counting.txt
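The bookkeeping described above is easy to watch from Python itself. A small sketch (exact numbers vary by build, and very recent CPython versions make these singletons immortal, so their counts stop changing):

import sys

print(sys.getrefcount(True), sys.getrefcount(False), sys.getrefcount(None))

def flag():
    return True                  # the C layer does Py_INCREF(Py_True) before returning

held = [flag() for _ in range(10000)]
print(sys.getrefcount(True))     # noticeably larger while 'held' keeps the references alive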
Q: How can I get the element of a list that has a minimum/maximum property in Python? I have the following array in Python: points_list = [point0, point1, point2] where each of points_list is of the type: class point: __init__(self, coord, value): self.coord = numpy.array(coord) self.value = value # etc... And a function: def distance(x,y): return numpy.linalg.norm(x.coord - y.coord) And I have a point point_a defined elsewhere. Now I want to find the point in points_list that's closest to point_a. Other than a loop, what's the best way to do this in Python? A: Have you tried this? min(points_list, key=lambda x: distance(x, point_a)) To answer a question in comment: lambda is indeed necessary here since function specified as a key argument needs to accept only a single argument. However, since your point_a is essentially global you could "hard-code" it into the distance function: >>> point_a = point([1, 2, 3], 5) >>> def distance(x): return numpy.linalg.norm(x.coord - point_a.coord) This way you could pass distance as a key argument skipping lambda altogether. >>> min(points_list, key=distance)
How can I get the element of a list that has a minimum/maximum property in Python?
I have the following list in Python: points_list = [point0, point1, point2] where each element of points_list is of the type: class point: def __init__(self, coord, value): self.coord = numpy.array(coord) self.value = value # etc... And a function: def distance(x,y): return numpy.linalg.norm(x.coord - y.coord) And I have a point point_a defined elsewhere. Now I want to find the point in points_list that's closest to point_a. Other than a loop, what's the best way to do this in Python?
[ "Have you tried this?\nmin(points_list, key=lambda x: distance(x, point_a))\n\nTo answer a question in comment: lambda is indeed necessary here since function specified as a key argument needs to accept only a single argument.\nHowever, since your point_a is essentially global you could \"hard-code\" it into the distance function:\n>>> point_a = point([1, 2, 3], 5)\n>>> def distance(x):\n return numpy.linalg.norm(x.coord - point_a.coord)\n\nThis way you could pass distance as a key argument skipping lambda altogether.\n>>> min(points_list, key=distance)\n\n" ]
[ 13 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001460512_list_python.txt
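A self-contained version of the accepted approach from the record above (the sample coordinates are invented), which also shows that max() accepts the same key argument for the opposite query:

import numpy

class Point(object):
    def __init__(self, coord, value):
        self.coord = numpy.array(coord)
        self.value = value

def distance(x, y):
    return numpy.linalg.norm(x.coord - y.coord)

points_list = [Point([0, 0, 0], 'a'), Point([1, 1, 2], 'b'), Point([9, 9, 9], 'c')]
point_a = Point([1, 2, 3], 'target')

nearest = min(points_list, key=lambda p: distance(p, point_a))
farthest = max(points_list, key=lambda p: distance(p, point_a))
print(nearest.value, farthest.value)    # b c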
Q: Verbose list comprehension in Python I have a list of integers and I want to create a new list with all elements smaller than a given limit. a=range(15) #example list limit=9 #example limit My approach to solve this problem was [i for i in a if i < limit] To me the beginning 'i for i in' looks pretty verbose. Is there a better implementation in Python? A: You could use filter >>> filter(lambda i: i < limit, a) [0, 1, 2, 3, 4, 5, 6, 7, 8] But list comprehensions are the preferred way to do it Here is what python docs has to say about this: List comprehensions provide a concise way to create lists without resorting to use of map(), filter() and/or lambda. The resulting list definition tends often to be clearer than lists built using those constructs. A: This is about the best you can do. You may be able to do better with filter, but I wouldn't recommend it. Bear in mind that the list comprehension reads almost like english: "i for i in a if i < limit". This makes it much easier to read and understand, if a little on the verbose side. A: You can use filter() (at least in the Python 2.x series... I think it might have been removed in 3.0) newlist = filter(lambda item: item < limit, oldlist) The first argument can be any callable (its result will be coerced to boolean, so it's best to use a callable that returns boolean anyway), and the second argument can be any sequence. A: The nicer versions require boilerplate code, so the list comprehension is as nice as you can get. This would be one different way to do it: from operator import ge from functools import partial filter(partial(ge, limit), a) (But if you were to use filter, Nadia's way would be the obvious way to do it)
Verbose list comprehension in Python
I have a list of integers and I want to create a new list with all elements smaller than a given limit. a=range(15) #example list limit=9 #example limit My approach to solve this problem was [i for i in a if i < limit] To me the beginning 'i for i in' looks pretty verbose. Is there a better implementation in Python?
[ "You could use filter\n>>> filter(lambda i: i < limit, a)\n[0, 1, 2, 3, 4, 5, 6, 7, 8]\n\nBut list comprehensions are the preferred way to do it\nHere is what python docs has to say about this:\n\nList comprehensions provide a concise\n way to create lists without resorting\n to use of map(), filter() and/or\n lambda. The resulting list definition\n tends often to be clearer than lists\n built using those constructs.\n\n", "This is about the best you can do. You may be able to do better with filter, but I wouldn't recommend it. Bear in mind that the list comprehension reads almost like english: \"i for i in a if i < limit\". This makes it much easier to read and understand, if a little on the verbose side.\n", "You can use filter() (at least in the Python 2.x series... I think it might have been removed in 3.0)\nnewlist = filter(lambda item: item < limit, oldlist)\n\nThe first argument can be any callable (its result will be coerced to boolean, so it's best to use a callable that returns boolean anyway), and the second argument can be any sequence.\n", "The nicer versions require boilerplate code, so the list comprehension is as nice as you can get.\nThis would be one different way to do it:\nfrom operator import ge\nfrom functools import partial\n\nfilter(partial(ge, limit), a)\n\n(But if you were to use filter, Nadia's way would be the obvious way to do it)\n" ]
[ 4, 2, 1, 0 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0001460484_list_comprehension_python.txt
Q: Install MySQLdb (for python) as non-compressed egg The install instructions are: $ python setup.py build $ sudo python setup.py install # or su first This gives me an .egg file. How do I tell the install to dump the files as a normal, uncompressed library? Thanks! A: OK, I hate to answer my own question, but: find your python site-packages (mine is /usr/local/lib/python2.5/site-packages ) then: $ unzip MySQL_python-1.2.2-py2.5-linux-i686.egg This worked fine for me A: I'm a little late to this party, but here's a way to do it that seems to work great: sudo python setup.py install --single-version-externally-managed --root=/ And then you don't use a .python-egg, any *.pth files etc. A: From the EasyInstall doc, command line options: --always-unzip, -Z Don't install any packages as zip files, even if the packages are marked as safe for running as a zipfile. Can you use easyinstall instead of calling setup.py ? calling easy_install -Z mysql_python from the command prompt, finds the egg on the net and installs it. A: This will tell setuptools to not zip it up: sudo python setup.py install --single-version-externally-managed
Install MySQLdb (for python) as non-compressed egg
The install instructions are: $ python setup.py build $ sudo python setup.py install # or su first This gives me an .egg file. How do I tell the install to dump the files as a normal, uncompressed library? Thanks!
[ "OK, I hate to answer my own question, but:\nfind your python site-packages (mine is /usr/local/lib/python2.5/site-packages )\nthen:\n$ unzip MySQL_python-1.2.2-py2.5-linux-i686.egg\n\nThis worked fine for me\n", "I'm a little late to this party, but here's a way to do it that seems to work great:\nsudo python setup.py install --single-version-externally-managed --root=/\n\nAnd then you don't use a .python-egg, any *.pth files etc.\n", "From the EasyInstall doc, command line options:\n\n--always-unzip, -Z\nDon't install any packages as zip files, even if the packages are marked as safe for running as a zipfile.\n\nCan you use easyinstall instead of calling setup.py ?\ncalling easy_install -Z mysql_python from the command prompt, finds the egg on the net and installs it.\n", "This will tell setuptools to not zip it up:\nsudo python setup.py install --single-version-externally-managed\n\n" ]
[ 7, 4, 3, 1 ]
[]
[]
[ "egg", "installation", "mysql", "python" ]
stackoverflow_0000268025_egg_installation_mysql_python.txt
Q: Related to executing Java programs through Python I am using os.system to execute a java program through a python script. I need to pass a file name to the java program as an argument. I am not able to figure out how to pass the relative file location. What should be my reference for determining the relative location. I tried to use the location of the python script as the reference but doesn't work. A: See the subprocess module for all your external process invoking needs. p = subprocess.Popen(['myjavaapp', 'afilename.txt']) If you need to get the relative location and you aren't sure how the other command is going to take it, make it absolute. p = subprocess.Popen(['myjavaapp', os.path.abspath('afilename.txt')])
Related to executing Java programs through Python
I am using os.system to execute a Java program through a Python script. I need to pass a file name to the Java program as an argument. I am not able to figure out how to pass the relative file location. What should be my reference for determining the relative location? I tried to use the location of the Python script as the reference, but that doesn't work.
[ "See the subprocess module for all your external process invoking needs.\np = subprocess.Popen(['myjavaapp', 'afilename.txt'])\n\nIf you need to get the relative location and you aren't sure how the other command is going to take it, make it absolute.\np = subprocess.Popen(['myjavaapp', os.path.abspath('afilename.txt')])\n\n" ]
[ 2 ]
[]
[]
[ "java", "python" ]
stackoverflow_0001460590_java_python.txt
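One common fix for the relative-path problem in the record above (file names here are hypothetical): resolve the path against the script's own directory instead of whatever directory the script happens to be launched from, then hand it to subprocess as the answer suggests.

import os
import subprocess

script_dir = os.path.dirname(os.path.abspath(__file__))
input_file = os.path.join(script_dir, 'data', 'input.txt')      # made-up relative location

subprocess.call(['java', '-jar', os.path.join(script_dir, 'MyApp.jar'), input_file])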
Q: How to decompile a regex? Is there any way to decompile a regular expression once compiled? A: Compiled regular expression objects have a "pattern" attribute which gives the original text pattern. >>> import re >>> regex = re.compile('foo (?:bar)*') >>> regex.pattern 'foo (?:bar)*' A: r = re.compile('some[pattern]'); print r.pattern
How to decompile a regex?
Is there any way to decompile a regular expression once compiled?
[ "Compiled regular expression objects have a \"pattern\" attribute which gives the original text pattern.\n>>> import re\n>>> regex = re.compile('foo (?:bar)*')\n>>> regex.pattern\n'foo (?:bar)*'\n\n", "r = re.compile('some[pattern]');\nprint r.pattern\n\n" ]
[ 40, 8 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001460686_python_regex.txt
Q: Jinja2 If Statement The code below is a sample form I'm using to learn jinja2. As written, it returns an error saying that it doesn't recognize the {% endif %} tag. Why does this happen? <html> Name: {{ name }} Print {{ num }} times Color: {{ color }} {% if convert_to_upper %}Case: Upper {% elif not convert_to_upper %}Case: Lower{% endif %} {% for repeats in range(0,num) %} {% if convert_to_upper %} {% filter upper %} {% endif %} <li><p style="color:{{ color }}">{{ name }}</style></li> {% endfilter %} {% endfor %} </html> A: I think you have your lines mixed up. Your endif comese before endfilter whereas if is before filter. That's just a syntax error.
Jinja2 If Statement
The code below is a sample form I'm using to learn jinja2. As written, it returns an error saying that it doesn't recognize the {% endif %} tag. Why does this happen? <html> Name: {{ name }} Print {{ num }} times Color: {{ color }} {% if convert_to_upper %}Case: Upper {% elif not convert_to_upper %}Case: Lower{% endif %} {% for repeats in range(0,num) %} {% if convert_to_upper %} {% filter upper %} {% endif %} <li><p style="color:{{ color }}">{{ name }}</style></li> {% endfilter %} {% endfor %} </html>
[ "I think you have your lines mixed up. Your endif comese before endfilter whereas if is before filter. That's just a syntax error.\n" ]
[ 12 ]
[]
[]
[ "jinja2", "python" ]
stackoverflow_0001461484_jinja2_python.txt
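A sketch of the repaired template from the record above, with the filter block nested entirely inside the if block so every tag closes in the order it was opened (rendered here through the plain jinja2 API; the variable values are invented):

from jinja2 import Template

template = Template("""
{% for repeat in range(0, num) %}
  {% if convert_to_upper %}
    {% filter upper %}<li><p style="color:{{ color }}">{{ name }}</p></li>{% endfilter %}
  {% else %}
    <li><p style="color:{{ color }}">{{ name }}</p></li>
  {% endif %}
{% endfor %}
""")

print(template.render(name='example', num=2, color='red', convert_to_upper=True))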
Q: epylint script is not working on Windows I have installed http://ftp.logilab.org/pub/pylint/pylint-0.18.1.tar.gz on Windows and now I am trying to configure my Emacs's flymake mode using epylint script. Here is the output of I got when I tried epylint on windows command prompt. C:\>epylint test.py 'test.py':1: [F] No module named 'test.py' Any suggestions on how to fix this problem? A: Reading the documentation of the epylint.lint function: When run from emacs we will be in the directory of a file, and passed its filename. If this file is part of a package and is trying to import other modules from within its own package or another package rooted in a directory below it, pylint will classify it as a failed import. To get around this, we traverse down the directory tree to find the root of the package this module is in. We then invoke pylint from this directory. Finally, we must correct the filenames in the output generated by pylint so Emacs doesn't become confused (it will expect just the original filename, while pylint may extend it with extra directories if we've traversed down the tree) It sounds like it has to do some extra magic to work within Emacs. It doesn't look like you can run it the same way from the command line. Is it not working for you from within Emacs? It might be a bug in pylint then. Does pylint have a mailing list you can report issues to?
epylint script is not working on Windows
I have installed http://ftp.logilab.org/pub/pylint/pylint-0.18.1.tar.gz on Windows and now I am trying to configure my Emacs's flymake mode using the epylint script. Here is the output I got when I tried epylint at the Windows command prompt. C:\>epylint test.py 'test.py':1: [F] No module named 'test.py' Any suggestions on how to fix this problem?
[ "Reading the documentation of the epylint.lint function:\n\nWhen run from emacs we will be in the directory of a file, and passed its filename.\nIf this file is part of a package and is trying to import other modules from within\nits own package or another package rooted in a directory below it, pylint will classify\nit as a failed import.\nTo get around this, we traverse down the directory tree to find the root of the package this\nmodule is in. We then invoke pylint from this directory.\nFinally, we must correct the filenames in the output generated by pylint so Emacs doesn't\nbecome confused (it will expect just the original filename, while pylint may extend it with\nextra directories if we've traversed down the tree)\n\nIt sounds like it has to do some extra magic to work within Emacs. It doesn't look like you can run it the same way from the command line.\nIs it not working for you from within Emacs? It might be a bug in pylint then. Does pylint have a mailing list you can report issues to?\n" ]
[ 0 ]
[]
[]
[ "emacs", "pylint", "python", "windows" ]
stackoverflow_0001460275_emacs_pylint_python_windows.txt
Q: do you know of any python component(s) for syntax highlighting? Are there any easy to use python components that could be used in a GUI? It would be great to have something like JSyntaxPane for Python. I would like to know of python-only versions ( not interested in jython ) . A: Other than pygments? http://pygments.org/ A: If you're using gtk+, there's a binding of gtksourceview for Python in gnome-python-extras. It seems to work well in my experience. The downside: the documentation is less than perfect. There's also a binding of QScintilla for Python if PyQt is your thing. A: You can use StyledTextCtrl in wxPython. Check out the official demo for an example (The demo code tab for any demo). A: You say "in a GUI app" but don't mention the toolkit. If you are using PyQt, and need a read-only widget, you can use QWebKit which has a whole HTML widget in it based on WebKit, so it supports pretty much anything, from flash to the ACID2 test. If you want a read-write widget, Qt's QTextEdit supports syntax highlighting, and I wrote an adapter to let pygments worj with it: http://lateral.netmanagers.com.ar/weblog/2009/09/21.html#BB831 I am sure something similar can be done with other toolkits, but I don't know how.
do you know of any python component(s) for syntax highlighting?
Are there any easy-to-use Python components that could be used in a GUI? It would be great to have something like JSyntaxPane for Python. I would like to know of Python-only versions (not interested in Jython).
[ "Other than pygments? http://pygments.org/\n", "If you're using gtk+, there's a binding of gtksourceview for Python in gnome-python-extras. It seems to work well in my experience. The downside: the documentation is less than perfect.\nThere's also a binding of QScintilla for Python if PyQt is your thing.\n", "You can use StyledTextCtrl in wxPython. Check out the official demo for an example (The demo code tab for any demo).\n", "You say \"in a GUI app\" but don't mention the toolkit.\nIf you are using PyQt, and need a read-only widget, you can use QWebKit which has a whole HTML widget in it based on WebKit, so it supports pretty much anything, from flash to the ACID2 test.\nIf you want a read-write widget, Qt's QTextEdit supports syntax highlighting, and I wrote an adapter to let pygments worj with it:\nhttp://lateral.netmanagers.com.ar/weblog/2009/09/21.html#BB831\nI am sure something similar can be done with other toolkits, but I don't know how.\n" ]
[ 9, 1, 1, 0 ]
[]
[]
[ "components", "python", "syntax_highlighting" ]
stackoverflow_0000620954_components_python_syntax_highlighting.txt
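For the non-GUI (or HTML-embedding) side of the question above, pygments on its own covers most needs; a minimal usage sketch:

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

source = 'def greet(name):\n    return "Hello, %s" % name\n'
html = highlight(source, PythonLexer(), HtmlFormatter(full=True, linenos=True))

with open('snippet.html', 'w') as f:     # writes a standalone highlighted page
    f.write(html)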
Q: How much data could be stored into a Google App Engine, application? Answering this question and searching for references I have this doubt my self: *How much data could be stored into a Google App Engine, application? If I'm reading well this table: Resources | Free daily | Free Max Rate | Daily Billing enable | Max Rate Billing ------------------------------------------------------------------------------------------ Stored Data | 1 gigabyte | None | 1 gigabytes free; no maximum | None Does it means you can storage as much as you want for free ( as long as it is 1 gb daily? ) :-o EDIT Mmhh I was wrong. I have found also the official link that answer my own question: http://code.google.com/appengine/docs/billing.html Resource | Unit | Unit cost ------------------------------------------------- Outgoing Bandwidth | gigabytes | $0.12 ------------------------------------------------- Incoming Bandwidth | gigabytes | $0.10 ------------------------------------------------- CPU Time | CPU hours | $0.10 ------------------------------------------------- Stored Data | gigabytes per month | $0.15 ------------------------------------------------- Recipients Emailed | recipients | $0.0001 So, using 7.6 gb of storage wouldcost $1 USD/month :-o Still, very cheap. Am I missing something? A: I read that to mean 1 GB stored during the course of a day, not added in a day, so in other words you can have up to 1 GB of storage for free. If you store more, calculated daily, you have to pay for that additional storage. There is no maximum on how much you can store, you just get billed for it.
How much data could be stored into a Google App Engine, application?
Answering this question and searching for references I have this doubt myself: How much data can be stored in a Google App Engine application? If I'm reading this table correctly: Resources | Free daily | Free Max Rate | Daily Billing enable | Max Rate Billing ------------------------------------------------------------------------------------------ Stored Data | 1 gigabyte | None | 1 gigabyte free; no maximum | None Does it mean you can store as much as you want for free (as long as it is 1 GB daily)? :-o EDIT Mmhh, I was wrong. I have also found the official link that answers my own question: http://code.google.com/appengine/docs/billing.html Resource | Unit | Unit cost ------------------------------------------------- Outgoing Bandwidth | gigabytes | $0.12 ------------------------------------------------- Incoming Bandwidth | gigabytes | $0.10 ------------------------------------------------- CPU Time | CPU hours | $0.10 ------------------------------------------------- Stored Data | gigabytes per month | $0.15 ------------------------------------------------- Recipients Emailed | recipients | $0.0001 So, using 7.6 GB of storage would cost $1 USD/month :-o Still very cheap. Am I missing something?
[ "I read that to mean 1 GB stored during the course of a day, not added in a day, so in other words you can have up to 1 GB of storage for free. If you store more, calculated daily, you have to pay for that additional storage. There is no maximum on how much you can store, you just get billed for it.\n" ]
[ 6 ]
[]
[]
[ "cloud", "google_app_engine", "java", "python" ]
stackoverflow_0001462160_cloud_google_app_engine_java_python.txt
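Working the $1/month figure out from the billing table above (assuming the free 1 GB is subtracted before the $0.15 per GB-month rate applies, which is how the quota table reads):

free_gb = 1.0
rate_per_gb_month = 0.15

stored_gb = 7.6
billable_gb = max(stored_gb - free_gb, 0)
print("%.2f USD/month" % (billable_gb * rate_per_gb_month))     # 0.99 USD/month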
Q: Django forms: how to display media (javascript) for a DateTimeInput widget? Hello (please excuse me for my bad english ;) ), Imagine the classes bellow: models.py from django import models class MyModel(models.Model): content_type = models.ForeignKey(ContentType, verbose_name=_('content type')) object_id = models.PositiveIntegerField(_('object id')) content_object = generic.GenericForeignKey('content_type', 'object_id') published_at = models.DateTimeField() forms.py from django import forms class MyModelForm(forms.ModelForm): published_at = forms.DateTimeField(required=False, widget=DateTimeInput) admin.py from django.contrib import admin form django.contrib.contenttypes import generic class MyModelInline(generic.GenericStackedInline): model = MyModel form = MyModelForm class MyModelAdmin(admin.ModelAdmin): inlines = [MyModelInline] Problem: the <script> tags for javascript from the DateTimeInput widget don't appear in the admin site (adding a new MyModel object). i.e. these two lines : <script type="text/javascript" src="/admin/media/js/calendar.js"></script> <script type="text/javascript" src="/admin/media/js/admin/DateTimeShortcuts.js"></script> Please, do you have any idea to fix it ? Thank you very much and have a good day :) A: The standard DateTimeWidget doesn't include any javascript. The widget used in the admin is a different one - django.contrib.admin.widgets.AdminSplitDateTime - and this includes the javascript.
Django forms: how to display media (javascript) for a DateTimeInput widget?
Hello (please excuse me for my bad English ;) ), Imagine the classes below: models.py from django.db import models class MyModel(models.Model): content_type = models.ForeignKey(ContentType, verbose_name=_('content type')) object_id = models.PositiveIntegerField(_('object id')) content_object = generic.GenericForeignKey('content_type', 'object_id') published_at = models.DateTimeField() forms.py from django import forms class MyModelForm(forms.ModelForm): published_at = forms.DateTimeField(required=False, widget=DateTimeInput) admin.py from django.contrib import admin from django.contrib.contenttypes import generic class MyModelInline(generic.GenericStackedInline): model = MyModel form = MyModelForm class MyModelAdmin(admin.ModelAdmin): inlines = [MyModelInline] Problem: the <script> tags for javascript from the DateTimeInput widget don't appear in the admin site (adding a new MyModel object). i.e. these two lines: <script type="text/javascript" src="/admin/media/js/calendar.js"></script> <script type="text/javascript" src="/admin/media/js/admin/DateTimeShortcuts.js"></script> Please, do you have any idea how to fix it? Thank you very much and have a good day :)
[ "The standard DateTimeWidget doesn't include any javascript. The widget used in the admin is a different one - django.contrib.admin.widgets.AdminSplitDateTime - and this includes the javascript.\n" ]
[ 5 ]
[]
[]
[ "django", "django_admin", "django_forms", "django_models", "python" ]
stackoverflow_0001462003_django_django_admin_django_forms_django_models_python.txt
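A sketch of the fix suggested in the answer above: swap in the admin's own widget, whose Media definition is what pulls in calendar.js and DateTimeShortcuts.js, and make sure the form's media is actually emitted wherever the form is rendered.

from django import forms
from django.contrib.admin import widgets

class MyModelForm(forms.ModelForm):
    # AdminSplitDateTime carries the JavaScript Media that DateTimeInput lacks.
    published_at = forms.DateTimeField(required=False,
                                       widget=widgets.AdminSplitDateTime)

# Outside the stock admin templates the scripts only appear if the form's media
# is rendered explicitly, e.g. with {{ form.media }} in the template.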
Q: Command line options with optional arguments in Python I was wondering if there's a simple way to parse command line options having optional arguments in Python. For example, I'd like to be able to call a script two ways: > script.py --foo > script.py --foo=bar From the Python getopt docs it seems I have to choose one or the other. A: Also, note that the standard library also has optparse, a more powerful options parser. A: check out argparse: http://code.google.com/p/argparse/ especially the 'nargs' option A: optparse module from stdlib doesn't support it out of the box (and it shouldn't due to it is a bad practice to use command-line options in such way). As @Kevin Horn pointed out you can use argparse module (installable via easy_install argparse or just grab argparse.py and put it anywhere in your sys.path). Example #!/usr/bin/env python from argparse import ArgumentParser if __name__ == "__main__": parser = ArgumentParser(prog='script.py') parser.add_argument('--foo', nargs='?', metavar='bar', default='baz') parser.print_usage() for args in ([], ['--foo'], ['--foo', 'bar']): print "$ %s %s -> foo=%s" % ( parser.prog, ' '.join(args).ljust(9), parser.parse_args(args).foo) Output usage: script.py [-h] [--foo [bar]] $ script.py -> foo=baz $ script.py --foo -> foo=None $ script.py --foo bar -> foo=bar A: There isn't an option in optparse that allows you to do this. But you can extend it to do it: http://docs.python.org/library/optparse.html#adding-new-actions A: Use the optparse package.
Command line options with optional arguments in Python
I was wondering if there's a simple way to parse command line options having optional arguments in Python. For example, I'd like to be able to call a script two ways: > script.py --foo > script.py --foo=bar From the Python getopt docs it seems I have to choose one or the other.
[ "Also, note that the standard library also has optparse, a more powerful options parser.\n", "check out argparse:\nhttp://code.google.com/p/argparse/\nespecially the 'nargs' option\n", "optparse module from stdlib doesn't support it out of the box (and it shouldn't due to it is a bad practice to use command-line options in such way).\nAs @Kevin Horn pointed out you can use argparse module (installable via easy_install argparse or just grab argparse.py and put it anywhere in your sys.path).\nExample\n#!/usr/bin/env python\nfrom argparse import ArgumentParser\n\nif __name__ == \"__main__\":\n parser = ArgumentParser(prog='script.py')\n parser.add_argument('--foo', nargs='?', metavar='bar', default='baz')\n\n parser.print_usage() \n for args in ([], ['--foo'], ['--foo', 'bar']):\n print \"$ %s %s -> foo=%s\" % (\n parser.prog, ' '.join(args).ljust(9), parser.parse_args(args).foo)\n\nOutput\n\nusage: script.py [-h] [--foo [bar]]\n$ script.py -> foo=baz\n$ script.py --foo -> foo=None\n$ script.py --foo bar -> foo=bar\n\n", "There isn't an option in optparse that allows you to do this. But you can extend it to do it:\nhttp://docs.python.org/library/optparse.html#adding-new-actions\n", "Use the optparse package. \n" ]
[ 3, 3, 3, 2, 1 ]
[]
[]
[ "getopt", "python" ]
stackoverflow_0001461942_getopt_python.txt
Q: How do I switch this Proxy to use Proxy-Authentication? I'm trying to modify my simple Twisted web proxy to use "Proxy-Authentication" (username/password) instead of the current IP based authentication. Problem is, I'm new to Twisted and don't even know where to start. Here is my Factory Class. class ProxyFactory(http.HTTPFactory): def __init__(self, ip, internal_ips): http.HTTPFactory.__init__(self) self.ip = ip self.protocol = proxy.Proxy self.INTERNAL_IPS = internal_ips def buildProtocol(self, addr): print addr # IP based authentication -- need to switch this to use standard Proxy password authentication if addr.host not in self.INTERNAL_IPS: return None #p = protocol.ServerFactory.buildProtocol(self, addr) p = self.protocol() p.factory = self # timeOut needs to be on the Protocol instance cause # TimeoutMixin expects it there p.timeOut = self.timeOut return p Any idea what I need to do to make this work? Thanks for your help! A: A similar question came up on the Twisted mailing list a while ago: http://www.mail-archive.com/[email protected]/msg01080.html As I mentioned there, you probably need to subclass some of the twisted.proxy classes so that they understand the Proxy-Authenticate and Proxy-Authorization headers.
How do I switch this Proxy to use Proxy-Authentication?
I'm trying to modify my simple Twisted web proxy to use "Proxy-Authentication" (username/password) instead of the current IP based authentication. Problem is, I'm new to Twisted and don't even know where to start. Here is my Factory Class. class ProxyFactory(http.HTTPFactory): def __init__(self, ip, internal_ips): http.HTTPFactory.__init__(self) self.ip = ip self.protocol = proxy.Proxy self.INTERNAL_IPS = internal_ips def buildProtocol(self, addr): print addr # IP based authentication -- need to switch this to use standard Proxy password authentication if addr.host not in self.INTERNAL_IPS: return None #p = protocol.ServerFactory.buildProtocol(self, addr) p = self.protocol() p.factory = self # timeOut needs to be on the Protocol instance cause # TimeoutMixin expects it there p.timeOut = self.timeOut return p Any idea what I need to do to make this work? Thanks for your help!
[ "A similar question came up on the Twisted mailing list a while ago:\nhttp://www.mail-archive.com/[email protected]/msg01080.html\nAs I mentioned there, you probably need to subclass some of the twisted.proxy classes so that they understand the Proxy-Authenticate and Proxy-Authorization headers.\n" ]
[ 1 ]
[]
[]
[ "authentication", "proxy", "python", "twisted" ]
stackoverflow_0001336882_authentication_proxy_python_twisted.txt
Q: Automatic String to Number conversion in Python I am trying to compare two lists of string in python. Some of the strings are numbers however I don't want to use it as number, only for string comparison. I read the string from a file and put them on a list like this: def main(): inputFileName = 'BateCarteira.csv' inputFile = open(inputFileName, "r") bankNumbers = [] for line in inputFile: values = line[0:len(line)-1].split(';'); if (len(values[0]) > 3): bankNumbers.append(''+values[0]) However, when I try to print the number, it prints like: 1,20091E+11 The code for the printing: print 'not in the list: ' + bankNumber outputFile.write(bankNumber + '-') What can I do so python never casts the string to an int? sorry for my english :D A: Python never transforms a string to a number, unless you try something like: s = "1.2" a = float(s) So I guess that your .csv file has the string "1,20091E+11" inside it. Also notice that the decimal point is a coma. So, if you tried to convert it to a float, the transformation would fail. A: You need the locale module to read numbers in the locale format (i.e with decimal comma that's used in (most) parts of Europe). import locale locale.setlocale(locale.LC_ALL, '') f = locale.atof("1,20091E+11")
Automatic String to Number conversion in Python
I am trying to compare two lists of strings in Python. Some of the strings are numbers; however, I don't want to use them as numbers, only for string comparison. I read the strings from a file and put them in a list like this: def main(): inputFileName = 'BateCarteira.csv' inputFile = open(inputFileName, "r") bankNumbers = [] for line in inputFile: values = line[0:len(line)-1].split(';'); if (len(values[0]) > 3): bankNumbers.append(''+values[0]) However, when I try to print the number, it prints like: 1,20091E+11 The code for the printing: print 'not in the list: ' + bankNumber outputFile.write(bankNumber + '-') What can I do so Python never casts the string to an int? sorry for my english :D
[ "Python never transforms a string to a number, unless you try something like:\ns = \"1.2\"\na = float(s)\n\nSo I guess that your .csv file has the string \"1,20091E+11\" inside it. Also notice that the decimal point is a coma. So, if you tried to convert it to a float, the transformation would fail.\n", "You need the locale module to read numbers in the locale format (i.e with decimal comma that's used in (most) parts of Europe).\nimport locale\nlocale.setlocale(locale.LC_ALL, '')\nf = locale.atof(\"1,20091E+11\")\n\n" ]
[ 3, 2 ]
[]
[]
[ "casting", "python" ]
stackoverflow_0001462427_casting_python.txt
Q: Django generates 'WHERE ... BETWEEN ...' sentences? Actually, somewhere in the view: dif = datetime.timedelta(days=1) today = datetime.date.today() yesterday = today - dif ex = Fact.objects.filter(fecha_fact__lte=today ,fecha_fact__gte=yesterday ) It results to this SQL Query: SELECT `facts_fact`.`id` ... FROM `facts_fact` WHERE (`facts_fact`.`fecha_fact` >= 2009-09-21 AND `facts_fact`.`fecha_fact` <= 2009-09-22 ) There is a way to make Django do a WHERE BETWEEN sentence instead >= <= ? :) thx. A: Have you tried range? Fact.objects.filter(fecha_fact__range=(yesterday, today)) A: i dont know, but maybe (gte / lte ): q1 = Entry.objects.filter(headline__startswith="What") q2 = q1.exclude(pub_date__gte=datetime.now()) q3 = q1.filter(pub_date__gte=datetime.now()) q = Entry.objects.filter(headline__startswith="What") q = q.filter(pub_date__lte=datetime.now())
Django generates 'WHERE ... BETWEEN ...' sentences?
Actually, somewhere in the view: dif = datetime.timedelta(days=1) today = datetime.date.today() yesterday = today - dif ex = Fact.objects.filter(fecha_fact__lte=today, fecha_fact__gte=yesterday) It results in this SQL query: SELECT `facts_fact`.`id` ... FROM `facts_fact` WHERE (`facts_fact`.`fecha_fact` >= 2009-09-21 AND `facts_fact`.`fecha_fact` <= 2009-09-22 ) Is there a way to make Django produce a WHERE ... BETWEEN clause instead of >= and <=? :) thx.
[ "Have you tried range?\nFact.objects.filter(fecha_fact__range=(yesterday, today))\n\n", "i dont know, but maybe (gte / lte ):\nq1 = Entry.objects.filter(headline__startswith=\"What\")\nq2 = q1.exclude(pub_date__gte=datetime.now())\nq3 = q1.filter(pub_date__gte=datetime.now())\n\n\nq = Entry.objects.filter(headline__startswith=\"What\")\nq = q.filter(pub_date__lte=datetime.now())\n\n" ]
[ 4, 0 ]
[]
[]
[ "django", "django_models", "python", "sql" ]
stackoverflow_0001462968_django_django_models_python_sql.txt
Q: How to improve performance of python cgi that reads a big file and returns it as a download? I have this python cgi script that checks if it hasn't been accessed to many times from the same IP, and if everything is ok, reads a big file form disk (11MB) and then returns it as a download. It works,but performance sucks. The bottleneck seems to be reading this huge file over and over: def download_demo(): """ Returns the demo file """ file = open(FILENAME, 'r') buff = file.read() print "Content-Type:application/x-download\nContent-Disposition:attachment;filename=%s\nContent-Length:%s\n\n%s" % (os.path.split(FILENAME)[-1], len(buff), buff) How can I make this faster? I thought of using a ram disk to keep the file, but there must be some better solution. Would using mod_wsgi instead of a cgi script help? Would I be able to keep the big file in apache's memory space? Any help is greatly appreciated. A: Use mod_wsgi and use something akin to: def application(environ, start_response): status = '200 OK' output = 'Hello World!' response_headers = [('Content-type', 'text/plain')] start_response(status, response_headers) file = open('/usr/share/dict/words', 'rb') return environ['wsgi.file_wrapper'](file) In other words, use wsgi.file_wrapper extension of WSGI standard to allow Apache/mod_wsgi to perform optimised reply of file contents using sendfile/mmap. In other words, avoids your application even needing to read file into memory. A: Why are you printing is all in one print statement? Python has to generate several temporary strings to handle the content headers and because of that last %s, it has to hold the entire contents of the file in two different string vars. This should be better. print "Content-Type:application/x-download\nContent-Disposition:attachment;filename=%s\nContent-Length:%s\n\n" % (os.path.split(FILENAME)[-1], len(buff)) print buff You might also consider reading the file using the raw IO module so Python doesn't create temp buffers that you aren't using. A: Try reading and outputting (i.e. buffering) a chunk of say 16KB at a time. Probably Python is doing something slow behind the scenes and manually buffering may be faster. You shouldn't have to use e.g. a ramdisk - the OS disk cache ought to cache the file contents for you. A: mod_wsgi or FastCGI would help in the sense that you don't need to reload the Python interpreter every time your script is run. However, they'd do little to improve the performance of reading the file (if that's what's really your bottleneck). I'd advise you to use something along the lines of memcached instead.
How to improve performance of python cgi that reads a big file and returns it as a download?
I have this python cgi script that checks if it hasn't been accessed too many times from the same IP, and if everything is ok, reads a big file from disk (11MB) and then returns it as a download. It works, but performance sucks. The bottleneck seems to be reading this huge file over and over: def download_demo(): """ Returns the demo file """ file = open(FILENAME, 'r') buff = file.read() print "Content-Type:application/x-download\nContent-Disposition:attachment;filename=%s\nContent-Length:%s\n\n%s" % (os.path.split(FILENAME)[-1], len(buff), buff) How can I make this faster? I thought of using a ram disk to keep the file, but there must be some better solution. Would using mod_wsgi instead of a cgi script help? Would I be able to keep the big file in apache's memory space? Any help is greatly appreciated.
[ "Use mod_wsgi and use something akin to:\ndef application(environ, start_response):\n status = '200 OK'\n output = 'Hello World!'\n\n response_headers = [('Content-type', 'text/plain')]\n start_response(status, response_headers)\n\n file = open('/usr/share/dict/words', 'rb')\n return environ['wsgi.file_wrapper'](file)\n\nIn other words, use wsgi.file_wrapper extension of WSGI standard to allow Apache/mod_wsgi to perform optimised reply of file contents using sendfile/mmap. In other words, avoids your application even needing to read file into memory.\n", "Why are you printing is all in one print statement? Python has to generate several temporary strings to handle the content headers and because of that last %s, it has to hold the entire contents of the file in two different string vars. This should be better.\nprint \"Content-Type:application/x-download\\nContent-Disposition:attachment;filename=%s\\nContent-Length:%s\\n\\n\" % (os.path.split(FILENAME)[-1], len(buff))\nprint buff\n\nYou might also consider reading the file using the raw IO module so Python doesn't create temp buffers that you aren't using.\n", "Try reading and outputting (i.e. buffering) a chunk of say 16KB at a time. Probably Python is doing something slow behind the scenes and manually buffering may be faster.\nYou shouldn't have to use e.g. a ramdisk - the OS disk cache ought to cache the file contents for you.\n", "mod_wsgi or FastCGI would help in the sense that you don't need to reload the Python interpreter every time your script is run. However, they'd do little to improve the performance of reading the file (if that's what's really your bottleneck). I'd advise you to use something along the lines of memcached instead.\n" ]
[ 9, 2, 1, 1 ]
[]
[]
[ "cgi", "mod_wsgi", "performance", "python" ]
stackoverflow_0001462330_cgi_mod_wsgi_performance_python.txt
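A sketch of the buffering suggestion from the answers above, kept in the same Python 2-era CGI style as the question: send the headers first, then stream the file in fixed-size chunks instead of holding all 11 MB in one string.

import os
import sys

CHUNK = 16 * 1024

def download_demo(filename):
    size = os.path.getsize(filename)
    sys.stdout.write("Content-Type:application/x-download\n"
                     "Content-Disposition:attachment;filename=%s\n"
                     "Content-Length:%d\n\n" % (os.path.basename(filename), size))
    f = open(filename, 'rb')
    try:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            sys.stdout.write(chunk)
    finally:
        f.close()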
Q: django - using a common header with some dynamic elements I'm planning to create a website using django that will have a common header throughout the entire website. I've read django's documentation on templating inheritance, but I can't seem to find an elegant solution for the "dynamic" elements in my header. For example, the header in the website will include tabs, say similar to http://www.google.com/ (where it has "Web", "Images", etc), where the selected tab will describe your current location in the website. Using the django template inheritance, it would seem like you would create a base template like this: <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <link rel="stylesheet" href="style.css" /> <title>{% block title %}My Amazing Site{% endblock %}</title> </head> <body> <div id="header"> {% block header %} .... html to create tabs ... {% endblock header %} </div> and then in all of my other pages, i would do this: {% extends "base.html" %} {% block header % } .... html to create tabs with one tab "selected" ... {% endblock header %} which seems annoying as every single one of my pages would have to have duplicated HTML with the header information, but slightly different. So when its time to add a new tab, i have to modify every single HTML file. Upon further reading, it seems like some other possible solutions are: 1 - Create a custom template tag that takes in which tab is currently selected, that way in each HTML page i just call: {% block header %} {% mycustomtag abc %} {% endblock header %} I don't like this solution because it would requiring placing HTML into the python code for creating this custom tag. 2 - Create X number of sub-templates of base.html, all with the appropriate tab selected. Then each page would inherit from the appropriate sub-template based on which tab they want selected. This solution seems fine, except for the fact that it will require X number of almost exactly the same HTML, and still runs into the issue of having to modify all the files when a tab is added or removed. 3 - Use javascript (like jquery) to modify the header on page load to "select" the correct tab. This solution is fine but then would require one to remember to add this functionality to every page's javascript. the good part is that the header HTML would only live in a single HTML file. Any other suggestions? Thanks! A: I'm assuming each tab is a list item in your template base.html. <ul> <li>Tab 1</li> <li>Tab 2</li> ... </ul> Add an extra block to each li. <ul> <li class="{% block class_tab1 %}inactive{% endblock %}">Tab 1</li> <li class="{% block class_tab2 %}inactive{% endblock %}">Tab 2</li> <li class="{% block class_tab3 %}inactive{% endblock %}">Tab 3</li> ... </ul> Then in your template if tab 1 is to be selected: {% extends "base.html" %} {% block class_tab1 %}active{% endblock %} ... So the html rendered for Tab 1 is: <ul> <li class="active">Tab 1</li> <li class="inactive">Tab 2</li> <li class="inactive">Tab 3</li> ... </ul> and you can write CSS to target the li .active as you wish. A: A version of #1 will do the trick — with a separate template file for the tag. Lets say you have the models "Category" and "Article". 
class Category(models.Model): title = models.CharField(_("Name"), max_length=200) introduction = models.TextField(blank=True, null=True) slug = models.SlugField(help_text=_("Used for URLs")) sort_order = models.IntegerField(_("Sortierung")) class Article(models.Model): title = models.CharField(_("Full Name"), max_length=255) slug = models.SlugField(_("Slug Name"), unique=True, help_text=_("This is a short, descriptive name of article that will be used in the URL link to this item")) text = models.TextField(_("Text of Article"), blank=True, null=True) category = models.ForeignKey(Category) in your views you would pass the category you are viewing to the template: @render_to('cat_index.html') def category_view(request,string): cat = Category.objects.get(slug=string) articles = Article.objects.filter(category = cat).order_by('date') return { 'articles':articles, 'category':cat, } (Note: using the annoying render_to-decorator – same as render_to_response) and in your template you call a inclusion_tag like this: @register.inclusion_tag('snippets/navigation.html') def navigation(cat=None): return {'cats':Category.objects.order_by('sort_order'), 'cat':cat } by using this in your base-template (often called base.html) {% navigation category %} Now in the inclusions_tags's template (snippets/navigation.html) you would for-loop over cats and if one of it equals cat you can assign other styles <ul> {% for c in cats %} <li{% ifequal c cat %} class="active"{% endifequal %}> <a href="{{c|url}}">{{ c }}</a> </li> {% endfor %} </ul> A: This is a rather common problems and I've come up with some various ways to solve it. Since you're asking for options, here's 3 other alternative ways achieve this effect. The options you mentioned and these listed below all have thier positives and negatives. It's really up to you to decide which is a best fit. Alternate 1 - Use Regular Expressions and a Hash Table This could be performed either client-side (less advantageous) or server-side (a better pick). To do this you could have a tag that had 1 input: a regular expression. In use it would look like this... // In base.html... <li class="tab {% is_tab_active r'^/cars/' %}"><a>Cars</a></li> <li class="tab {% is_tab_active r'^/trucks/' %}"><a>Trucks</a></li> The custom tag applies the regular expression against the current page being viewed. If successfull, it adds a css class "active" if not "inactive" (or whatever your CSS classes are). I've been pondering this method for a while. I feel as if there should be some good way to come up with a way to tie it into urls.py, but I haven't seen it yet. Alternate 2 - Use CSS If you were to identify each [body] tag, or at least have a common template for the sections of your site, CSS could be used to assign which was active. Consider the following: body.cars_section .navigation #cars_tab { color: #00000; } body.truck_section .navigation #trucks_tab { color: #00000; } For your base template... <body class="{% block category %}{% endblock %}"> ... <ul class="navigation"> <li id="cars_tab"><a>Cars</a></li> <li id="trucks_tab"><a>Trucks</a></li> Then for any page you simply put the category it's a part of (matching the CSS rule)... {% extends "base.html" %} ... {% block category %}cars_section{% endblock %} Alternate 3 - Have Some Bloated Middleware Django lets you write Middleware to affect the behavior of just about whatever you want. This seems like a bloated and complex route with potential negative performance impact, but I figured I'd at least mention it as an option.
django - using a common header with some dynamic elements
I'm planning to create a website using django that will have a common header throughout the entire website. I've read django's documentation on templating inheritance, but I can't seem to find an elegant solution for the "dynamic" elements in my header. For example, the header in the website will include tabs, say similar to http://www.google.com/ (where it has "Web", "Images", etc), where the selected tab will describe your current location in the website. Using the django template inheritance, it would seem like you would create a base template like this: <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <link rel="stylesheet" href="style.css" /> <title>{% block title %}My Amazing Site{% endblock %}</title> </head> <body> <div id="header"> {% block header %} .... html to create tabs ... {% endblock header %} </div> and then in all of my other pages, i would do this: {% extends "base.html" %} {% block header % } .... html to create tabs with one tab "selected" ... {% endblock header %} which seems annoying as every single one of my pages would have to have duplicated HTML with the header information, but slightly different. So when its time to add a new tab, i have to modify every single HTML file. Upon further reading, it seems like some other possible solutions are: 1 - Create a custom template tag that takes in which tab is currently selected, that way in each HTML page i just call: {% block header %} {% mycustomtag abc %} {% endblock header %} I don't like this solution because it would requiring placing HTML into the python code for creating this custom tag. 2 - Create X number of sub-templates of base.html, all with the appropriate tab selected. Then each page would inherit from the appropriate sub-template based on which tab they want selected. This solution seems fine, except for the fact that it will require X number of almost exactly the same HTML, and still runs into the issue of having to modify all the files when a tab is added or removed. 3 - Use javascript (like jquery) to modify the header on page load to "select" the correct tab. This solution is fine but then would require one to remember to add this functionality to every page's javascript. the good part is that the header HTML would only live in a single HTML file. Any other suggestions? Thanks!
[ "I'm assuming each tab is a list item in your template base.html.\n<ul>\n <li>Tab 1</li>\n <li>Tab 2</li>\n ...\n</ul>\n\nAdd an extra block to each li.\n<ul>\n <li class=\"{% block class_tab1 %}inactive{% endblock %}\">Tab 1</li>\n <li class=\"{% block class_tab2 %}inactive{% endblock %}\">Tab 2</li>\n <li class=\"{% block class_tab3 %}inactive{% endblock %}\">Tab 3</li>\n ...\n</ul>\n\nThen in your template if tab 1 is to be selected:\n{% extends \"base.html\" %}\n\n{% block class_tab1 %}active{% endblock %}\n...\n\nSo the html rendered for Tab 1 is:\n<ul>\n <li class=\"active\">Tab 1</li>\n <li class=\"inactive\">Tab 2</li>\n <li class=\"inactive\">Tab 3</li>\n ...\n</ul>\n\nand you can write CSS to target the li .active as you wish.\n", "A version of #1 will do the trick — with a separate template file for the tag.\nLets say you have the models \"Category\" and \"Article\".\nclass Category(models.Model):\n title = models.CharField(_(\"Name\"), max_length=200)\n introduction = models.TextField(blank=True, null=True)\n slug = models.SlugField(help_text=_(\"Used for URLs\"))\n sort_order = models.IntegerField(_(\"Sortierung\"))\n\nclass Article(models.Model):\n title = models.CharField(_(\"Full Name\"), max_length=255)\n slug = models.SlugField(_(\"Slug Name\"), unique=True, help_text=_(\"This is a short, descriptive name of article that will be used in the URL link to this item\"))\n text = models.TextField(_(\"Text of Article\"), blank=True, null=True)\n category = models.ForeignKey(Category)\n\nin your views you would pass the category you are viewing to the template:\n@render_to('cat_index.html')\ndef category_view(request,string):\n\n cat = Category.objects.get(slug=string)\n articles = Article.objects.filter(category = cat).order_by('date')\n return {\n 'articles':articles,\n 'category':cat,\n }\n\n(Note: using the annoying render_to-decorator – same as render_to_response)\nand in your template you call a inclusion_tag like this:\[email protected]_tag('snippets/navigation.html')\ndef navigation(cat=None):\n return {'cats':Category.objects.order_by('sort_order'),\n 'cat':cat\n }\n\nby using this in your base-template (often called base.html)\n{% navigation category %}\n\nNow in the inclusions_tags's template (snippets/navigation.html) you would for-loop over cats and if one of it equals cat you can assign other styles\n <ul> \n {% for c in cats %}\n <li{% ifequal c cat %} class=\"active\"{% endifequal %}>\n <a href=\"{{c|url}}\">{{ c }}</a>\n </li>\n {% endfor %}\n </ul>\n\n", "This is a rather common problems and I've come up with some various ways to solve it.\nSince you're asking for options, here's 3 other alternative ways achieve this effect. The options you mentioned and these listed below all have thier positives and negatives. It's really up to you to decide which is a best fit.\nAlternate 1 - Use Regular Expressions and a Hash Table\nThis could be performed either client-side (less advantageous) or server-side (a better pick). To do this you could have a tag that had 1 input: a regular expression. In use it would look like this...\n// In base.html...\n\n<li class=\"tab {% is_tab_active r'^/cars/' %}\"><a>Cars</a></li>\n<li class=\"tab {% is_tab_active r'^/trucks/' %}\"><a>Trucks</a></li>\n\nThe custom tag applies the regular expression against the current page being viewed. If successfull, it adds a css class \"active\" if not \"inactive\" (or whatever your CSS classes are).\nI've been pondering this method for a while. 
I feel as if there should be some good way to come up with a way to tie it into urls.py, but I haven't seen it yet.\nAlternate 2 - Use CSS\nIf you were to identify each [body] tag, or at least have a common template for the sections of your site, CSS could be used to assign which was active. Consider the following:\nbody.cars_section .navigation #cars_tab { color: #00000; }\nbody.truck_section .navigation #trucks_tab { color: #00000; }\n\nFor your base template...\n<body class=\"{% block category %}{% endblock %}\">\n\n...\n\n<ul class=\"navigation\">\n <li id=\"cars_tab\"><a>Cars</a></li>\n <li id=\"trucks_tab\"><a>Trucks</a></li>\n\nThen for any page you simply put the category it's a part of (matching the CSS rule)...\n{% extends \"base.html\" %}\n\n...\n\n{% block category %}cars_section{% endblock %}\n\nAlternate 3 - Have Some Bloated Middleware\nDjango lets you write Middleware to affect the behavior of just about whatever you want. This seems like a bloated and complex route with potential negative performance impact, but I figured I'd at least mention it as an option.\n" ]
[ 9, 1, 1 ]
[]
[]
[ "django", "javascript", "python" ]
stackoverflow_0001463153_django_javascript_python.txt
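The last answer describes its "Alternate 1" request-path approach without showing code. Below is a minimal sketch of such a template tag; the module name nav_tags, the tag name is_tab_active, and the use of simple_tag with takes_context are assumptions of this sketch (it needs a reasonably recent Django plus the request context processor), not something taken from the thread.

    # templatetags/nav_tags.py (hypothetical module name)
    import re
    from django import template

    register = template.Library()

    @register.simple_tag(takes_context=True)
    def is_tab_active(context, pattern):
        # Return a CSS class based on whether the current path matches pattern.
        request = context.get('request')
        if request is not None and re.match(pattern, request.path):
            return 'active'
        return 'inactive'

In a template this would be used roughly as <li class="tab {% is_tab_active '^/cars/' %}"><a>Cars</a></li>, so the tab markup lives in one place and only the URL patterns decide which tab is highlighted.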
Q: python system calls with this command, I get only the file called OUTPUT (in reality I have many more --include flags) - so works as expected: os.system("rsync --rsh=ssh -arvuP --include='OUTPUT' --exclude='*' user@host:there/ ./here") In this case, the --include and --exclude flags are ignored: subprocess.call("rsync --rsh=ssh -arvuP --include='OUTPUT' --exclude='*' user@host:there/ ./here".split()) I wonder what I am doing wrong? Thank you much! Edit: Sorry, this is on OS X Leopard, and I get all the files... A: Try using subprocess.call with shell=True, it will simulate os.system more closely: subprocess.call("...", shell=True) A: Python does have an rsync module if I am not wrong, why not use that instead of a call. It will make your app more manageable.
python system calls
With this command, I get only the file called OUTPUT (in reality I have many more --include flags) - so it works as expected: os.system("rsync --rsh=ssh -arvuP --include='OUTPUT' --exclude='*' user@host:there/ ./here") In this case, the --include and --exclude flags are ignored: subprocess.call("rsync --rsh=ssh -arvuP --include='OUTPUT' --exclude='*' user@host:there/ ./here".split()) I wonder what I am doing wrong? Thank you very much! Edit: Sorry, this is on OS X Leopard, and I get all the files...
[ "Try using subprocess.call with shell=True, it will simulate os.system more closely:\nsubprocess.call(\"...\", shell=True)\n\n", "Python does have an rsync module if I am not wrong, why not use that instead of a call. It will make your app more manageable.\n" ]
[ 3, 1 ]
[]
[]
[ "python", "shell" ]
stackoverflow_0001462878_python_shell.txt
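A likely reason the flags were ignored in the subprocess.call version: str.split() keeps the literal single quotes, so rsync receives the patterns 'OUTPUT' and '*' with the quote characters included and the filters match nothing. A small sketch of the two usual fixes, reusing the command from the question (host and paths are the question's placeholders):

    import shlex
    import subprocess

    cmd = ("rsync --rsh=ssh -arvuP --include='OUTPUT' --exclude='*' "
           "user@host:there/ ./here")

    # 1) Let a shell interpret the quoting, which is what os.system() did.
    subprocess.call(cmd, shell=True)

    # 2) No shell: build the argument list with the quotes already stripped.
    #    shlex.split() applies shell-style quoting rules, unlike str.split().
    subprocess.call(shlex.split(cmd))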
Q: How to create user defined fields in Django Ok, I am working on a Django application with several different models, namely Accounts, Contacts, etc, each with a different set of fields. I need to be able to allow each of my users to define their own fields in addition to the existing fields. I have seen several different ways to implement this, from having a large number of CustomFields and just mapping a custom name to each field used by each user. I have also seem recommendations for implementing complex mapping or XML/JSON style storage/retrieval of user defined fields. So my question is this, has anyone implemented user defined fields in a Django application? If so, how did you do it and what was your experience with the overall implementation (stability, performance, etc)? Update: My goal is to allow each of my users to create n number of each record type (accounts, contacts, etc) and associate user defined data with each record. So for example, one of my users might want to associate an SSN with each of his contacts, so I would need to store that additional field for each Contact record he creates. Thanks! Mark A: What if you were to use a ForeignKey? This code (untested and for demo) is assuming there is a system-wide set of custom fields. To make it user-specific, you'd add a "user = models.ForiegnKey(User)" onto the class CustomField. class Account(models.Model): name = models.CharField(max_length=75) # ... def get_custom_fields(self): return CustomField.objects.filter(content_type=ContentType.objects.get_for_model(Account)) custom_fields = property(get_fields) class CustomField(models.Model): """ A field abstract -- it describe what the field is. There are one of these for each custom field the user configures. """ name = models.CharField(max_length=75) content_type = models.ForeignKey(ContentType) class CustomFieldValueManager(models.Manager): get_value_for_model_instance(self, model): content_type = ContentType.objects.get_for_model(model) return self.filter(model__content_type=content_type, model__object_id=model.pk) class CustomFieldValue(models.Model): """ A field instance -- contains the actual data. There are many of these, for each value that corresponds to a CustomField for a given model. """ field = models.ForeignKey(CustomField, related_name='instance') value = models.CharField(max_length=255) model = models.GenericForeignKey() objects = CustomFieldValueManager() # If you wanted to enumerate the custom fields and their values, it would look # look like so: account = Account.objects.get(pk=1) for field in account.custom_fields: print field.name, field.instance.objects.get_value_for_model_instance(account)
How to create user defined fields in Django
Ok, I am working on a Django application with several different models, namely Accounts, Contacts, etc, each with a different set of fields. I need to be able to allow each of my users to define their own fields in addition to the existing fields. I have seen several different ways to implement this, from having a large number of CustomFields and just mapping a custom name to each field used by each user. I have also seen recommendations for implementing complex mapping or XML/JSON style storage/retrieval of user defined fields. So my question is this, has anyone implemented user defined fields in a Django application? If so, how did you do it and what was your experience with the overall implementation (stability, performance, etc)? Update: My goal is to allow each of my users to create n number of each record type (accounts, contacts, etc) and associate user defined data with each record. So for example, one of my users might want to associate an SSN with each of his contacts, so I would need to store that additional field for each Contact record he creates. Thanks! Mark
[ "What if you were to use a ForeignKey?\nThis code (untested and for demo) is assuming there is a system-wide set of custom fields. To make it user-specific, you'd add a \"user = models.ForiegnKey(User)\" onto the class CustomField.\nclass Account(models.Model):\n name = models.CharField(max_length=75)\n\n # ...\n\n def get_custom_fields(self):\n return CustomField.objects.filter(content_type=ContentType.objects.get_for_model(Account))\n custom_fields = property(get_fields)\n\nclass CustomField(models.Model):\n \"\"\"\n A field abstract -- it describe what the field is. There are one of these\n for each custom field the user configures.\n \"\"\"\n name = models.CharField(max_length=75)\n content_type = models.ForeignKey(ContentType)\n\nclass CustomFieldValueManager(models.Manager):\n\n get_value_for_model_instance(self, model):\n content_type = ContentType.objects.get_for_model(model)\n return self.filter(model__content_type=content_type, model__object_id=model.pk)\n\n\nclass CustomFieldValue(models.Model):\n \"\"\"\n A field instance -- contains the actual data. There are many of these, for\n each value that corresponds to a CustomField for a given model.\n \"\"\"\n field = models.ForeignKey(CustomField, related_name='instance')\n value = models.CharField(max_length=255)\n model = models.GenericForeignKey()\n\n objects = CustomFieldValueManager()\n\n# If you wanted to enumerate the custom fields and their values, it would look\n# look like so:\n\naccount = Account.objects.get(pk=1)\nfor field in account.custom_fields:\n print field.name, field.instance.objects.get_value_for_model_instance(account)\n\n" ]
[ 4 ]
[]
[]
[ "configuration", "django", "modeling", "python" ]
stackoverflow_0001463242_configuration_django_modeling_python.txt
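A trimmed-down sketch of the answer's idea, using the SSN example from the question; the model and field names here are my own simplification (one value row per custom field per contact), written against current Django where on_delete is required:

    from django.contrib.auth.models import User
    from django.db import models

    class Contact(models.Model):
        owner = models.ForeignKey(User, on_delete=models.CASCADE)
        name = models.CharField(max_length=75)

    class CustomField(models.Model):
        # Which user defined the field and what it is called ("SSN", ...).
        owner = models.ForeignKey(User, on_delete=models.CASCADE)
        name = models.CharField(max_length=75)

    class CustomFieldValue(models.Model):
        field = models.ForeignKey(CustomField, on_delete=models.CASCADE)
        contact = models.ForeignKey(Contact, related_name='custom_values',
                                    on_delete=models.CASCADE)
        value = models.CharField(max_length=255)

    # Usage: attach an SSN to one of a user's contacts.
    # ssn = CustomField.objects.create(owner=some_user, name='SSN')
    # CustomFieldValue.objects.create(field=ssn, contact=some_contact, value='078-05-1120')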
Q: How can this python function code work? this is from the source code of csv2rec in matplotlib how can this function work, if its only parameters are 'func, default'? def with_default_value(func, default): def newfunc(name, val): if ismissing(name, val): return default else: return func(val) return newfunc ismissing takes a name and a value and determines if the row should be masked in a numpy array. func will either be str, int, float, or dateparser...it converts data. Maybe not important. I'm just wondering how it can get a 'name' and a 'value' I'm a beginner. Thanks for any 2cents! I hope to get good enough to help others! A: This with_default_value function is what's often referred to (imprecisely) as "a closure" (technically, the closure is rather the inner function that gets returned, here newfunc -- see e.g. here). More generically, with_default_value is a higher-order function ("HOF"): it takes a function (func) as an argument, it also returns a function (newfunc) as the result. I've seen answers confusing this with the decorator concept and construct in Python, which is definitely not the case -- especially since you mention func as often being a built-in such as int. Decorators are also higher-order functions, but rather specific ones: ones which return a decorated, i.e. "enriched", version of their function argument (which must be the only argument -- "decorators with arguments" are obtained through one more level of function/closure nesting, not by giving the decorator HOF more than one argument), which gets reassigned to exactly the same name as that function argument (and so typically has the same signature -- using a decorator otherwise would be extremely peculiar, un-idiomatic, unreadable, etc). So forget decorators, which have absolutely nothing to do with the case, and focus on the newfunc closure. A lexically nested function can refer to (though not rebind) all local variable names (including argument names, since arguments are local variables) of the enclosing function(s) -- that's why it's known as a closure: it's "closed over" these "free variables". Here, newfunc can refer to func and default -- and does. Higher-order functions are a very natural thing in Python, especially since functions are first-class objects (so there's nothing special you need to do to pass them as arguments, return them as function values, or even storing them in lists or other containers, etc), and there's no namespace distinction between functions and other kinds of objects, no automatic calling of functions just because they're mentioned, etc, etc. (It's harder - a bit harder, or MUCH harder, depending - in other languages that do draw lots of distinctions of this sort). In Python, mentioning a function is just that -- a mention; the CALL only happens if and when the function object (referred to by name, or otherwise) is followed by parentheses. That's about all there is to this example -- please do feel free to edit your question, comment here, etc, if there's some other specific aspect that you remain in doubt about! Edit: so the OP commented courteously asking for more examples of "closure factories". Here's one -- imagine some abstract kind of GUI toolkit, and you're trying to do: for i in range(len(buttons)): buttons[i].onclick(lambda: mainwin.settitle("button %d click!" % i)) but this doesn't work right -- i within the lambda is late-bound, so by the time one button is clicked i's value is always going to be the index of the last button, no matter which one was clicked. 
There are various feasible solutions, but a closure factory's an elegant possibility: def makeOnclick(message): return lambda: mainwin.settitle(message) for i in range(len(buttons)): buttons[i].onclick(makeOnClick("button %d click!" % i)) Here, we're using the closure factory to tweak the binding time of variables!-) In one specific form or another, this is a pretty common use case for closure factories. A: This is a Python decorator -- basically a function wrapper. (Read all about decorators in PEP 318 -- http://www.python.org/dev/peps/pep-0318/) If you look through the code, you will probably find something like this: def some_func(name, val): # ... some_func = with_default_value(some_func, 'the_default_value') The intention of this decorator seems to supply a default value if either the name or val arguments are missing (presumably, if they are set to None). A: As for why it works: with_default_value returns a function object, which is basically going to be a copy of that nested newfunc, with the 'func' call and default value substited with whatever was passed to with_default_value. If someone does 'foo = with_default_value(bar, 3)', the return value is basically going to be a new function: def foo(name, val): ifismissing(name, val): return 3 else: return bar(val) so you can then take that return value, and call it. A: This is a function that returns another function. name and value are the parameters of the returned function.
How can this python function code work?
This is from the source code of csv2rec in matplotlib. How can this function work if its only parameters are 'func' and 'default'? def with_default_value(func, default): def newfunc(name, val): if ismissing(name, val): return default else: return func(val) return newfunc ismissing takes a name and a value and determines if the row should be masked in a numpy array. func will either be str, int, float, or dateparser...it converts data. Maybe not important. I'm just wondering how it can get a 'name' and a 'value'. I'm a beginner. Thanks for any 2cents! I hope to get good enough to help others!
[ "This with_default_value function is what's often referred to (imprecisely) as \"a closure\" (technically, the closure is rather the inner function that gets returned, here newfunc -- see e.g. here). More generically, with_default_value is a higher-order function (\"HOF\"): it takes a function (func) as an argument, it also returns a function (newfunc) as the result.\nI've seen answers confusing this with the decorator concept and construct in Python, which is definitely not the case -- especially since you mention func as often being a built-in such as int. Decorators are also higher-order functions, but rather specific ones: ones which return a decorated, i.e. \"enriched\", version of their function argument (which must be the only argument -- \"decorators with arguments\" are obtained through one more level of function/closure nesting, not by giving the decorator HOF more than one argument), which gets reassigned to exactly the same name as that function argument (and so typically has the same signature -- using a decorator otherwise would be extremely peculiar, un-idiomatic, unreadable, etc).\nSo forget decorators, which have absolutely nothing to do with the case, and focus on the newfunc closure. A lexically nested function can refer to (though not rebind) all local variable names (including argument names, since arguments are local variables) of the enclosing function(s) -- that's why it's known as a closure: it's \"closed over\" these \"free variables\". Here, newfunc can refer to func and default -- and does.\nHigher-order functions are a very natural thing in Python, especially since functions are first-class objects (so there's nothing special you need to do to pass them as arguments, return them as function values, or even storing them in lists or other containers, etc), and there's no namespace distinction between functions and other kinds of objects, no automatic calling of functions just because they're mentioned, etc, etc. (It's harder - a bit harder, or MUCH harder, depending - in other languages that do draw lots of distinctions of this sort). In Python, mentioning a function is just that -- a mention; the CALL only happens if and when the function object (referred to by name, or otherwise) is followed by parentheses.\nThat's about all there is to this example -- please do feel free to edit your question, comment here, etc, if there's some other specific aspect that you remain in doubt about!\nEdit: so the OP commented courteously asking for more examples of \"closure factories\". Here's one -- imagine some abstract kind of GUI toolkit, and you're trying to do:\nfor i in range(len(buttons)):\n buttons[i].onclick(lambda: mainwin.settitle(\"button %d click!\" % i))\n\nbut this doesn't work right -- i within the lambda is late-bound, so by the time one button is clicked i's value is always going to be the index of the last button, no matter which one was clicked. There are various feasible solutions, but a closure factory's an elegant possibility:\ndef makeOnclick(message):\n return lambda: mainwin.settitle(message)\n\nfor i in range(len(buttons)):\n buttons[i].onclick(makeOnClick(\"button %d click!\" % i))\n\nHere, we're using the closure factory to tweak the binding time of variables!-) In one specific form or another, this is a pretty common use case for closure factories.\n", "This is a Python decorator -- basically a function wrapper. 
(Read all about decorators in PEP 318 -- http://www.python.org/dev/peps/pep-0318/)\nIf you look through the code, you will probably find something like this:\ndef some_func(name, val):\n # ...\nsome_func = with_default_value(some_func, 'the_default_value')\n\nThe intention of this decorator seems to supply a default value if either the name or val arguments are missing (presumably, if they are set to None).\n", "As for why it works:\nwith_default_value returns a function object, which is basically going to be a copy of that nested newfunc, with the 'func' call and default value substited with whatever was passed to with_default_value.\nIf someone does 'foo = with_default_value(bar, 3)', the return value is basically going to be a new function:\ndef foo(name, val):\n ifismissing(name, val):\n return 3\n else:\n return bar(val)\n\nso you can then take that return value, and call it. \n", "This is a function that returns another function. name and value are the parameters of the returned function.\n" ]
[ 8, 6, 1, 0 ]
[]
[]
[ "function", "matplotlib", "python" ]
stackoverflow_0001463588_function_matplotlib_python.txt
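A self-contained sketch showing how the returned newfunc ends up being called with a name and a value: the outer call fixes func and default, and csv2rec later calls the converter it got back. The ismissing below is a stand-in for matplotlib's helper, not the real implementation:

    def ismissing(name, val):
        # Stand-in: treat empty strings and None as missing.
        return val is None or val == ''

    def with_default_value(func, default):
        def newfunc(name, val):
            if ismissing(name, val):
                return default
            else:
                return func(val)
        return newfunc

    myint = with_default_value(int, -1)   # a converter whose default is -1
    print(myint('age', '42'))   # 42, because int('42') is called
    print(myint('age', ''))     # -1, because the cell counts as missing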
Q: Is it Pythonic for a function to return an iterable or non-iterable depending on its input? (Title and contents updated after reading Alex's answer) In general I believe that it's considered bad form (un-Pythonic) for a function to sometimes return an iterable and sometimes a single item depending on its parameters. For example struct.unpack always returns a tuple even if it contains only one item. I'm trying to finalise the API for a module and I have a few functions that can take one or more parameters (via *args) like this: a = s.read(10) # reads 10 bits and returns a single item b, c = s.read(5, 5) # reads 5 bits twice and returns a list of two items. So it returns a single item if there's only one parameter, otherwise it returns a list. Now I think this is fine and not at all confusing, but I suspect that others may disagree. The most common use-case for these functions would be to only want a single item returned, so always returning a list (or tuple) feels wrong: a, = s.read(10) # Prone to bugs when people forget to unpack the object a = s.read(10)[0] # Ugly and it's not clear only one item is being returned Another option is to have two functions: a = s.read(10) b, c = s.read_list(5, 5) which is OK, but it clutters up the API and requires the user to remember twice as many functions without adding any value. So my question is: Is sometimes returning an iterable and sometimes a single item confusing and un-Pythonic? If so what's the best option? Update: I think the general consensus is that it's very naughty to only return an iterable sometimes. I think that the best option for most cases would be to always return the iterable, even if it contained only one item. Having said that, for my particular case I think I'll go for the splitting into two functions (read(item) / readlist(*items)), the reasoning being that I think the single item case will happen much more often than the multiple item case, so it makes it easier to use and the API change less problematic for users. Thanks everyone. A: If you are going to be returning iterators sometimes, and single objects on others, I'd say return always an iterator, so you don't have to think about it. Generaly, you would use that function in a context that expects an iterator, so if you'd have to check if it where a list to iterate or an object to do just one time the work, then its easier to just return an iterator and iterate always, even if its one time. If you need to do something different if you are returned one element, just use if len(var):. Remember, consistency is a valuable good. I lean towards returning a consistent object, not necesarily the same type, but if I return ever an iterable, I return always an iterable. A: In general, I would have to say that returning two different types is bad practice. Imagine the next developer coming to read and maintain your code. At first he/she will read a method using your function and think "Ah, read() returns a single item." Later they will see code treating read()'s result as a list. At best this will simply confuse them and force them to examine read()'s usage. At worst they might think there is a bug in the implementation using read() and attempt to fix it. Finally, once they understand read() returns two possible types they will have to ask themselves "is there possibly a third return type I need to be ready for?" This reminds me of the saying: "Code as if the next guy to maintain your code is a homicidal maniac who knows where you live." 
A: Returning either a single object, or an iterable of objects, depending on arguments, is definitely hard to deal with. But the question in your title is much more general and the assertion that the standard library's function avoid (or "mostly avoid") returning different types based on the argument(s) is quite incorrect. There are many counter-examples. Functions copy.copy and copy.deepcopy return the same type as their argument, so of course they're "returning different types depending on" the argument. "Return same type as the input" is actually VERY common -- you could class here, also, the "fetch an object back from a container where it was put", though normally that's done with a method rather than a function;-). But also, in the same vein, consider itertools.repeat (once you iterate on its returned iterator), or, say, filter...: >>> filter(lambda x: x>'f', 'zaplepidop') 'zplpiop' >>> filter(lambda x: x>'f', list('zaplepidop')) ['z', 'p', 'l', 'p', 'i', 'o', 'p'] filtering a string returns a string, filtering a list returns a list. But wait, there's more!-) Functions pickle.loads and its friends (e.g. in module marshal &c) return objects of types entirely dependent on the value you're passing as an argument. So does built-in function eval (and similarly input, in Python 2.*). This is the second common pattern: construct or reconstruct an object as dictated by the value of the argument(s), of a wide (or even ubbounded) variety of possible types, and return it. I know no good example of the specific anti-pattern you've observed (and I do believe it's an anti-pattern, mildly -- not for any high-falutin' reason, just because it's pesky and inconvenient to deal with;-). Note that these cases I have exemplified ARE handy and convenient -- that's the real design discriminant in most standard library issue!-) A: The only situation where I would do this is with a parameterized function or method, where one or more of the parameters the caller gives determines the type returned; for example, a "factory" function that returns one of a logically similar family of objects: newCharacter = characterFactory("human", "male", "warrior") In the general case, where the caller doesn't get to specify, I'd avoid the "box of chocolates" behavior. :) A: It may not be a matter of "pythonic" but rather a matter of "good design". If you returnd different things AND nobody has to do typechecks on them, then it's probably okay. That's polymorphism for you. OTOH, if the caller has to "pierce the veil" then you have a design problem, known as a violation of the Liskov Substitution Principle. Pythonic or not, it is clearly not an OO design, which means it will be prone to bugs and programming inconveniences. A: I would have read(integer) and read_list(iterable). That way you could do read(10) and get back a single result and read_list([5, 5, 10, 5]) and get back a list of results. This is both more flexible and explicit.
Is it Pythonic for a function to return an iterable or non-iterable depending on its input?
(Title and contents updated after reading Alex's answer) In general I believe that it's considered bad form (un-Pythonic) for a function to sometimes return an iterable and sometimes a single item depending on its parameters. For example struct.unpack always returns a tuple even if it contains only one item. I'm trying to finalise the API for a module and I have a few functions that can take one or more parameters (via *args) like this: a = s.read(10) # reads 10 bits and returns a single item b, c = s.read(5, 5) # reads 5 bits twice and returns a list of two items. So it returns a single item if there's only one parameter, otherwise it returns a list. Now I think this is fine and not at all confusing, but I suspect that others may disagree. The most common use-case for these functions would be to only want a single item returned, so always returning a list (or tuple) feels wrong: a, = s.read(10) # Prone to bugs when people forget to unpack the object a = s.read(10)[0] # Ugly and it's not clear only one item is being returned Another option is to have two functions: a = s.read(10) b, c = s.read_list(5, 5) which is OK, but it clutters up the API and requires the user to remember twice as many functions without adding any value. So my question is: Is sometimes returning an iterable and sometimes a single item confusing and un-Pythonic? If so what's the best option? Update: I think the general consensus is that it's very naughty to only return an iterable sometimes. I think that the best option for most cases would be to always return the iterable, even if it contained only one item. Having said that, for my particular case I think I'll go for the splitting into two functions (read(item) / readlist(*items)), the reasoning being that I think the single item case will happen much more often than the multiple item case, so it makes it easier to use and the API change less problematic for users. Thanks everyone.
[ "If you are going to be returning iterators sometimes, and single objects on others, I'd say return always an iterator, so you don't have to think about it. \nGeneraly, you would use that function in a context that expects an iterator, so if you'd have to check if it where a list to iterate or an object to do just one time the work, then its easier to just return an iterator and iterate always, even if its one time.\nIf you need to do something different if you are returned one element, just use if len(var):.\nRemember, consistency is a valuable good.\nI lean towards returning a consistent object, not necesarily the same type, but if I return ever an iterable, I return always an iterable.\n", "In general, I would have to say that returning two different types is bad practice.\nImagine the next developer coming to read and maintain your code. At first he/she will read a method using your function and think \"Ah, read() returns a single item.\" \nLater they will see code treating read()'s result as a list. At best this will simply confuse them and force them to examine read()'s usage. At worst they might think there is a bug in the implementation using read() and attempt to fix it.\nFinally, once they understand read() returns two possible types they will have to ask themselves \"is there possibly a third return type I need to be ready for?\"\nThis reminds me of the saying: \"Code as if the next guy to maintain your code is a homicidal maniac who knows where you live.\" \n", "Returning either a single object, or an iterable of objects, depending on arguments, is definitely hard to deal with. But the question in your title is much more general and the assertion that the standard library's function avoid (or \"mostly avoid\") returning different types based on the argument(s) is quite incorrect. There are many counter-examples.\nFunctions copy.copy and copy.deepcopy return the same type as their argument, so of course they're \"returning different types depending on\" the argument. \"Return same type as the input\" is actually VERY common -- you could class here, also, the \"fetch an object back from a container where it was put\", though normally that's done with a method rather than a function;-). But also, in the same vein, consider itertools.repeat (once you iterate on its returned iterator), or, say, filter...:\n>>> filter(lambda x: x>'f', 'zaplepidop')\n'zplpiop'\n>>> filter(lambda x: x>'f', list('zaplepidop'))\n['z', 'p', 'l', 'p', 'i', 'o', 'p']\n\nfiltering a string returns a string, filtering a list returns a list. \nBut wait, there's more!-) Functions pickle.loads and its friends (e.g. in module marshal &c) return objects of types entirely dependent on the value you're passing as an argument. So does built-in function eval (and similarly input, in Python 2.*). This is the second common pattern: construct or reconstruct an object as dictated by the value of the argument(s), of a wide (or even ubbounded) variety of possible types, and return it.\nI know no good example of the specific anti-pattern you've observed (and I do believe it's an anti-pattern, mildly -- not for any high-falutin' reason, just because it's pesky and inconvenient to deal with;-). 
Note that these cases I have exemplified ARE handy and convenient -- that's the real design discriminant in most standard library issue!-)\n", "The only situation where I would do this is with a parameterized function or method, where one or more of the parameters the caller gives determines the type returned; for example, a \"factory\" function that returns one of a logically similar family of objects:\nnewCharacter = characterFactory(\"human\", \"male\", \"warrior\")\n\nIn the general case, where the caller doesn't get to specify, I'd avoid the \"box of chocolates\" behavior. :)\n", "It may not be a matter of \"pythonic\" but rather a matter of \"good design\". If you returnd different things AND nobody has to do typechecks on them, then it's probably okay. That's polymorphism for you. OTOH, if the caller has to \"pierce the veil\" then you have a design problem, known as a violation of the Liskov Substitution Principle. Pythonic or not, it is clearly not an OO design, which means it will be prone to bugs and programming inconveniences. \n", "I would have read(integer) and read_list(iterable).\nThat way you could do read(10) and get back a single result and read_list([5, 5, 10, 5]) and get back a list of results. This is both more flexible and explicit.\n" ]
[ 12, 3, 2, 1, 1, 1 ]
[ "In python lists are objects :) So no type mismatch\n" ]
[ -1 ]
[ "python", "return_value" ]
stackoverflow_0001461392_python_return_value.txt
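A sketch of the split API the questioner's update settles on; the class and method names are illustrative, not taken from an existing library. read() always returns a single item and readlist() always returns a list, so no call site has to guess:

    class BitReader(object):
        def __init__(self, bits):
            self.bits = bits
            self.pos = 0

        def read(self, nbits):
            # Read nbits and return a single value.
            chunk = self.bits[self.pos:self.pos + nbits]
            self.pos += nbits
            return chunk

        def readlist(self, *sizes):
            # Read several fields and always return a list, even for one item.
            return [self.read(n) for n in sizes]

    s = BitReader('1010011101')
    a = s.read(4)            # '1010'
    b, c = s.readlist(3, 3)  # ['011', '101'], unpacked into b and c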
Q: Google app engine and python and objects that pass by value This is really a python language question, but its wrapped around a Google appengine specific problem. We have class User( db.Model ) : email = db.StringProperty() name = db.StringProperty() password = db.StringProperty() # more fields.. Because the user account is accessed so often, we keep a copy in session using gaeutilities (as a bonus question, is this bad on GAE? I thought I'd relieve the db a bit.) class UpdateProfile( webapp.RequestHandler ): def post( self ): # User posting update to his name self.session = sessions.Session() ####### # way#1: update the copy of the User object in SESSION, then .put() it self.session[ 'current_user' ].name = self.request.get( 'name' ) self.session[ 'current_user' ].put() # does not work. ####### ####### # way#2: make a copy of the User object in SESSION, then .put() it user = self.session[ 'current_user' ] user.name = self.request.get( 'name' ) user.put() # works to update the datastore, but the copy of user in self.session # is NOT UPDATED! I thought Python was # pass-by-reference. It is when you work with lists anyway. # Why isn't it "acting right" with this type of object?? ####### ####### # way#3: way that works. user = self.session[ 'current_user' ] user.name = self.request.get( 'name' ) user.put() self.session[ 'current_user' ] = user # works completely ####### What is happening in each of these 3 cases? Why don't cases 1 and 2 work? A: I'm guessing: Putting objects in the Session means that the objects is serialized ( pickled usually ) and stored somewhere (disk, memory, db). When it is retrieved from the Session, a new object is created from serialized the old state. In the first example each self.session[ 'current_user' ] gives you a new object, one of which you update and the other is saved to the db. In the 2nd you get one object, save it to the DB but not in the session. Btw, Python does "call by sharing", but that has nothing to do with your problem ;-) A: I'm the author of gaeutilities. I contacted Nick to talk about this, and he gave me some good ideas. I'll have a solution for this in the final 1.3 release. A: Sorry, I'm new to this site, and I don't see where to comment on an answer? Anyway, I've fixed the specific issue that instigated the original post. While serialization of data is still suboptimal, happening on write, I have managed to make sure assigning model entities as items in a session will work as expected. Rather than get into rewriting the serialization (would be a major rewrite), I instead chose to detect when a model was being inserted as a session data item, and set a ReferenceProperty for it. This means that the model objects never have to get the overhead of being serialized at all. If the entity the ReferenceProperty is associated is deleted, the ReferenceProperty in the session will be deleted when you try to load it as well. This does mean you'll have to catch exceptions such as KeyError that are raised by Session, as a dictionary should. I hope this is intuitive enough, but am open to any comments. I'll check this page again a few times over the next several weeks.
Google app engine and python and objects that pass by value
This is really a python language question, but its wrapped around a Google appengine specific problem. We have class User( db.Model ) : email = db.StringProperty() name = db.StringProperty() password = db.StringProperty() # more fields.. Because the user account is accessed so often, we keep a copy in session using gaeutilities (as a bonus question, is this bad on GAE? I thought I'd relieve the db a bit.) class UpdateProfile( webapp.RequestHandler ): def post( self ): # User posting update to his name self.session = sessions.Session() ####### # way#1: update the copy of the User object in SESSION, then .put() it self.session[ 'current_user' ].name = self.request.get( 'name' ) self.session[ 'current_user' ].put() # does not work. ####### ####### # way#2: make a copy of the User object in SESSION, then .put() it user = self.session[ 'current_user' ] user.name = self.request.get( 'name' ) user.put() # works to update the datastore, but the copy of user in self.session # is NOT UPDATED! I thought Python was # pass-by-reference. It is when you work with lists anyway. # Why isn't it "acting right" with this type of object?? ####### ####### # way#3: way that works. user = self.session[ 'current_user' ] user.name = self.request.get( 'name' ) user.put() self.session[ 'current_user' ] = user # works completely ####### What is happening in each of these 3 cases? Why don't cases 1 and 2 work?
[ "I'm guessing: \nPutting objects in the Session means that the objects is serialized ( pickled usually ) and stored somewhere (disk, memory, db). When it is retrieved from the Session, a new object is created from serialized the old state.\n\nIn the first example each self.session[ 'current_user' ] gives you a new object, one of which you update and the other is saved to the db.\nIn the 2nd you get one object, save it to the DB but not in the session.\n\nBtw, Python does \"call by sharing\", but that has nothing to do with your problem ;-)\n", "I'm the author of gaeutilities. I contacted Nick to talk about this, and he gave me some good ideas. I'll have a solution for this in the final 1.3 release.\n", "Sorry, I'm new to this site, and I don't see where to comment on an answer?\nAnyway, I've fixed the specific issue that instigated the original post. While serialization of data is still suboptimal, happening on write, I have managed to make sure assigning model entities as items in a session will work as expected. Rather than get into rewriting the serialization (would be a major rewrite), I instead chose to detect when a model was being inserted as a session data item, and set a ReferenceProperty for it. This means that the model objects never have to get the overhead of being serialized at all.\nIf the entity the ReferenceProperty is associated is deleted, the ReferenceProperty in the session will be deleted when you try to load it as well. This does mean you'll have to catch exceptions such as KeyError that are raised by Session, as a dictionary should. I hope this is intuitive enough, but am open to any comments. I'll check this page again a few times over the next several weeks.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001449257_google_app_engine_python.txt
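A toy illustration of why way #2 fails while way #3 works: session stores of this kind typically pickle values on assignment and rebuild a fresh copy on every read, so mutating the copy never touches the stored data. This is a generic sketch of that behaviour, not gaeutilities' actual code:

    import pickle

    class ToySession(object):
        def __init__(self):
            self._data = {}
        def __setitem__(self, key, value):
            self._data[key] = pickle.dumps(value)   # serialized on write
        def __getitem__(self, key):
            return pickle.loads(self._data[key])    # fresh copy on every read

    class User(object):
        def __init__(self, name):
            self.name = name

    session = ToySession()
    session['current_user'] = User('Pete')

    u = session['current_user']           # a copy rebuilt from the pickle
    u.name = 'Peter'                      # way #2: mutates only the copy
    print(session['current_user'].name)   # still 'Pete'

    session['current_user'] = u           # way #3: write the updated copy back
    print(session['current_user'].name)   # now 'Peter'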
Q: How do I get this program to start over in python? I believe the word is "recurse" instead of 'start over.' I've created this program to hone my multiplication skills in the morning. I can get it to give me a multiplication problem, but how do I get it to ask me another one? from random import randint print 'Good Morning Pete!' X = randint(0, 10) Y = randint(0, 10) A = X * Y Z = int(raw_input('%i * %i = ? ' % (X, Y))) count = 0 if Z == A: count += 1 print 'Good Job!' else: print 'Sorry!' if count == '10': print 'Time to kill \'em' how do I get it to spit out a new problem for me to solve? I'm a beginner. Thanks all! A: Pete, you wouldn't need recursion in this case, but merely a loop. I suggest you put the bulk of the logic of this program (the part that asks the multiplcation problem and check your answer), into a function. Say One Problem(). This function could return 0 if you answered wrong, 1 if you answered correctly and -1 if you entered some key indicating that you want to stop. (BTW, this function is introduced to help you structure the program, make it more readeable but it is not needed for introducing a loop. you could well keep all this stuff inside the loop. Also, you should know that there are other loop constructs in python, for exampe while loops.) Then you'd just need in your main section something like that : GoodReplyCtr = 0 for i in range(0, 10): # or 100 or 1000 if you feel ambitious... cc = OneProblem() if cc < 0: break GoodReplyCtr += cc print(GoodReplyCtr) The concept of recursion (again not needed here), is when a function calls itself. This is a common practice when navigating graphs (like say the directory structure on you drive C:), or with some mathematical problems. We typically do not need to cover recursion early in the learning of computer languages concepts, but once you have a good mastery of things, you may find it quite useful (and challenging at time ;-) ) Keep at it! Math and python are cool. Edit: One last trick: You may find that you need to work on some multiplication tables more than other. Rather than using randint you can use the random's module random.choice() method to favor some numbers or to eliminate others. for example import random X = random.choice((2, 3, 4, 6, 7, 8, 9, 7, 9)) # see, no 0, 1,or 5 but more chance to get 7 or 9 A: I believe you want something like this from random import randint print 'Good Morning Pete!' count = 0 while True: X = randint(0, 10) Y = randint(0, 10) A = X * Y Z = int(raw_input('%i * %i = ? ' % (X, Y))) if Z == A: count += 1 print 'Good Job!' else: print 'Sorry!' if count == 10: print 'Time to kill \'em' break A: A loop? See the for statement and the range() function. They're in the Python tutorial. And you might want to read the next chapter in whatever book you're using to teach yourself programming. A: to incorporate the loop, you might add this to the beginning of your code: running = True while True: //Add your code here //Add this to the end of your code: print 'Another problem? Enter y or n' answer = raw_input().lower() if answer == 'n': running = False break elif answer == 'y': running = True this will allow the user to choose if they want another problem each time. A: I agree 100% with those who said this isn't a good case for recursion, but calls for a loop instead. However, for the sake of showing how it might be done, I post the code below: import random def do_mult(num_questions): x = random.randint(0, 10) y = random.randint(0, 10) a = x * y z = int(raw_input('%i * %i = ?' 
% (x, y))) if z == a: print 'good job!' else: print 'sorry!' if num_questions > 1: do_mult(num_questions - 1) do_mult(10)
How do I get this program to start over in python?
I believe the word is "recurse" instead of 'start over.' I've created this program to hone my multiplication skills in the morning. I can get it to give me a multiplication problem, but how do I get it to ask me another one? from random import randint print 'Good Morning Pete!' X = randint(0, 10) Y = randint(0, 10) A = X * Y Z = int(raw_input('%i * %i = ? ' % (X, Y))) count = 0 if Z == A: count += 1 print 'Good Job!' else: print 'Sorry!' if count == '10': print 'Time to kill \'em' how do I get it to spit out a new problem for me to solve? I'm a beginner. Thanks all!
[ "Pete, you wouldn't need recursion in this case, but merely a loop.\nI suggest you put the bulk of the logic of this program (the part that asks the multiplcation problem and check your answer), into a function. Say One Problem(). This function could return 0 if you answered wrong, 1 if you answered correctly and -1 if you entered some key indicating that you want to stop. (BTW, this function is introduced to help you structure the program, make it more readeable but it is not needed for introducing a loop. you could well keep all this stuff inside the loop. Also, you should know that there are other loop constructs in python, for exampe while loops.)\nThen you'd just need in your main section something like that :\nGoodReplyCtr = 0\nfor i in range(0, 10): # or 100 or 1000 if you feel ambitious...\n cc = OneProblem()\n if cc < 0:\n break\n GoodReplyCtr += cc\n\nprint(GoodReplyCtr)\n\nThe concept of recursion (again not needed here), is when a function calls itself. This is a common practice when navigating graphs (like say the directory structure on you drive C:), or with some mathematical problems. We typically do not need to cover recursion early in the learning of computer languages concepts, but once you have a good mastery of things, you may find it quite useful (and challenging at time ;-) )\nKeep at it! Math and python are cool.\nEdit: One last trick:\nYou may find that you need to work on some multiplication tables more than other. Rather than using randint you can use the random's module random.choice() method to favor some numbers or to eliminate others. for example\nimport random\nX = random.choice((2, 3, 4, 6, 7, 8, 9, 7, 9)) # see, no 0, 1,or 5 but more chance to get 7 or 9\n\n", "I believe you want something like this\nfrom random import randint\n\nprint 'Good Morning Pete!'\n\ncount = 0\n\nwhile True:\n X = randint(0, 10)\n Y = randint(0, 10)\n A = X * Y\n Z = int(raw_input('%i * %i = ? ' % (X, Y)))\n\n if Z == A:\n count += 1\n print 'Good Job!'\n else:\n print 'Sorry!'\n\n\n if count == 10:\n print 'Time to kill \\'em'\n break\n\n", "A loop? See the for statement and the range() function. They're in the Python tutorial.\nAnd you might want to read the next chapter in whatever book you're using to teach yourself programming.\n", "to incorporate the loop, you might add this to the beginning of your code:\nrunning = True\nwhile True:\n //Add your code here\n//Add this to the end of your code:\n\nprint 'Another problem? Enter y or n'\n answer = raw_input().lower()\n if answer == 'n':\n running = False\n break\n elif answer == 'y':\n running = True\n\n\nthis will allow the user to choose if they want another problem each time.\n", "I agree 100% with those who said this isn't a good case for recursion, but calls for a loop instead. However, for the sake of showing how it might be done, I post the code below:\nimport random\ndef do_mult(num_questions):\n x = random.randint(0, 10)\n y = random.randint(0, 10)\n a = x * y\n z = int(raw_input('%i * %i = ?' % (x, y)))\n if z == a:\n print 'good job!'\n else:\n print 'sorry!'\n if num_questions > 1:\n do_mult(num_questions - 1)\n\ndo_mult(10)\n\n" ]
[ 3, 2, 1, 0, 0 ]
[]
[]
[ "python", "recursion" ]
stackoverflow_0001463710_python_recursion.txt
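The first answer sketches a main loop around a OneProblem() helper but leaves the helper unwritten. One possible version, in the same Python 2 style as the rest of the thread (the 'q to quit' prompt is an addition of this sketch):

    from random import randint

    def OneProblem():
        # Returns 1 for a correct answer, 0 for a wrong one, -1 to quit.
        x, y = randint(0, 10), randint(0, 10)
        reply = raw_input('%i * %i = ? (q to quit) ' % (x, y))
        if reply.strip().lower() == 'q':
            return -1
        return 1 if int(reply) == x * y else 0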
Q: What is the accepted python alternative to C++ overloaded input stream operators? In C++, you can do this to easily read data into a class: istream& operator >> (istream& instream, SomeClass& someclass) { ... } In python, the only way I can find to read from the console is the "raw_input" function, which isn't very adaptable to this sort of thing. Is there a pythonic way to go about this? A: You are essentially looking for deserialization. Python has a myriad of options for this depending on the library used. The default is python pickling. There are lots of other options you can have a look here. A: No, there's no widespread Pythonic convention for "read the next instance of class X from this open input text file". I believe this applies to most languages, including e.g. Java; C++ is kind of the outlier there (and many C++ shops forbid the operator>> use in their local style guides). Serialization (to/from JSON or XML if you need allegedly-human readable text files), suggested by another answer, is one possible approach, but not too hot (no standardized way to serialize completely general class instances to either XML or JSON). A: Rather than use raw_input, you can read from sys.stdin (a file-like object): import sys input_line = sys.stdin.readline() # do something with input_line
What is the accepted python alternative to C++ overloaded input stream operators?
In C++, you can do this to easily read data into a class: istream& operator >> (istream& instream, SomeClass& someclass) { ... } In python, the only way I can find to read from the console is the "raw_input" function, which isn't very adaptable to this sort of thing. Is there a pythonic way to go about this?
[ "You are essentially looking for deserialization. Python has a myriad of options for this depending on the library used. The default is python pickling. There are lots of other options you can have a look here.\n", "No, there's no widespread Pythonic convention for \"read the next instance of class X from this open input text file\". I believe this applies to most languages, including e.g. Java; C++ is kind of the outlier there (and many C++ shops forbid the operator>> use in their local style guides). Serialization (to/from JSON or XML if you need allegedly-human readable text files), suggested by another answer, is one possible approach, but not too hot (no standardized way to serialize completely general class instances to either XML or JSON).\n", "Rather than use raw_input, you can read from sys.stdin (a file-like object):\nimport sys\ninput_line = sys.stdin.readline()\n# do something with input_line\n\n" ]
[ 6, 3, 2 ]
[]
[]
[ "input", "operator_overloading", "python" ]
stackoverflow_0001463499_input_operator_overloading_python.txt
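One common Python counterpart to operator>> that the answers point toward but do not spell out: give the class an alternate constructor (a classmethod) that parses one instance from a line of an open stream. The record format below ('name value' per line) is just an example:

    import sys

    class SomeClass(object):
        def __init__(self, name, value):
            self.name = name
            self.value = value

        @classmethod
        def from_stream(cls, stream):
            # Read one 'name value' record from the stream, e.g. sys.stdin.
            line = stream.readline()
            if not line:
                raise EOFError('no more records')
            name, value = line.split()
            return cls(name, int(value))

    # obj = SomeClass.from_stream(sys.stdin)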
Q: Calculate brute force size dynamically? How you could calculate size of brute force method dynamically? For example how many iterations and space would take if you printed all IPv6 addresses from 0:0:0:0:0:0:0:0 - ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff to file? The tricky parts are those when length of line varies. IP address is only example. Idea is that you give the format and maximum lenghts of given parts. So if variable type is '%c' (char), and maxlen is 26 then iteration count is 26 and needed space in human format in text file is 26 + 26 (one char for separator) def calculate(format, rules): end = format for i in rules: (vartype, maxlen) = rules[i] end = end.replace(i, vartype % maxlen) start = format for i in rules: (vartype, maxlen) = rules[i] minlen = 0 start = start.replace(i, vartype % minlen) start_bytes = len(start) end_bytes = len(end) # how to add for example IPv4 calculations # 0.0.0.0 - 9.9.9.9 # 10.10.10.10 - 99.99.99.99 # 100.100.100.100 - 255.255.255.255 iterations = 0 for i in rules: if format.find(i) is not -1: (vartype, maxlen) = rules[i] if iterations == 0: iterations = int(maxlen) + 1 else: iterations *= int(maxlen) + 1 iterations -= 1 needed_space = 0 if start_bytes == end_bytes: # +1 for separator (space / new line) needed_space = (1 + start_bytes) * iterations else: needed_space = "How to calculate?" return [iterations, needed_space, start, end, start_bytes, end_bytes] if __name__ == '__main__': # IPv4 print calculate( "%a.%b.%c.%d", { '%a': ['%d', 255], '%b': ['%d', 255], '%c': ['%d', 255], '%d': ['%d', 255] }, ) # IPv4 zero filled version print calculate( "%a.%b.%c.%d", { '%a': ['%03d', 255], '%b': ['%03d', 255], '%c': ['%03d', 255], '%d': ['%03d', 255] }, ) # IPv6 print calculate( "%a:%b:%c:%d:%e:%f:%g:%h", { '%a': ['%x', 65535], '%b': ['%x', 65535], '%c': ['%x', 65535], '%d': ['%x', 65535], '%e': ['%x', 65535], '%f': ['%x', 65535], '%g': ['%x', 65535], '%h': ['%x', 65535] }, ) # days in year, simulate with day numbers print calculate( "ddm%a", #ddmmyy { '%a': ['%03d', 365], }, ) So for example: 1.2.3.4 takes 7 bytes 9.9.9.10 takes 8 bytes 1.1.1.100 takes 9 bytes 5.7.10.100 takes 10 bytes 128.1.1.1 takes 9 bytes and so on Example 0.0.0.0 - 10.10.10.10: iterations = 0 needed_space = 0 for a in range(0, 11): for b in range(0, 11): for c in range(0, 11): for d in range(0, 11): line = "%d.%d.%d.%d\n" % (a, b, c, d) needed_space += len(line) iterations += 1 print "iterations: %d needed_space: %d bytes" % (iterations, needed_space) iterations: 14641 needed_space: 122452 bytes Into print calculate( "%a.%b.%c.%d", { '%a': ['%d', 10], '%b': ['%d', 10], '%c': ['%d', 10], '%d': ['%d', 10] }, ) Result: [14641, 122452] A: Using combinatorics and discrete math: The IPv4 address space is 256*256*256*256 = 2^32 = 4,294,967,296 addresses. IPv6 has 2^128 addresses (8 groups of 16*16*16*16). An IPv4 address use 32 bits, so 32 bits * 4,294,967,296 addresses = 16 gigabytes if stored e.g. on disk. An IPv6 address uses 128 bits, so 128 bits * 2^128 addresses = 5.07 * 10^30 gigabytes. A: First start by counting the number of lines you need. IPv6 addresses are 128 bits, so your output file will be 2128 lines long. That's roughly 3.4 × 1038 bytes, if all your lines are empty. A terabyte is only about 1012 bytes, so you would need over 3 × 1026 1-terabyte hard drives just to store the empty lines before putting any data in them. Storing 40 bytes per line would require proportionally more storage. 
A: Breaking it up into components is probably the way to go; for IPv4 you've got four parts, separated by three dots, and each part can be 1, 2 or 3 characters (0-9, 10-99, 100-255) long. So your combinations are: comp_length = {1: 10, 2: 90, 3: 156} You can work out the total lengths by iterating through each combo: def ipv4_comp_length(n=4): if n == 1: return comp_length res = {} for rest, restcnt in ipv4_comp_length(n-1).iteritems(): for first, firstcnt in comp_length.iteritems(): l = first + 1 + rest # "10" + "." + "0.0.127" res[l] = res.get(l,0) + (firstcnt * restcnt) return res print sum( l*c for l,c in ipv4_comp_length().iteritems() ) Note that this doesn't take record separators into account (the space in "1.2.3.4 1.2.3.5"). Extending to IPv6 should be mostly straightforward -- unless you want to cope with abbreviated IPv6 addresses like 0::0 == 0:0:0:0:0:0:0:0.
Calculate brute force size dynamically?
How you could calculate size of brute force method dynamically? For example how many iterations and space would take if you printed all IPv6 addresses from 0:0:0:0:0:0:0:0 - ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff to file? The tricky parts are those when length of line varies. IP address is only example. Idea is that you give the format and maximum lenghts of given parts. So if variable type is '%c' (char), and maxlen is 26 then iteration count is 26 and needed space in human format in text file is 26 + 26 (one char for separator) def calculate(format, rules): end = format for i in rules: (vartype, maxlen) = rules[i] end = end.replace(i, vartype % maxlen) start = format for i in rules: (vartype, maxlen) = rules[i] minlen = 0 start = start.replace(i, vartype % minlen) start_bytes = len(start) end_bytes = len(end) # how to add for example IPv4 calculations # 0.0.0.0 - 9.9.9.9 # 10.10.10.10 - 99.99.99.99 # 100.100.100.100 - 255.255.255.255 iterations = 0 for i in rules: if format.find(i) is not -1: (vartype, maxlen) = rules[i] if iterations == 0: iterations = int(maxlen) + 1 else: iterations *= int(maxlen) + 1 iterations -= 1 needed_space = 0 if start_bytes == end_bytes: # +1 for separator (space / new line) needed_space = (1 + start_bytes) * iterations else: needed_space = "How to calculate?" return [iterations, needed_space, start, end, start_bytes, end_bytes] if __name__ == '__main__': # IPv4 print calculate( "%a.%b.%c.%d", { '%a': ['%d', 255], '%b': ['%d', 255], '%c': ['%d', 255], '%d': ['%d', 255] }, ) # IPv4 zero filled version print calculate( "%a.%b.%c.%d", { '%a': ['%03d', 255], '%b': ['%03d', 255], '%c': ['%03d', 255], '%d': ['%03d', 255] }, ) # IPv6 print calculate( "%a:%b:%c:%d:%e:%f:%g:%h", { '%a': ['%x', 65535], '%b': ['%x', 65535], '%c': ['%x', 65535], '%d': ['%x', 65535], '%e': ['%x', 65535], '%f': ['%x', 65535], '%g': ['%x', 65535], '%h': ['%x', 65535] }, ) # days in year, simulate with day numbers print calculate( "ddm%a", #ddmmyy { '%a': ['%03d', 365], }, ) So for example: 1.2.3.4 takes 7 bytes 9.9.9.10 takes 8 bytes 1.1.1.100 takes 9 bytes 5.7.10.100 takes 10 bytes 128.1.1.1 takes 9 bytes and so on Example 0.0.0.0 - 10.10.10.10: iterations = 0 needed_space = 0 for a in range(0, 11): for b in range(0, 11): for c in range(0, 11): for d in range(0, 11): line = "%d.%d.%d.%d\n" % (a, b, c, d) needed_space += len(line) iterations += 1 print "iterations: %d needed_space: %d bytes" % (iterations, needed_space) iterations: 14641 needed_space: 122452 bytes Into print calculate( "%a.%b.%c.%d", { '%a': ['%d', 10], '%b': ['%d', 10], '%c': ['%d', 10], '%d': ['%d', 10] }, ) Result: [14641, 122452]
[ "Using combinatorics and discrete math:\nThe IPv4 address space is 256*256*256*256 = 2^32 = 4,294,967,296 addresses.\nIPv6 has 2^128 addresses (8 groups of 16*16*16*16).\nAn IPv4 address use 32 bits, so 32 bits * 4,294,967,296 addresses = 16 gigabytes if stored e.g. on disk.\nAn IPv6 address uses 128 bits, so 128 bits * 2^128 addresses = 5.07 * 10^30 gigabytes.\n", "First start by counting the number of lines you need. IPv6 addresses are 128 bits, so your output file will be 2128 lines long. That's roughly 3.4 × 1038 bytes, if all your lines are empty. A terabyte is only about 1012 bytes, so you would need over 3 × 1026 1-terabyte hard drives just to store the empty lines before putting any data in them.\nStoring 40 bytes per line would require proportionally more storage. \n", "Breaking it up into components is probably the way to go; for IPv4 you've got four parts, separated by three dots, and each part can be 1, 2 or 3 characters (0-9, 10-99, 100-255) long. So your combinations are:\ncomp_length = {1: 10, 2: 90, 3: 156}\n\nYou can work out the total lengths by iterating through each combo:\ndef ipv4_comp_length(n=4):\n if n == 1:\n return comp_length\n res = {}\n for rest, restcnt in ipv4_comp_length(n-1).iteritems():\n for first, firstcnt in comp_length.iteritems():\n l = first + 1 + rest # \"10\" + \".\" + \"0.0.127\"\n res[l] = res.get(l,0) + (firstcnt * restcnt)\n return res\n\nprint sum( l*c for l,c in ipv4_comp_length().iteritems() )\n\nNote that this doesn't take record separators into account (the space in \"1.2.3.4 1.2.3.5\").\nExtending to IPv6 should be mostly straightforward -- unless you want to cope with abbreviated IPv6 addresses like 0::0 == 0:0:0:0:0:0:0:0.\n" ]
[ 4, 0, 0 ]
[]
[]
[ "brute_force", "math", "python" ]
stackoverflow_0001463832_brute_force_math_python.txt
Q: Json encoder python recursive reference I don't know whether I am doing the right thing here, basically I want both of my class to be json-serializable. import json class gpagelet(json.JSONEncoder): """ Holds 1) the pagelet xpath, which is a string 2) the list of pagelet shingles, list """ def __init__(self, parent): if not isinstance( parent, gwebpage): raise Exception("Parent must be an instance of gwebpage") self.parent = parent # This must be a gwebpage instance self.xpath = None # This is just an id for the pagelet (not unique across page), historically called xpath self.visibleShingles = [] self.invisibleShingles = [] self.urls = [] def __str__(self): """String representation of this object""" ret = "" ret += "xpath: %s\n" % self.xpath def appendShingles(): ret += "shingles: \n" for each in self.shingles: ret += "%s\n" % str(each) ret += "urls:\n" for each in self.urls: ret += "%s\n" % str( each) return ret class gwebpage(json.JSONEncoder): """ Holds all the datastructure after the results have been parsed holds: 1) lists of gpagelets 2) loc, string, location of the file that represents it """ def __init__(self, url): self.url = url # This will be http:// self.netloc = False # This will be http:// too self.gpagelets = [] # Appended by functions self.page_key = "" def __str__(self): ret = "" ret += "url: %s\n" % self.url ret += "netloc: %s\n" % self.netloc ret += "page_key: %s\n" % self.page_key ret += "pagelets:\n" for each in self.gpagelets: ret += "%s\n" % each.__str__() return ret class GpageletEncoder( json.JSONEncoder): def default(self, gp): gwebpageEncoder = GwebpageEncoder() if not isinstance( gp, gpagelet): raise Exception( "Cannot use GpageletEncoder on a non gpagelet instance") u = { } u['parent'] = gwebpageEncoder.default( gp.parent) u['xpath'] = gp.xpath u['visibleShingles'] = gp.visibleShingles u['invisibleShingles'] = gp.invisibleShingles u['urls'] = gp.urls return u class GwebpageEncoder( json.JSONEncoder): def default(self, gw): gpageletEncoder = GpageletEncoder() if not isinstance( gw, gwebpage): raise Exception( "Cannot use gwebpageEncoder on a non gwebpage instance") u = { } u['url'] = gw.url u['netloc'] = gw.netloc u['gpagelets'] = [ gpageletEncoder.default( each) for each in gw.gpagelets ] u['page_key'] = gw.page_key return u if __name__ == "__main__": import simplejson mom = gwebpage('http://www.google.com') son = gpagelet( mom) mom.gpagelets.append( son) print simplejson.dumps( mom, cls=GwebpageEncoder) One of the trouble is that 1) I don't know what default is suppose to do 2) I don't know whether GWebpage's default is suppose to return the default or encoded gwebpage Now I am getting infinite recursion. Can someone help?
Json encoder python recursive reference
I don't know whether I am doing the right thing here, basically I want both of my class to be json-serializable. import json class gpagelet(json.JSONEncoder): """ Holds 1) the pagelet xpath, which is a string 2) the list of pagelet shingles, list """ def __init__(self, parent): if not isinstance( parent, gwebpage): raise Exception("Parent must be an instance of gwebpage") self.parent = parent # This must be a gwebpage instance self.xpath = None # This is just an id for the pagelet (not unique across page), historically called xpath self.visibleShingles = [] self.invisibleShingles = [] self.urls = [] def __str__(self): """String representation of this object""" ret = "" ret += "xpath: %s\n" % self.xpath def appendShingles(): ret += "shingles: \n" for each in self.shingles: ret += "%s\n" % str(each) ret += "urls:\n" for each in self.urls: ret += "%s\n" % str( each) return ret class gwebpage(json.JSONEncoder): """ Holds all the datastructure after the results have been parsed holds: 1) lists of gpagelets 2) loc, string, location of the file that represents it """ def __init__(self, url): self.url = url # This will be http:// self.netloc = False # This will be http:// too self.gpagelets = [] # Appended by functions self.page_key = "" def __str__(self): ret = "" ret += "url: %s\n" % self.url ret += "netloc: %s\n" % self.netloc ret += "page_key: %s\n" % self.page_key ret += "pagelets:\n" for each in self.gpagelets: ret += "%s\n" % each.__str__() return ret class GpageletEncoder( json.JSONEncoder): def default(self, gp): gwebpageEncoder = GwebpageEncoder() if not isinstance( gp, gpagelet): raise Exception( "Cannot use GpageletEncoder on a non gpagelet instance") u = { } u['parent'] = gwebpageEncoder.default( gp.parent) u['xpath'] = gp.xpath u['visibleShingles'] = gp.visibleShingles u['invisibleShingles'] = gp.invisibleShingles u['urls'] = gp.urls return u class GwebpageEncoder( json.JSONEncoder): def default(self, gw): gpageletEncoder = GpageletEncoder() if not isinstance( gw, gwebpage): raise Exception( "Cannot use gwebpageEncoder on a non gwebpage instance") u = { } u['url'] = gw.url u['netloc'] = gw.netloc u['gpagelets'] = [ gpageletEncoder.default( each) for each in gw.gpagelets ] u['page_key'] = gw.page_key return u if __name__ == "__main__": import simplejson mom = gwebpage('http://www.google.com') son = gpagelet( mom) mom.gpagelets.append( son) print simplejson.dumps( mom, cls=GwebpageEncoder) One of the trouble is that 1) I don't know what default is suppose to do 2) I don't know whether GWebpage's default is suppose to return the default or encoded gwebpage Now I am getting infinite recursion. Can someone help?
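For what it's worth, a small sketch of how default() is usually meant to be used (the class and key names here are simplified stand-ins, not the original ones): return plain dicts and lists and let the encoder recurse by itself, and break the parent/child cycle by serialising the back-reference as a plain value, since that circular reference is what produces the infinite recursion.
import json

class Webpage(object):
    def __init__(self, url):
        self.url = url
        self.pagelets = []

class Pagelet(object):
    def __init__(self, parent):
        self.parent = parent        # back-reference that creates the cycle
        self.xpath = None
        self.urls = []

class SiteEncoder(json.JSONEncoder):
    def default(self, obj):
        # Never call .default() by hand; just return serialisable objects.
        if isinstance(obj, Webpage):
            return {'url': obj.url, 'gpagelets': obj.pagelets}
        if isinstance(obj, Pagelet):
            # Serialise only a reference to the parent to avoid recursion.
            return {'parent_url': obj.parent.url,
                    'xpath': obj.xpath,
                    'urls': obj.urls}
        return json.JSONEncoder.default(self, obj)

mom = Webpage('http://www.google.com')
mom.pagelets.append(Pagelet(mom))
print(json.dumps(mom, cls=SiteEncoder, indent=2))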
[]
[]
[ "FYI \nI am seeing a lot of trouble with this:\nu['gpagelets'] = [ gpageletEncoder.default( each) for each in gw.gpagelets ]\n" ]
[ -1 ]
[ "json", "python" ]
stackoverflow_0001458671_json_python.txt
Q: What is the pythonic way of checking if an object is a list? I have a function that may take in a number or a list of numbers. Whats the most pythonic way of checking which it is? So far I've come up with try/except block checking if i can slice the zero item ie. obj[0:0] Edit: I seem to have started a war of words down below by not giving enough info. For completeness let me provide more details so that I may pick and get the best answer for my situation: I'm running Django on Python 2.6 and I'm writing a function that may take in a Django model instance or a queryset object and perform operations on it one of which involves using the filter 'in' that requires a list (the queryset input), or alternately if it is not a list then I would use the 'get' filter (the django get filter). A: In such situations, you normally need to check for ANY iterable, not just lists -- if you're accepting lists OR numbers, rejecting (e.g) a tuple would be weird. The one kind of iterable you might want to treat as a "scalar" is a string -- in Python 2.*, this means str or unicode. So, either: def isNonStringIterable(x): if isinstance(x, basestring): return False try: iter(x) except: return False else: return True or, usually much handier: def makeNonStringIterable(x): if isinstance(x, basestring): return (x,) try: return iter(x) except: return (x,) where you just go for i in makeNonStringIterable(x): ... A: if isinstance(your_object, list): print("your object is a list!") This is more Pythonic than checking with type. Seems faster too: >>> timeit('isinstance(x, list)', 'x = [1, 2, 3, 4]') 0.40161490440368652 >>> timeit('type(x) is list', 'x = [1, 2, 3, 4]') 0.46065497398376465 >>> A: You don't. This works only for Python >= 2.6. If you're targeting anything below use Alex' solution. Python supports something called Duck Typing. You can look for certain functionality using the ABC classes. import collections def mymethod(myvar): # collections.Sqeuence to check for list capabilities # collections.Iterable to check for iterator capabilities if not isinstance(myvar, collections.Iterable): raise TypeError() A: I don't want to be a pest, BUT: Are you sure the query set/object is a good interface? Make two functions, like: def fobject(i): # do something def fqueryset(q): for obj in q: fobject( obj ) Might not be the pythonic way to discern an int from a list, but seems a far better design to me. Reason being: Your function should be working on ducks. As long as it quacks, whack it. Actually picking the duck up, turning it upside down to check the markings on the belly before choosing the right club to whack it is unpythonic. Sorry. Just don't go there. A: You can use isinstance to check a variables type: if isinstance(param, list): # it is a list print len(list) A: I think the way OP is doing, checking if it supports what he wants, is ok. Simpler way in this scenario would be to not check for list which can be of many types depending on definition, you may check if input is number, do something on it else try to use it as list if that throws exception bail out. e.g you may not want iterate over list but just wanted to append something to it if it is list else add to it def add2(o): try: o.append(2) except AttributeError: o += 2 l=[] n=1 s="" add2(l) add2(n) add2(s) # will throw exception, let the user take care of that ;) So bottom line is answer may vary depending on what you want to do with object
What is the pythonic way of checking if an object is a list?
I have a function that may take in a number or a list of numbers. What's the most Pythonic way of checking which it is? So far I've come up with a try/except block checking whether I can slice the zero item, i.e. obj[0:0]
Edit: I seem to have started a war of words down below by not giving enough info. For completeness let me provide more details so that I may pick and get the best answer for my situation:
I'm running Django on Python 2.6 and I'm writing a function that may take in a Django model instance or a queryset object and perform operations on it, one of which involves using the filter 'in' that requires a list (the queryset input); alternatively, if it is not a list then I would use the 'get' filter (the Django get filter).
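A tiny sketch of the generic number-or-sequence case (the function name and accepted types are my own choices, not from the post); for the Django case described in the edit, the same pattern works with isinstance(obj, QuerySet) from django.db.models.query.
def as_list(value):
    """Return a list whether we were given one number or a sequence of them."""
    if isinstance(value, (list, tuple, set)):
        return list(value)
    return [value]            # a single number

print(as_list(5))             # [5]
print(as_list([1, 2, 3]))     # [1, 2, 3]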
[ "In such situations, you normally need to check for ANY iterable, not just lists -- if you're accepting lists OR numbers, rejecting (e.g) a tuple would be weird. The one kind of iterable you might want to treat as a \"scalar\" is a string -- in Python 2.*, this means str or unicode. So, either:\ndef isNonStringIterable(x):\n if isinstance(x, basestring):\n return False\n try: iter(x)\n except: return False\n else: return True\n\nor, usually much handier:\ndef makeNonStringIterable(x):\n if isinstance(x, basestring):\n return (x,)\n try: return iter(x)\n except: return (x,)\n\nwhere you just go for i in makeNonStringIterable(x): ...\n", "if isinstance(your_object, list):\n print(\"your object is a list!\")\n\nThis is more Pythonic than checking with type.\nSeems faster too:\n>>> timeit('isinstance(x, list)', 'x = [1, 2, 3, 4]')\n0.40161490440368652\n>>> timeit('type(x) is list', 'x = [1, 2, 3, 4]')\n0.46065497398376465\n>>> \n\n", "You don't.\nThis works only for Python >= 2.6. If you're targeting anything below use Alex' solution.\nPython supports something called Duck Typing. You can look for certain functionality using the ABC classes.\nimport collections\ndef mymethod(myvar):\n # collections.Sqeuence to check for list capabilities\n # collections.Iterable to check for iterator capabilities\n if not isinstance(myvar, collections.Iterable):\n raise TypeError()\n\n", "I don't want to be a pest, BUT: Are you sure the query set/object is a good interface? Make two functions, like:\ndef fobject(i):\n # do something\n\ndef fqueryset(q):\n for obj in q:\n fobject( obj )\n\nMight not be the pythonic way to discern an int from a list, but seems a far better design to me.\nReason being: Your function should be working on ducks. As long as it quacks, whack it. Actually picking the duck up, turning it upside down to check the markings on the belly before choosing the right club to whack it is unpythonic. Sorry. Just don't go there.\n", "You can use isinstance to check a variables type:\nif isinstance(param, list):\n # it is a list\n print len(list)\n\n", "I think the way OP is doing, checking if it supports what he wants, is ok.\nSimpler way in this scenario would be to not check for list which can be of many types depending on definition, you may check if input is number, do something on it else try to use it as list if that throws exception bail out.\ne.g you may not want iterate over list but just wanted to append something to it if it is list else add to it\ndef add2(o):\n try:\n o.append(2)\n except AttributeError:\n o += 2\n\nl=[]\nn=1\ns=\"\"\nadd2(l)\nadd2(n)\nadd2(s) # will throw exception, let the user take care of that ;)\n\nSo bottom line is answer may vary depending on what you want to do with object\n" ]
[ 21, 13, 11, 2, 1, 0 ]
[ "Just use the type method? Or am I misinterpreting the question\nif type(objectname) is list:\n do something\nelse:\n do something else :P\n\n" ]
[ -3 ]
[ "list", "python" ]
stackoverflow_0001464028_list_python.txt
Q: wxpython drag&drop focus problem I'd like to implement drag&drop in wxPython that works in similar way that in WordPad/Eclipse etc. I mean the following: when something is being dropped to WordPad, WordPad window is on top with focus and text is added. In Eclipse editor text is pasted, Eclipse window gains focus and is on top. When I implement drag&drop using wxPython target window is not brought to front. I implemented drag&drop in similar way to (drag): import wx class DragFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) self.tree = wx.TreeCtrl(self, wx.ID_ANY) root = self.tree.AddRoot("root item") self.tree.AppendItem(root, "child 1") self.tree.Bind(wx.EVT_TREE_BEGIN_DRAG, self.__onBeginDrag) def __onBeginDrag(self, event): tdo = wx.PyTextDataObject(self.tree.GetItemText(event.GetItem())) dropSource = wx.DropSource(self.tree) dropSource.SetData(tdo) dropSource.DoDragDrop(True) app = wx.PySimpleApp() frame = DragFrame() app.SetTopWindow(frame) frame.Show() app.MainLoop() Second program (drop): import wx class TextDropTarget(wx.TextDropTarget): def __init__(self, obj): wx.TextDropTarget.__init__(self) self.obj = obj def OnDropText(self, x, y, data): self.obj.WriteText(data + '\n\n') wx.MessageBox("Error", "Error", style = wx.ICON_ERROR) class DropFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) text = wx.TextCtrl(self, wx.ID_ANY) text.SetDropTarget(TextDropTarget(text)) app = wx.PySimpleApp() frame = DropFrame() app.SetTopWindow(frame) frame.Show() app.MainLoop() When you run both programs, place windows in the centre of the screen (part of drop window is visible), then drag a node from drag window to drop window - target window displays message box which isn't visible, target window is hidden behind source window. How to implement drag&drop that will focus on the second (target) window? I've tried adding window.Show(), window.SetFocus(), even using some functions of WinAPI (through win32gui). I think there should be some standard way of doing this. What am I missing? A: Wouldn't this work? class DropFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) text = wx.TextCtrl(self, wx.ID_ANY) self.SetFocus() # Set's the focus to this window, allowing it to receive keyboard input. text.SetDropTarget(TextDropTarget(text)) wx.Frame inherits from wx.Window, that has SetFocus(self). I have just tested it and it works. Just moved SetFocus before SetDropTarget, as its a cleaner behavior. A: You need to do anything you want int DragOver method of DropTarget e.g. there you can raise and set focus on your window sample working code for target import wx class TextDropTarget(wx.TextDropTarget): def __init__(self, obj, callback): wx.TextDropTarget.__init__(self) self.obj = obj self._callback = callback def OnDropText(self, x, y, data): self.obj.WriteText(data + '\n\n') wx.MessageBox("Error", "Error", style = wx.ICON_ERROR) def OnDragOver(self, *args): wx.CallAfter(self._callback) return wx.TextDropTarget.OnDragOver(self, *args) class DropFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) text = wx.TextCtrl(self, wx.ID_ANY) text.SetDropTarget(TextDropTarget(text, self._callback)) def _callback(self): self.Raise() self.SetFocus() app = wx.PySimpleApp() frame = DropFrame() app.SetTopWindow(frame) frame.Show() app.MainLoop()
wxpython drag&drop focus problem
I'd like to implement drag&drop in wxPython that works in similar way that in WordPad/Eclipse etc. I mean the following: when something is being dropped to WordPad, WordPad window is on top with focus and text is added. In Eclipse editor text is pasted, Eclipse window gains focus and is on top. When I implement drag&drop using wxPython target window is not brought to front. I implemented drag&drop in similar way to (drag): import wx class DragFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) self.tree = wx.TreeCtrl(self, wx.ID_ANY) root = self.tree.AddRoot("root item") self.tree.AppendItem(root, "child 1") self.tree.Bind(wx.EVT_TREE_BEGIN_DRAG, self.__onBeginDrag) def __onBeginDrag(self, event): tdo = wx.PyTextDataObject(self.tree.GetItemText(event.GetItem())) dropSource = wx.DropSource(self.tree) dropSource.SetData(tdo) dropSource.DoDragDrop(True) app = wx.PySimpleApp() frame = DragFrame() app.SetTopWindow(frame) frame.Show() app.MainLoop() Second program (drop): import wx class TextDropTarget(wx.TextDropTarget): def __init__(self, obj): wx.TextDropTarget.__init__(self) self.obj = obj def OnDropText(self, x, y, data): self.obj.WriteText(data + '\n\n') wx.MessageBox("Error", "Error", style = wx.ICON_ERROR) class DropFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) text = wx.TextCtrl(self, wx.ID_ANY) text.SetDropTarget(TextDropTarget(text)) app = wx.PySimpleApp() frame = DropFrame() app.SetTopWindow(frame) frame.Show() app.MainLoop() When you run both programs, place windows in the centre of the screen (part of drop window is visible), then drag a node from drag window to drop window - target window displays message box which isn't visible, target window is hidden behind source window. How to implement drag&drop that will focus on the second (target) window? I've tried adding window.Show(), window.SetFocus(), even using some functions of WinAPI (through win32gui). I think there should be some standard way of doing this. What am I missing?
[ "Wouldn't this work?\nclass DropFrame(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None)\n text = wx.TextCtrl(self, wx.ID_ANY)\n self.SetFocus() # Set's the focus to this window, allowing it to receive keyboard input.\n text.SetDropTarget(TextDropTarget(text))\n\nwx.Frame inherits from wx.Window, that has SetFocus(self).\n\nI have just tested it and it works. Just moved SetFocus before SetDropTarget, as its a cleaner behavior.\n", "You need to do anything you want int DragOver method of DropTarget e.g. there you can raise and set focus on your window\nsample working code for target\nimport wx\nclass TextDropTarget(wx.TextDropTarget):\n def __init__(self, obj, callback):\n wx.TextDropTarget.__init__(self)\n self.obj = obj\n self._callback = callback\n\n def OnDropText(self, x, y, data):\n self.obj.WriteText(data + '\\n\\n')\n wx.MessageBox(\"Error\", \"Error\", style = wx.ICON_ERROR)\n\n def OnDragOver(self, *args):\n wx.CallAfter(self._callback)\n return wx.TextDropTarget.OnDragOver(self, *args)\n\nclass DropFrame(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None)\n text = wx.TextCtrl(self, wx.ID_ANY)\n text.SetDropTarget(TextDropTarget(text, self._callback))\n\n def _callback(self):\n self.Raise()\n self.SetFocus()\n\napp = wx.PySimpleApp()\nframe = DropFrame()\napp.SetTopWindow(frame)\nframe.Show()\napp.MainLoop()\n\n" ]
[ 1, 1 ]
[]
[]
[ "drag_and_drop", "focus", "python", "wxpython" ]
stackoverflow_0001460722_drag_and_drop_focus_python_wxpython.txt
Q: Django: how to including inline model fields in the list_display? I'm attempting to extend django's contrib.auth User model, using an inline 'Profile' model to include extra fields. from django.contrib import admin from django.contrib.auth.models import User from django.contrib.auth.admin import UserAdmin class Profile(models.Model): user = models.ForeignKey(User, unique=True, related_name='profile') avatar = '/images/avatar.png' nickname = 'Renz' class UserProfileInline(admin.StackedInline): model = Profile class UserProfileAdmin(UserAdmin): inlines = (UserProfileInline,) admin.site.unregister(User) admin.site.register(User, UserProfileAdmin) This works just fine for the admin 'Change User' page, but I can't find a way to add inline model fields in the list_display. Just specifying the names of Profile fields in list_display give me an error: UserProfileAdmin.list_display[4], 'avatar' is not a callable or an attribute of 'UserProfileAdmin' or found in the model 'User'. I can create a callable which looks up the user in the Profile table and returns the relevant field, but this leaves me without the ability to sort the list view by the inline fields, which I really need to be able to do. Any suggestions? A: You've mentioned the only solution - creating a callable. There's currently no other way to do it, and yes this does mean you can't sort by that column.
Django: how to including inline model fields in the list_display?
I'm attempting to extend django's contrib.auth User model, using an inline 'Profile' model to include extra fields. from django.contrib import admin from django.contrib.auth.models import User from django.contrib.auth.admin import UserAdmin class Profile(models.Model): user = models.ForeignKey(User, unique=True, related_name='profile') avatar = '/images/avatar.png' nickname = 'Renz' class UserProfileInline(admin.StackedInline): model = Profile class UserProfileAdmin(UserAdmin): inlines = (UserProfileInline,) admin.site.unregister(User) admin.site.register(User, UserProfileAdmin) This works just fine for the admin 'Change User' page, but I can't find a way to add inline model fields in the list_display. Just specifying the names of Profile fields in list_display give me an error: UserProfileAdmin.list_display[4], 'avatar' is not a callable or an attribute of 'UserProfileAdmin' or found in the model 'User'. I can create a callable which looks up the user in the Profile table and returns the relevant field, but this leaves me without the ability to sort the list view by the inline fields, which I really need to be able to do. Any suggestions?
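An untested sketch of the callable approach mentioned in the answer below, reusing the question's classes and assuming Profile really defines nickname as a model field (above it is shown as a plain class attribute); note that a column produced this way is not sortable.
from django.contrib import admin
from django.contrib.auth.models import User
from django.contrib.auth.admin import UserAdmin

class UserProfileAdmin(UserAdmin):
    inlines = (UserProfileInline,)                        # from the question
    list_display = UserAdmin.list_display + ('nickname',)

    def nickname(self, user):
        # Follow the reverse relation created by related_name='profile'.
        return user.profile.get().nickname
    nickname.short_description = 'Nickname'

admin.site.unregister(User)
admin.site.register(User, UserProfileAdmin)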
[ "You've mentioned the only solution - creating a callable. There's currently no other way to do it, and yes this does mean you can't sort by that column. \n" ]
[ 7 ]
[]
[]
[ "django", "django_admin", "django_models", "python" ]
stackoverflow_0001463398_django_django_admin_django_models_python.txt
Q: vim syntax highlighting for jinja2? How do you do jinja2 aware syntax highlighting for vim? A: There appears to be a syntax highlighting file here.
vim syntax highlighting for jinja2?
How do you do jinja2 aware syntax highlighting for vim?
[ "There appears to be a syntax highlighting file here.\n" ]
[ 16 ]
[]
[]
[ "jinja2", "python", "vim", "vim_syntax_highlighting" ]
stackoverflow_0001465240_jinja2_python_vim_vim_syntax_highlighting.txt
Q: Trouble with positive look behind assertion in python regex Trying to use a reasonably long regex and it comes down to this small section that doesn't match how I'd expect it to. >>> re.search(r'(foo)((?<==)bar)?', 'foo').groups() ('foo', None) >>> re.search(r'(foo)((?<==)bar)?', 'foo=bar').groups() ('foo', None) The first one is what I'm after, the second should be returning ('foo', 'bar'). I suspect I'm just misunderstanding how lookbehinds are meant to work, some some explanation would be great. A: The look behind target is never included in the match - it's supposed to serve as an anchor, but not actually be consumed by the regex. The look behind pattern is only supposed to match if the current position is preceded by the target. In your case, after matching the "foo" in the string, the current position is at the "=", which is not preceded by a "=" - it's preceded by an "o". Another way to see this is by looking at the re documentation and reading Note that patterns which start with positive lookbehind assertions will never match at the beginning of the string being searched; After you match the foo, your look behind is trying to match at the beginning of (the remainder of) the string - this will never work. Others have suggested regexes that may probably serve you better, but I think you're probably looking for >>> re.search('(foo)(=(bar))?', 'foo=bar').groups() ('foo', '=bar', 'bar') If you find the extra group is a little annoying, you could omit the inner "()"s and just chop the first character off the matched group... A: You probably just want (foo)(?:=(bar))? using a non-capturing group (?:). A lookbehind assertion just looks to the left of the current position and checks whether the supplied expression matches or not. Hence your expression matches foo and then checks if the input to the left - the second o in foo - matches =. This of course always fails. A: Why don't you just use: (foo)=?(bar)? Also the following expression seems to be more correct as it captures the '=' within the full match, but your original expression does not capture that at all: (foo).?((?<==)bar)?
Trouble with positive look behind assertion in python regex
Trying to use a reasonably long regex, and it comes down to this small section that doesn't match how I'd expect it to.
>>> re.search(r'(foo)((?<==)bar)?', 'foo').groups()
('foo', None)
>>> re.search(r'(foo)((?<==)bar)?', 'foo=bar').groups()
('foo', None)

The first one is what I'm after, but the second should be returning ('foo', 'bar'). I suspect I'm just misunderstanding how lookbehinds are meant to work, so some explanation would be great.
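A short interactive sketch of what is going on; the second pattern is one possible rewrite using a non-capturing group, not something from the original post.
import re

# After matching 'foo', the position sits on '='; a (?<==) lookbehind asks
# whether the character *before* that position is '=', and it is 'o', so the
# optional group never participates.
print(re.search(r'(foo)((?<==)bar)?', 'foo=bar').groups())   # ('foo', None)

# Consuming the '=' inside a non-capturing group gives the intended result.
pattern = re.compile(r'(foo)(?:=(bar))?')
print(pattern.search('foo').groups())       # ('foo', None)
print(pattern.search('foo=bar').groups())   # ('foo', 'bar')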
[ "The look behind target is never included in the match - it's supposed to serve as an anchor, but not actually be consumed by the regex.\nThe look behind pattern is only supposed to match if the current position is preceded by the target. In your case, after matching the \"foo\" in the string, the current position is at the \"=\", which is not preceded by a \"=\" - it's preceded by an \"o\".\nAnother way to see this is by looking at the re documentation and reading\n\nNote that patterns which start with positive lookbehind assertions will never match at the beginning of the string being searched;\n\nAfter you match the foo, your look behind is trying to match at the beginning of (the remainder of) the string - this will never work.\nOthers have suggested regexes that may probably serve you better, but I think you're probably looking for\n>>> re.search('(foo)(=(bar))?', 'foo=bar').groups()\n('foo', '=bar', 'bar')\n\nIf you find the extra group is a little annoying, you could omit the inner \"()\"s and just chop the first character off the matched group...\n", "You probably just want (foo)(?:=(bar))? using a non-capturing group (?:).\nA lookbehind assertion just looks to the left of the current position and checks whether the supplied expression matches or not. Hence your expression matches foo and then checks if the input to the left - the second o in foo - matches =. This of course always fails.\n", "Why don't you just use: \n(foo)=?(bar)?\n\nAlso the following expression seems to be more correct as it captures the '=' within the full match, but your original expression does not capture that at all:\n(foo).?((?<==)bar)?\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001465246_python_regex.txt
Q: mod_python interpreter's cache not getting reset on script change? I use mod_python.publisher to run Python code and discovered a problem: When I update a script the update doesn't always work right away and I get the same error I fixed with the update until I restart Apache. Sometimes it works right away, but sometimes not...but restarting Apache definitely always catches it up. It's a pain to have to restart Apache so much and I would think there is a better way to do this -- but what is it? A: This is the expected behavior of mod_python. Your code is loaded into memory and won't be refreshed until the server is restarted. You have two options: Set MaxRequestsPerChild 1 in your httpd.conf file to force Apache to reload everything for each request. Set PythonAutoReload to be On http://www.modpython.org/live/mod_python-3.2.5b/doc-html/dir-other-par.html But don't do that on a production server, as it will slow down the initialization time.
mod_python interpreter's cache not getting reset on script change?
I use mod_python.publisher to run Python code and discovered a problem: When I update a script the update doesn't always work right away and I get the same error I fixed with the update until I restart Apache. Sometimes it works right away, but sometimes not...but restarting Apache definitely always catches it up. It's a pain to have to restart Apache so much and I would think there is a better way to do this -- but what is it?
[ "This is the expected behavior of mod_python. Your code is loaded into memory and won't be refreshed until the server is restarted.\nYou have two options:\n\nSet MaxRequestsPerChild 1 in your httpd.conf file to force Apache to reload everything for each request.\nSet PythonAutoReload to be On\nhttp://www.modpython.org/live/mod_python-3.2.5b/doc-html/dir-other-par.html\n\nBut don't do that on a production server, as it will slow down the initialization time.\n" ]
[ 3 ]
[]
[]
[ "mod_python", "python" ]
stackoverflow_0001465364_mod_python_python.txt
Q: What is involved in adding to the standard Python API? What steps would be necessary, and what kind of maintenance would be expected if I wanted to contribute a module to the Python standard API? For example I have a module that encapsulates automated update functionality similar to Java's JNLP. A: Please look at Python PEP 2 for details. You'll surely find more necessary information at the PEP Index, such as PEP 1: PEP Purpose and Guidelines. Have a look through the PEP index for previous PEPs which may have been rejected in the past. Of course you should also consult the python-dev mailing list. A: First, look at modules on pypi. Download several that are related to what you're doing so you can see exactly what the state of the art is. For example, look at easy_install for an example of something like what you're proposing. After looking at other modules, write yours to look like theirs. Then publish information on your blog. When people show an interest, post it to SourceForge or something similar. This will allow you to get started slowly. When people start using it, you'll know exactly what kind of maintenance you need to do. Then, when demand ramps up, you can create the pypi information required to publish it on pypi. Finally, when it becomes so popular that people demand it be added to Python as a standard part of the library, many other folks will be involved in helping you mature your offering.
What is involved in adding to the standard Python API?
What steps would be necessary, and what kind of maintenance would be expected if I wanted to contribute a module to the Python standard API? For example I have a module that encapsulates automated update functionality similar to Java's JNLP.
[ "Please look at Python PEP 2 for details. You'll surely find more necessary information at the PEP Index, such as PEP 1: PEP Purpose and Guidelines.\nHave a look through the PEP index for previous PEPs which may have been rejected in the past.\nOf course you should also consult the python-dev mailing list.\n", "First, look at modules on pypi. Download several that are related to what you're doing so you can see exactly what the state of the art is. \nFor example, look at easy_install for an example of something like what you're proposing.\nAfter looking at other modules, write yours to look like theirs.\nThen publish information on your blog.\nWhen people show an interest, post it to SourceForge or something similar. This will allow you to get started slowly.\nWhen people start using it, you'll know exactly what kind of maintenance you need to do.\nThen, when demand ramps up, you can create the pypi information required to publish it on pypi.\nFinally, when it becomes so popular that people demand it be added to Python as a standard part of the library, many other folks will be involved in helping you mature your offering.\n" ]
[ 9, 2 ]
[]
[]
[ "api", "python" ]
stackoverflow_0001465302_api_python.txt
Q: Limiting results returned fromquery in django, python I've just started to learn how to do queries in my Django application, and I have a query that gets me the list of new users filtered by the date joined: newUsers = User.objects.filter(is_active=True).order_by("-date_joined") This as I understand it gives me ALL the users, sorted by date joined. How would I best limit it to get me the last N users? On this, does anyone recommend and reading material to learn more about these types of queries? A: User.objects.filter(is_active=True).order_by("-date_joined")[:10] will give you the last 10 users who joined. See the Django docs for details. A: Use list slice operation on constructed queries e.g. For example, this returns the first 5 objects (LIMIT 5): Entry.objects.all()[:5] This returns the sixth through tenth objects (OFFSET 5 LIMIT 5): Entry.objects.all()[5:10] Read django documentation http://docs.djangoproject.com/en/dev/topics/db/queries/ A: User.objects.filter(is_active=True).order_by("-date_joined")[:10] QuerySets are lazy [ http://docs.djangoproject.com/en/dev/ref/models/querysets/ ]. When you're slicing the list, it doesn't actually fetch all entries, load them up into a list, and then slice them. Instead, it fetches only the last 10 entries. A: BTW, if u would just divide amount of all user, (for example, 10 per site) you should interest pagination in dJango, very helpful http://docs.djangoproject.com/en/dev/topics/pagination/#topics-pagination
Limiting results returned from query in django, python
I've just started to learn how to do queries in my Django application, and I have a query that gets me the list of new users filtered by the date joined:
newUsers = User.objects.filter(is_active=True).order_by("-date_joined")

This, as I understand it, gives me ALL the users, sorted by date joined. How would I best limit it to get me the last N users?
Also, can anyone recommend reading material to learn more about these types of queries?
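A small sketch of the usual answer, slicing the queryset (the variable names are mine): slicing is translated into SQL LIMIT/OFFSET, so only the requested rows are fetched.
from django.contrib.auth.models import User

# LIMIT 10: only the ten most recently joined active users are fetched.
newest_users = User.objects.filter(is_active=True).order_by('-date_joined')[:10]

for user in newest_users:
    print(user.username)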
[ "User.objects.filter(is_active=True).order_by(\"-date_joined\")[:10]\n\nwill give you the last 10 users who joined. See the Django docs for details.\n", "Use list slice operation on constructed queries e.g.\nFor example, this returns the first 5 objects (LIMIT 5):\nEntry.objects.all()[:5]\n\nThis returns the sixth through tenth objects (OFFSET 5 LIMIT 5):\nEntry.objects.all()[5:10]\n\nRead django documentation\nhttp://docs.djangoproject.com/en/dev/topics/db/queries/\n", "User.objects.filter(is_active=True).order_by(\"-date_joined\")[:10]\n\nQuerySets are lazy [ http://docs.djangoproject.com/en/dev/ref/models/querysets/ ]. When you're slicing the list, it doesn't actually fetch all entries, load them up into a list, and then slice them. Instead, it fetches only the last 10 entries.\n", "BTW, if u would just divide amount of all user, (for example, 10 per site) you should interest pagination in dJango, very helpful http://docs.djangoproject.com/en/dev/topics/pagination/#topics-pagination\n" ]
[ 8, 4, 2, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001465734_django_python.txt
Q: Custom QStyledItemDelegate: adding bold items So here's the story: I have a QListview that uses a QSqlQueryModel to fill it up. Because some items should display in bold based on the value of a hidden column of the model, I decided to make my own custom delegate. I'm using PyQT 4.5.4 and thus inheriting from QStyledItemDelegate is the way to go according to the docs. I got it working but there are some problems with it. Here's my solution: class TypeSoortDelegate(QStyledItemDelegate): def paint(self, painter, option, index): model = index.model() record = model.record(index.row()) value= record.value(2).toPyObject() if value: painter.save() # change the back- and foreground colors # if the item is selected if option.state & QStyle.State_Selected: painter.setPen(QPen(Qt.NoPen)) painter.setBrush(QApplication.palette().highlight()) painter.drawRect(option.rect) painter.restore() painter.save() font = painter.font pen = painter.pen() pen.setColor(QApplication.palette().color(QPalette.HighlightedText)) painter.setPen(pen) else: painter.setPen(QPen(Qt.black)) # set text bold font = painter.font() font.setWeight(QFont.Bold) painter.setFont(font) text = record.value(1).toPyObject() painter.drawText(option.rect, Qt.AlignLeft, text) painter.restore() else: QStyledItemDelegate.paint(self, painter, option, index) The problems I'm facing now: the normal (not bold) items are slightly indented (a few pixels). This is probably some default behaviour. I could indent my item in bold also, but what happens then under a different platform? Normally when I select items there is a small border with a dotted line around it (default Windows thing?). Here also I could draw it, but I want to stay as native as possible. Now the question: Is there another way to create a custom delegate that only changes the font weight when some condition is met and leaves all the rest untouched? I also tried: if value: font = painter.font() font.setWeight(QFont.Bold) painter.setFont(font) QStyledItemDelegate.paint(self, painter, option, index) But that doesn't seem to affect the looks at all. No error, just default behaviour, and no bold items. All suggestions welcome! A: I've not tested this, but I think you can do: class TypeSoortDelegate(QStyledItemDelegate): def paint(self, painter, option, index): get value... if value: option.font.setWeight(QFont.Bold) QStyledItemDelegate.paint(self, painter, option, index)
Custom QStyledItemDelegate: adding bold items
So here's the story: I have a QListview that uses a QSqlQueryModel to fill it up. Because some items should display in bold based on the value of a hidden column of the model, I decided to make my own custom delegate. I'm using PyQT 4.5.4 and thus inheriting from QStyledItemDelegate is the way to go according to the docs. I got it working but there are some problems with it. Here's my solution: class TypeSoortDelegate(QStyledItemDelegate): def paint(self, painter, option, index): model = index.model() record = model.record(index.row()) value= record.value(2).toPyObject() if value: painter.save() # change the back- and foreground colors # if the item is selected if option.state & QStyle.State_Selected: painter.setPen(QPen(Qt.NoPen)) painter.setBrush(QApplication.palette().highlight()) painter.drawRect(option.rect) painter.restore() painter.save() font = painter.font pen = painter.pen() pen.setColor(QApplication.palette().color(QPalette.HighlightedText)) painter.setPen(pen) else: painter.setPen(QPen(Qt.black)) # set text bold font = painter.font() font.setWeight(QFont.Bold) painter.setFont(font) text = record.value(1).toPyObject() painter.drawText(option.rect, Qt.AlignLeft, text) painter.restore() else: QStyledItemDelegate.paint(self, painter, option, index) The problems I'm facing now: the normal (not bold) items are slightly indented (a few pixels). This is probably some default behaviour. I could indent my item in bold also, but what happens then under a different platform? Normally when I select items there is a small border with a dotted line around it (default Windows thing?). Here also I could draw it, but I want to stay as native as possible. Now the question: Is there another way to create a custom delegate that only changes the font weight when some condition is met and leaves all the rest untouched? I also tried: if value: font = painter.font() font.setWeight(QFont.Bold) painter.setFont(font) QStyledItemDelegate.paint(self, painter, option, index) But that doesn't seem to affect the looks at all. No error, just default behaviour, and no bold items. All suggestions welcome!
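An untested alternative sketch, assuming the same QSqlQueryModel layout as above: overriding initStyleOption() instead of paint() only changes the font and leaves all drawing (indentation, focus rectangle, selection) to the native style.
from PyQt4.QtGui import QStyledItemDelegate, QFont

class TypeSoortDelegate(QStyledItemDelegate):
    def initStyleOption(self, option, index):
        # Let the base class fill in the normal, native-looking option first.
        QStyledItemDelegate.initStyleOption(self, option, index)
        record = index.model().record(index.row())
        if record.value(2).toPyObject():       # the hidden "bold" column
            option.font.setWeight(QFont.Bold)  # only the weight changes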
[ "I've not tested this, but I think you can do:\nclass TypeSoortDelegate(QStyledItemDelegate):\n\ndef paint(self, painter, option, index):\n get value...\n if value:\n option.font.setWeight(QFont.Bold)\n\n QStyledItemDelegate.paint(self, painter, option, index)\n\n" ]
[ 3 ]
[]
[]
[ "pyqt", "python", "user_interface" ]
stackoverflow_0001412156_pyqt_python_user_interface.txt
Q: Problem when getting the content of a listbox with python and ctypes on win32 I would like to get the content of a list box thanks to python and ctypes. item_count = ctypes.windll.user32.SendMessageA(hwnd, win32con.LB_GETCOUNT, 0, 0) items = [] for i in xrange(item_count): text_len = ctypes.windll.user32.SendMessageA(hwnd, win32con.LB_GETTEXTLEN, i, 0) buffer = ctypes.create_string_buffer("", text_len+1) ctypes.windll.user32.SendMessageA(hwnd, win32con.LB_GETTEXT, i, buffer) items.append(buffer.value) print items The number of items is correct but the text is wrong. All text_len are 4 and the text values are something like '0\xd9\xee\x02\x90' I have tried to use a unicode buffer with a similar result. I don't find my error. Any idea? A: If the list box in question is owner-drawn, this passage from the LB_GETTEXT documentation may be relevant: If you create the list box with an owner-drawn style but without the LBS_HASSTRINGS style, the buffer pointed to by the lParam parameter will receive the value associated with the item (the item data). The four bytes you received certainly look like they may be a pointer, which is a typical value to store in the per-item data. A: It looks like you need to use a packed structure for the result. I found an example online, perhaps this will assist you: http://www.brunningonline.net/simon/blog/archives/winGuiAuto.py.html # Programmer : Simon Brunning - [email protected] # Date : 25 June 2003 def _getMultipleWindowValues(hwnd, getCountMessage, getValueMessage): '''A common pattern in the Win32 API is that in order to retrieve a series of values, you use one message to get a count of available items, and another to retrieve them. This internal utility function performs the common processing for this pattern. Arguments: hwnd Window handle for the window for which items should be retrieved. getCountMessage Item count message. getValueMessage Value retrieval message. Returns: Retrieved items.''' result = [] VALUE_LENGTH = 256 bufferlength_int = struct.pack('i', VALUE_LENGTH) # This is a C style int. valuecount = win32gui.SendMessage(hwnd, getCountMessage, 0, 0) for itemIndex in range(valuecount): valuebuffer = array.array('c', bufferlength_int + " " * (VALUE_LENGTH - len(bufferlength_int))) valueLength = win32gui.SendMessage(hwnd, getValueMessage, itemIndex, valuebuffer) result.append(valuebuffer.tostring()[:valueLength]) return result def getListboxItems(hwnd): '''Returns the items in a list box control. Arguments: hwnd Window handle for the list box. Returns: List box items. Usage example: TODO ''' return _getMultipleWindowValues(hwnd, getCountMessage=win32con.LB_GETCOUNT, getValueMessage=win32con.LB_GETTEXT)
Problem when getting the content of a listbox with python and ctypes on win32
I would like to get the content of a list box thanks to python and ctypes. item_count = ctypes.windll.user32.SendMessageA(hwnd, win32con.LB_GETCOUNT, 0, 0) items = [] for i in xrange(item_count): text_len = ctypes.windll.user32.SendMessageA(hwnd, win32con.LB_GETTEXTLEN, i, 0) buffer = ctypes.create_string_buffer("", text_len+1) ctypes.windll.user32.SendMessageA(hwnd, win32con.LB_GETTEXT, i, buffer) items.append(buffer.value) print items The number of items is correct but the text is wrong. All text_len are 4 and the text values are something like '0\xd9\xee\x02\x90' I have tried to use a unicode buffer with a similar result. I don't find my error. Any idea?
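A quick diagnostic sketch (hwnd is the handle from the code above; the constants are my reading of the standard Win32 values): if the control turns out to be owner-drawn without LBS_HASSTRINGS, LB_GETTEXT returns the per-item data, typically a pointer, which would explain the 4-byte results.
import ctypes

GWL_STYLE             = -16
LBS_OWNERDRAWFIXED    = 0x0010
LBS_OWNERDRAWVARIABLE = 0x0020
LBS_HASSTRINGS        = 0x0040

style = ctypes.windll.user32.GetWindowLongA(hwnd, GWL_STYLE)
print(bool(style & (LBS_OWNERDRAWFIXED | LBS_OWNERDRAWVARIABLE)))  # owner-drawn?
print(bool(style & LBS_HASSTRINGS))                                # has strings?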
[ "If the list box in question is owner-drawn, this passage from the LB_GETTEXT documentation may be relevant:\n\nIf you create the list box with an owner-drawn style but without the LBS_HASSTRINGS style, the buffer pointed to by the lParam parameter will receive the value associated with the item (the item data).\n\nThe four bytes you received certainly look like they may be a pointer, which is a typical value to store in the per-item data.\n", "It looks like you need to use a packed structure for the result. I found an example online, perhaps this will assist you:\nhttp://www.brunningonline.net/simon/blog/archives/winGuiAuto.py.html\n# Programmer : Simon Brunning - [email protected]\n# Date : 25 June 2003\ndef _getMultipleWindowValues(hwnd, getCountMessage, getValueMessage):\n '''A common pattern in the Win32 API is that in order to retrieve a\n series of values, you use one message to get a count of available\n items, and another to retrieve them. This internal utility function\n performs the common processing for this pattern.\n\n Arguments:\n hwnd Window handle for the window for which items should be\n retrieved.\n getCountMessage Item count message.\n getValueMessage Value retrieval message.\n\n Returns: Retrieved items.'''\n result = []\n\n VALUE_LENGTH = 256\n bufferlength_int = struct.pack('i', VALUE_LENGTH) # This is a C style int.\n\n valuecount = win32gui.SendMessage(hwnd, getCountMessage, 0, 0)\n for itemIndex in range(valuecount):\n valuebuffer = array.array('c',\n bufferlength_int +\n \" \" * (VALUE_LENGTH - len(bufferlength_int)))\n valueLength = win32gui.SendMessage(hwnd,\n getValueMessage,\n itemIndex,\n valuebuffer)\n result.append(valuebuffer.tostring()[:valueLength])\n return result\n\ndef getListboxItems(hwnd):\n '''Returns the items in a list box control.\n\n Arguments:\n hwnd Window handle for the list box.\n\n Returns: List box items.\n\n Usage example: TODO\n '''\n\n return _getMultipleWindowValues(hwnd,\n getCountMessage=win32con.LB_GETCOUNT,\n getValueMessage=win32con.LB_GETTEXT)\n\n" ]
[ 1, 0 ]
[]
[]
[ "ctypes", "python", "winapi" ]
stackoverflow_0001466453_ctypes_python_winapi.txt
Q: Testing for cookie existence in Django Simple stuff here... if I try to reference a cookie in Django via request.COOKIE["key"] if the cookie doesn't exist that will throw a key error. For Django's GET and POST, since they are QueryDict objects, I can just do if "foo" in request.GET which is wonderfully sophisticated... what's the closest thing to this for cookies that isn't a Try/Catch block, if anything... A: request.COOKIES is a standard Python dictionary, so the same syntax works. Another way of doing it is: request.COOKIES.get('key', 'default') which returns the value if the key exists, otherwise 'default' - you can put anything you like in place of 'default'. A: First, it's request.COOKIES not request.COOKIE. Other one will throw you an error. Second, it's a dictionary (or, dictionary-like) object, so: if "foo" in request.COOKIES.keys() will give you what you need. If you want to get the value of the cookie, you can use: request.COOKIES.get("key", None) then, if there's no key "key", you'll get a None instead of an exception.
Testing for cookie existence in Django
Simple stuff here... if I try to reference a cookie in Django via request.COOKIE["key"] if the cookie doesn't exist that will throw a key error. For Django's GET and POST, since they are QueryDict objects, I can just do if "foo" in request.GET which is wonderfully sophisticated... what's the closest thing to this for cookies that isn't a Try/Catch block, if anything...
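A small sketch of both styles in a view (the view and cookie names are made up): request.COOKIES is a plain dict, so the same membership test used for GET/POST works, and .get() gives a default without a try/except.
from django.http import HttpResponse

def greet(request):
    if 'name' in request.COOKIES:          # same syntax as for request.GET
        name = request.COOKIES['name']
    else:
        name = 'stranger'
    # ...or, more compactly:
    name = request.COOKIES.get('name', 'stranger')
    return HttpResponse('Hello, %s' % name)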
[ "request.COOKIES is a standard Python dictionary, so the same syntax works.\nAnother way of doing it is:\nrequest.COOKIES.get('key', 'default')\n\nwhich returns the value if the key exists, otherwise 'default' - you can put anything you like in place of 'default'.\n", "First, it's \nrequest.COOKIES\n\nnot request.COOKIE. Other one will throw you an error.\nSecond, it's a dictionary (or, dictionary-like) object, so:\nif \"foo\" in request.COOKIES.keys()\n\nwill give you what you need.\nIf you want to get the value of the cookie, you can use:\nrequest.COOKIES.get(\"key\", None)\n\nthen, if there's no key \"key\", you'll get a None instead of an exception.\n" ]
[ 22, 6 ]
[]
[]
[ "cookies", "django", "http", "python" ]
stackoverflow_0001466732_cookies_django_http_python.txt
Q: Retrieving the latitude/longitude from Google Map Mobile 3.0's MyLocation feature I want to fetch my current latitude/longitude from Google Maps Mobile 3.0 with the help of some script, which I guess could be a Python one. Is this possible? And, more importantly: is the Google Maps Mobile API designed for such interaction? Any legal issues? Basically i have a S60 phone that doesnt have GPS,and I have found that Google maintains its own database to link Cell IDs with lat/longs so that depending on what Cell ID I am nearest to, it can approximate my current location. So only Cel ID info from the operator won't tell me where i am on Earth until such a linking between Cell ID and latitude/longitude is available (for which I am thinking of seeking help of GMM; of course, if it provides for this...). Secondly, the GMM 3.0 pushes my current latitude/longitude to the iGoogle Latitude gadget... so there should be some way by which I can fetch the same info by my custom gadget/script, right? A: No, you can't access it from a Python script or another S60 application because of platform security features of S60 3rd ed. Even if Google Maps application would write information to disk, your app is not able to access application specific files of other apps. Google Maps use cell-based locationing in addition to GPS or when GPS is not available. Google hasn't released any 3rd party API to do these cell-tower to lat-long conversions. A: You don't have to do this through Google Maps. The mapping for Cell ID to latitude and longitude is available from http://www.opencellid.org. You can get the raw data file and then sample part of the data to the accuracy you need. Then, using the pys60 elocation api, you can grab the current Cell ID (you need to have the DevCert in order to do this). For more details take a look at this: http://discussion.forum.nokia.com/forum/showthread.php?t=112964 A: About reading any file from the disk: Only when you have an R&D (Research and Development) certificate that grants you the AllFiles capability, can you access nearly any file.
Retrieving the latitude/longitude from Google Map Mobile 3.0's MyLocation feature
I want to fetch my current latitude/longitude from Google Maps Mobile 3.0 with the help of some script, which I guess could be a Python one. Is this possible? And, more importantly: is the Google Maps Mobile API designed for such interaction? Any legal issues? Basically i have a S60 phone that doesnt have GPS,and I have found that Google maintains its own database to link Cell IDs with lat/longs so that depending on what Cell ID I am nearest to, it can approximate my current location. So only Cel ID info from the operator won't tell me where i am on Earth until such a linking between Cell ID and latitude/longitude is available (for which I am thinking of seeking help of GMM; of course, if it provides for this...). Secondly, the GMM 3.0 pushes my current latitude/longitude to the iGoogle Latitude gadget... so there should be some way by which I can fetch the same info by my custom gadget/script, right?
[ "No, you can't access it from a Python script or another S60 application because of platform security features of S60 3rd ed. Even if Google Maps application would write information to disk, your app is not able to access application specific files of other apps. \nGoogle Maps use cell-based locationing in addition to GPS or when GPS is not available. Google hasn't released any 3rd party API to do these cell-tower to lat-long conversions.\n", "You don't have to do this through Google Maps. The mapping for Cell ID to latitude and longitude is available from http://www.opencellid.org.\nYou can get the raw data file and then sample part of the data to the accuracy you need. Then, using the pys60 elocation api, you can grab the current Cell ID (you need to have the DevCert in order to do this).\nFor more details take a look at this: http://discussion.forum.nokia.com/forum/showthread.php?t=112964\n", "About reading any file from the disk:\nOnly when you have an R&D (Research and Development) certificate that grants you the AllFiles capability, can you access nearly any file.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "google_maps", "pys60", "python", "s60" ]
stackoverflow_0000623504_google_maps_pys60_python_s60.txt
Q: how to rewrite this loop in a more efficient way in python I have a loop of the following type: a = range(10) b = [something] for i in range(len(a)-1): b.append(someFunction(b[-1], a[i], a[i+1])) However the for-loop is killing a lot of performance. I have try to write a windows generator to give me 2 elements everything time but it still require explicit for-loop in the end. Is there a way to make this shorter and more efficient in a pythonic way? Thanks edit: I forgot the element in b.. sorry guys. However the solution to my previous problem is very helpful in other problem I have too. Thanks. A: Consider this def make_b( a, seed ): yield seed for a,b in zip( a[:-1], a[1:] ): seed= someFunction( seed, a, b ) yield seed Which lets you do this a = xrange(10) b= list(make_b(a,something)) Note that you can often use this: b = make_b(a) Instead of actually creating b as a list. b as a generator function saves you considerable storage (and some time) because you may not really need a list object in the first place. Often, you only need something iterable. Similarly for a. It does not have to be a list, merely something iterable -- like a generator function with a yield statement. A: For your initially stated problem of mapping a function over pairs of an input sequence the following will work, and is about as efficient as it gets while staying in Python land. from itertools import tee a = range(10) a1, a2 = tee(a) a2.next() b = map(someFunction, a1, a2) As for the expanded problem where you need to access the result of the previous iteration - this kind of inner state is present in the functional concept unfold. But Python doesn't include an unfold construct, and for a good reason for loops are more readable in this case and most likely faster too. As for making it more Pythonic, I suggest lifting the pairwise iteration out to a function and create an explicit loop variable. def pairwise(seq): a, b = tee(seq) b.next() return izip(a, b) def unfold_over_pairwise(unfolder, seq, initial): state = initial for cur_item, next_item in pairwise(seq): state = unfolder(state, cur_item, next_item) yield state b = [something] b.extend(unfold_over_pairwise(someFunction, a, initial=b[-1])) If the looping overhead really is a problem, then someFunction must be something really simple. In that case it probably is best to write the whole loop in a faster language, such as C. A: Some loop or other will always be around, but one possibility that might reduce overhead is: import itertools def generate(a, item): a1, a2 = itertools.tee(a) next(a2) for x1, x2 in itertools.izip(a1, a2): item = someFunction(item, x1, x2) yield item to be used as: b.extend(generate(a, b[-1])) A: Try something like this: a = range(10) b = [something] s = len(b) b+= [0] * (len(a) - 1) [ b.__setitem__(i, someFunction(b[i-1], a[i-s], a[i-s+1])) for i in range(s, len(b))] Also: using functions from itertools should be useful also (earlier posts) maybe you can rewrite someFunction and use map instead of list comprehension
how to rewrite this loop in a more efficient way in python
I have a loop of the following type:
a = range(10)
b = [something]
for i in range(len(a)-1):
    b.append(someFunction(b[-1], a[i], a[i+1]))

However, the for-loop is killing a lot of performance. I have tried to write a sliding-window generator that gives me 2 elements at a time, but it still requires an explicit for-loop in the end. Is there a way to make this shorter and more efficient, in a Pythonic way?
Thanks
edit: I forgot the element in b.. sorry guys. However, the solution to my previous problem is very helpful in another problem I have too. Thanks.
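A sketch of the generator/accumulator pattern the answers below describe, written with zip over a slice; the lambda is only a stand-in for the unspecified someFunction, and the names are mine.
def running_values(seq, seed, fn):
    """Yield fn(previous_result, a, b) for each consecutive pair of seq."""
    prev = seed
    for x, y in zip(seq, seq[1:]):
        prev = fn(prev, x, y)
        yield prev

a = list(range(10))
b = [0]                          # stand-in for [something]
b.extend(running_values(a, b[-1], lambda prev, x, y: prev + x + y))
print(b)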
[ "Consider this\ndef make_b( a, seed ):\n yield seed\n for a,b in zip( a[:-1], a[1:] ):\n seed= someFunction( seed, a, b )\n yield seed\n\nWhich lets you do this\na = xrange(10)\nb= list(make_b(a,something))\n\nNote that you can often use this: \nb = make_b(a)\n\nInstead of actually creating b as a list. b as a generator function saves you considerable storage (and some time) because you may not really need a list object in the first place. Often, you only need something iterable.\nSimilarly for a. It does not have to be a list, merely something iterable -- like a generator function with a yield statement.\n", "For your initially stated problem of mapping a function over pairs of an input sequence the following will work, and is about as efficient as it gets while staying in Python land.\nfrom itertools import tee\n\na = range(10)\na1, a2 = tee(a)\na2.next()\nb = map(someFunction, a1, a2)\n\nAs for the expanded problem where you need to access the result of the previous iteration - this kind of inner state is present in the functional concept unfold. But Python doesn't include an unfold construct, and for a good reason for loops are more readable in this case and most likely faster too. As for making it more Pythonic, I suggest lifting the pairwise iteration out to a function and create an explicit loop variable. \ndef pairwise(seq):\n a, b = tee(seq)\n b.next()\n return izip(a, b)\n\ndef unfold_over_pairwise(unfolder, seq, initial):\n state = initial\n for cur_item, next_item in pairwise(seq):\n state = unfolder(state, cur_item, next_item)\n yield state\n\nb = [something]\nb.extend(unfold_over_pairwise(someFunction, a, initial=b[-1]))\n\nIf the looping overhead really is a problem, then someFunction must be something really simple. In that case it probably is best to write the whole loop in a faster language, such as C.\n", "Some loop or other will always be around, but one possibility that might reduce overhead is:\nimport itertools\n\ndef generate(a, item):\n a1, a2 = itertools.tee(a)\n next(a2)\n for x1, x2 in itertools.izip(a1, a2):\n item = someFunction(item, x1, x2)\n yield item\n\nto be used as:\nb.extend(generate(a, b[-1]))\n\n", "Try something like this:\na = range(10) \nb = [something] \n\ns = len(b)\nb+= [0] * (len(a) - 1)\n[ b.__setitem__(i, someFunction(b[i-1], a[i-s], a[i-s+1])) for i in range(s, len(b))]\n\nAlso:\n\nusing functions from itertools should\nbe useful also (earlier posts)\nmaybe you can rewrite someFunction and use map instead of list\ncomprehension\n\n" ]
[ 8, 4, 2, 0 ]
[]
[]
[ "list", "list_comprehension", "python" ]
stackoverflow_0001466282_list_list_comprehension_python.txt
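The accumulating-generator idea in the answers above is easier to see with a concrete stand-in for someFunction. The sketch below is only illustrative: some_function and the seed value 0 are placeholders, not part of the original question.

def some_function(prev, x, y):        # stand-in for the asker's someFunction
    return prev + x + y

def running(values, seed):
    yield seed                        # first element of b
    for cur, nxt in zip(values[:-1], values[1:]):
        seed = some_function(seed, cur, nxt)
        yield seed

a = range(10)
b = list(running(a, 0))

check = [0]                           # the original loop, for comparison
for i in range(len(a) - 1):
    check.append(some_function(check[-1], a[i], a[i + 1]))

assert b == check                     # both approaches produce the same list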
Q: Are python modules first class citizens? I mean, can I create them dynamically? A: Yes: >>> import types >>> m = types.ModuleType("mymod") >>> m <module 'mymod' (built-in)> A: You can create them dynamically, with the imp.new_module method.
Are python modules first class citizens?
I mean, can I create them dynamically?
[ "Yes:\n>>> import types\n>>> m = types.ModuleType(\"mymod\")\n>>> m\n<module 'mymod' (built-in)>\n\n", "You can create them dynamically, with the imp.new_module method.\n" ]
[ 9, 5 ]
[]
[]
[ "module", "python" ]
stackoverflow_0001467612_module_python.txt
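As a follow-up to the answers above: a dynamically created module only becomes importable by other code once it is registered in sys.modules. A minimal sketch (the module name and attributes are arbitrary):

import sys
import types

mod = types.ModuleType("mymod")
mod.answer = 42
mod.greet = lambda name: "hello %s" % name

sys.modules["mymod"] = mod      # register it so a normal import can find it

import mymod                    # works even though no mymod.py exists on disk
print mymod.greet("world")      # -> hello world
print mymod.answer              # -> 42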
Q: django registration module i started to learn django using the official documentation tutorial and in turn created this registration form. i have the whole app setup but am stuck at how to process the data. i mean the documentation is a little tough to understand. if i can get some help, it would be great. this is how the models are setup: class user (models.Model): username = models.CharField(max_length=20) name = models.CharField(max_length=25) collegename = models.CharField(max_length=50) event = models.CharField(max_length=30) col_roll = models.CharField(max_length=15) def __unicode__(self): return self.username and this is the form in index.html: <form action="" method="post"></form> <input name="username" type="text" maxlength="40" /> <input name="name" type="text" maxlength="40" /> <input name="collegename" type="text" maxlength="40" /> <input name="event" type="text" maxlength="40" /> <input name="col_roll" type="text" maxlength="40" /> <input type="submit" value="Register" /> </form> i do not follow how to create the view to process this registration of a new user. if anybody could help me, it would be great. The database (MySQL) is created with the name (register_user). I do not understand how to put the values from the above form in to the database. If it would have been regular python, it would have been easily done, but DJANGO i dont understand. Thanks a lot. A: You don't seem to have read the documentation on forms. It explains in detail how to create a form from your model, how to output it in a template (so you don't need to write the HTML input elements manually), and how to process it in a view. A: It took me a bit of time to figure out the form processing bit when I started. The first thing you need before you get to the form part is to create a view function. The view will use your index.html file as a template. In that view you'll want to create a form (either from Daniel's link above or by creating a form from your model). Once you have a form class, provide an instantiated copy of that in the view (e.g., form = MyRegistrationModelForm()) and then replace all of your input tags in index.html with {{ form }}. That will output a pre-formatted form, which may not be exactly what you want, but you can then view source and see what the form fields need to be named and then you can customize from there. One other thing. Not sure if you are, but you should be using django-registration for this. Rather than creating your own user class, use the built-in User class in django.contrib.auth.models and then use django-registration for the custom fields you need. You'll be able to plug in other modules much more easily if you use the defined User class. As for the database, assuming you have put the proper MySQL connection information into settings.py, running 'python manage.py syncdb' in the root of your project should create all the necessary tables. Make sure to re-run it as you add new models.
django registration module
i started to learn django using the official documentation tutorial and in turn created this registration form. i have the whole app setup but am stuck at how to process the data. i mean the documentation is a little tough to understand. if i can get some help, it would be great. this is how the models are setup: class user (models.Model): username = models.CharField(max_length=20) name = models.CharField(max_length=25) collegename = models.CharField(max_length=50) event = models.CharField(max_length=30) col_roll = models.CharField(max_length=15) def __unicode__(self): return self.username and this is the form in index.html: <form action="" method="post"></form> <input name="username" type="text" maxlength="40" /> <input name="name" type="text" maxlength="40" /> <input name="collegename" type="text" maxlength="40" /> <input name="event" type="text" maxlength="40" /> <input name="col_roll" type="text" maxlength="40" /> <input type="submit" value="Register" /> </form> i do not follow how to create the view to process this registration of a new user. if anybody could help me, it would be great. The database (MySQL) is created with the name (register_user). I do not understand how to put the values from the above form in to the database. If it would have been regular python, it would have been easily done, but DJANGO i dont understand. Thanks a lot.
[ "You don't seem to have read the documentation on forms. It explains in detail how to create a form from your model, how to output it in a template (so you don't need to write the HTML input elements manually), and how to process it in a view.\n", "It took me a bit of time to figure out the form processing bit when I started. The first thing you need before you get to the form part is to create a view function. The view will use your index.html file as a template. In that view you'll want to create a form (either from Daniel's link above or by creating a form from your model). Once you have a form class, provide an instantiated copy of that in the view (e.g., form = MyRegistrationModelForm()) and then replace all of your input tags in index.html with {{ form }}. That will output a pre-formatted form, which may not be exactly what you want, but you can then view source and see what the form fields need to be named and then you can customize from there.\nOne other thing. Not sure if you are, but you should be using django-registration for this. Rather than creating your own user class, use the built-in User class in django.contrib.auth.models and then use django-registration for the custom fields you need. You'll be able to plug in other modules much more easily if you use the defined User class.\nAs for the database, assuming you have put the proper MySQL connection information into settings.py, running 'python manage.py syncdb' in the root of your project should create all the necessary tables. Make sure to re-run it as you add new models.\n" ]
[ 1, 0 ]
[]
[]
[ "django", "mysql", "python" ]
stackoverflow_0001467735_django_mysql_python.txt
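To make the ModelForm suggestion above concrete, here is a hedged sketch built around the `user` model from the question. The import path (mysite.register.models), the template name (index.html) and the redirect target (/thanks/) are placeholders; adjust them to the real project layout.

# forms.py
from django import forms
from mysite.register.models import user     # placeholder path to the model

class UserForm(forms.ModelForm):
    class Meta:
        model = user                         # fields are built from the model

# views.py
from django.http import HttpResponseRedirect
from django.shortcuts import render_to_response

def register(request):
    if request.method == 'POST':
        form = UserForm(request.POST)
        if form.is_valid():
            form.save()                      # inserts a row into register_user
            return HttpResponseRedirect('/thanks/')
    else:
        form = UserForm()
    return render_to_response('index.html', {'form': form})

In index.html the hand-written input elements can then be replaced by {{ form.as_p }} inside the <form> tag.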
Q: python: how to get all members of an array except for ones that match a condition I'm trying to create an array of all .asm files I need to build except for one that is causing me trouble right now. Here's what I have, based on the Scons "Handling Common Cases" page: projfiles['buildasm'] = ['#build/'+os.path.splitext(x)[0]+'.asm' for x in projfiles['a']]; (this maps paths of the form 'foo.a' to '#build/foo.asm') I want to run this for each member of projfiles['a'] except if a member of the array matches 'baz.a'. How can I do this? A: projfiles['buildasm'] = ['#build/'+os.path.splitext(x)[0]+'.asm' for x in projfiles['a'] if x != 'baz.a'] or more generally: ignored_files = ['baz.a', 'foo.a', 'xyzzy.a', ] projfiles['buildasm'] = ['#build/'+os.path.splitext(x)[0]+'.asm' for x in projfiles['a'] if x not in ignored_files]
python: how to get all members of an array except for ones that match a condition
I'm trying to create an array of all .asm files I need to build except for one that is causing me trouble right now. Here's what I have, based on the Scons "Handling Common Cases" page: projfiles['buildasm'] = ['#build/'+os.path.splitext(x)[0]+'.asm' for x in projfiles['a']]; (this maps paths of the form 'foo.a' to '#build/foo.asm') I want to run this for each member of projfiles['a'] except if a member of the array matches 'baz.a'. How can I do this?
[ "projfiles['buildasm'] = ['#build/'+os.path.splitext(x)[0]+'.asm' for x in projfiles['a'] if x != 'baz.a']\n\nor more generally:\nignored_files = ['baz.a',\n 'foo.a',\n 'xyzzy.a',\n ]\nprojfiles['buildasm'] = ['#build/'+os.path.splitext(x)[0]+'.asm' for x in projfiles['a'] if x not in ignored_files]\n\n" ]
[ 7 ]
[]
[]
[ "python" ]
stackoverflow_0001467930_python.txt
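If the exclusion is a pattern rather than a fixed name, the same list comprehension works with any predicate. A small self-contained sketch, with sample data and an fnmatch-style pattern standing in for the real condition:

import fnmatch
import os

projfiles = {'a': ['foo.a', 'bar.a', 'baz.a', 'baz_old.a']}    # sample data

def keep(name):
    return not fnmatch.fnmatch(name, 'baz*.a')                 # example condition

projfiles['buildasm'] = ['#build/' + os.path.splitext(x)[0] + '.asm'
                         for x in projfiles['a'] if keep(x)]
print projfiles['buildasm']    # -> ['#build/foo.asm', '#build/bar.asm']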
Q: refer to map via. maps() action in python/ pylonshq I just started to learn python, and i'm totally new and n00b. Normally i work with php. I choose to use this framework: http://pylonshq.com/ I have created an map called ajax in my controller map. now i just need my "htaccees" file to find the ajax map. I want the file to go into the map /ajax/ where the file ajax_load.py is. Right now, it looks like below. But i can't make it work :/ map.connect('/ajax/{action}/', controller='ajax.ajax_load') I hope someone can help me ! A: solution : http://pylonshq.com/docs/en/0.9.7/configuration/ this is how it works-> $ paster controller ajax/ajax_load
refer to map via. maps() action in python/ pylonshq
I just started to learn python, and i'm totally new and n00b. Normally i work with php. I choose to use this framework: http://pylonshq.com/ I have created an map called ajax in my controller map. now i just need my "htaccees" file to find the ajax map. I want the file to go into the map /ajax/ where the file ajax_load.py is. Right now, it looks like below. But i can't make it work :/ map.connect('/ajax/{action}/', controller='ajax.ajax_load') I hope someone can help me !
[ "solution :\nhttp://pylonshq.com/docs/en/0.9.7/configuration/\nthis is how it works->\n$ paster controller ajax/ajax_load\n" ]
[ 0 ]
[]
[]
[ "pylons", "python" ]
stackoverflow_0001455446_pylons_python.txt
Q: How does this work? So I'm trying to comprehend the source file for csv2rec in matplotlib.mlab. It is used to take a csv file and parse the data into certain formats. So it may take a string '234' and convert it to int. or take a date string and make it into python datetimes. def get_converters(reader): converters = None for i, row in enumerate(reader): if i==0: converters = [mybool]*len(row) if checkrows and i>checkrows: break #print i, len(names), len(row) #print 'converters', zip(converters, row) for j, (name, item) in enumerate(zip(names, row)): func = converterd.get(j) if func is None: func = converterd.get(name) if func is None: #if not item.strip(): continue func = converters[j] if len(item.strip()): func = get_func(name, item, func) else: # how should we handle custom converters and defaults? func = with_default_value(func, None) converters[j] = func return converters My issue with this function is 'converters.' It starts off as None. Then later 'func = converters[j]' j I know is a number which is just created through enumeration. So it is looking for the corresponding converters item as indexed by j. But there is nothing in converters because it is None right? Unless python programs don't have to be read from top to bottom? In that case we get the func from the next two lines "if len(item.st....etc)" or from the 'else:' section. But, I just assumed it would have to be read from top to bottom. I don't know if any of the other things are important so I just included the whole function. converterd is a dictionary mapping I believe that the user can provide as a parameter to find a converter automatically. checkrows is just a number provided by the user as a parameter in the beginning to check for validity. It is by default None. I'm still kind of a beginner, so just fyi. =) Thanks everyone. This site is so helpful! A: Converters gets set again at the beginning of the loop with if i==0: converters = [mybool]*len(row) So after that it's not None anymore. A: Unless I'm missing something, on the first iteration "i" is 0, so the following is executed: converters = [mybool]*len(row) and that initializes "converters" A: First, converters = None sets an initial value for converters. This way, if the iteration doesn't happen (because readers might be empty) then when the function returns converters it will exist and have the value None. If the iteration over readers happens, then converters is immediately reset to a more meaningful value in the first pass through the iteration (when i==0): converters = [mybool]*len(row)
How does this work?
So I'm trying to comprehend the source file for csv2rec in matplotlib.mlab. It is used to take a csv file and parse the data into certain formats. So it may take a string '234' and convert it to int. or take a date string and make it into python datetimes. def get_converters(reader): converters = None for i, row in enumerate(reader): if i==0: converters = [mybool]*len(row) if checkrows and i>checkrows: break #print i, len(names), len(row) #print 'converters', zip(converters, row) for j, (name, item) in enumerate(zip(names, row)): func = converterd.get(j) if func is None: func = converterd.get(name) if func is None: #if not item.strip(): continue func = converters[j] if len(item.strip()): func = get_func(name, item, func) else: # how should we handle custom converters and defaults? func = with_default_value(func, None) converters[j] = func return converters My issue with this function is 'converters.' It starts off as None. Then later 'func = converters[j]' j I know is a number which is just created through enumeration. So it is looking for the corresponding converters item as indexed by j. But there is nothing in converters because it is None right? Unless python programs don't have to be read from top to bottom? In that case we get the func from the next two lines "if len(item.st....etc)" or from the 'else:' section. But, I just assumed it would have to be read from top to bottom. I don't know if any of the other things are important so I just included the whole function. converterd is a dictionary mapping I believe that the user can provide as a parameter to find a converter automatically. checkrows is just a number provided by the user as a parameter in the beginning to check for validity. It is by default None. I'm still kind of a beginner, so just fyi. =) Thanks everyone. This site is so helpful!
[ "Converters gets set again at the beginning of the loop with\nif i==0:\n converters = [mybool]*len(row)\n\nSo after that it's not None anymore.\n", "Unless I'm missing something, on the first iteration \"i\" is 0, so the following is executed:\nconverters = [mybool]*len(row)\n\nand that initializes \"converters\"\n", "First,\nconverters = None\n\nsets an initial value for converters. This way, if the iteration doesn't happen (because readers might be empty) then when the function returns converters it will exist and have the value None.\nIf the iteration over readers happens, then converters is immediately reset to a more meaningful value in the first pass through the iteration (when i==0):\nconverters = [mybool]*len(row)\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "function", "matplotlib", "python" ]
stackoverflow_0001467902_function_matplotlib_python.txt
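The pattern the answers describe — bind a placeholder before the loop, then replace it with a real value on the first pass — is easier to see in miniature. This sketch is only illustrative and is not taken from the csv2rec source:

def column_widths(rows):
    widths = None                     # placeholder in case `rows` is empty
    for i, row in enumerate(rows):
        if i == 0:
            widths = [0] * len(row)   # real initial value, sized to the data
        for j, item in enumerate(row):
            widths[j] = max(widths[j], len(item))
    return widths                     # still None if there were no rows

print column_widths([['ab', 'c'], ['d', 'efg']])   # -> [2, 3]
print column_widths([])                            # -> None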
Q: How can I print only every third index in Perl or Python? How can I do a for() or foreach() loop in Python and Perl, respectively, that only prints every third index? I need to move every third index to a new array. A: Perl: As with draegtun's answer, but using a count var: my $i; my @new = grep {not ++$i % 3} @list; A: Python print list[::3] # print it newlist = list[::3] # copy it Perl for ($i = 0; $i < @list; $i += 3) { print $list[$i]; # print it push @y, $list[$i]; # copy it } A: Perl 5.10 new state variables comes in very handy here: my @every_third = grep { state $n = 0; ++$n % 3 == 0 } @list; Also note you can provide a list of elements to slice: my @every_third = @list[ 2, 5, 8 ]; # returns 3rd, 5th & 9th items in list You can dynamically create this slice list using map (see Gugod's excellent answer) or a subroutine: my @every_third = @list[ loop( start => 2, upto => $#list, by => 3 ) ]; sub loop { my ( %p ) = @_; my @list; for ( my $i = $p{start} || 0; $i <= $p{upto}; $i += $p{by} ) { push @list, $i; } return @list; } Update: Regarding runrig's comment... this is "one way" to make it work within a loop: my @every_third = sub { grep { state $n = 0; ++$n % 3 == 0 } @list }->(); A: Perl: # The initial array my @a = (1..100); # Copy it, every 3rd elements my @b = @a[ map { 3 * $_ } 0..$#a/3 ]; # Print it. space-delimited $, = " "; say @b; A: Python: for x in a[::3]: something(x) A: You could do a slice in Perl. my @in = ( 1..10 ); # need only 1/3 as many indexes. my @index = 1..(@in/3); # adjust the indexes. $_ = 3 * $_ - 1 for @index; # These would also work # $_ *= 3, --$_ for @index; # --($_ *= 3) for @index my @out = @in[@index]; A: In Perl: $size = @array; for ($i=0; $i<$size; $i+=3) # or start from $i=2, depends what you mean by "every third index" { print "$array[$i] "; } A: @array = qw(1 2 3 4 5 6 7 8 9); print @array[(grep { ($_ + 1) % 3 == 0 } (1..$#array))];
How can I print only every third index in Perl or Python?
How can I do a for() or foreach() loop in Python and Perl, respectively, that only prints every third index? I need to move every third index to a new array.
[ "Perl:\nAs with draegtun's answer, but using a count var:\nmy $i;\nmy @new = grep {not ++$i % 3} @list;\n\n", "Python\nprint list[::3] # print it\nnewlist = list[::3] # copy it\n\nPerl\nfor ($i = 0; $i < @list; $i += 3) {\n print $list[$i]; # print it\n push @y, $list[$i]; # copy it\n}\n\n", "Perl 5.10 new state variables comes in very handy here:\nmy @every_third = grep { state $n = 0; ++$n % 3 == 0 } @list;\n\n\nAlso note you can provide a list of elements to slice:\nmy @every_third = @list[ 2, 5, 8 ]; # returns 3rd, 5th & 9th items in list\n\nYou can dynamically create this slice list using map (see Gugod's excellent answer) or a subroutine: \nmy @every_third = @list[ loop( start => 2, upto => $#list, by => 3 ) ];\n\nsub loop {\n my ( %p ) = @_;\n my @list;\n\n for ( my $i = $p{start} || 0; $i <= $p{upto}; $i += $p{by} ) {\n push @list, $i;\n }\n\n return @list;\n}\n\n\nUpdate:\nRegarding runrig's comment... this is \"one way\" to make it work within a loop:\nmy @every_third = sub { grep { state $n = 0; ++$n % 3 == 0 } @list }->();\n\n", "Perl:\n# The initial array\nmy @a = (1..100);\n\n# Copy it, every 3rd elements\nmy @b = @a[ map { 3 * $_ } 0..$#a/3 ];\n\n# Print it. space-delimited\n$, = \" \";\nsay @b;\n\n", "Python:\nfor x in a[::3]:\n something(x)\n\n", "You could do a slice in Perl.\nmy @in = ( 1..10 );\n\n# need only 1/3 as many indexes.\nmy @index = 1..(@in/3);\n\n# adjust the indexes.\n$_ = 3 * $_ - 1 for @index;\n# These would also work\n# $_ *= 3, --$_ for @index;\n# --($_ *= 3) for @index\n\nmy @out = @in[@index];\n\n", "In Perl:\n$size = @array; \nfor ($i=0; $i<$size; $i+=3) # or start from $i=2, depends what you mean by \"every third index\"\n{ \n print \"$array[$i] \"; \n} \n\n", "\n@array = qw(1 2 3 4 5 6 7 8 9);\nprint @array[(grep { ($_ + 1) % 3 == 0 } (1..$#array))];\n\n" ]
[ 16, 12, 9, 9, 8, 5, 3, 1 ]
[]
[]
[ "arrays", "perl", "python" ]
stackoverflow_0001464923_arrays_perl_python.txt
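On the Python side, "every third" is just a question of where the slice starts; the extended slice syntax is start:stop:step:

a = range(1, 11)      # [1, 2, ..., 10]

print a[::3]          # [1, 4, 7, 10]  -- every third item, starting at index 0
print a[2::3]         # [3, 6, 9]      -- the 3rd, 6th and 9th items
b = a[2::3]           # copies that selection into a new list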
Q: How to contribute improvements to packages hosted on Cheeseshop ( pypi )? I've been using zc.buildout more and more and I'm encountering problems with some recipes that I have solutions to. These packages generally fall into several categories: Package with no obvious links to a project site Package with links to free hosted service like github or google code Setup #2 is better then #1, but not much better because for both of these situations, I would have to wait for the developer to apply these changes before i can use the updated package buildout. What I've been doing up to this point is basically forking the package, giving it a different name and uploading it to pypi, but this is creating redundancy and I think only aggravating the problem. One possible solution, is to use to use a personal server package index where I would upload updated versions of the code until the developer updates he/her package. This is doable, but it adds additional work, that I would prefer to avoid. Is there a better way to do this? Thank you A: Your "upload my personalized fork" solution sounds like a terrible idea. You should try http://pypi.python.org/pypi/collective.recipe.patch which lets you automatically patch eggs. Try setting up a local PyPi-compatible index. I think you can also point find-links = at a directory (not just a http:// url) containing your personal versions of those "almost good enough" packages. You can also try monkey patching the defective package, or take advantage of the Zope component model to override the necessary bits in a new package. Often the real authors are listed somewhere in the source code of a package, even if they decided not to put their names up on PyPi. I've been trying to cut down on the number of custom versions of packages I use. Usually I work with customized packages as develop eggs by linking src/some.project to my checkout of that project's code. I don't have to build a new egg or reinstall every time I edit those packages. A lot of Python packages used in buildouts are hosted in Plone's svn collective. It's relatively easy to get commit access to that repository.
How to contribute improvements to packages hosted on Cheeseshop ( pypi )?
I've been using zc.buildout more and more and I'm encountering problems with some recipes that I have solutions to. These packages generally fall into several categories: Package with no obvious links to a project site Package with links to free hosted service like github or google code Setup #2 is better then #1, but not much better because for both of these situations, I would have to wait for the developer to apply these changes before i can use the updated package buildout. What I've been doing up to this point is basically forking the package, giving it a different name and uploading it to pypi, but this is creating redundancy and I think only aggravating the problem. One possible solution, is to use to use a personal server package index where I would upload updated versions of the code until the developer updates he/her package. This is doable, but it adds additional work, that I would prefer to avoid. Is there a better way to do this? Thank you
[ "Your \"upload my personalized fork\" solution sounds like a terrible idea. You should try http://pypi.python.org/pypi/collective.recipe.patch which lets you automatically patch eggs. Try setting up a local PyPi-compatible index. I think you can also point find-links = at a directory (not just a http:// url) containing your personal versions of those \"almost good enough\" packages. You can also try monkey patching the defective package, or take advantage of the Zope component model to override the necessary bits in a new package. Often the real authors are listed somewhere in the source code of a package, even if they decided not to put their names up on PyPi.\nI've been trying to cut down on the number of custom versions of packages I use. Usually I work with customized packages as develop eggs by linking src/some.project to my checkout of that project's code. I don't have to build a new egg or reinstall every time I edit those packages.\nA lot of Python packages used in buildouts are hosted in Plone's svn collective. It's relatively easy to get commit access to that repository.\n" ]
[ 3 ]
[]
[]
[ "buildout", "collaboration", "pypi", "python" ]
stackoverflow_0001468476_buildout_collaboration_pypi_python.txt
Q: Twisted(asynch server) vs Django(or any other framework) I need help understanding what the advantage of using an asynch framework is. Suppose I want to develop a simple chat web app. Why cant I write python code in the Django framework that does long polling where I dont send a response back the server until someone enters a new msg. What does Twisted provide that gives it an advantage for real-time apps like the chat app? Sorry I am obviously little confused about the need for an asynchronous framework. A: First off Django is a framework for writing web apps so it provides ORM, html templating, it requires running an http server etc. Twisted helps to write much lower level code than that. You could use twisted to write the http server Django runs on. If you use Django you are limited to http model, with twisted it could be communicating in any protocol you like including push protocols. So for your chat example you get a server that scales better since it can push comments to people who have logged in VS with django every client having to poll repeatedly. edited to reflect comments by: sos-skyl A: Asynchronous servers support much larger numbers of simultaneous client connections. More conventional servers come up against thread and process limits when servicing large number of concurrent clients, particularly those with long-lived connections. Async servers can also provide better performance as they avoid the overheads of e.g. thread context switching. As well as the Twisted framework, there are also asynchronous server building blocks in Python's standard library: previously asyncore and asynchat, but now also asyncio. A: The biggest advantage for me is that Twisted gives me an application that has state, and can communicate with many different clients using many protocols. For me, my Twisted server communicates with a number of sensors installed in houses and businesses that monitor power usage. It stores the data and keeps recent data and state in handy-dandy python classes in memory. Requests via xmlrpc from django get this state and can present recent data to the user. My Gridspy stuff is still in development so the actual site at your.gridspy.co.nz is a bit pre-alpha. The best part is that you need surprisingly little code to make an effective server. An amazing amount of the work is done for you. A: In twisted you can implement protocols of your own. Django certainly can't do this. A: You could use WHIFF instead of either :). Check out http://aaron.oirt.rutgers.edu/myapp/gfChat/nucularChatRoom which uses a javascript polling loop with json to check for server updates. You could probably do something similar in Django, but I don't know how because I gave up on Django. btw: I'm hoping to move this demo onto the google app engine as a more complete service when my life calms down a bit. A: If you'd like to look at some source for integrating Twisted and Django, have a look at Yardbird.
Twisted(asynch server) vs Django(or any other framework)
I need help understanding what the advantage of using an asynch framework is. Suppose I want to develop a simple chat web app. Why cant I write python code in the Django framework that does long polling where I dont send a response back the server until someone enters a new msg. What does Twisted provide that gives it an advantage for real-time apps like the chat app? Sorry I am obviously little confused about the need for an asynchronous framework.
[ "First off Django is a framework for writing web apps so it provides ORM, html templating, it requires running an http server etc. Twisted helps to write much lower level code than that. You could use twisted to write the http server Django runs on. If you use Django you are limited to http model, with twisted it could be communicating in any protocol you like including push protocols. So for your chat example you get a server that scales better since it can push comments to people who have logged in VS with django every client having to poll repeatedly.\nedited to reflect comments by: sos-skyl\n", "Asynchronous servers support much larger numbers of simultaneous client connections. More conventional servers come up against thread and process limits when servicing large number of concurrent clients, particularly those with long-lived connections. Async servers can also provide better performance as they avoid the overheads of e.g. thread context switching.\nAs well as the Twisted framework, there are also asynchronous server building blocks in Python's standard library: previously asyncore and asynchat, but now also asyncio.\n", "The biggest advantage for me is that Twisted gives me an application that has state, and can communicate with many different clients using many protocols.\nFor me, my Twisted server communicates with a number of sensors installed in houses and businesses that monitor power usage. It stores the data and keeps recent data and state in handy-dandy python classes in memory. Requests via xmlrpc from django get this state and can present recent data to the user. My Gridspy stuff is still in development so the actual site at your.gridspy.co.nz is a bit pre-alpha.\nThe best part is that you need surprisingly little code to make an effective server. An amazing amount of the work is done for you.\n", "In twisted you can implement protocols of your own. Django certainly can't do this.\n", "You could use WHIFF instead of either :). Check out\nhttp://aaron.oirt.rutgers.edu/myapp/gfChat/nucularChatRoom\nwhich uses a javascript polling loop with json to check\nfor server updates. You could probably do something similar\nin Django, but I don't know how because I gave up on Django.\nbtw: I'm hoping to move this demo onto the google app engine\nas a more complete service when\nmy life calms down a bit.\n", "If you'd like to look at some source for integrating Twisted and Django, have a look at Yardbird.\n" ]
[ 19, 16, 5, 3, 0, 0 ]
[]
[]
[ "asynchronous", "django", "python", "real_time", "twisted" ]
stackoverflow_0001412169_asynchronous_django_python_real_time_twisted.txt
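To make the push-versus-poll point concrete, here is a minimal line-based chat server sketch in Twisted: every line a client sends is pushed straight to all connected clients, with no polling. The port number is arbitrary.

from twisted.internet import protocol, reactor
from twisted.protocols import basic

class Chat(basic.LineReceiver):
    def connectionMade(self):
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def lineReceived(self, line):
        for client in self.factory.clients:   # push to everyone immediately
            client.sendLine(line)

factory = protocol.ServerFactory()
factory.protocol = Chat
factory.clients = []
reactor.listenTCP(8000, factory)              # arbitrary port
reactor.run()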
Q: Python CreateFile Cannot Find PhysicalMemory I am trying to access the Physical Memory of a Windows 2000 system (trying to do this without a memory dumping tool). My understanding is that I need to do this using the CreateFile function to create a handle. I have used an older version of win32dd to help me through this. Other documentation on the web points me to using either "\Device\PhysicalMemory" or "\\.\PhysicalMemory". Unfortunately, I get the same error for each. Traceback (most recent call last): File "testHandles.py", line 101, in (module) File "testHandles.py", line 72, in createFileHandle pywintypes.error: (3, 'CreateFile', 'The system cannot find the path specified.') Actually, the error number returned is different for each run \\.\PhysicalMemory == 3 and \Device\PhysicalMemory == 2. Review of pywin32, win32file, createfile, pyhandle, and pywintypes did not produce information as to the different return values. Here is my code. I am using py2exe to get this working on Windows 2000 (and yes it compiles successfully). I realize that I might also have a problem with DeviceIoControl but right now I am concentrating on CreateFile. # testHandles.py import ctypes import socket import struct import sys import win32file import pywintypes def createFileHandle(): outLoc = pywintypes.Unicode("C:\\Documents and Settings\\Administrator\\My Documents\\pymemdump_dotPM.dd") handleLoc = pywintypes.Unicode("\\\\.\\PhysicalMemory") #handleLoc = pywintypes.Unicode("\\Device\\PhysicalMemory") placeHolder = 0 BytesReturned = 0 # Device = CreateFile(L"\\\\.\\win32dd", GENERIC_ALL, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); # CreateFile(fileName, desiredAccess , shareMode , attributes , creationDisposition , flagsAndAttributes , hTemplateFile ) #hMemHandle = win32file.CreateFile(handleLoc, GENERIC_ALL, SHARE_READ, None, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, None) hMemHandle = win32file.CreateFile(handleLoc, win32file.GENERIC_READ, win32file.FILE_SHARE_READ, None, win32file.OPEN_EXISTING, win32file.FILE_ATTRIBUTE_NORMAL, None) print "hMemHandle: %s" % hMemHandle if (hMemHandle == NO_ERROR): print "Could not build hMemHandle" sys.exit() # We send destination path to the driver. #if (!DeviceIoControl(hMemHandle, 0x19880922, outLoc, (ULONG)(wcslen(outLoc) + 1) * sizeof(TCHAR), NULL, 0, &BytesReturned, NULL)) if (ctypes.windll.Kernel32.DeviceIoControl(hMemHandle, 0x19880922, outLoc, 5, NULL, 0, BytesReturned, NULL)): print "Error: DeviceIoControl(), Cannot send IOCTL.\n" else: print "[win32dd] Physical memory dumped. You can now check %s.\n" % outLoc # Dump memory createFileHandle() Thank you, Cutaway A: I don't believe it's possible to access the physical memory object from user mode land in Windows. As your win32dd link suggests, you will need to do it from kernel mode.
Python CreateFile Cannot Find PhysicalMemory
I am trying to access the Physical Memory of a Windows 2000 system (trying to do this without a memory dumping tool). My understanding is that I need to do this using the CreateFile function to create a handle. I have used an older version of win32dd to help me through this. Other documentation on the web points me to using either "\Device\PhysicalMemory" or "\\.\PhysicalMemory". Unfortunately, I get the same error for each. Traceback (most recent call last): File "testHandles.py", line 101, in (module) File "testHandles.py", line 72, in createFileHandle pywintypes.error: (3, 'CreateFile', 'The system cannot find the path specified.') Actually, the error number returned is different for each run \\.\PhysicalMemory == 3 and \Device\PhysicalMemory == 2. Review of pywin32, win32file, createfile, pyhandle, and pywintypes did not produce information as to the different return values. Here is my code. I am using py2exe to get this working on Windows 2000 (and yes it compiles successfully). I realize that I might also have a problem with DeviceIoControl but right now I am concentrating on CreateFile. # testHandles.py import ctypes import socket import struct import sys import win32file import pywintypes def createFileHandle(): outLoc = pywintypes.Unicode("C:\\Documents and Settings\\Administrator\\My Documents\\pymemdump_dotPM.dd") handleLoc = pywintypes.Unicode("\\\\.\\PhysicalMemory") #handleLoc = pywintypes.Unicode("\\Device\\PhysicalMemory") placeHolder = 0 BytesReturned = 0 # Device = CreateFile(L"\\\\.\\win32dd", GENERIC_ALL, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); # CreateFile(fileName, desiredAccess , shareMode , attributes , creationDisposition , flagsAndAttributes , hTemplateFile ) #hMemHandle = win32file.CreateFile(handleLoc, GENERIC_ALL, SHARE_READ, None, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, None) hMemHandle = win32file.CreateFile(handleLoc, win32file.GENERIC_READ, win32file.FILE_SHARE_READ, None, win32file.OPEN_EXISTING, win32file.FILE_ATTRIBUTE_NORMAL, None) print "hMemHandle: %s" % hMemHandle if (hMemHandle == NO_ERROR): print "Could not build hMemHandle" sys.exit() # We send destination path to the driver. #if (!DeviceIoControl(hMemHandle, 0x19880922, outLoc, (ULONG)(wcslen(outLoc) + 1) * sizeof(TCHAR), NULL, 0, &BytesReturned, NULL)) if (ctypes.windll.Kernel32.DeviceIoControl(hMemHandle, 0x19880922, outLoc, 5, NULL, 0, BytesReturned, NULL)): print "Error: DeviceIoControl(), Cannot send IOCTL.\n" else: print "[win32dd] Physical memory dumped. You can now check %s.\n" % outLoc # Dump memory createFileHandle() Thank you, Cutaway
[ "I don't believe it's possible to access the physical memory object from user mode land in Windows. As your win32dd link suggests, you will need to do it from kernel mode.\n" ]
[ 0 ]
[]
[]
[ "createfile", "ctypes", "memory", "python" ]
stackoverflow_0001468130_createfile_ctypes_memory_python.txt
Q: Why can't I import the 'math' library when embedding python in c? I'm using the example in python's 2.6 docs to begin a foray into embedding some python in C. The example C-code does not allow me to execute the following 1 line script: import math Using line: ./tmp.exe tmp foo bar it complains Traceback (most recent call last): File "/home/rbroger1/scripts/tmp.py", line 1, in <module> import math ImportError: [...]/python/2.6.2/lib/python2.6/lib-dynload/math.so: undefined symbol: PyInt_FromLong When I do nm on my generated binary (tmp.exe) it shows 0000000000420d30 T PyInt_FromLong The function seems to be defined, so why can't the shared object find the function? A: I'm using Python 2.6, and I successfully compiled and ran that same example code that you listed, without changing anything in the source. $ gcc python.c -I/usr/include/python2.6/ /usr/lib/libpython2.6.so $ ./a.out random randint 1 100 Result of call: 39 $ ./a.out random randint 1 100 Result of call: 57 I specifically chose the random module because it does have from math import log,... so it is certainly importing the math module as well. Your issue is probably due to how you're linking; see this forum post for a similar issue someone else had. I can't find the links again, but it seems like there are some common issues when trying to link against Python's static library then importing modules that require a dynamic library.
Why can't I import the 'math' library when embedding python in c?
I'm using the example in python's 2.6 docs to begin a foray into embedding some python in C. The example C-code does not allow me to execute the following 1 line script: import math Using line: ./tmp.exe tmp foo bar it complains Traceback (most recent call last): File "/home/rbroger1/scripts/tmp.py", line 1, in <module> import math ImportError: [...]/python/2.6.2/lib/python2.6/lib-dynload/math.so: undefined symbol: PyInt_FromLong When I do nm on my generated binary (tmp.exe) it shows 0000000000420d30 T PyInt_FromLong The function seems to be defined, so why can't the shared object find the function?
[ "I'm using Python 2.6, and I successfully compiled and ran that same example code that you listed, without changing anything in the source. \n\n$ gcc python.c -I/usr/include/python2.6/ /usr/lib/libpython2.6.so\n$ ./a.out random randint 1 100\nResult of call: 39\n$ ./a.out random randint 1 100\nResult of call: 57\n\nI specifically chose the random module because it does have from math import log,... so it is certainly importing the math module as well.\nYour issue is probably due to how you're linking; see this forum post for a similar issue someone else had. I can't find the links again, but it seems like there are some common issues when trying to link against Python's static library then importing modules that require a dynamic library.\n" ]
[ 2 ]
[]
[]
[ "c", "c++", "python" ]
stackoverflow_0001469370_c_c++_python.txt
Q: Networkx node traversal Using Python's Networkx library, I created an undirected graph to represent a relationship network between various people. A snippet of my code is below: import networkx as nx def creategraph(filepath): G=nx.Graph() #All the various nodes and edges are added in this stretch of code. return G From what I understand, each node is basically a dictionary. The problem that this presents to me is that I want to perform a different kind of Random Walk algorithm. Now before you jump on me and tell me to use one of the standard functions of the Networkx library, I want to point out that it is a custom algorithm. Suppose I run the creategraph function, and the G object is returned and stored in another object (let's call it X). I want to start off at a node called 'Bob.' Bob is connected to Alice and Joe. Now, I want to reassign Y to point to either Alice or Bob at random (with the data I'm dealing with, a given node could have hundreds of edges leaving it). How do I go about doing this? Also, how do I deal with unicode entries in a given node's dict (like how Alice and Joe are listed below?) X = creategraph("filename") Y=X['Bob'] print Y >> {u'Alice': {}, u'Joe': {}} A: The choice function in the random module could help with the selection process. You don't really need to worry about the distinction between unicode and string unless you're trying to write them out somewhere as sometimes unicode characters aren't translatable into the ASCII charset that Python defaults to. The way you'd use random.choice would be something along the lines of: Y = Y[random.choice(Y.keys())]
Networkx node traversal
Using Python's Networkx library, I created an undirected graph to represent a relationship network between various people. A snippet of my code is below: import networkx as nx def creategraph(filepath): G=nx.Graph() #All the various nodes and edges are added in this stretch of code. return G From what I understand, each node is basically a dictionary. The problem that this presents to me is that I want to perform a different kind of Random Walk algorithm. Now before you jump on me and tell me to use one of the standard functions of the Networkx library, I want to point out that it is a custom algorithm. Suppose I run the creategraph function, and the G object is returned and stored in another object (let's call it X). I want to start off at a node called 'Bob.' Bob is connected to Alice and Joe. Now, I want to reassign Y to point to either Alice or Bob at random (with the data I'm dealing with, a given node could have hundreds of edges leaving it). How do I go about doing this? Also, how do I deal with unicode entries in a given node's dict (like how Alice and Joe are listed below?) X = creategraph("filename") Y=X['Bob'] print Y >> {u'Alice': {}, u'Joe': {}}
[ "The choice function in the random module could help with the selection process. You don't really need to worry about the distinction between unicode and string unless you're trying to write them out somewhere as sometimes unicode characters aren't translatable into the ASCII charset that Python defaults to.\nThe way you'd use random.choice would be something along the lines of:\nY = Y[random.choice(Y.keys())]\n\n" ]
[ 4 ]
[]
[]
[ "graph", "networkx", "nodes", "python" ]
stackoverflow_0001469653_graph_networkx_nodes_python.txt
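A hedged sketch of the kind of custom walk described above, using random.choice over a node's neighbours. The edges and the walk length are made up for illustration; only the node names come from the question.

import random
import networkx as nx

G = nx.Graph()
G.add_edges_from([(u'Bob', u'Alice'), (u'Bob', u'Joe'), (u'Alice', u'Joe')])

def random_walk(graph, start, steps):
    node = start
    path = [node]
    for _ in range(steps):
        neighbours = list(graph[node])    # keys of the node's adjacency dict
        if not neighbours:
            break                         # dead end, nothing to follow
        node = random.choice(neighbours)
        path.append(node)
    return path

print random_walk(G, u'Bob', 5)    # e.g. [u'Bob', u'Joe', u'Alice', ...]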
Q: Django: ImportError: cannot import name Count I just pulled from my github and tried to setup my application on my Ubuntu (I originally ran my app on a Mac at home). I re-created the database and reconfigured the settings.py -- also update the template locations, etc. However, when I run the server "python manage.py runserver" get an error that says: ImportError: cannot import name Count I imported the Count in my views.py to use the annotate(): from django.shortcuts import render_to_response from django.http import Http404, HttpResponse, HttpResponseRedirect from django.db.models import Count from mysite.blog.models import Blog from mysite.blog.models import Comment from mysite.blog.forms import CommentForm def index(request): #below, I used annotate() blog_posts = Blog.objects.all().annotate(Count('comment')).order_by('-pub_date')[:5] return render_to_response('blog/index.html', {'blog_posts': blog_posts}) Why is not working? Also, if I remove the "import Count" line, the error goes away and my app functions like normal. Thanks, Wenbert UPDATE: my models.py looks like this: from django.db import models class Blog(models.Model): author = models.CharField(max_length=200) title = models.CharField(max_length=200) content = models.TextField() pub_date = models.DateTimeField('date published') def __unicode__(self): return self.content def was_published_today(self): return self.pub_date.date() == datetime.date.today() class Comment(models.Model): blog = models.ForeignKey(Blog) author = models.CharField(max_length=200) comment = models.TextField() url = models.URLField() pub_date = models.DateTimeField('date published') def __unicode__(self): return self.comment UPDATE 2 My urls.py looks like this: from django.conf.urls.defaults import * from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', (r'^admin/(.*)', admin.site.root), (r'^blog/$','mysite.blog.views.index'), (r'^display_meta/$','mysite.blog.views.display_meta'), (r'^blog/post/(?P<blog_id>\d+)/$','mysite.blog.views.post'), ) A: I've updated my Django and it turns out that your import statement is correct as module structure was changed a bit. Are you sure your Django is of latest version? A: This sounds like you're not using Django 1.1. Double check by opening up the Django shell and running import django print django.VERSION You should see something like (1, 1, 0, 'final', 0) if you're using 1.1
Django: ImportError: cannot import name Count
I just pulled from my github and tried to setup my application on my Ubuntu (I originally ran my app on a Mac at home). I re-created the database and reconfigured the settings.py -- also update the template locations, etc. However, when I run the server "python manage.py runserver" get an error that says: ImportError: cannot import name Count I imported the Count in my views.py to use the annotate(): from django.shortcuts import render_to_response from django.http import Http404, HttpResponse, HttpResponseRedirect from django.db.models import Count from mysite.blog.models import Blog from mysite.blog.models import Comment from mysite.blog.forms import CommentForm def index(request): #below, I used annotate() blog_posts = Blog.objects.all().annotate(Count('comment')).order_by('-pub_date')[:5] return render_to_response('blog/index.html', {'blog_posts': blog_posts}) Why is not working? Also, if I remove the "import Count" line, the error goes away and my app functions like normal. Thanks, Wenbert UPDATE: my models.py looks like this: from django.db import models class Blog(models.Model): author = models.CharField(max_length=200) title = models.CharField(max_length=200) content = models.TextField() pub_date = models.DateTimeField('date published') def __unicode__(self): return self.content def was_published_today(self): return self.pub_date.date() == datetime.date.today() class Comment(models.Model): blog = models.ForeignKey(Blog) author = models.CharField(max_length=200) comment = models.TextField() url = models.URLField() pub_date = models.DateTimeField('date published') def __unicode__(self): return self.comment UPDATE 2 My urls.py looks like this: from django.conf.urls.defaults import * from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', (r'^admin/(.*)', admin.site.root), (r'^blog/$','mysite.blog.views.index'), (r'^display_meta/$','mysite.blog.views.display_meta'), (r'^blog/post/(?P<blog_id>\d+)/$','mysite.blog.views.post'), )
[ "I've updated my Django and it turns out that your import statement is correct as module structure was changed a bit. Are you sure your Django is of latest version?\n", "This sounds like you're not using Django 1.1. Double check by opening up the Django shell and running\nimport django\nprint django.VERSION\n\nYou should see something like (1, 1, 0, 'final', 0) if you're using 1.1\n" ]
[ 1, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001469614_django_python.txt
Q: Making a facade in Python 2.5 I want to have a Python class that acts as a wrapper for another Python class. Something like this class xxx: name = property( fset=lambda self, v: setattr(self.inner, 'name', v), fget=lambda self: getattr(self.inner, 'name' )) def setWrapper( self, obj ) self.inner = obj So when someone says xxx().x = 'hello' I want it to set the value on xxx().inner.whatever and not xxx().x. Is this possible? I have been trying lambdas, but to no avail. I'm using Python 2.5 Edit: Now my wife isn't rushing me to bed so I can flush out the code a bit more. I kind of had an idea about what you guys had below, but from the docs it seemed that you should avoid overriding __setattr__/__getattr__ and use property instead. If it is not possible to do this via the property function then I will use the __setattr__/__getattr__? Thanks A: This is where you use the __setattr__ and __getattr__ methods as documented here. In short, if you do this: class Wrapper(object): def __init__(self, wrapped): object.__setattr__(self, 'inner', wrapped) def __getattr__(self, attr): return getattr(self.inner, attr) def __setattr__(self, attr, value): setattr(self.inner, attr, value) It should do what you're after. I'd highly recommend reading the docs linked to above. EDIT: As pointed out, the original version fails due to __setattr__ being called unconditionally. I've updated this example. Note that this version is reliant on new-style class behaviour (as indicated by subclassing from object). For old-style I think you'd need to mess with __dict__ as follows: class OldStyleWrapper: def __init__(self, wrapped): self.__dict__['inner'] = wrapped def __getattr__(self, attr): return getattr(self.inner, attr) def __setattr__(self, attr, value): setattr(self.inner, attr, value) A: As another answer said, __getattr__ and __setattr__ are the key, but you need care when using the latter...: class Wrapper(object): def __init__(self, wrapped): object.__setattr__(self, 'inner', wrapped) def __getattr__(self, n): return getattr(self.inner, n) def __setattr__(self, n, value): setattr(self.inner, n, value) You don't need precautions for self.inner access, because __getattr__ gets called only for attributes that aren't otherwise present; but __setattr__ gets called for EVERY attribute, so when you actually need to set self.inner (or any other attribute), you need to explicitly bypass __setattr__ (here I'm using object for the purpose, so I also inherit from object -- highly advisable anyway, in Python 2.*, otherwise you'd be making an "old-style class" [[the kind which AT LAST disappeared in Python 3]], which you really don't want to...;-). A: I believe both Alex and Ben's answers should edit their setattr call as follows: setattr(self.inner, n, value)
Making a facade in Python 2.5
I want to have a Python class that acts as a wrapper for another Python class. Something like this class xxx: name = property( fset=lambda self, v: setattr(self.inner, 'name', v), fget=lambda self: getattr(self.inner, 'name' )) def setWrapper( self, obj ) self.inner = obj So when someone says xxx().x = 'hello' I want it to set the value on xxx().inner.whatever and not xxx().x. Is this possible? I have been trying lambdas, but to no avail. I'm using Python 2.5 Edit: Now my wife isn't rushing me to bed so I can flush out the code a bit more. I kind of had an idea about what you guys had below, but from the docs it seemed that you should avoid overriding __setattr__/__getattr__ and use property instead. If it is not possible to do this via the property function then I will use the __setattr__/__getattr__? Thanks
[ "This is where you use the __setattr__ and __getattr__ methods as documented here.\nIn short, if you do this:\nclass Wrapper(object):\n def __init__(self, wrapped):\n object.__setattr__(self, 'inner', wrapped)\n\n def __getattr__(self, attr):\n return getattr(self.inner, attr)\n\n def __setattr__(self, attr, value):\n setattr(self.inner, attr, value)\n\nIt should do what you're after. I'd highly recommend reading the docs linked to above.\nEDIT: As pointed out, the original version fails due to __setattr__ being called unconditionally. I've updated this example. Note that this version is reliant on new-style class behaviour (as indicated by subclassing from object). For old-style I think you'd need to mess with __dict__ as follows:\nclass OldStyleWrapper:\n def __init__(self, wrapped):\n self.__dict__['inner'] = wrapped\n\n def __getattr__(self, attr):\n return getattr(self.inner, attr)\n\n def __setattr__(self, attr, value):\n setattr(self.inner, attr, value)\n\n", "As another answer said, __getattr__ and __setattr__ are the key, but you need care when using the latter...:\nclass Wrapper(object):\n def __init__(self, wrapped):\n object.__setattr__(self, 'inner', wrapped)\n\n def __getattr__(self, n):\n return getattr(self.inner, n)\n\n def __setattr__(self, n, value):\n setattr(self.inner, n, value)\n\nYou don't need precautions for self.inner access, because __getattr__ gets called only for attributes that aren't otherwise present; but __setattr__ gets called for EVERY attribute, so when you actually need to set self.inner (or any other attribute), you need to explicitly bypass __setattr__ (here I'm using object for the purpose, so I also inherit from object -- highly advisable anyway, in Python 2.*, otherwise you'd be making an \"old-style class\" [[the kind which AT LAST disappeared in Python 3]], which you really don't want to...;-).\n", "I believe both Alex and Ben's answers should edit their setattr call as follows:\nsetattr(self.inner, n, value)\n\n" ]
[ 6, 5, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001469591_python.txt
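A short usage sketch of the corrected wrapper from the answers, to show the forwarding in both directions; Person is just a stand-in for the inner object.

class Wrapper(object):                       # as in the answers above
    def __init__(self, wrapped):
        object.__setattr__(self, 'inner', wrapped)
    def __getattr__(self, attr):
        return getattr(self.inner, attr)
    def __setattr__(self, attr, value):
        setattr(self.inner, attr, value)

class Person(object):                        # stand-in inner object
    def __init__(self):
        self.name = 'nobody'

p = Person()
w = Wrapper(p)
w.name = 'hello'     # forwarded: sets p.name, not an attribute on w itself
print p.name         # -> hello
print w.name         # -> hello, looked up on p via __getattr__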
Q: PGP/GnuPG to encrypt
I need to use PGP/GnuPG to encrypt. Can you suggest which Python packages to use for PGP encryption, i.e. where the other side uses PGP to decrypt?
A: You could try python-gnupg. Encryption is covered in the docs here.
PGP/GnuPG to encrypt
I need to use PGP/GnuPG to encrypt. Can you suggest which Python packages to use for PGP encryption, i.e. where the other side uses PGP to decrypt?
[ "You could try python-gnupg. Encryption is covered in the docs here.\n" ]
[ 0 ]
[]
[]
[ "gnupg", "python" ]
stackoverflow_0001469798_gnupg_python.txt
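A minimal, hedged sketch of encrypting with python-gnupg as suggested above. The keyring path and recipient address are placeholders, and the recipient's public key must already be in that keyring.

import gnupg

gpg = gnupg.GPG(gnupghome='/path/to/keyring')        # placeholder keyring path
encrypted = gpg.encrypt('the message body', 'alice@example.com')
print str(encrypted)    # ASCII-armoured PGP message; any PGP/GnuPG client
                        # on the other side can decrypt it with its private key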
Q: What networking libraries/frameworks exist for Python? I was wondering what good networking libraries/frameworks there are for Python. Please provide a link to the standard API documentation for the library, and perhaps a link to a decent tutorial to get started with it. A comment or two about its advantages/disadvantages would be nice as well. A: The standard library has asyncore which is good for very simple stuff as well as the SocketServer stuff if you'd prefer something that does threads. There's also Twisted but the barrier of entry to that is a bit high if you're not used to event-driven IO. If you're after web frameworks, CherryPy is a good start or there's Django and TurboGears if you're looking for something more full-featured. A: Consider the Twisted framework. The advantage: solid reactor implementation support for almost all network protocols found in the wild well documented Disadvantages: it's huge the asynchronous APIs need some time to get used to (but once you are familiar, things are actually pretty usable) CPython itself ships with a tiny reactor/socket package. Never used it myself, though. A: Twisted is the most complete, and complex, of all Python networking frameworks. It's well-established and very complete, but it has a steep learning curve. Documentation here; FAQ here. A: In case you want to build/manipulate your own packets there is Scapy too :) The usage is pretty straight forward, it lets you do whatever you want with the packets and it's multi-Platform. Project Page: http://www.secdev.org/projects/scapy/ Docs: http://www.secdev.org/projects/scapy/doc/ Example: http://www.secdev.org/projects/scapy/demo.html
What networking libraries/frameworks exist for Python?
I was wondering what good networking libraries/frameworks there are for Python. Please provide a link to the standard API documentation for the library, and perhaps a link to a decent tutorial to get started with it. A comment or two about its advantages/disadvantages would be nice as well.
[ "The standard library has asyncore which is good for very simple stuff as well as the SocketServer stuff if you'd prefer something that does threads. There's also Twisted but the barrier of entry to that is a bit high if you're not used to event-driven IO. If you're after web frameworks, CherryPy is a good start or there's Django and TurboGears if you're looking for something more full-featured.\n", "Consider the Twisted framework. The advantage:\n\nsolid reactor implementation\nsupport for almost all network protocols found in the wild\nwell documented\n\nDisadvantages:\n\nit's huge\nthe asynchronous APIs need some time to get used to (but once you are familiar, things are actually pretty usable)\n\nCPython itself ships with a tiny reactor/socket package. Never used it myself, though.\n", "Twisted is the most complete, and complex, of all Python networking frameworks.\nIt's well-established and very complete, but it has a steep learning curve.\nDocumentation here; FAQ here.\n", "In case you want to build/manipulate your own packets there is Scapy too :)\nThe usage is pretty straight forward, it lets you do whatever you want with the packets\nand it's multi-Platform.\nProject Page: http://www.secdev.org/projects/scapy/\nDocs: http://www.secdev.org/projects/scapy/doc/\nExample: http://www.secdev.org/projects/scapy/demo.html\n" ]
[ 6, 4, 3, 2 ]
[]
[]
[ "frameworks", "networking", "python" ]
stackoverflow_0001468780_frameworks_networking_python.txt
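The SocketServer module mentioned in the first answer is the smallest of these options (renamed socketserver in Python 3). A minimal threaded echo server sketch, with an arbitrary port:

import SocketServer                 # 'socketserver' in Python 3

class EchoHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:     # one connected client
            self.wfile.write(line)  # echo each line straight back

server = SocketServer.ThreadingTCPServer(('localhost', 9999), EchoHandler)
server.serve_forever()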
Q: python win32 extensions documentation I'm new to both python and the python win32 extensions available at http://python.net/crew/skippy/win32/ but I can't find any documentation online or in the installation directories concerning what exactly the win32 extensions provide. Where is this information? A: You'll find documentation here: http://docs.activestate.com/activepython/2.4/pywin32/PyWin32.HTML (Note: most of the API docs are under 'modules' and 'objects'. Note that the documentation is very sparse here but rembember: since it's only a wrapper on top of the win32 API --> the 'full' documentation is also on the MSDN website, google should be helpful...) A: In addition to ChristopheD's recommendations I also find that Tim Golden's Python Stuff is very useful. A: Python Programming On Win32 from O'Reilly is a great, if dated, book on the subject. I've read it and is very good. Its not documentation, per se, but its really useful for a good introduction to COM programming with Python, among other advanced stuff. A: PyWin32 docs are included with ActivePython (which I highly recommend you to install). ChristopheD's link is for Python 2.4 which is an older version. For Python 2.6 version (which is the latest), here is the CHM file that contains PyWin32 docs. Note that this CHM file is also included with ActivePython itself. alt text http://dl.getdropbox.com/u/87045/permalinks/apy26-pywin32.png
python win32 extensions documentation
I'm new to both python and the python win32 extensions available at http://python.net/crew/skippy/win32/ but I can't find any documentation online or in the installation directories concerning what exactly the win32 extensions provide. Where is this information?
[ "You'll find documentation here:\nhttp://docs.activestate.com/activepython/2.4/pywin32/PyWin32.HTML\n(Note: most of the API docs are under 'modules' and 'objects'. Note that the documentation is very sparse here but rembember: since it's only a wrapper on top of the win32 API --> the 'full' documentation is also on the MSDN website, google should be helpful...)\n", "In addition to ChristopheD's recommendations I also find that Tim Golden's Python Stuff is very useful.\n", "Python Programming On Win32 from O'Reilly is a great, if dated, book on the subject. I've read it and is very good.\n\nIts not documentation, per se, but its really useful for a good introduction to COM programming with Python, among other advanced stuff.\n", "PyWin32 docs are included with ActivePython (which I highly recommend you to install). ChristopheD's link is for Python 2.4 which is an older version. For Python 2.6 version (which is the latest), here is the CHM file that contains PyWin32 docs. Note that this CHM file is also included with ActivePython itself.\nalt text http://dl.getdropbox.com/u/87045/permalinks/apy26-pywin32.png\n" ]
[ 13, 2, 2, 2 ]
[]
[]
[ "documentation", "python", "pywin32" ]
stackoverflow_0001468099_documentation_python_pywin32.txt
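To complement the documentation links above, a two-line smoke test (assuming pywin32 is installed on a Windows machine) also shows why the MSDN pages double as the reference: the Python names mirror the underlying Win32 calls directly.

import win32api

print win32api.GetComputerName()   # wraps the Win32 GetComputerName call
print win32api.GetUserName()       # wraps the Win32 GetUserName call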
Q: What is going on with this code from the Google App Engine tutorial import cgi from google.appengine.api import users from google.appengine.ext import webapp from google.appengine.ext.webapp.util import run_wsgi_app from google.appengine.ext import db class Greeting(db.Model): author = db.UserProperty() content = db.StringProperty(multiline=True) date = db.DateTimeProperty(auto_now_add=True) class MainPage(webapp.RequestHandler): def get(self): self.response.out.write('<html><body>') greetings = db.GqlQuery("SELECT * FROM Greeting ORDER BY date DESC LIMIT 10") for greeting in greetings: if greeting.author: self.response.out.write('<b>%s</b> wrote:' % greeting.author.nickname()) else: self.response.out.write('An anonymous person wrote:') self.response.out.write('<blockquote>%s</blockquote>' % cgi.escape(greeting.content)) # Write the submission form and the footer of the page self.response.out.write(""" <form action="/sign" method="post"> <div><textarea name="content" rows="3" cols="60"></textarea></div> <div><input type="submit" value="Sign Guestbook"></div> </form> </body> </html>""") class Guestbook(webapp.RequestHandler): def post(self): greeting = Greeting() if users.get_current_user(): greeting.author = users.get_current_user() greeting.content = self.request.get('content') greeting.put() self.redirect('/') application = webapp.WSGIApplication( [('/', MainPage), ('/sign', Guestbook)], debug=True) def main(): run_wsgi_app(application) if __name__ == "__main__": main() I am new to Python and a bit confused looking at this Google App Engine tutorial code. In the Greeting class, content = db.StringProperty(multiline=True), but in the Guestbook class, "content" in the greeting object is then set to greeting.content = self.request.get('content'). I don't understand how the "content" variable is being set in the Greeting class as well as the Guestbook class, yet seemingly hold the value and properties of both statements. A: The first piece of code is a model definition: class Greeting(db.Model): content = db.StringProperty(multiline=True) It says that there is a model Greeting that has a StringProperty with the name content. In the second piece of code, you create an instance of the Greeting model and assign a value to its content property greeting = Greeting() greeting.content = self.request.get('content') edit: to answer your question in the comment: this is basic object oriented programming (or OOP) with a little bit of Python's special sauce (descriptors and metaclasses). If you're new to OOP, read this article to get a little bit more familiar with the concept (this is a complex subject, there are whole libraries on OOP, so don't except to understand everything after reading one article). You don't really have to know descriptors or metaclasses, but it can come in handy sometimes. Here's a good introduction to descriptors. A: class Greeting(db.Model): author = db.UserProperty() content = db.StringProperty(multiline=True) date = db.DateTimeProperty(auto_now_add=True) This code instructs the ORM (object relational mapper) to create a table in the database with the fields "author", "content" and "date". Notice how class Greeting is inherited from db.Model: It's a model for a table to be created in the database. 
class Guestbook(webapp.RequestHandler): def post(self): greeting = Greeting() if users.get_current_user(): greeting.author = users.get_current_user() greeting.content = self.request.get('content') greeting.put() self.redirect('/') Guestbook is a request handler (notice which class it's inherited from). The post() method of a request handler is called on the event of a POST request. There can be several other methods in this class as well to handle different kinds of requests. Now notice what the post method does: It instantiates the Greeting class- we now have an instance, greeting object. Next, the "author" and "content" of the greeting object are set from the request information. Finally, greeting.put() writes to the database. Additionally, note that "date" is also set automatically to the date/time of writing the object to the database. A: piquadrat's answer is good. You can read more about App Engine models here.
What is going on with this code from the Google App Engine tutorial
import cgi from google.appengine.api import users from google.appengine.ext import webapp from google.appengine.ext.webapp.util import run_wsgi_app from google.appengine.ext import db class Greeting(db.Model): author = db.UserProperty() content = db.StringProperty(multiline=True) date = db.DateTimeProperty(auto_now_add=True) class MainPage(webapp.RequestHandler): def get(self): self.response.out.write('<html><body>') greetings = db.GqlQuery("SELECT * FROM Greeting ORDER BY date DESC LIMIT 10") for greeting in greetings: if greeting.author: self.response.out.write('<b>%s</b> wrote:' % greeting.author.nickname()) else: self.response.out.write('An anonymous person wrote:') self.response.out.write('<blockquote>%s</blockquote>' % cgi.escape(greeting.content)) # Write the submission form and the footer of the page self.response.out.write(""" <form action="/sign" method="post"> <div><textarea name="content" rows="3" cols="60"></textarea></div> <div><input type="submit" value="Sign Guestbook"></div> </form> </body> </html>""") class Guestbook(webapp.RequestHandler): def post(self): greeting = Greeting() if users.get_current_user(): greeting.author = users.get_current_user() greeting.content = self.request.get('content') greeting.put() self.redirect('/') application = webapp.WSGIApplication( [('/', MainPage), ('/sign', Guestbook)], debug=True) def main(): run_wsgi_app(application) if __name__ == "__main__": main() I am new to Python and a bit confused looking at this Google App Engine tutorial code. In the Greeting class, content = db.StringProperty(multiline=True), but in the Guestbook class, "content" in the greeting object is then set to greeting.content = self.request.get('content'). I don't understand how the "content" variable is being set in the Greeting class as well as the Guestbook class, yet seemingly hold the value and properties of both statements.
[ "The first piece of code is a model definition:\nclass Greeting(db.Model):\n content = db.StringProperty(multiline=True)\n\nIt says that there is a model Greeting that has a StringProperty with the name content.\nIn the second piece of code, you create an instance of the Greeting model and assign a value to its content property\ngreeting = Greeting()\ngreeting.content = self.request.get('content')\n\nedit: to answer your question in the comment: this is basic object oriented programming (or OOP) with a little bit of Python's special sauce (descriptors and metaclasses). If you're new to OOP, read this article to get a little bit more familiar with the concept (this is a complex subject, there are whole libraries on OOP, so don't except to understand everything after reading one article). You don't really have to know descriptors or metaclasses, but it can come in handy sometimes. Here's a good introduction to descriptors.\n", "class Greeting(db.Model):\n author = db.UserProperty()\n content = db.StringProperty(multiline=True)\n date = db.DateTimeProperty(auto_now_add=True)\n\nThis code instructs the ORM (object relational mapper) to create a table in the database with the fields \"author\", \"content\" and \"date\". Notice how class Greeting is inherited from db.Model: It's a model for a table to be created in the database.\nclass Guestbook(webapp.RequestHandler):\n def post(self):\n greeting = Greeting()\n\n if users.get_current_user():\n greeting.author = users.get_current_user()\n\n greeting.content = self.request.get('content')\n greeting.put()\n self.redirect('/')\n\nGuestbook is a request handler (notice which class it's inherited from). The post() method of a request handler is called on the event of a POST request. There can be several other methods in this class as well to handle different kinds of requests. Now notice what the post method does: It instantiates the Greeting class- we now have an instance, greeting object. Next, the \"author\" and \"content\" of the greeting object are set from the request information. Finally, greeting.put() writes to the database. Additionally, note that \"date\" is also set automatically to the date/time of writing the object to the database.\n", "piquadrat's answer is good. You can read more about App Engine models here.\n" ]
[ 2, 2, 0 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001470405_google_app_engine_python.txt
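The first answer above points at descriptors and metaclasses as the mechanism behind content = db.StringProperty(...). A plain-Python sketch of that mechanism (a stand-in written for illustration, not App Engine code, in the Python 2 style of the thread) makes the distinction concrete: at class level the name holds a property definition, on an instance it holds that entity's value.

class StringProperty(object):
    def __init__(self, multiline=False):
        self.multiline = multiline
        self.name = None

    def __get__(self, instance, owner):
        if instance is None:
            return self                     # class access returns the definition
        return instance.__dict__.get(self.name)

    def __set__(self, instance, value):
        if not isinstance(value, basestring):
            raise TypeError('expected a string')
        instance.__dict__[self.name] = value

class ModelMeta(type):
    def __new__(mcs, name, bases, attrs):
        for key, value in attrs.items():
            if isinstance(value, StringProperty):
                value.name = key            # tell each property which attribute it backs
        return type.__new__(mcs, name, bases, attrs)

class Greeting(object):
    __metaclass__ = ModelMeta
    content = StringProperty(multiline=True)

g = Greeting()
g.content = 'hello'        # goes through StringProperty.__set__
print g.content            # 'hello' -- the instance's value
print Greeting.content     # the StringProperty definition itself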
Q: Create a wrapper class to call a pre and post function around existing functions? I want to create a class that wraps another class so that when a function is run through the wrapper class a pre and post function is run as well. I want the wrapper class to work with any class without modification. For example if i have this class. class Simple(object): def one(self): print "one" def two(self,two): print "two" + two def three(self): print "three" I could use it like this... number = Simple() number.one() number.two("2") I have so far written this wrapper class... class Wrapper(object): def __init__(self,wrapped_class): self.wrapped_class = wrapped_class() def __getattr__(self,attr): return self.wrapped_class.__getattribute__(attr) def pre(): print "pre" def post(): print "post" Which I can call like this... number = Wrapper(Simple) number.one() number.two("2") Which can be used the same as above apart from changing the first line. What I want to happen is when calling a function through the wrapper class the pre function in the wrapper class gets called then the desired function in the wrapped class then the post function. I want to be able to do this without changing the wrapped class and also without changing the way the functions are called, only changing the syntax of how the instance of the class is created. eg number = Simple() vs number = Wrapper(Simple) A: You're almost there, you just need to do some introspection inside __getattr__, returning a new wrapped function when the original attribute is callable: class Wrapper(object): def __init__(self,wrapped_class): self.wrapped_class = wrapped_class() def __getattr__(self,attr): orig_attr = self.wrapped_class.__getattribute__(attr) if callable(orig_attr): def hooked(*args, **kwargs): self.pre() result = orig_attr(*args, **kwargs) # prevent wrapped_class from becoming unwrapped if result == self.wrapped_class: return self self.post() return result return hooked else: return orig_attr def pre(self): print ">> pre" def post(self): print "<< post" Now with this code: number = Wrapper(Simple) print "\nCalling wrapped 'one':" number.one() print "\nCalling wrapped 'two':" number.two("2") The result is: Calling wrapped 'one': >> pre one << post Calling wrapped 'two': >> pre two2 << post A: I have just noticed in my original design there is no way of passing args and kwargs to the wrapped class, here is the answer updated to pass the inputs to the wrapped function... class Wrapper(object): def __init__(self,wrapped_class,*args,**kargs): self.wrapped_class = wrapped_class(*args,**kargs) def __getattr__(self,attr): orig_attr = self.wrapped_class.__getattribute__(attr) if callable(orig_attr): def hooked(*args, **kwargs): self.pre() result = orig_attr(*args, **kwargs) self.post() return result return hooked else: return orig_attr def pre(self): print ">> pre" def post(self): print "<< post"
Create a wrapper class to call a pre and post function around existing functions?
I want to create a class that wraps another class so that when a function is run through the wrapper class a pre and post function is run as well. I want the wrapper class to work with any class without modification. For example if i have this class. class Simple(object): def one(self): print "one" def two(self,two): print "two" + two def three(self): print "three" I could use it like this... number = Simple() number.one() number.two("2") I have so far written this wrapper class... class Wrapper(object): def __init__(self,wrapped_class): self.wrapped_class = wrapped_class() def __getattr__(self,attr): return self.wrapped_class.__getattribute__(attr) def pre(): print "pre" def post(): print "post" Which I can call like this... number = Wrapper(Simple) number.one() number.two("2") Which can be used the same as above apart from changing the first line. What I want to happen is when calling a function through the wrapper class the pre function in the wrapper class gets called then the desired function in the wrapped class then the post function. I want to be able to do this without changing the wrapped class and also without changing the way the functions are called, only changing the syntax of how the instance of the class is created. eg number = Simple() vs number = Wrapper(Simple)
[ "You're almost there, you just need to do some introspection inside __getattr__, returning a new wrapped function when the original attribute is callable:\nclass Wrapper(object):\n def __init__(self,wrapped_class):\n self.wrapped_class = wrapped_class()\n\n def __getattr__(self,attr):\n orig_attr = self.wrapped_class.__getattribute__(attr)\n if callable(orig_attr):\n def hooked(*args, **kwargs):\n self.pre()\n result = orig_attr(*args, **kwargs)\n # prevent wrapped_class from becoming unwrapped\n if result == self.wrapped_class:\n return self\n self.post()\n return result\n return hooked\n else:\n return orig_attr\n\n def pre(self):\n print \">> pre\"\n\n def post(self):\n print \"<< post\"\n\nNow with this code:\nnumber = Wrapper(Simple)\n\nprint \"\\nCalling wrapped 'one':\"\nnumber.one()\n\nprint \"\\nCalling wrapped 'two':\"\nnumber.two(\"2\")\n\nThe result is:\nCalling wrapped 'one':\n>> pre\none\n<< post\n\nCalling wrapped 'two':\n>> pre\ntwo2\n<< post\n\n", "I have just noticed in my original design there is no way of passing args and kwargs to the wrapped class, here is the answer updated to pass the inputs to the wrapped function...\nclass Wrapper(object):\ndef __init__(self,wrapped_class,*args,**kargs):\n self.wrapped_class = wrapped_class(*args,**kargs)\n\ndef __getattr__(self,attr):\n orig_attr = self.wrapped_class.__getattribute__(attr)\n if callable(orig_attr):\n def hooked(*args, **kwargs):\n self.pre()\n result = orig_attr(*args, **kwargs)\n self.post()\n return result\n return hooked\n else:\n return orig_attr\n\ndef pre(self):\n print \">> pre\"\n\ndef post(self):\n print \"<< post\" \n\n" ]
[ 31, 3 ]
[]
[]
[ "python" ]
stackoverflow_0001466676_python.txt
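The second answer's snippet above lost its method indentation when it was posted; here is a re-indented, runnable sketch of the same idea (constructor arguments forwarded to the wrapped class), lightly tidied and followed by a small usage example. The Simple class and the "2" argument come from the question; everything else follows the thread.

class Wrapper(object):
    def __init__(self, wrapped_class, *args, **kwargs):
        self.wrapped_class = wrapped_class(*args, **kwargs)

    def __getattr__(self, attr):
        orig_attr = self.wrapped_class.__getattribute__(attr)
        if callable(orig_attr):
            def hooked(*args, **kwargs):
                self.pre()
                result = orig_attr(*args, **kwargs)
                self.post()
                return result
            return hooked
        return orig_attr

    def pre(self):
        print ">> pre"

    def post(self):
        print "<< post"

class Simple(object):
    def two(self, two):
        print "two" + two

number = Wrapper(Simple)
number.two("2")      # prints ">> pre", then "two2", then "<< post"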
Q: Probability exercise returning different result that expected As an exercise I'm writing a program to calculate the odds of rolling 5 die with the same number. The idea is to get the result via simulation as opposed to simple math though. My program is this: # rollFive.py from random import * def main(): n = input("Please enter the number of sims to run: ") hits = simNRolls(n) hits = float(hits) n = float(n) prob = hits/n print "The odds of rolling 5 of the same number are", prob def simNRolls(n): hits = 0 for i in range(n): hits = hits + diceRoll() return hits def diceRoll(): firstDie = randrange(1,7,1) for i in range(4): nextDie = randrange(1,7,1) if nextDie!=firstDie: success = 0 break else: success = 1 return success The problem is that running this program with a value for n of 1 000 000 gives me a probability usually between 0.0006 and 0.0008 while my math makes me believe I should be getting an answer closer to 0.0001286 (aka (1/6)^5). Is there something wrong with my program? Or am I making some basic mistake with the math here? Or would I find my result revert closer to the right answer if I were able to run the program over larger iterations? A: The probability of getting a particular number five times is (1/6)^5, but the probability of getting any five numbers the same is (1/6)^4. There are two ways to see this. First, the probability of getting all 1's, for example, is (1/6)^5 since there is only one way out of six to get a 1. Multiply that by five dice, and you get (1/6)^5. But, since there are six possible numbers to get the same, then there are six ways to succeed, which is 6((1/6)^5) or (1/6)^4. Looked at another way, it doesn't matter what the first roll gives, so we exclude it. Then we have to match that number with the four remaining rolls, the probability of which is (1/6)^4. A: Your math is wrong. The probability of getting five dice with the same number is 6*(1/6)^5 = 0.0007716. A: Very simply, there are 6 ** 5 possible outcomes from rolling 5 dice, and only 6 of those outcomes are successful, so the answer is 6.0 / 6 ** 5 A: I think your expected probability is wrong, as you've stated the problem. (1/6)^5 is the probability of rolling some specific number 5 times in a row; (1/6)^4 is the probability of rolling any number 5 times in a row (because the first roll is always "successful" -- that is, the first roll will always result in some number). >>> (1.0/6.0)**4 0.00077160493827160479 Compare to running your program with 1 million iterations: [me@host:~] python roll5.py Please enter the number of sims to run: 1000000 The odds of rolling 5 of the same number are 0.000755
Probability exercise returning different result than expected
As an exercise I'm writing a program to calculate the odds of rolling 5 die with the same number. The idea is to get the result via simulation as opposed to simple math though. My program is this: # rollFive.py from random import * def main(): n = input("Please enter the number of sims to run: ") hits = simNRolls(n) hits = float(hits) n = float(n) prob = hits/n print "The odds of rolling 5 of the same number are", prob def simNRolls(n): hits = 0 for i in range(n): hits = hits + diceRoll() return hits def diceRoll(): firstDie = randrange(1,7,1) for i in range(4): nextDie = randrange(1,7,1) if nextDie!=firstDie: success = 0 break else: success = 1 return success The problem is that running this program with a value for n of 1 000 000 gives me a probability usually between 0.0006 and 0.0008 while my math makes me believe I should be getting an answer closer to 0.0001286 (aka (1/6)^5). Is there something wrong with my program? Or am I making some basic mistake with the math here? Or would I find my result revert closer to the right answer if I were able to run the program over larger iterations?
[ "The probability of getting a particular number five times is (1/6)^5, but the probability of getting any five numbers the same is (1/6)^4.\nThere are two ways to see this.\nFirst, the probability of getting all 1's, for example, is (1/6)^5 since there is only one way out of six to get a 1. Multiply that by five dice, and you get (1/6)^5. But, since there are six possible numbers to get the same, then there are six ways to succeed, which is 6((1/6)^5) or (1/6)^4.\nLooked at another way, it doesn't matter what the first roll gives, so we exclude it. Then we have to match that number with the four remaining rolls, the probability of which is (1/6)^4.\n", "Your math is wrong. The probability of getting five dice with the same number is 6*(1/6)^5 = 0.0007716.\n", "Very simply, there are 6 ** 5 possible outcomes from rolling 5 dice, and only 6 of those outcomes are successful, so the answer is 6.0 / 6 ** 5\n", "I think your expected probability is wrong, as you've stated the problem. (1/6)^5 is the probability of rolling some specific number 5 times in a row; (1/6)^4 is the probability of rolling any number 5 times in a row (because the first roll is always \"successful\" -- that is, the first roll will always result in some number).\n>>> (1.0/6.0)**4\n0.00077160493827160479\n\nCompare to running your program with 1 million iterations:\n[me@host:~] python roll5.py \nPlease enter the number of sims to run: 1000000\nThe odds of rolling 5 of the same number are 0.000755\n\n" ]
[ 6, 1, 1, 0 ]
[]
[]
[ "probability", "python" ]
stackoverflow_0001469421_probability_python.txt
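A quick cross-check of the math in the answers above: the chance that five dice all show the same face is 6 * (1/6)**5 = (1/6)**4, about 0.00077, and a straightforward simulation agrees. This is an independent sketch in the thread's Python 2 style, not a fix to the asker's script; the trial count is arbitrary.

import random

def all_same(n_dice=5):
    rolls = [random.randint(1, 6) for _ in range(n_dice)]
    return len(set(rolls)) == 1

trials = 1000000
hits = sum(all_same() for _ in xrange(trials))
print "simulated:", hits / float(trials)
print "expected: ", (1.0 / 6.0) ** 4     # ~0.000772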
Q: How do I determine the proper `paramstyle` when all I have is a `Connection` object? I have an instance of a Connection (required to DB API 2.0-compliant), but I don't have the module from which it was imported. The problem is that I am trying to use named parameters, but I don't know which paramstyle to use. Since paramstyle is a module-level constant, I can't just ask the Connection. I tried using inspect.getmodule() on my Connection instance, but it just returned None. Is there an easier way that I'm just missing, or will I need to do some try/except code to determine which paramstyle to use? A: Where did you get the instance from? I can't imagine a situation where you won't know the beforehand the source of the connection. If the user of your library is passing you a connection, ask him for the paramstyle as well. Anyway, look at the following console session: >>> import sqlite3 >>> c = sqlite3.connect('/tmp/test.db') >>> c <sqlite3.Connection object at 0xb7db2320> >>> type(c) <type 'sqlite3.Connection'> >>> type(c).__module__ 'sqlite3' >>> import sys >>> sys.modules[type(c).__module__].paramstyle 'qmark' However that sucks. I wouldn't rely on it not even for a second. I use my own connection-like objects, and I'd like to pass one of those to your library. I'd hate it when it tries to magically find out the paramstyle and fails because I am using a connection-like wrapper object. A: Pass type(connection) (connection class) to inspect.getmodule(), not connection object. The class tracks the module it was defined in so inspect can find it, while object creation is not tracked. A: You can't. You can try by looking at connection.__class__.__module__, but it's not specified by DB-API that it'll work. In fact for many common cases it won't. (eg. the class is defined in a submodule of the package that acts as the DB-API module object; or the class is a C extension with no __module__.) It is unfortunate, but you will have to pass a reference to the DB-API module object around with the DB-API connection object.
How do I determine the proper `paramstyle` when all I have is a `Connection` object?
I have an instance of a Connection (required to DB API 2.0-compliant), but I don't have the module from which it was imported. The problem is that I am trying to use named parameters, but I don't know which paramstyle to use. Since paramstyle is a module-level constant, I can't just ask the Connection. I tried using inspect.getmodule() on my Connection instance, but it just returned None. Is there an easier way that I'm just missing, or will I need to do some try/except code to determine which paramstyle to use?
[ "Where did you get the instance from? I can't imagine a situation where you won't know the beforehand the source of the connection. If the user of your library is passing you a connection, ask him for the paramstyle as well.\nAnyway, look at the following console session:\n>>> import sqlite3\n>>> c = sqlite3.connect('/tmp/test.db')\n>>> c\n<sqlite3.Connection object at 0xb7db2320>\n>>> type(c)\n<type 'sqlite3.Connection'>\n>>> type(c).__module__\n'sqlite3'\n>>> import sys\n>>> sys.modules[type(c).__module__].paramstyle\n'qmark'\n\nHowever that sucks. I wouldn't rely on it not even for a second. I use my own connection-like objects, and I'd like to pass one of those to your library. I'd hate it when it tries to magically find out the paramstyle and fails because I am using a connection-like wrapper object.\n", "Pass type(connection) (connection class) to inspect.getmodule(), not connection object. The class tracks the module it was defined in so inspect can find it, while object creation is not tracked.\n", "You can't.\nYou can try by looking at connection.__class__.__module__, but it's not specified by DB-API that it'll work. In fact for many common cases it won't. (eg. the class is defined in a submodule of the package that acts as the DB-API module object; or the class is a C extension with no __module__.)\nIt is unfortunate, but you will have to pass a reference to the DB-API module object around with the DB-API connection object.\n" ]
[ 2, 2, 2 ]
[]
[]
[ "python", "python_db_api" ]
stackoverflow_0001471304_python_python_db_api.txt
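Putting the answers above together, a small helper can make the lookup explicit while still falling back to a default, since (as both answers warn) the module lookup is a heuristic that breaks on wrapper objects and some C extensions. The default of 'qmark' is an assumption, not part of the DB-API.

import sys

def guess_paramstyle(connection, default='qmark'):
    module_name = type(connection).__module__
    module = sys.modules.get(module_name)
    return getattr(module, 'paramstyle', default)

# Example with the stdlib sqlite3 driver:
import sqlite3
conn = sqlite3.connect(':memory:')
print guess_paramstyle(conn)   # 'qmark'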
Q: Django models - pass additional information to manager I'm trying to implement row-based security checks for Django models. The idea is that when I access model manager I specify some additional info which is used in database queries so that only allowed instances are fetched from database. For example, we can have two models: Users and, say, Items. Each Item belongs to some User and User may be connected to many Items. And let there be some restrictions, according to which a user may see or may not see Items of another User. I want to separate this restrictions from other query elements and write something like: items = Item.scoped.forceRule('user1').all() # all items visible for 'user1' or # show all items of 'user2' visible by 'user1' items = Item.scoped.forceRule('user1').filter(author__username__exact = 'user2') To acheive this I made following: class SecurityManager(models.Manager): def forceRule(self, onBehalf) : modelSecurityScope = getattr(self.model, 'securityScope', None) if modelSecurityScope : return super(SecurityManager, self).get_query_set().filter(self.model.securityScope(onBehalf)) else : return super(SecurityManager, self).get_query_set() def get_query_set(self) : # # I need to know that 'onBehalf' parameter here # return super(SecurityManager, self).get_query_set() class User(models.Model) : username = models.CharField(max_length=32, unique=True) class Item(models.Model) : author = models.ForeignKey(User) private = models.BooleanField() name = models.CharField(max_length=32) scoped = SecurityManager() @staticmethod def securityScope(onBehalf) : return Q(author__username__exact = onBehalf) | Q(bookmark__private__exact = False) For shown examples it works fine, but dies on following: items = Item.scoped.forceRule('user1').filter(author__username__exact = 'user2') # (*) items2 = items[0].author.item_set.all() # (**) Certainly, items2 is populated by all items of 'user2', not only those which conform the rule. That is because when all() is executed SecurityManager.get_query_set() has no information about the restriction set. Though it could. For example, in forceRule() I could add a field for every instance and then, if I could access that field from manager, apply the rule needed. So, the question is - is there any way to pass an argument provided to forceRule() in statement (*) to manager, called in statement (**). Or another question - am I doing strange things that I shouldn't do at all? Thank you. A: From my reading of the documentation I think there are two problems: The SecurityManager will not be used for the related objects (and instance of django.db.models.Manager will be used instead) You can fix the above, but the documentation goes to great lengths to specify that get_query_set() should not filter out any rows for related queries. I suggest creating a function that takes a QuerySet and applies the filter you need to it. This can then be used whenever you get to a QS of Items and want to process them further.
Django models - pass additional information to manager
I'm trying to implement row-based security checks for Django models. The idea is that when I access model manager I specify some additional info which is used in database queries so that only allowed instances are fetched from database. For example, we can have two models: Users and, say, Items. Each Item belongs to some User and User may be connected to many Items. And let there be some restrictions, according to which a user may see or may not see Items of another User. I want to separate this restrictions from other query elements and write something like: items = Item.scoped.forceRule('user1').all() # all items visible for 'user1' or # show all items of 'user2' visible by 'user1' items = Item.scoped.forceRule('user1').filter(author__username__exact = 'user2') To acheive this I made following: class SecurityManager(models.Manager): def forceRule(self, onBehalf) : modelSecurityScope = getattr(self.model, 'securityScope', None) if modelSecurityScope : return super(SecurityManager, self).get_query_set().filter(self.model.securityScope(onBehalf)) else : return super(SecurityManager, self).get_query_set() def get_query_set(self) : # # I need to know that 'onBehalf' parameter here # return super(SecurityManager, self).get_query_set() class User(models.Model) : username = models.CharField(max_length=32, unique=True) class Item(models.Model) : author = models.ForeignKey(User) private = models.BooleanField() name = models.CharField(max_length=32) scoped = SecurityManager() @staticmethod def securityScope(onBehalf) : return Q(author__username__exact = onBehalf) | Q(bookmark__private__exact = False) For shown examples it works fine, but dies on following: items = Item.scoped.forceRule('user1').filter(author__username__exact = 'user2') # (*) items2 = items[0].author.item_set.all() # (**) Certainly, items2 is populated by all items of 'user2', not only those which conform the rule. That is because when all() is executed SecurityManager.get_query_set() has no information about the restriction set. Though it could. For example, in forceRule() I could add a field for every instance and then, if I could access that field from manager, apply the rule needed. So, the question is - is there any way to pass an argument provided to forceRule() in statement (*) to manager, called in statement (**). Or another question - am I doing strange things that I shouldn't do at all? Thank you.
[ "From my reading of the documentation I think there are two problems:\n\nThe SecurityManager will not be used for the related objects (and instance of django.db.models.Manager will be used instead)\nYou can fix the above, but the documentation goes to great lengths to specify that get_query_set() should not filter out any rows for related queries.\n\nI suggest creating a function that takes a QuerySet and applies the filter you need to it. This can then be used whenever you get to a QS of Items and want to process them further.\n" ]
[ 4 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001467245_django_django_models_python.txt
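A sketch of the answer's suggestion, using the models from the question: rather than filtering inside get_query_set(), expose one function that applies the visibility rule to whatever Item queryset you already have. The rule below uses Item's own private flag; the bookmark__private path in the original securityScope is assumed to be specific to the asker's schema, so treat the filter as illustrative.

from django.db.models import Q

def visible_to(queryset, username):
    # Restrict an Item queryset to rows the given user may see.
    return queryset.filter(
        Q(author__username__exact=username) | Q(private__exact=False)
    )

# Usage, wherever an Item queryset turns up:
# items = visible_to(Item.scoped.all(), 'user1')
# items = visible_to(items[0].author.item_set.all(), 'user1')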
Q: Copying modules into Django, "No module named [moduleName]" I run into this problem pretty consistently... keep in mind I am quite new to Django and a total Python amateur. It seems that, for example, whenever I check out my Django project on a new computer after a clean install of Python and Django, it can never find the project/apps I create or copy in. So right now I have an app that is working, and I downloaded a 3rd party Django module and installed it into my app directory, include it in my settings, and the web server quits because it cannot find the module. This is the first time I've imported an third party module. In the past when it couldn't find modules I created, I would just rename the folder and run "manage.py startapp appname", delete the folder it created, and name my original folder back, boom, problem solved... But that's obviously a hack, I am wondering if anyone can explain the the heck is going on here and how best to approach it. I can't be the only one who has run into this, but I couldn't find any other questions on this site that seemed to match my issue. Happens on both OS X and Windows 7. A: They way Django works is pretty much how Python works. At default the folder you create when you run django-admin.py startproject name is added to your python path. That means that anything you put into there you can get to. But you have to mind that when you write the app into the installed app list. If you have an app at project/apps/appname, you would have to write 'app.appname' in the installed apps list. Now there are some ways to go about adding 3rd party apps located somewhere else to your project. You can either add them to your python path, put in your python path, or make a link to your python path. However, you can also add a sys.path.insert(...) in your manage.py file where you add the folder of your liking to your python path. Doing this will allow you to add folders to your python path for that project only, and will keep your python path more clean. A: Your third party django module should be searchable by PYTHONPATH, because django module is no other than a python module. Now there are two ways to do this: Create a folder (anywhere you want), put your third party django module under there. Now set that directory to environment variable $PYTHONPATH e.g (on Linux box): export PYTHONPATH = /home/me/pythonmodules/ Create a folder (anywhere you want), put the third party django module under there. Now if you are on Unix box, create a symlink to that directory to python site-packages. Use this command to find out where your python site packages is: python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
Copying modules into Django, "No module named [moduleName]"
I run into this problem pretty consistently... keep in mind I am quite new to Django and a total Python amateur. It seems that, for example, whenever I check out my Django project on a new computer after a clean install of Python and Django, it can never find the project/apps I create or copy in. So right now I have an app that is working, and I downloaded a 3rd party Django module and installed it into my app directory, included it in my settings, and the web server quits because it cannot find the module. This is the first time I've imported a third party module. In the past when it couldn't find modules I created, I would just rename the folder and run "manage.py startapp appname", delete the folder it created, and name my original folder back, boom, problem solved... But that's obviously a hack, I am wondering if anyone can explain what the heck is going on here and how best to approach it. I can't be the only one who has run into this, but I couldn't find any other questions on this site that seemed to match my issue. Happens on both OS X and Windows 7.
[ "They way Django works is pretty much how Python works. At default the folder you create when you run django-admin.py startproject name is added to your python path. That means that anything you put into there you can get to. But you have to mind that when you write the app into the installed app list. If you have an app at project/apps/appname, you would have to write 'app.appname' in the installed apps list.\nNow there are some ways to go about adding 3rd party apps located somewhere else to your project. You can either add them to your python path, put in your python path, or make a link to your python path. However, you can also add a sys.path.insert(...) in your manage.py file where you add the folder of your liking to your python path. Doing this will allow you to add folders to your python path for that project only, and will keep your python path more clean.\n", "Your third party django module should be searchable by PYTHONPATH, because django module is no other than a python module. Now there are two ways to do this:\n\nCreate a folder (anywhere you want), put your third party django module under there. Now set that directory to environment variable $PYTHONPATH\ne.g (on Linux box): \nexport PYTHONPATH = /home/me/pythonmodules/\nCreate a folder (anywhere you want), put the third party django module under there. Now if you are on Unix box, create a symlink to that directory to python site-packages. Use this command to find out where your python site packages is:\npython -c \"from distutils.sysconfig import get_python_lib; print get_python_lib()\"\n\n" ]
[ 2, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001471707_django_python.txt
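The sys.path.insert(...) suggestion in the first answer looks roughly like this at the top of manage.py; the 'apps' folder name is only an example, not something the thread prescribes.

import os
import sys

PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(PROJECT_ROOT, 'apps'))

# A third-party app living in <project>/apps/someapp can then be listed
# in INSTALLED_APPS simply as 'someapp'.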
Q: Twisted network client with multiprocessing workers? So, I've got an application that uses Twisted + Stomper as a STOMP client which farms out work to a multiprocessing.Pool of workers. This appears to work ok when I just use a python script to fire this up, which (simplified) looks something like this: # stompclient.py logging.config.fileConfig(config_path) logger = logging.getLogger(__name__) # Add observer to make Twisted log via python twisted.python.log.PythonLoggingObserver().start() # initialize the process pool. (child processes get forked off immediately) pool = multiprocessing.Pool(processes=processes) StompClientFactory.username = username StompClientFactory.password = password StompClientFactory.destination = destination reactor.connectTCP(host, port, StompClientFactory()) reactor.run() As this gets packaged for deployment, I thought I would take advantage of the twistd script and run this from a tac file. Here's my very-similar-looking tac file: # stompclient.tac logging.config.fileConfig(config_path) logger = logging.getLogger(__name__) # Add observer to make Twisted log via python twisted.python.log.PythonLoggingObserver().start() # initialize the process pool. (child processes get forked off immediately) pool = multiprocessing.Pool(processes=processes) StompClientFactory.username = username StompClientFactory.password = password StompClientFactory.destination = destination application = service.Application('myapp') service = internet.TCPClient(host, port, StompClientFactory()) service.setServiceParent(application) For the sake of illustration, I have collapsed or changed a few details; hopefully they were not the essence of the problem. For example, my app has a plugin system, the pool is initialized by a separate method, and then work is delegated to the pool using pool.apply_async() passing one of my plugin's process() methods. So, if I run the script (stompclient.py), everything works as expected. It also appears to work OK if I run twist in non-daemon mode (-n): twistd -noy stompclient.tac however, it does not work when I run in daemon mode: twistd -oy stompclient.tac The application appears to start up OK, but when it attempts to fork off work, it just hangs. By "hangs", I mean that it appears that the child process is never asked to do anything and the parent (that called pool.apply_async()) just sits there waiting for the response to return. I'm sure that I'm doing something stupid with Twisted + multiprocessing, but I'm really hoping that someone can explain to my the flaw in my approach. Thanks in advance! A: Since the difference between your working invocation and your non-working invocation is only the "-n" option, it seems most likely that the problem is caused by the daemonization process (which "-n" prevents from happening). On POSIX, one of the steps involved in daemonization is forking and having the parent exit. Among of things, this has the consequence of having your code run in a different process than the one in which the .tac file was evaluated. This also re-arranges the child/parent relationship of processes which were started in the .tac file - as your pool of multiprocessing processes were. The multiprocessing pool's processes start off with a parent of the twistd process you start. However, when that process exits as part of daemonization, their parent becomes the system init process. This may cause some problems, although probably not the hanging problem you described. 
There are probably other similarly low-level implementation details which normally allow the multiprocessing module to work but which are disrupted by the daemonization process. Fortunately, avoiding this strange interaction should be straightforward. Twisted's service APIs allow you to run code after daemonization has completed. If you use these APIs, then you can delay the initialization of the multiprocessing module's process pool until after daemonization and hopefully avoid the problem. Here's an example of what that might look like: from twisted.application.service import Service class MultiprocessingService(Service): def startService(self): self.pool = multiprocessing.Pool(processes=processes) MultiprocessingService().setServiceParent(application) Now, separately, you may also run into problems relating to clean up of the multiprocessing module's child processes, or possibly issues with processes created with Twisted's process creation API, reactor.spawnProcess. This is because part of dealing with child processes correctly generally involves handling the SIGCHLD signal. Twisted and multiprocessing aren't going to be cooperating in this regard, though, so one of them is going to get notified of all children exiting and the other will never be notified. If you don't use Twisted's API for creating child processes at all, then this may be okay for you - but you might want to check to make sure any signal handler the multiprocessing module tries to install actually "wins" and doesn't get replaced by Twisted's own handler. A: A possible idea for you... When running in daemon mode twistd will close stdin, stdout and stderr. Does something that your clients do read or write to these?
Twisted network client with multiprocessing workers?
So, I've got an application that uses Twisted + Stomper as a STOMP client which farms out work to a multiprocessing.Pool of workers. This appears to work ok when I just use a python script to fire this up, which (simplified) looks something like this: # stompclient.py logging.config.fileConfig(config_path) logger = logging.getLogger(__name__) # Add observer to make Twisted log via python twisted.python.log.PythonLoggingObserver().start() # initialize the process pool. (child processes get forked off immediately) pool = multiprocessing.Pool(processes=processes) StompClientFactory.username = username StompClientFactory.password = password StompClientFactory.destination = destination reactor.connectTCP(host, port, StompClientFactory()) reactor.run() As this gets packaged for deployment, I thought I would take advantage of the twistd script and run this from a tac file. Here's my very-similar-looking tac file: # stompclient.tac logging.config.fileConfig(config_path) logger = logging.getLogger(__name__) # Add observer to make Twisted log via python twisted.python.log.PythonLoggingObserver().start() # initialize the process pool. (child processes get forked off immediately) pool = multiprocessing.Pool(processes=processes) StompClientFactory.username = username StompClientFactory.password = password StompClientFactory.destination = destination application = service.Application('myapp') service = internet.TCPClient(host, port, StompClientFactory()) service.setServiceParent(application) For the sake of illustration, I have collapsed or changed a few details; hopefully they were not the essence of the problem. For example, my app has a plugin system, the pool is initialized by a separate method, and then work is delegated to the pool using pool.apply_async() passing one of my plugin's process() methods. So, if I run the script (stompclient.py), everything works as expected. It also appears to work OK if I run twist in non-daemon mode (-n): twistd -noy stompclient.tac however, it does not work when I run in daemon mode: twistd -oy stompclient.tac The application appears to start up OK, but when it attempts to fork off work, it just hangs. By "hangs", I mean that it appears that the child process is never asked to do anything and the parent (that called pool.apply_async()) just sits there waiting for the response to return. I'm sure that I'm doing something stupid with Twisted + multiprocessing, but I'm really hoping that someone can explain to my the flaw in my approach. Thanks in advance!
[ "Since the difference between your working invocation and your non-working invocation is only the \"-n\" option, it seems most likely that the problem is caused by the daemonization process (which \"-n\" prevents from happening).\nOn POSIX, one of the steps involved in daemonization is forking and having the parent exit. Among of things, this has the consequence of having your code run in a different process than the one in which the .tac file was evaluated. This also re-arranges the child/parent relationship of processes which were started in the .tac file - as your pool of multiprocessing processes were.\nThe multiprocessing pool's processes start off with a parent of the twistd process you start. However, when that process exits as part of daemonization, their parent becomes the system init process. This may cause some problems, although probably not the hanging problem you described. There are probably other similarly low-level implementation details which normally allow the multiprocessing module to work but which are disrupted by the daemonization process.\nFortunately, avoiding this strange interaction should be straightforward. Twisted's service APIs allow you to run code after daemonization has completed. If you use these APIs, then you can delay the initialization of the multiprocessing module's process pool until after daemonization and hopefully avoid the problem. Here's an example of what that might look like:\nfrom twisted.application.service import Service\n\nclass MultiprocessingService(Service):\n def startService(self):\n self.pool = multiprocessing.Pool(processes=processes)\n\nMultiprocessingService().setServiceParent(application)\n\nNow, separately, you may also run into problems relating to clean up of the multiprocessing module's child processes, or possibly issues with processes created with Twisted's process creation API, reactor.spawnProcess. This is because part of dealing with child processes correctly generally involves handling the SIGCHLD signal. Twisted and multiprocessing aren't going to be cooperating in this regard, though, so one of them is going to get notified of all children exiting and the other will never be notified. If you don't use Twisted's API for creating child processes at all, then this may be okay for you - but you might want to check to make sure any signal handler the multiprocessing module tries to install actually \"wins\" and doesn't get replaced by Twisted's own handler.\n", "A possible idea for you...\nWhen running in daemon mode twistd will close stdin, stdout and stderr. Does something that your clients do read or write to these?\n" ]
[ 12, 0 ]
[]
[]
[ "multiprocessing", "python", "twisted" ]
stackoverflow_0001470850_multiprocessing_python_twisted.txt
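A slightly fuller version of the accepted answer's Service idea, for the .tac file: the pool is created only after twistd has daemonized, and shut down when the service stops. The processes count of 4 is a placeholder, and the SIGCHLD caveat from the answer still applies.

import multiprocessing
from twisted.application import service

class PoolService(service.Service):
    def __init__(self, processes):
        self.processes = processes
        self.pool = None

    def startService(self):
        service.Service.startService(self)
        # Created post-daemonization, so the pool's parent/child layout stays sane
        self.pool = multiprocessing.Pool(processes=self.processes)

    def stopService(self):
        if self.pool is not None:
            self.pool.close()
            self.pool.join()
        return service.Service.stopService(self)

application = service.Application('myapp')
PoolService(processes=4).setServiceParent(application)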
Q: Small Tables in Python? Let's say I don't have more than one or two dozen objects with different properties, such as the following: UID, Name, Value, Color, Type, Location I want to be able to call up all objects with Location = "Boston", or Type = "Primary". Classic database query type stuff. Most table solutions (pytables, *sql) are really overkill for such a small set of data. Should I simply iterate over all the objects and create a separate dictionary for each data column (adding values to dictionaries as I add new objects)? This would create dicts like this: {'Boston' : [234, 654, 234], 'Chicago' : [324, 765, 342] } - where those 3 digit entries represent things like UID's. As you can see, querying this would be a bit of a pain. Any thoughts of an alternative? A: For small relational problems I love using Python's builtin sets. For the example of location = 'Boston' OR type = 'Primary', if you had this data: users = { 1: dict(Name="Mr. Foo", Location="Boston", Type="Secondary"), 2: dict(Name="Mr. Bar", Location="New York", Type="Primary"), 3: dict(Name="Mr. Quux", Location="Chicago", Type="Secondary"), #... } You can do the WHERE ... OR ... query like this: set1 = set(u for u in users if users[u]['Location'] == 'Boston') set2 = set(u for u in users if users[u]['Type'] == 'Primary') result = set1.union(set2) Or with just one expression: result = set(u for u in users if users[u]['Location'] == 'Boston' or users[u]['Type'] == 'Primary') You can also use the functions in itertools to create fairly efficient queries of the data. For example if you want to do something similar to a GROUP BY city: cities = ('Boston', 'New York', 'Chicago') cities_users = dict(map(lambda city: (city, ifilter(lambda u: users[u]['Location'] == city, users)), cities)) You could also build indexes manually (build a dict mapping Location to User ID) to speed things up. If this becomes too slow or unwieldy then I would probably switch to sqlite, which is now included in the Python (2.5) standard library. A: I do not think sqlite would be "overkill" -- it comes with standard Python since 2.5, so no need to install stuff, and it can make and handle databases in either memory or local disk files. Really, how could it be simpler...? If you want everything in-memory including the initial values, and want to use dicts to express those initial values, for example...: import sqlite3 db = sqlite3.connect(':memory:') db.execute('Create table Users (Name, Location, Type)') db.executemany('Insert into Users values(:Name, :Location, :Type)', [ dict(Name="Mr. Foo", Location="Boston", Type="Secondary"), dict(Name="Mr. Bar", Location="New York", Type="Primary"), dict(Name="Mr. Quux", Location="Chicago", Type="Secondary"), ]) db.commit() db.row_factory = sqlite3.Row and now your in-memory tiny "db" is ready to go. It's no harder to make a DB in a disk file and/or read the initial values from a text file, a CSV, and so forth, of course. Querying is especially flexible, easy and sweet, e.g., you can mix string insertion and parameter substitution at will...: def where(w, *a): c = db.cursor() c.execute('Select * From Users where %s' % w, *a) return c.fetchall() print [r["Name"] for r in where('Type="Secondary"')] emits [u'Mr. Foo', u'Mr. Quux'], just like the more elegant but equivalent print [r["Name"] for r in where('Type=?', ["Secondary"])] and your desired query's just: print [r["Name"] for r in where('Location="Boston" or Type="Primary"')] etc. Seriously -- what's not to like? 
A: If it's really a small amount of data, I'd not bother with an index and probably just write a helper function: users = [ dict(Name="Mr. Foo", Location="Boston", Type="Secondary"), dict(Name="Mr. Bar", Location="New York", Type="Primary"), dict(Name="Mr. Quux", Location="Chicago", Type="Secondary"), ] def search(dictlist, **kwargs): def match(d): for k,v in kwargs.iteritems(): try: if d[k] != v: return False except KeyError: return False return True return [d for d in dictlist if match(d)] Which will allow nice looking queries like this: result = search(users, Type="Secondary")
Small Tables in Python?
Let's say I don't have more than one or two dozen objects with different properties, such as the following: UID, Name, Value, Color, Type, Location I want to be able to call up all objects with Location = "Boston", or Type = "Primary". Classic database query type stuff. Most table solutions (pytables, *sql) are really overkill for such a small set of data. Should I simply iterate over all the objects and create a separate dictionary for each data column (adding values to dictionaries as I add new objects)? This would create dicts like this: {'Boston' : [234, 654, 234], 'Chicago' : [324, 765, 342] } - where those 3 digit entries represent things like UID's. As you can see, querying this would be a bit of a pain. Any thoughts of an alternative?
[ "For small relational problems I love using Python's builtin sets.\nFor the example of location = 'Boston' OR type = 'Primary', if you had this data:\nusers = {\n 1: dict(Name=\"Mr. Foo\", Location=\"Boston\", Type=\"Secondary\"),\n 2: dict(Name=\"Mr. Bar\", Location=\"New York\", Type=\"Primary\"),\n 3: dict(Name=\"Mr. Quux\", Location=\"Chicago\", Type=\"Secondary\"),\n #...\n}\n\nYou can do the WHERE ... OR ... query like this:\nset1 = set(u for u in users if users[u]['Location'] == 'Boston')\nset2 = set(u for u in users if users[u]['Type'] == 'Primary')\nresult = set1.union(set2)\n\nOr with just one expression:\nresult = set(u for u in users if users[u]['Location'] == 'Boston'\n or users[u]['Type'] == 'Primary')\n\nYou can also use the functions in itertools to create fairly efficient queries of the data. For example if you want to do something similar to a GROUP BY city:\ncities = ('Boston', 'New York', 'Chicago')\ncities_users = dict(map(lambda city: (city, ifilter(lambda u: users[u]['Location'] == city, users)), cities))\n\nYou could also build indexes manually (build a dict mapping Location to User ID) to speed things up. If this becomes too slow or unwieldy then I would probably switch to sqlite, which is now included in the Python (2.5) standard library.\n", "I do not think sqlite would be \"overkill\" -- it comes with standard Python since 2.5, so no need to install stuff, and it can make and handle databases in either memory or local disk files. Really, how could it be simpler...? If you want everything in-memory including the initial values, and want to use dicts to express those initial values, for example...:\nimport sqlite3\n\ndb = sqlite3.connect(':memory:')\ndb.execute('Create table Users (Name, Location, Type)')\ndb.executemany('Insert into Users values(:Name, :Location, :Type)', [\n dict(Name=\"Mr. Foo\", Location=\"Boston\", Type=\"Secondary\"),\n dict(Name=\"Mr. Bar\", Location=\"New York\", Type=\"Primary\"),\n dict(Name=\"Mr. Quux\", Location=\"Chicago\", Type=\"Secondary\"),\n ])\ndb.commit()\ndb.row_factory = sqlite3.Row\n\nand now your in-memory tiny \"db\" is ready to go. It's no harder to make a DB in a disk file and/or read the initial values from a text file, a CSV, and so forth, of course.\nQuerying is especially flexible, easy and sweet, e.g., you can mix string insertion and parameter substitution at will...:\ndef where(w, *a):\n c = db.cursor()\n c.execute('Select * From Users where %s' % w, *a)\n return c.fetchall()\n\nprint [r[\"Name\"] for r in where('Type=\"Secondary\"')]\n\nemits [u'Mr. Foo', u'Mr. Quux'], just like the more elegant but equivalent\nprint [r[\"Name\"] for r in where('Type=?', [\"Secondary\"])]\n\nand your desired query's just:\nprint [r[\"Name\"] for r in where('Location=\"Boston\" or Type=\"Primary\"')]\n\netc. Seriously -- what's not to like?\n", "If it's really a small amount of data, I'd not bother with an index and probably just write a helper function:\nusers = [\n dict(Name=\"Mr. Foo\", Location=\"Boston\", Type=\"Secondary\"),\n dict(Name=\"Mr. Bar\", Location=\"New York\", Type=\"Primary\"),\n dict(Name=\"Mr. Quux\", Location=\"Chicago\", Type=\"Secondary\"),\n ]\n\ndef search(dictlist, **kwargs):\n def match(d):\n for k,v in kwargs.iteritems():\n try: \n if d[k] != v: \n return False\n except KeyError:\n return False\n return True\n\n return [d for d in dictlist if match(d)] \n\nWhich will allow nice looking queries like this:\nresult = search(users, Type=\"Secondary\")\n\n" ]
[ 14, 6, 2 ]
[]
[]
[ "lookup_tables", "python" ]
stackoverflow_0001471924_lookup_tables_python.txt
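The first answer above mentions building indexes manually once the linear scans get tedious; a sketch of that idea with the same sample data (the field names and IDs are taken from the thread's example, the index shape is an assumption):

from collections import defaultdict

users = {
    1: dict(Name="Mr. Foo", Location="Boston", Type="Secondary"),
    2: dict(Name="Mr. Bar", Location="New York", Type="Primary"),
    3: dict(Name="Mr. Quux", Location="Chicago", Type="Secondary"),
}

def build_index(records, field):
    # map each field value to the set of record IDs carrying it
    index = defaultdict(set)
    for uid, rec in records.items():
        index[rec[field]].add(uid)
    return index

by_location = build_index(users, 'Location')
by_type = build_index(users, 'Type')

# WHERE Location = 'Boston' OR Type = 'Primary'
print by_location['Boston'] | by_type['Primary']   # set([1, 2])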
Q: What is the Python equivalent of application & session scope variables? Recently started on python, wondered what the equivalent object was for storing session & application scope data? I'm using Google App Engine too so if it has any extra features (can't seem to find any immediate references myself) that would be useful A: I assume you're talking about a web session, used to retain state between http requests. Python is a general programming language and by itself doesn't contain the concept of a session. However most different python web frameworks have session implementations. Which one are you using? I've linked to the session documentation on some python web frameworks: mod_python django paste web.py
What is the Python equivalent of application & session scope variables?
Recently started on python, wondered what the equivalent object was for storing session & application scope data? I'm using Google App Engine too so if it has any extra features (can't seem to find any immediate references myself) that would be useful
[ "I assume you're talking about a web session, used to retain state between http requests. Python is a general programming language and by itself doesn't contain the concept of a session. \nHowever most different python web frameworks have session implementations. Which one are you using? I've linked to the session documentation on some python web frameworks:\n\nmod_python\ndjango\npaste\nweb.py\n\n" ]
[ 0 ]
[]
[]
[ "python", "scope", "session", "web_applications" ]
stackoverflow_0001472279_python_scope_session_web_applications.txt
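For the application-scope half of the question, a rough App Engine sketch (Python runtime of that era): module-level names act as a per-instance cache and memcache is shared across instances, though neither is durable storage. The memcache.get/set calls are real App Engine APIs; the key prefix and function names are made up, and per-user sessions would still come from one of the frameworks listed in the answer.

from google.appengine.api import memcache

_app_cache = {}   # per-instance "application scope"; lost when the instance recycles

def get_setting(key, default=None):
    if key in _app_cache:
        return _app_cache[key]
    value = memcache.get('setting:' + key)   # shared across instances
    if value is None:
        value = default
    _app_cache[key] = value
    return value

def set_setting(key, value):
    _app_cache[key] = value
    memcache.set('setting:' + key, value)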
Q: Using Heapy's Memory Profile Browser with Twisted.web I am trying to profile twisted python code with Heapy. For example (pseudo code): from twisted.web import resource, server from twisted.internet import reactor from guppy import hpy class RootResource(resource.Resource): render_GET(self, path, request): return "Hello World" if __name__ == '__main__': h = hpy() port = 8080 site = server.Site(RootResource(mq)) reactor.listenTCP(port, site) reactor.run() What do I need to do to view Heapy profile results in the profile browser? A: After looking over the guppy website and not finding any information about how to launch the profile browser there, I started looking around the guppy source and eventually found guppy/heapy/Prof.py, at the end of which I saw a docstring containing this line: [0] heapy_Use.html#heapykinds.Use.pb Then, remembering that I had see some documentation giving the return type of guppy.hpy as Use, I checked to see if guppy.hpy().pb() would do anything. And, indeed, it does. So that appears to be how the profiler browser is launched. I'm not sure if this is what you were asking, but I needed to figure it out before I could answer the other possible part of your question. :) It seems the simplest way to make this information available would be to make a resource in your web server that invokes Use.pb as part of its rendering process. There are other approaches, such as embedding a manhole in your application, or using a signal handler to trigger it, but I like the resource idea. So, for example: class ProfileBrowser(Resource): def render_GET(self, request): h.pb() return "You saw it, right?" ... root = RootResource(mq) root.putChild("profile-browser", ProfileBrowser()) ... Then you can visit /profile-browser whenever you want to look at the profile browser. The "pb" call blocks until the profile browser is exited (note, just closing the window with the wm destroy button doesn't seem to cause it to return - only the exit menu item seems to) so your server is hung until you dismiss the window, but for debugging purposes that seems like it may be fine.
Using Heapy's Memory Profile Browser with Twisted.web
I am trying to profile twisted python code with Heapy. For example (pseudo code): from twisted.web import resource, server from twisted.internet import reactor from guppy import hpy class RootResource(resource.Resource): render_GET(self, path, request): return "Hello World" if __name__ == '__main__': h = hpy() port = 8080 site = server.Site(RootResource(mq)) reactor.listenTCP(port, site) reactor.run() What do I need to do to view Heapy profile results in the profile browser?
[ "After looking over the guppy website and not finding any information about how to launch the profile browser there, I started looking around the guppy source and eventually found guppy/heapy/Prof.py, at the end of which I saw a docstring containing this line:\n[0] heapy_Use.html#heapykinds.Use.pb\n\nThen, remembering that I had see some documentation giving the return type of guppy.hpy as Use, I checked to see if guppy.hpy().pb() would do anything. And, indeed, it does. So that appears to be how the profiler browser is launched. I'm not sure if this is what you were asking, but I needed to figure it out before I could answer the other possible part of your question. :)\nIt seems the simplest way to make this information available would be to make a resource in your web server that invokes Use.pb as part of its rendering process. There are other approaches, such as embedding a manhole in your application, or using a signal handler to trigger it, but I like the resource idea. So, for example:\nclass ProfileBrowser(Resource):\n def render_GET(self, request):\n h.pb()\n return \"You saw it, right?\"\n\n...\nroot = RootResource(mq)\nroot.putChild(\"profile-browser\", ProfileBrowser())\n...\n\nThen you can visit /profile-browser whenever you want to look at the profile browser. The \"pb\" call blocks until the profile browser is exited (note, just closing the window with the wm destroy button doesn't seem to cause it to return - only the exit menu item seems to) so your server is hung until you dismiss the window, but for debugging purposes that seems like it may be fine.\n" ]
[ 6 ]
[]
[]
[ "heap_memory", "heapy", "profiling", "python", "twisted" ]
stackoverflow_0001331561_heap_memory_heapy_profiling_python_twisted.txt
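Putting the accepted answer's pieces together into one runnable sketch (the resource names, the port, and the simplified RootResource are my own; the question's mq argument is dropped). Visiting /profile-browser blocks the reactor until the Heapy browser is exited from its menu, so it is strictly a debugging aid.

from guppy import hpy
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import Site

h = hpy()

class RootResource(Resource):
    def getChild(self, name, request):
        if name == '':
            return self          # let "/" render this resource
        return Resource.getChild(self, name, request)

    def render_GET(self, request):
        return "Hello World"

class ProfileBrowser(Resource):
    isLeaf = True

    def render_GET(self, request):
        h.pb()                   # blocks until the profile browser window is exited
        return "Profile browser closed."

root = RootResource()
root.putChild("profile-browser", ProfileBrowser())
reactor.listenTCP(8080, Site(root))
reactor.run()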
Q: Understanding Python profile output I'm trying to use the Python profiler to speed up my code. I've been able to identify the specific function where nearly all of the time is spent, but I can't figure out where in that function the time is being spent. Below I have the profile output, which shows that "appendBallot" is the primary culprit and consumes nearly 116 seconds. Further below, I have the code for "appendBallot". I cannot figure out from the profile output, which part of "appendBallot" I need to optimize as the next highest time entry is less than a second. I'm sure many of you could tell me just from my code, but I'd like to understand how to get that information from the profile output. Any help would be greatly appreciated. Profile output: ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 116.168 116.168 <string>:1(<module>) 1 0.001 0.001 116.168 116.168 {execfile} 1 0.003 0.003 116.167 116.167 foo.py:1(<module>) 1 0.000 0.000 116.139 116.139 ballots.py:330(loadKnown) 1 0.000 0.000 116.109 116.109 plugins.py:148(load) 1 0.196 0.196 116.108 116.108 BltBallotLoader.py:37(loadFile) 100000 114.937 0.001 115.912 0.001 ballots.py:133(appendBallot) 100000 0.480 0.000 0.790 0.000 ballots.py:117(newBallot) 316668 0.227 0.000 0.310 0.000 ballots.py:107(getNumCandidates) 417310/417273 0.111 0.000 0.111 0.000 {len} 200510 0.071 0.000 0.071 0.000 {method 'append' of 'list' objects} 99996 0.045 0.000 0.045 0.000 {method 'add' of 'set' objects} 100000 0.042 0.000 0.042 0.000 {method 'has_key' of 'dict' objects} 1 0.000 0.000 0.030 0.030 plugins.py:202(getLoaderPluginClasses) 1 0.000 0.000 0.030 0.030 plugins.py:179(getPluginClasses) 1 0.000 0.000 0.030 0.030 plugins.py:205(getLoaderPluginClass) 3 0.016 0.005 0.029 0.010 {__import__} 1 0.022 0.022 0.025 0.025 ballots.py:1(<module>) 1 0.010 0.010 0.013 0.013 BltBallotLoader.py:1(<module>) 7 0.000 0.000 0.003 0.000 re.py:227(_compile) Code: def appendBallot(self, ballot, ballotID=None): "Append a ballot to this Ballots object." # String representation of ballot for determining whether ballot is unique ballotString = str(list(ballot)) # Ballot as the appropriate array to conserve memory ballot = self.newBallot(ballot) # Assign a ballot ID if one has not been given if ballotID is None: ballotID = len(self.ballotIDs) assert(ballotID not in self.ballotIDs) self.ballotIDs.append(ballotID) # Check to see if we have seen this ballot before if self.uniqueBallotsLookup.has_key(ballotString): i = self.uniqueBallotsLookup[ballotString] self.uniqueBallotIDs[i].add(ballotID) else: i = len(self.uniqueBallots) self.uniqueBallotsLookup[ballotString] = i self.uniqueBallots.append(ballot) self.uniqueBallotIDs.append(set([ballotID])) self.ballotOrder.append(i) A: Yeah I came across that same problem as well. The only way I know to work around this is to wrap your large function into several smaller function calls. This will allow the profiler to take into account each of the smaller function calls. Interesting enough, the process of doing this (for me, anyway) made it obvious where the inefficiencies were, so I didn't even have to run the profiler. A: I've had a look at your code, and it looks like you make a lot of function calls and attribute lookups as part of your 'checking' or looking ahead before leaping. You also have a lot of code dedicated to track the same condition, i.e many bits of code looking at creating 'unique' IDs. 
instead of trying to assign some kind of unique string to each ballot, couldn't you just use the ballotID (an integer number?) now you could have a dictionary (uniqueBallotIDs) mapping ballotID and the actual ballot object. the process might be something like this: def appendBallot(self, ballot, ballotID=None): if ballotID is None: ballotID = self._getuniqueid() # maybe just has a counter? up to you. # check to see if we have seen this ballot before. if not self._isunique(ballotID): # code for non-unique ballot ids. else: # code for unique ballot ids. self.ballotOrder.append(i) You might be able to handle some of your worries about the dictionary missing a given key by using a defaultdict (from the collections module). collection docs Edit for completeness I will include a sample usage of the defaultdict: >>> from collections import defaultdict >>> ballotIDmap = defaultdict(list) >>> ballotID, ballot = 1, object() # some nominal ballotID and object. >>> # I will now try to save my ballotID. >>> ballotIDmap[ballotID].append(ballot) >>> ballotIDmap.items() [(1, [<object object at 0x009BB950>])] A: I'll support Fragsworth by saying that you'll want to split up your function into smaller ones. Having said that, you are reading the output correctly: the tottime is the one to watch. Now for where your slowdown is likely to be: Since there seem to be 100000 calls to appendBallot, and there aren't any obvious loops, I'd suggest it is in your assert. Because you are executing: assert(ballotID not in self.ballotIDs) This will actually act as a loop. Thus, the first time you call this function, it will iterate through a (probably empty) array, and then assert if the value was found. The 100000th time it will iterate through the entire array. And there is actually a possible bug here: if a ballot is deleted, then the next ballot added would have the same id as the last added one (unless that were the one deleted). I think you would be better off using a simple counter. That way you can just increment it each time you add a ballot. Alternatively, you could use a UUID to get unique ids. Alternatively, if you are looking at some level of persistence, use an ORM, and get it to do the ID generation, and unique checking for you. A: Profilers can be like that. The method I use is this. It gets right to the heart of the problem in no time. A: I have used this decorator in my code, and it helped me with my pyparsing tuning work. A: You have two problems in this little slice of code: # Assign a ballot ID if one has not been given if ballotID is None: ballotID = len(self.ballotIDs) assert(ballotID not in self.ballotIDs) self.ballotIDs.append(ballotID) Firstly it appears that self.ballotIDs is a list, so the assert statement will cause quadratic behaviour. As you didn't give any documentation at all for your data structures, it's not possible to be prescriptive, but if the order of appearance doesn't matter, you could use a set instead of a list. Secondly, the logic (in the absence of documentation on what a ballotID is all about, and what a not-None ballotID arg means) seems seriously bugged: obj.appendBallot(ballota, 2) # self.ballotIDs -> [2] obj.appendBallot(ballotb) # self.ballotIDs -> [2, 1] obj.appendBallot(ballotc) # wants to add 2 but triggers assertion Other comments: Instead of adict.has_key(key), use key in adict -- it's faster and looks better. You may like to consider reviewing your data structures ... they appear to be slightly baroque; there may be a fair bit of CPU time involved in building them.
Understanding Python profile output
I'm trying to use the Python profiler to speed up my code. I've been able to identify the specific function where nearly all of the time is spent, but I can't figure out where in that function the time is being spent. Below I have the profile output, which shows that "appendBallot" is the primary culprit and consumes nearly 116 seconds. Further below, I have the code for "appendBallot". I cannot figure out from the profile output, which part of "appendBallot" I need to optimize as the next highest time entry is less than a second. I'm sure many of you could tell me just from my code, but I'd like to understand how to get that information from the profile output. Any help would be greatly appreciated. Profile output: ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 116.168 116.168 <string>:1(<module>) 1 0.001 0.001 116.168 116.168 {execfile} 1 0.003 0.003 116.167 116.167 foo.py:1(<module>) 1 0.000 0.000 116.139 116.139 ballots.py:330(loadKnown) 1 0.000 0.000 116.109 116.109 plugins.py:148(load) 1 0.196 0.196 116.108 116.108 BltBallotLoader.py:37(loadFile) 100000 114.937 0.001 115.912 0.001 ballots.py:133(appendBallot) 100000 0.480 0.000 0.790 0.000 ballots.py:117(newBallot) 316668 0.227 0.000 0.310 0.000 ballots.py:107(getNumCandidates) 417310/417273 0.111 0.000 0.111 0.000 {len} 200510 0.071 0.000 0.071 0.000 {method 'append' of 'list' objects} 99996 0.045 0.000 0.045 0.000 {method 'add' of 'set' objects} 100000 0.042 0.000 0.042 0.000 {method 'has_key' of 'dict' objects} 1 0.000 0.000 0.030 0.030 plugins.py:202(getLoaderPluginClasses) 1 0.000 0.000 0.030 0.030 plugins.py:179(getPluginClasses) 1 0.000 0.000 0.030 0.030 plugins.py:205(getLoaderPluginClass) 3 0.016 0.005 0.029 0.010 {__import__} 1 0.022 0.022 0.025 0.025 ballots.py:1(<module>) 1 0.010 0.010 0.013 0.013 BltBallotLoader.py:1(<module>) 7 0.000 0.000 0.003 0.000 re.py:227(_compile) Code: def appendBallot(self, ballot, ballotID=None): "Append a ballot to this Ballots object." # String representation of ballot for determining whether ballot is unique ballotString = str(list(ballot)) # Ballot as the appropriate array to conserve memory ballot = self.newBallot(ballot) # Assign a ballot ID if one has not been given if ballotID is None: ballotID = len(self.ballotIDs) assert(ballotID not in self.ballotIDs) self.ballotIDs.append(ballotID) # Check to see if we have seen this ballot before if self.uniqueBallotsLookup.has_key(ballotString): i = self.uniqueBallotsLookup[ballotString] self.uniqueBallotIDs[i].add(ballotID) else: i = len(self.uniqueBallots) self.uniqueBallotsLookup[ballotString] = i self.uniqueBallots.append(ballot) self.uniqueBallotIDs.append(set([ballotID])) self.ballotOrder.append(i)
[ "Yeah I came across that same problem as well.\nThe only way I know to work around this is to wrap your large function into several smaller function calls. This will allow the profiler to take into account each of the smaller function calls.\nInteresting enough, the process of doing this (for me, anyway) made it obvious where the inefficiencies were, so I didn't even have to run the profiler.\n", "I've had a look at your code, and it looks like you make a lot of function calls and attribute lookups as part of your 'checking' or looking ahead before leaping. You also have a lot of code dedicated to track the same condition, i.e many bits of code looking at creating 'unique' IDs. \ninstead of trying to assign some kind of unique string to each ballot, couldn't you just\nuse the ballotID (an integer number?)\nnow you could have a dictionary (uniqueBallotIDs) mapping ballotID and the actual ballot object.\nthe process might be something like this:\ndef appendBallot(self, ballot, ballotID=None):\n if ballotID is None:\n ballotID = self._getuniqueid() # maybe just has a counter? up to you.\n # check to see if we have seen this ballot before.\n if not self._isunique(ballotID):\n # code for non-unique ballot ids.\n else:\n # code for unique ballot ids.\n\n self.ballotOrder.append(i)\n\nYou might be able to handle some of your worries about the dictionary missing a given key\nby using a defaultdict (from the collections module). collection docs\nEdit for completeness I will include a sample usage of the defaultdict:\n>>> from collections import defaultdict \n\n>>> ballotIDmap = defaultdict(list)\n>>> ballotID, ballot = 1, object() # some nominal ballotID and object.\n>>> # I will now try to save my ballotID.\n>>> ballotIDmap[ballotID].append(ballot)\n>>> ballotIDmap.items()\n[(1, [<object object at 0x009BB950>])]\n\n", "I'll support Fragsworth by saying that you'll want to split up your function into smaller ones.\nHaving said that, you are reading the output correctly: the tottime is the one to watch. \nNow for where your slowdown is likely to be:\nSince there seem to be 100000 calls to appendBallot, and there aren't any obvious loops, I'd suggest it is in your assert. Because you are executing:\nassert(ballotID not in self.ballotIDs)\n\nThis will actually act as a loop. Thus, the first time you call this function, it will iterate through a (probably empty) array, and then assert if the value was found. The 100000th time it will iterate through the entire array.\nAnd there is actually a possible bug here: if a ballot is deleted, then the next ballot added would have the same id as the last added one (unless that were the one deleted). I think you would be better off using a simple counter. That way you can just increment it each time you add a ballot. Alternatively, you could use a UUID to get unique ids.\nAlternatively, if you are looking at some level of persistence, use an ORM, and get it to do the ID generation, and unique checking for you.\n", "Profilers can be like that. The method I use is this. It gets right to the heart of the problem in no time.\n", "I have used this decorator in my code, and it helped me with my pyparsing tuning work.\n", "You have two problems in this little slice of code:\n# Assign a ballot ID if one has not been given\nif ballotID is None:\n ballotID = len(self.ballotIDs)\nassert(ballotID not in self.ballotIDs)\nself.ballotIDs.append(ballotID)\n\nFirstly it appears that self.ballotIDs is a list, so the assert statement will cause quadratic behaviour. 
As you didn't give any documentation at all for your data structures, it's not possible to be prescriptive, but if the order of appearance doesn't matter, you could use a set instead of a list.\nSecondly, the logic (in the absence of documentation on what a ballotID is all about, and what a not-None ballotID arg means) seems seriously bugged:\nobj.appendBallot(ballota, 2) # self.ballotIDs -> [2]\nobj.appendBallot(ballotb) # self.ballotIDs -> [2, 1]\nobj.appendBallot(ballotc) # wants to add 2 but triggers assertion\n\nOther comments:\nInstead of adict.has_key(key), use key in adict -- it's faster and looks better.\nYou may like to consider reviewing your data structures ... they appear to be slightly baroque; there may be a fair bit of CPU time involved in building them.\n" ]
[ 7, 5, 5, 5, 4, 2 ]
[]
[]
[ "profile", "profiling", "python" ]
stackoverflow_0001469679_profile_profiling_python.txt
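A sketch of the fix the answers converge on, assuming nothing else relies on ballotIDs being an ordered list: keep the seen IDs in a set so the membership check is O(1), hand out fresh IDs from a counter instead of len(self.ballotIDs), and test dictionary keys with in rather than has_key(). The rest of the class (newBallot() and friends) is omitted.

class Ballots(object):
    def __init__(self):
        self.ballotIDs = set()            # was a list; a set gives O(1) membership tests
        self._nextBallotID = 0
        self.uniqueBallots = []
        self.uniqueBallotsLookup = {}
        self.uniqueBallotIDs = []
        self.ballotOrder = []

    def appendBallot(self, ballot, ballotID=None):
        ballotString = str(list(ballot))
        if ballotID is None:
            ballotID = self._nextBallotID
        assert ballotID not in self.ballotIDs      # set lookup, no longer a list scan
        self.ballotIDs.add(ballotID)
        self._nextBallotID = max(self._nextBallotID, ballotID + 1)

        if ballotString in self.uniqueBallotsLookup:   # 'in' instead of has_key()
            i = self.uniqueBallotsLookup[ballotString]
            self.uniqueBallotIDs[i].add(ballotID)
        else:
            i = len(self.uniqueBallots)
            self.uniqueBallotsLookup[ballotString] = i
            self.uniqueBallots.append(ballot)
            self.uniqueBallotIDs.append(set([ballotID]))
        self.ballotOrder.append(i)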
Q: How can I link against libpython.a such that the runtime linker can find all the symbols in libpython.a? In a sequel question to this question, my corporate environment lacks the libpython2.6.so shared object but has the libpython2.6.a file. Is there a way that I can compile in libpython2.6.a while retaining the symbols in libpython2.6.a such that dynamic libraries can find these symbols at runtime? My current compile with the static library looks like: g++ -I/usr/CORP/pkgs/python/2.6.2/include/python2.6 \ ~/tmp.cpp -pthread -lm -ldl -lutil \ /usr/CORP/pkgs/python/2.6.2/lib/python2.6/config/libpython2.6.a \ -o tmp.exe However, if I load a module like 'math', it dies with: undefined symbol: PyInt_FromLong A: You need to pass --export-dynamic to the linker. So from g++ it's... g++ -Wl,--export-dynamic ...
How can I link against libpython.a such that the runtime linker can find all the symbols in libpython.a?
In a sequel question to this question, my corporate environment lacks the libpython2.6.so shared object but has the libpython2.6.a file. Is there a way that I can compile in libpython2.6.a while retaining the symbols in libpython2.6.a such that dynamic libraries can find these symbols at runtime? My current compile with the static library looks like: g++ -I/usr/CORP/pkgs/python/2.6.2/include/python2.6 \ ~/tmp.cpp -pthread -lm -ldl -lutil \ /usr/CORP/pkgs/python/2.6.2/lib/python2.6/config/libpython2.6.a \ -o tmp.exe However, if I load a module like 'math', it dies with: undefined symbol: PyInt_FromLong
[ "You need to pass --export-dynamic to the linker. So from g++ it's...\ng++ -Wl,--export-dynamic ...\n\n" ]
[ 3 ]
[]
[]
[ "g++", "gcc", "python" ]
stackoverflow_0001472828_g++_gcc_python.txt
Q: Why is the WindowsError while deleting the temporary file? I have created a temporary file. Added some data to the file created. Saved it and then trying to delete it. But I am getting WindowsError. I have closed the file after editing it. How do I check which other process is accessing the file. C:\Documents and Settings\Administrator>python Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tempfile >>> __, filename = tempfile.mkstemp() >>> print filename c:\docume~1\admini~1\locals~1\temp\tmpm5clkb >>> fptr = open(filename, "wb") >>> fptr.write("Hello World!") >>> fptr.close() >>> import os >>> os.remove(filename) Traceback (most recent call last): File "<stdin>", line 1, in <module> WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\admini~1\\locals~1\\temp\\tmpm5clkb' A: From the documentation: mkstemp() returns a tuple containing an OS-level handle to an open file (as would be returned by os.open()) and the absolute pathname of that file, in that order. New in version 2.3. So, mkstemp returns both the OS file handle to and the filename of the temporary file. When you re-open the temp file, the original returned file handle is still open (no-one stops you from opening twice or more the same file in your program). If you want to operate on that OS file handle as a python file object, you can: >>> __, filename = tempfile.mkstemp() >>> fptr= os.fdopen(__) and then continue with your normal code. A: The file is still open. Do this: fh, filename = tempfile.mkstemp() ... os.close(fh) os.remove(filename) A: I believe you need to release the fptr to close the file cleanly. Try setting fptr to None.
Why is the WindowsError while deleting the temporary file?
I have created a temporary file. Added some data to the file created. Saved it and then trying to delete it. But I am getting WindowsError. I have closed the file after editing it. How do I check which other process is accessing the file. C:\Documents and Settings\Administrator>python Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tempfile >>> __, filename = tempfile.mkstemp() >>> print filename c:\docume~1\admini~1\locals~1\temp\tmpm5clkb >>> fptr = open(filename, "wb") >>> fptr.write("Hello World!") >>> fptr.close() >>> import os >>> os.remove(filename) Traceback (most recent call last): File "<stdin>", line 1, in <module> WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\admini~1\\locals~1\\temp\\tmpm5clkb'
[ "From the documentation:\n\nmkstemp() returns a tuple containing an OS-level handle to an open file (as would be returned by os.open()) and the absolute pathname of that file, in that order. New in version 2.3. \n\nSo, mkstemp returns both the OS file handle to and the filename of the temporary file. When you re-open the temp file, the original returned file handle is still open (no-one stops you from opening twice or more the same file in your program).\nIf you want to operate on that OS file handle as a python file object, you can:\n>>> __, filename = tempfile.mkstemp()\n>>> fptr= os.fdopen(__)\n\nand then continue with your normal code.\n", "The file is still open. Do this:\nfh, filename = tempfile.mkstemp()\n...\nos.close(fh)\nos.remove(filename)\n\n", "I believe you need to release the fptr to close the file cleanly. Try setting fptr to None.\n" ]
[ 11, 7, 0 ]
[]
[]
[ "python", "temporary_files" ]
stackoverflow_0001470350_python_temporary_files.txt
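Both answers boil down to the same thing: mkstemp() already returns an open OS-level handle, so either wrap that handle or close it before reopening the file by name. A small sketch of both patterns, pure standard library:

import os
import tempfile

# Pattern 1: wrap the handle mkstemp() returned instead of re-opening by name.
fd, filename = tempfile.mkstemp()
fptr = os.fdopen(fd, "wb")
fptr.write("Hello World!")
fptr.close()                 # the only open handle is now closed
os.remove(filename)

# Pattern 2: close the OS-level handle first, then open/close the file by name.
fd, filename = tempfile.mkstemp()
os.close(fd)                 # release the handle mkstemp() left open
fptr = open(filename, "wb")
fptr.write("Hello World!")
fptr.close()
os.remove(filename)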
Q: select_related does not join columns marked with nulll=True I have a Django model - class NoticedUser(models.Model): user = models.ForeignKey(User, null=False) text = models.CharField(max_length=255, null=True) photo = models.ForeignKey(Photo, null=True, blank=True) article = models.ForeignKey(Article, null=True, blank=True) date = models.DateTimeField(default=datetime.now) When I try to fetch the objects, i.e. with NoticedUser.objects.all().select_related(), resulting query does not contain joins with 'photo' and 'article' tables. I have looked through the Django sources and it seems that fields containing null=True should result in left join instead of inner join, but I have not found why appropriate left joins do not appear in resulting query. This causes additional queries while displaying related objects, as well as there is no possibility to perform custom joins for 'photo' and 'article' tables used in our project. Actually, joins appear only for fields with null=False, but I cannot change the field definitions. How can I add joins for fields with null=True to resulting query? Django version I use is 1.0.2. Thanks. A: It doesn't follow these relations by default when using select_related() with no parameters. You have to explicitly specify the names: NoticedUser.objects.all().select_related('article', 'photo')
select_related does not join columns marked with null=True
I have a Django model - class NoticedUser(models.Model): user = models.ForeignKey(User, null=False) text = models.CharField(max_length=255, null=True) photo = models.ForeignKey(Photo, null=True, blank=True) article = models.ForeignKey(Article, null=True, blank=True) date = models.DateTimeField(default=datetime.now) When I try to fetch the objects, i.e. with NoticedUser.objects.all().select_related(), resulting query does not contain joins with 'photo' and 'article' tables. I have looked through the Django sources and it seems that fields containing null=True should result in left join instead of inner join, but I have not found why appropriate left joins do not appear in resulting query. This causes additional queries while displaying related objects, as well as there is no possibility to perform custom joins for 'photo' and 'article' tables used in our project. Actually, joins appear only for fields with null=False, but I cannot change the field definitions. How can I add joins for fields with null=True to resulting query? Django version I use is 1.0.2. Thanks.
[ "It doesn't follow these relations by default when using select_related() with no parameters. You have to explicitly specify the names:\nNoticedUser.objects.all().select_related('article', 'photo')\n\n" ]
[ 4 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001472974_django_python.txt
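A sketch of how the answer is used in practice (Django 1.0-era API, model names from the question; not a standalone script since it needs the project's models). Naming the nullable relations makes select_related() follow them with outer joins, so the loop does not issue an extra query per row.

notices = NoticedUser.objects.select_related('article', 'photo')
for notice in notices:
    # article and photo were fetched by the initial query (or are None);
    # using user_id instead of user avoids a lookup on the un-selected relation.
    print notice.user_id, notice.article, notice.photo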
Q: What's a good resource for starting to write a programming language, that's not context free? I'm looking to write a programming language for fun, however most of the resource I have seen are for writing a context free language, however I wish to write a language that, like python, uses indentation, which to my understanding means it can't be context free. A: A context-free grammar is, simply, one that doesn't require a symbol table in order to correctly parse the code. A context-sensitive grammar does. The D programming language is an example of a context free grammar. C++ is a context sensitive one. (For example, is T*x declaring x to be pointer to T, or is it multiplying T by x ? We can only tell by looking up T in the symbol table to see if it is a type or a variable.) Whitespace has nothing to do with it. D uses a context free grammar in order to greatly simplify parsing it, and so that simple tools can parse it (such as syntax highlighting editors). A: You might want to read this rather well written essay on parsing Python, Python: Myths about Indentation. While I haven't tried to write a context free parser using something like yacc, I think it may be possible using a conditional lexer to return the indentation change tokens as described in the url. By the way, here is the official python grammar from python.org: http://www.python.org/doc/current/ref/grammar.txt A: I would familiarize myself with the problem first by reading up on some of the literature that's available on the subject. The classic Compilers book by Aho et. al. may be heavy on the math and comp sci, but a much more aproachable text is the Let's Build a Compiler articles by Jack Crenshaw. This is a series of articles that Mr. Crenshaw wrote back in the late 80's and it's the most under-appreciated text on compilers ever written. The approach is simple and to the point: Mr. Crenshaw shows "A" approach that works. You can easily go through the content in the span of a few evenings and have a much better understanding of what a compiler is all about. A couple of caveats are that the examples in the text are written in Turbo Pascal and the compilers emit 68K assembler. The examples are easy enough to port to a more current programming language and I recomment Python for that. But if you want to follow along as the examples are presented you will at least need Turbo Pascal 5.5 and a 68K assembler and emulator. The text is still relevant today and using these old technologies is really fun. I highly recommend it as anyone's first text on compilers. The great news is that languages like Python and Ruby are open sourced and you can download and study the C source code in order to better understand how it's done. A: "Context-free" is a relative term. Most context-free parsers actually parse a superset of the language which is context-free and then check the resulting parse tree to see if it is valid. For example, the following two C programs are valid according to the context-free grammar of C, but one quickly fails during context-checking: int main() { int i; i = 1; return 0; } int main() { int i; i = "Hello, world"; return 0; } Free of context, i = "Hello, world"; is a perfectly valid assignment, but in context you can see that the types are all wrong. If the context were char* i; it would be okay. So the context-free parser will see nothing wrong with that assignment. It's not until the compiler starts checking types (which are context dependent) that it will catch the error. 
Anything that can be produced with a keyboard can be parsed as context-free; at the very least you can check that all the characters used are valid (the set of all strings containing only displayable Unicode Characters is a context-free grammar). The only limitation is how useful your grammar is and how much context-sensitive checking you have to do on your resulting parse tree. Whitespace-dependent languages like Python make your context-free grammar less useful and therefore require more context-sensitive checking later on (much of this is done at runtime in Python through dynamic typing). But there is still plenty that a context-free parser can do before context-sensitive checking is needed. A: I don't know of any tutorials/guides, but you could try looking at the source for tinypy, it's a very small implementation of a python like language. A: Using indentation in a language doesn't necessarily mean that the language's grammar can not be context free. I.e. the indentation will determine in which scope a statement exists. A statement will still be a statement no matter which scope it is defined within (scope can often be handled by a different part of the compiler/interpreter, generally during a semantic parse). That said a good resource is the antlr tool (http://www.antlr.org). The author of the tool has also produced a book on creating parsers for languages using antlr (http://www.pragprog.com/titles/tpantlr/the-definitive-antlr-reference). There is pretty good documentation and lots of example grammars. A: If you're really going to take a whack at language design and implementation, you might want to add the following to your bookshelf: Programming Language Pragmatics, Scott et al. Design Concepts in Programming Languages, Turbak et al. Modern Compiler Design, Grune et al. (I sacrilegiously prefer this to "The Dragon Book" by Aho et al.) Gentler introductions such as: Crenshaw's tutorial (as suggested by @'Jonas Gorauskas' here) The Definitive ANTLR Reference by Parr Martin Fowler's recent work on DSLs You should also consider your implementation language. This is one of those areas where different languages vastly differ in what they facilitate. You should consider languages such as LISP, F# / OCaml, and Gilad Bracha's new language Newspeak. A: I would recommend that you write your parser by hand, in which case having significant whitespace should not present any real problems. The main problem with using a parser generator is that it is difficult to get good error recovery in the parser. If you plan on implementing an IDE for your language, then having good error recovery is important for getting things like Intellisence to work. Intellisence always works on incomplete syntactic constructs, and the better the parser is at figuring out what construct the user is trying to type, the better an intellisence experience you can deliver. If you write a hand-written top-down parser, you can pretty much implement what ever rules you want, where ever you want to. This is what makes it easy to provide error recovery. It will also make it trivial for you to implement significant whitespace. You can simply store what the current indentation level is in a variable inside your parser class, and can stop parsing blocks when you encounter a token on a new line that has a column position that is less than the current indentation level. Also, chances are that you are going to run into ambiguities in your grammar. Most “production” languages in wide use have syntactic ambiguities. 
A good example is generics in C# (there are ambiguities around "<" in an expression context, it can be either a "less-than" operator, or the start of a "generic argument list"). In a hand-written parser solving ambiguities like that are trivial. You can just add a little bit of non-determinism where you need it with relatively little impact on the rest of the parser, Furthermore, because you are designing the language yourself, you should assume it's design is going to evolve rapidly (for some languages with standards committees, like C++ this is not the case). Making changes to automatically generated parsers to either handle ambiguities, or evolve the language, may require you to do significant refactoring of the grammar, which can be both irritating and time consuming. Changes to hand written parsers, particularly for top-down parsers, are usually pretty localized. I would say that parser generators are only a good choice if: You never plan on writing an IDE ever, The language has really simple syntax, or You need a parser extremely quickly, and are ok with a bad user experience A: Have you read Aho, Sethi, Ullman: "Compilers: Principles, Techniques, and Tools"? It is a classical language reference book. /Allan A: If you've never written a parser before, start with something simple. Parsers are surprisingly subtle, and you can get into all sorts of trouble writing them if you've never studied the structure of programming languages. Reading Aho, Sethi, and Ullman (it's known as "The Dragon Book") is a good plan. Contrary to other contributors, I say you should play with simpler parser generators like Yacc and Bison first, and only when you get burned because you can't do something with that tool should you go on to try to build something with an LL(*) parser like Antlr. A: Just because a language uses significant indentation doesn't mean that it is inherently context-sensitive. As an example, Haskell makes use of significant indentation, and (to my knowledge) its grammar is context-free. An example of source requiring a context-sensitive grammar could be this snippet from Ruby: my_essay = << END_STR This is within the string END_STR << self def other_method ... end end Another example would be Scala's XML mode: def doSomething() = { val xml = <code>def val <tag/> class</code> xml } As a general rule, context-sensitive languages are slightly harder to imagine in any precise sense and thus far less common. Even Ruby and Scala don't really count since their context sensitive features encompass only a minor sub-set of the language. If I were you, I would formulate my grammar as inspiration dictates and then worry about parsing methodologies at a later date. I think you'll find that whatever you come up with will be naturally context-free, or very close to it. As a final note, if you really need context-sensitive parsing tools, you might try some of the less rigidly formal techniques. Parser combinators are used in Scala's parsing. They have some annoying limitations (no lexing), but they aren't a bad tool. LL(*) tools like ANTLR also seem to be more adept at expressing such "ad hoc" parsing escapes. Don't try to use Yacc or Bison with a context-sensitive grammar, they are far to strict to express such concepts easily. A: A context-sensitive language? This one's non-indented: Protium (http://www.protiumble.com)
What's a good resource for starting to write a programming language, that's not context free?
I'm looking to write a programming language for fun; however, most of the resources I have seen are for writing a context-free language. I wish to write a language that, like Python, uses indentation, which to my understanding means it can't be context-free.
[ "A context-free grammar is, simply, one that doesn't require a symbol table in order to correctly parse the code. A context-sensitive grammar does.\nThe D programming language is an example of a context free grammar. C++ is a context sensitive one. (For example, is T*x declaring x to be pointer to T, or is it multiplying T by x ? We can only tell by looking up T in the symbol table to see if it is a type or a variable.)\nWhitespace has nothing to do with it.\nD uses a context free grammar in order to greatly simplify parsing it, and so that simple tools can parse it (such as syntax highlighting editors).\n", "You might want to read this rather well written essay on parsing Python, Python: Myths about Indentation.\nWhile I haven't tried to write a context free parser using something like yacc, I think it may be possible using a conditional lexer to return the indentation change tokens as described in the url.\nBy the way, here is the official python grammar from python.org: http://www.python.org/doc/current/ref/grammar.txt\n", "I would familiarize myself with the problem first by reading up on some of the literature that's available on the subject. The classic Compilers book by Aho et. al. may be heavy on the math and comp sci, but a much more aproachable text is the Let's Build a Compiler articles by Jack Crenshaw. This is a series of articles that Mr. Crenshaw wrote back in the late 80's and it's the most under-appreciated text on compilers ever written. The approach is simple and to the point: Mr. Crenshaw shows \"A\" approach that works. You can easily go through the content in the span of a few evenings and have a much better understanding of what a compiler is all about. A couple of caveats are that the examples in the text are written in Turbo Pascal and the compilers emit 68K assembler. The examples are easy enough to port to a more current programming language and I recomment Python for that. But if you want to follow along as the examples are presented you will at least need Turbo Pascal 5.5 and a 68K assembler and emulator. The text is still relevant today and using these old technologies is really fun. I highly recommend it as anyone's first text on compilers. The great news is that languages like Python and Ruby are open sourced and you can download and study the C source code in order to better understand how it's done. \n", "\"Context-free\" is a relative term. Most context-free parsers actually parse a superset of the language which is context-free and then check the resulting parse tree to see if it is valid. For example, the following two C programs are valid according to the context-free grammar of C, but one quickly fails during context-checking:\nint main()\n{\n int i;\n i = 1;\n return 0;\n}\n\nint main()\n{\n int i;\n i = \"Hello, world\";\n return 0;\n}\n\nFree of context, i = \"Hello, world\"; is a perfectly valid assignment, but in context you can see that the types are all wrong. If the context were char* i; it would be okay. So the context-free parser will see nothing wrong with that assignment. It's not until the compiler starts checking types (which are context dependent) that it will catch the error.\nAnything that can be produced with a keyboard can be parsed as context-free; at the very least you can check that all the characters used are valid (the set of all strings containing only displayable Unicode Characters is a context-free grammar). 
The only limitation is how useful your grammar is and how much context-sensitive checking you have to do on your resulting parse tree.\nWhitespace-dependent languages like Python make your context-free grammar less useful and therefore require more context-sensitive checking later on (much of this is done at runtime in Python through dynamic typing). But there is still plenty that a context-free parser can do before context-sensitive checking is needed.\n", "I don't know of any tutorials/guides, but you could try looking at the source for tinypy, it's a very small implementation of a python like language.\n", "Using indentation in a language doesn't necessarily mean that the language's grammar can not be context free. I.e. the indentation will determine in which scope a statement exists. A statement will still be a statement no matter which scope it is defined within (scope can often be handled by a different part of the compiler/interpreter, generally during a semantic parse).\nThat said a good resource is the antlr tool (http://www.antlr.org). The author of the tool has also produced a book on creating parsers for languages using antlr (http://www.pragprog.com/titles/tpantlr/the-definitive-antlr-reference). There is pretty good documentation and lots of example grammars.\n", "If you're really going to take a whack at language design and implementation, you might want to add the following to your bookshelf:\n\nProgramming Language Pragmatics, Scott et al.\nDesign Concepts in Programming Languages, Turbak et al.\nModern Compiler Design, Grune et al. (I sacrilegiously prefer this to \"The Dragon Book\" by Aho et al.)\n\nGentler introductions such as:\n\nCrenshaw's tutorial (as suggested by @'Jonas Gorauskas' here)\nThe Definitive ANTLR Reference by Parr\nMartin Fowler's recent work on DSLs\n\nYou should also consider your implementation language. This is one of those areas where different languages vastly differ in what they facilitate. You should consider languages such as LISP, F# / OCaml, and Gilad Bracha's new language Newspeak. \n", "I would recommend that you write your parser by hand, in which case having significant whitespace should not present any real problems.\nThe main problem with using a parser generator is that it is difficult to get good error recovery in the parser. If you plan on implementing an IDE for your language, then having good error recovery is important for getting things like Intellisence to work. Intellisence always works on incomplete syntactic constructs, and the better the parser is at figuring out what construct the user is trying to type, the better an intellisence experience you can deliver.\nIf you write a hand-written top-down parser, you can pretty much implement what ever rules you want, where ever you want to. This is what makes it easy to provide error recovery. It will also make it trivial for you to implement significant whitespace. You can simply store what the current indentation level is in a variable inside your parser class, and can stop parsing blocks when you encounter a token on a new line that has a column position that is less than the current indentation level. Also, chances are that you are going to run into ambiguities in your grammar. Most “production” languages in wide use have syntactic ambiguities. A good example is generics in C# (there are ambiguities around \"<\" in an expression context, it can be either a \"less-than\" operator, or the start of a \"generic argument list\"). 
In a hand-written parser solving ambiguities like that are trivial. You can just add a little bit of non-determinism where you need it with relatively little impact on the rest of the parser, \nFurthermore, because you are designing the language yourself, you should assume it's design is going to evolve rapidly (for some languages with standards committees, like C++ this is not the case). Making changes to automatically generated parsers to either handle ambiguities, or evolve the language, may require you to do significant refactoring of the grammar, which can be both irritating and time consuming. Changes to hand written parsers, particularly for top-down parsers, are usually pretty localized.\nI would say that parser generators are only a good choice if:\n\nYou never plan on writing an IDE ever,\nThe language has really simple syntax, or\nYou need a parser extremely quickly, and are ok with a bad user experience\n\n", "Have you read Aho, Sethi, Ullman: \"Compilers: Principles, Techniques, and Tools\"? It is a classical language reference book.\n/Allan\n", "If you've never written a parser before, start with something simple. Parsers are surprisingly subtle, and you can get into all sorts of trouble writing them if you've never studied the structure of programming languages.\nReading Aho, Sethi, and Ullman (it's known as \"The Dragon Book\") is a good plan. Contrary to other contributors, I say you should play with simpler parser generators like Yacc and Bison first, and only when you get burned because you can't do something with that tool should you go on to try to build something with an LL(*) parser like Antlr.\n", "Just because a language uses significant indentation doesn't mean that it is inherently context-sensitive. As an example, Haskell makes use of significant indentation, and (to my knowledge) its grammar is context-free.\nAn example of source requiring a context-sensitive grammar could be this snippet from Ruby:\nmy_essay = << END_STR\nThis is within the string\nEND_STR\n\n<< self\n def other_method\n ...\n end\nend\n\nAnother example would be Scala's XML mode:\ndef doSomething() = {\n val xml = <code>def val <tag/> class</code>\n xml\n}\n\nAs a general rule, context-sensitive languages are slightly harder to imagine in any precise sense and thus far less common. Even Ruby and Scala don't really count since their context sensitive features encompass only a minor sub-set of the language. If I were you, I would formulate my grammar as inspiration dictates and then worry about parsing methodologies at a later date. I think you'll find that whatever you come up with will be naturally context-free, or very close to it.\nAs a final note, if you really need context-sensitive parsing tools, you might try some of the less rigidly formal techniques. Parser combinators are used in Scala's parsing. They have some annoying limitations (no lexing), but they aren't a bad tool. LL(*) tools like ANTLR also seem to be more adept at expressing such \"ad hoc\" parsing escapes. Don't try to use Yacc or Bison with a context-sensitive grammar, they are far to strict to express such concepts easily.\n", "A context-sensitive language? This one's non-indented: Protium (http://www.protiumble.com)\n" ]
[ 19, 6, 5, 3, 2, 2, 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "compiler_construction", "interpreter", "programming_languages", "python" ]
stackoverflow_0000068243_compiler_construction_interpreter_programming_languages_python.txt
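A sketch of the indentation trick several answers allude to (and that the linked essay describes): a pre-pass turns indentation changes into explicit INDENT/DEDENT tokens, after which the grammar proper can stay context-free. Heavily simplified: spaces only, no tabs, no bracket continuations, and no check that a dedent lands on a previously seen level.

def indent_tokens(source):
    indents = [0]                            # stack of open indentation levels
    for line in source.splitlines():
        if not line.strip():
            continue                         # blank lines carry no indentation info
        width = len(line) - len(line.lstrip(' '))
        if width > indents[-1]:
            indents.append(width)
            yield ('INDENT', width)
        while width < indents[-1]:
            indents.pop()
            yield ('DEDENT', width)
        yield ('LINE', line.strip())
    while len(indents) > 1:                  # close whatever is still open at EOF
        indents.pop()
        yield ('DEDENT', 0)

sample = "if x:\n    do_this()\n    if y:\n        do_that()\ndone()\n"
for token in indent_tokens(sample):
    print token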
Q: open source data mining/text analysis tools in python I have a database full of reviews of various products. My task is to perform various calculation and "create" another "database/xml-export" with aggregated data. I am thinking of writing command line programs in python to do that. But I know someone have done this before and I know that there is some open source python solution or similar which probably gives lot more interesting "aggregated data" then I can possibly think off. The problem is I don't really know much about this area other then basic data manipulation from command line nor I know what are the terms I should use to even search for this thing.. I am really not looking for some scientific/visualization stuff (not that I don't mind if the tool provides), something simple to start with and gradually see/develop stuff what I need. My only requirement is either the "end aggregated data" be in a database or export as XML file no proprietary stuff. Its a bit robust then my python scripts as I have to deal with "lots" of data across 4 machines. Any hint where should I start my research? Thanks. A: What kind of analysis are you trying to do? If you're analyzing text take a look at the Natural Language Toolkit (NLTK). If you want to index and search the data, take a look at the whoosh search engine. Please provide some more detail on what kind of analysis you're looking to do. A: Looks like you are looking for a Data Integration solution. One suggestion is the open source Kettle project part of the Pentaho suite. For python, a quick search yielded PyDI and SnapLogic
open source data mining/text analysis tools in python
I have a database full of reviews of various products. My task is to perform various calculations and "create" another "database/xml-export" with aggregated data. I am thinking of writing command-line programs in Python to do that. But I know someone has done this before, and I know that there is some open source Python solution or similar which probably gives far more interesting "aggregated data" than I can possibly think of. The problem is that I don't really know much about this area beyond basic data manipulation from the command line, nor do I know what terms I should use to even search for this. I am really not looking for scientific/visualization stuff (not that I mind if the tool provides it), just something simple to start with that I can gradually develop into what I need. My only requirement is that the "end aggregated data" be in a database or exported as an XML file, with no proprietary stuff. It needs to be a bit more robust than my Python scripts, as I have to deal with "lots" of data across 4 machines. Any hint where I should start my research? Thanks.
[ "What kind of analysis are you trying to do?\nIf you're analyzing text take a look at the Natural Language Toolkit (NLTK).\nIf you want to index and search the data, take a look at the whoosh search engine.\nPlease provide some more detail on what kind of analysis you're looking to do.\n", "Looks like you are looking for a Data Integration solution.\nOne suggestion is the open source Kettle project part of the Pentaho suite.\nFor python, a quick search yielded PyDI and SnapLogic\n" ]
[ 1, 1 ]
[]
[]
[ "analyzer", "data_mining", "database", "python" ]
stackoverflow_0001473087_analyzer_data_mining_database_python.txt
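Since the only hard requirement is "aggregate, then put it in a database or XML, nothing proprietary", here is a standard-library-only sketch of that last step: count words per product and dump the top terms as XML. The reviews.db file and its product/review_text columns are invented for the example; NLTK or whoosh, as suggested above, would slot in where the naive word counting is.

import sqlite3
import xml.etree.ElementTree as ET
from collections import defaultdict

conn = sqlite3.connect('reviews.db')                     # hypothetical source database
counts = defaultdict(lambda: defaultdict(int))
for product, text in conn.execute("SELECT product, review_text FROM reviews"):
    for word in text.lower().split():
        counts[product][word] += 1

root = ET.Element('products')
for product, words in counts.items():
    node = ET.SubElement(root, 'product', name=product)
    top = sorted(words.items(), key=lambda kv: -kv[1])[:20]
    for word, n in top:
        ET.SubElement(node, 'term', word=word, count=str(n))
ET.ElementTree(root).write('aggregated.xml')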
Q: How to create a hardlink on attached Volumes on Mac? os.link is not working for the attached Volumes on Mac. ~ $ python Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.link("/Volumes/ARCHANA/JULY 09/PRAMANPATRA.doc", "/Volumes/ARCHANA/temp") Traceback (most recent call last): File "<stdin>", line 1, in <module> OSError: [Errno 45] Operation not supported >>> A: You're working on a mac, yet the volume ARCHANA might not have a link-able file system. (The uppercase label makes it suspicious.) Also, you are trying to refer a hard link to a directory and "Hard links may not normally refer to directories and may not span file systems." (from the man page.) One last thing to try seems the directory name 'July 09'. It might be worth inspecting the os.link function to check that it works with spaces in directory names. A: What filesystem in on ARCHANA? And are you trying to link to a directory? Not all file systems support hardlinks, and very few support hardlinks to directories. In particular USB mass-storage devices are generally formatted as with FAT filesystems which do not support links.
How to create a hardlink on attached Volumes on Mac?
os.link is not working for the attached Volumes on Mac. ~ $ python Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.link("/Volumes/ARCHANA/JULY 09/PRAMANPATRA.doc", "/Volumes/ARCHANA/temp") Traceback (most recent call last): File "<stdin>", line 1, in <module> OSError: [Errno 45] Operation not supported >>>
[ "You're working on a mac, yet the volume ARCHANA might not have a link-able file system. (The uppercase label makes it suspicious.)\nAlso, you are trying to refer a hard link to a directory and \"Hard links may not normally refer to directories and may not span file systems.\" (from the man page.)\nOne last thing to try seems the directory name 'July 09'. It might be worth inspecting the os.link function to check that it works with spaces in directory names.\n", "What filesystem in on ARCHANA? And are you trying to link to a directory? Not all file systems support hardlinks, and very few support hardlinks to directories.\nIn particular USB mass-storage devices are generally formatted as with FAT filesystems which do not support links.\n" ]
[ 2, 1 ]
[]
[]
[ "macos", "python" ]
stackoverflow_0001473368_macos_python.txt
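If the volume's filesystem simply cannot hard-link (FAT-formatted USB sticks being the usual case), the pragmatic workaround is to fall back to a copy. A sketch only; the destination filename is made up, and real code might inspect e.errno (45 here) before deciding to fall back.

import os
import shutil

def link_or_copy(src, dst):
    try:
        os.link(src, dst)
    except OSError:
        # e.g. errno 45: the filesystem does not support hard links
        shutil.copy2(src, dst)

link_or_copy("/Volumes/ARCHANA/JULY 09/PRAMANPATRA.doc",
             "/Volumes/ARCHANA/JULY 09/PRAMANPATRA_copy.doc")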
Q: Do I need multiple cursor objects to loop over a recordset and update at the same time? So I've got a large database that I can't hold in memory at once. I've got to loop over every item in a table, process it, and put the processed data into another column in the table. While I'm looping over my cursor, if I try to run an update statement it truncates the recordset (I believe because it's re-purposing the cursor object). Questions: Will creating a second cursor object to run the update statements allow me to continue looping over the original select statement? Do I need a second connection to the database in order to have a second cursor object, that will allow me do do this? How would sqlite respond to having two connections to the database, one reading from the table, the other writing to it? My code (simplified): import sqlite3 class DataManager(): """ Manages database (used below). I cut this class way down to avoid confusion in the question. """ def __init__(self, db_path): self.connection = sqlite3.connect(db_path) self.connection.text_factory = str self.cursor = self.connection.cursor() def genRecordset(self, str_sql, subs=tuple()): """ Generate records as tuples, for str_sql. """ self.cursor.execute(str_sql, subs) for row in self.cursor: yield row select = """ SELECT id, unprocessed_content FROM data_table WHERE processed_content IS NULL """ update = """ UPDATE data_table SET processed_content = ? WHERE id = ? """ data_manager = DataManager(r'C:\myDatabase.db') subs = [] for row in data_manager.genRecordset(str_sql): id, unprocessed_content = row processed_content = processContent(unprocessed_content) subs.append((processed_content, id)) #every n records update the database (whenever I run out of memory) if len(subs) >= 1000: data_manager.cursor.executemany(update, subs) data_manager.connection.commit() subs = [] #update remaining records if subs: data_manager.cursor.executemany(update, subs) data_manager.connection.commit() The other method I tried was to modify my select statement to be: select = """ SELECT id, unprocessed_content FROM data_table WHERE processed_content IS NULL LIMIT 1000 """ Then I would do: recordset = data_manager.cursor.execute(select) while recordset: #do update stuff... recordset = data_manager.cursor.execute(select) The problem I had with this was that my real select statement has a JOIN in it and takes a while, so executing the JOIN that many times is very time intensive. I'm trying to speed up the process by only doing the select once, then using a generator so I don't have to hold it all in memory. Solution: Ok, so the answer to my first two questions is "No." To my third question, once a connection is made to a database, it locks the entire database, so another connection won't be able to do anything until the first connection is closed. I couldn't find the source code for it, but from empirical evidence I believe that a connection can only use one cursor object at a time and the last run query takes precedence. This means that, while I'm looping over the selected recordset yielding one row at a time, as soon as I run my first update statement my generator stops yielding rows. My solution is to create a temporary database that I stick the processed_content in with the id, so that I have one connection/cursor object per database and can continue looping over the selected recordset, while inserting into the temporary database periodically. 
Once I reach the end of my selected recordset I transfer the data in the temporary database back to the original. If anyone knows for sure about the connection/cursor objects, let me know in a comment. A: I think you have roughly the right architecture -- presenting it in terms of "cursors" WILL confuse the "old SQL hands", because they'll be thinking of the many issues connected with DECLARE foo CURSOR, FETCH FROM CURSOR, WHERE CURRENT OF CURSOR, and other such beauts having to do with SQL cursors. Python DB API's "cursor" is simply a convenient way to package and execute SQL statements, not necessarily connected with SQL cursors -- it won't suffer from any of those problems -- though it may present its (completely original) own ones;-) But, with the "batching" of results you're doing, your proper commits, etc, you have preventively finessed most of those "original problems" I had in mind. On some other engines I'd suggest doing first a select into a temporary table, then reading from that temporary table while updating the primary one, but I'm uncertain how the performance would be affected in sqlite, depending on what indices you have (if no index is affected by your update, then I suspect that such a temporary table would not be an optimization at all in sqlite -- but I can't run benchmarks on your data, the only real way to check performance hypotheses). So, I'd say, go for it!-) A: Is it possible to create a DB function that will process your content? If so, you should be able to write a single update statement and let the database do all the work. Eg; Update data_table set processed_col = Process_Column(col_to_be_processed) A: Cursors are bad bad bad for a multitude of reasons. I'd suggest (and a lot of others will definitely chime in) that you use a single UPDATE statement instead of going the CURSOR route. Can your Processed_Content be sent as a parameter to a single query that does set based operations like so: UPDATE data_table SET processed_content = ? WHERE processed_content IS NULL LIMIT 1000 Edited based on responses: Since every row has a unique value for Processed_Content, you have no option but to use a recordset and a loop. I have done this in the past on multiple occasions. What you are suggesting should work effectively.
Do I need multiple cursor objects to loop over a recordset and update at the same time?
So I've got a large database that I can't hold in memory at once. I've got to loop over every item in a table, process it, and put the processed data into another column in the table. While I'm looping over my cursor, if I try to run an update statement it truncates the recordset (I believe because it's re-purposing the cursor object). Questions: Will creating a second cursor object to run the update statements allow me to continue looping over the original select statement? Do I need a second connection to the database in order to have a second cursor object, that will allow me do do this? How would sqlite respond to having two connections to the database, one reading from the table, the other writing to it? My code (simplified): import sqlite3 class DataManager(): """ Manages database (used below). I cut this class way down to avoid confusion in the question. """ def __init__(self, db_path): self.connection = sqlite3.connect(db_path) self.connection.text_factory = str self.cursor = self.connection.cursor() def genRecordset(self, str_sql, subs=tuple()): """ Generate records as tuples, for str_sql. """ self.cursor.execute(str_sql, subs) for row in self.cursor: yield row select = """ SELECT id, unprocessed_content FROM data_table WHERE processed_content IS NULL """ update = """ UPDATE data_table SET processed_content = ? WHERE id = ? """ data_manager = DataManager(r'C:\myDatabase.db') subs = [] for row in data_manager.genRecordset(str_sql): id, unprocessed_content = row processed_content = processContent(unprocessed_content) subs.append((processed_content, id)) #every n records update the database (whenever I run out of memory) if len(subs) >= 1000: data_manager.cursor.executemany(update, subs) data_manager.connection.commit() subs = [] #update remaining records if subs: data_manager.cursor.executemany(update, subs) data_manager.connection.commit() The other method I tried was to modify my select statement to be: select = """ SELECT id, unprocessed_content FROM data_table WHERE processed_content IS NULL LIMIT 1000 """ Then I would do: recordset = data_manager.cursor.execute(select) while recordset: #do update stuff... recordset = data_manager.cursor.execute(select) The problem I had with this was that my real select statement has a JOIN in it and takes a while, so executing the JOIN that many times is very time intensive. I'm trying to speed up the process by only doing the select once, then using a generator so I don't have to hold it all in memory. Solution: Ok, so the answer to my first two questions is "No." To my third question, once a connection is made to a database, it locks the entire database, so another connection won't be able to do anything until the first connection is closed. I couldn't find the source code for it, but from empirical evidence I believe that a connection can only use one cursor object at a time and the last run query takes precedence. This means that, while I'm looping over the selected recordset yielding one row at a time, as soon as I run my first update statement my generator stops yielding rows. My solution is to create a temporary database that I stick the processed_content in with the id, so that I have one connection/cursor object per database and can continue looping over the selected recordset, while inserting into the temporary database periodically. Once I reach the end of my selected recordset I transfer the data in the temporary database back to the original. If anyone knows for sure about the connection/cursor objects, let me know in a comment.
[ "I think you have roughly the right architecture -- presenting it in terms of \"cursors\" WILL confuse the \"old SQL hands\", because they'll be thinking of the many issues connected with DECLARE foo CURSOR, FETCH FROM CURSOR, WHERE CURRENT OF CURSOR, and other such beauts having to do with SQL cursors. Python DB API's \"cursor\" is simply a convenient way to package and execute SQL statements, not necessarily connected with SQL cursors -- it won't suffer from any of those problems -- though it may present its (completely original) own ones;-) But, with the \"batching\" of results you're doing, your proper commits, etc, you have preventively finessed most of those \"original problems\" I had in mind.\nOn some other engines I'd suggest doing first a select into a temporary table, then reading from that temporary table while updating the primary one, but I'm uncertain how the performance would be affected in sqlite, depending on what indices you have (if no index is affected by your update, then I suspect that such a temporary table would not be an optimization at all in sqlite -- but I can't run benchmarks on your data, the only real way to check performance hypotheses).\nSo, I'd say, go for it!-)\n", "Is it possible to create a DB function that will process your content? If so, you should be able to write a single update statement and let the database do all the work. Eg;\nUpdate data_table\nset processed_col = Process_Column(col_to_be_processed)\n\n", "Cursors are bad bad bad for a multitude of reasons.\nI'd suggest (and a lot of others will definitely chime in) that you use a single UPDATE statement instead of going the CURSOR route.\nCan your Processed_Content be sent as a parameter to a single query that does set based operations like so:\nUPDATE data_table\nSET processed_content = ?\nWHERE processed_content IS NULL\nLIMIT 1000\n\nEdited based on responses:\nSince every row has a unique value for Processed_Content, you have no option but to use a recordset and a loop. I have done this in the past on multiple occasions. What you are suggesting should work effectively.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "database", "database_cursor", "python", "sqlite" ]
stackoverflow_0001462511_database_database_cursor_python_sqlite.txt
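For readers landing here, a minimal sketch of the workaround described in the question's edit: keep the read cursor on the main database and write the processed values to a separate scratch database, then copy them back once the read finishes. The table and column names come from the question; the scratch file name, batch size and the stand-in processing step are assumptions.

import sqlite3

main = sqlite3.connect("myDatabase.db")
scratch = sqlite3.connect("scratch.db")    # assumed temporary database
scratch.execute("CREATE TABLE IF NOT EXISTS results (id INTEGER, processed_content TEXT)")

read_cursor = main.cursor()
read_cursor.execute("SELECT id, unprocessed_content FROM data_table "
                    "WHERE processed_content IS NULL")

batch = []
for row_id, raw in read_cursor:            # this connection only ever reads
    batch.append((row_id, raw.upper()))    # stand-in for the question's processContent()
    if len(batch) >= 1000:
        scratch.executemany("INSERT INTO results VALUES (?, ?)", batch)
        scratch.commit()
        batch = []
if batch:
    scratch.executemany("INSERT INTO results VALUES (?, ?)", batch)
    scratch.commit()

# the select is finished, so it is now safe to write back to the main database
for row_id, processed in scratch.execute("SELECT id, processed_content FROM results"):
    main.execute("UPDATE data_table SET processed_content = ? WHERE id = ?",
                 (processed, row_id))
main.commit()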
Q: django view function code runs after return I have a django view that looks like... def add_user(request): if User.objects.get(username__exact = request.POST['username']): context = { 'message': "Username already taken"} return render_to_response("mytemplate.html", context, RequestContext(request)) newUser = User(username="freeandclearusername") newUser.save() #then other code that is related to setting up a new user. The other code that is related to setting up the user is still ran even if the initial conditional statement fails and the "return render_to_response()" is called. The page is rendered with the correct context but other information is added to the database after the initial return. I thought that the code after the "return render_to_response()" would not run. Can anyone confirm or explain this? UPDATE.... Ok so if I add a conditional.... def add_user(request): if User.objects.get(username__exact = request.POST['username']): bad_user = True context = { 'message': "Username already taken"} return render_to_response("mytemplate.html", context, RequestContext(request)) newUser = User(username="freeandclearusername") newUser.save() if bad_user != True: #then other code that is related to setting up a new user. context = { 'message': "Username is great!!!!!"} return render_to_response("mytemplate.html", context, RequestContext(request)) This behaves as expected. Also if I remove the RequestConext() it seems to behave correctly as well. Any ideas? I think the problem lies in how I'm using RequestContext. A: The return statement will indeed terminate the function. So if you see other code being executed, you either don't execute the return statement, and thus produce the output somehow differently, or have other code (before the function is called, or in a middleware) that makes the database changes. A: You are correct, assuming your conditions are met, the view will exit on your return statement. The only other thing I can think of that hasn't already been mentioned is indentation -- double-check that you do not have a mix of tabs and spaces. That can sometimes result in the unexpected.
django view function code runs after return
I have a django view that looks like... def add_user(request): if User.objects.get(username__exact = request.POST['username']): context = { 'message': "Username already taken"} return render_to_response("mytemplate.html", context, RequestContext(request)) newUser = User(username="freeandclearusername") newUser.save() #then other code that is related to setting up a new user. The other code that is related to setting up the user is still ran even if the initial conditional statement fails and the "return render_to_response()" is called. The page is rendered with the correct context but other information is added to the database after the initial return. I thought that the code after the "return render_to_response()" would not run. Can anyone confirm or explain this? UPDATE.... Ok so if I add a conditional.... def add_user(request): if User.objects.get(username__exact = request.POST['username']): bad_user = True context = { 'message': "Username already taken"} return render_to_response("mytemplate.html", context, RequestContext(request)) newUser = User(username="freeandclearusername") newUser.save() if bad_user != True: #then other code that is related to setting up a new user. context = { 'message': "Username is great!!!!!"} return render_to_response("mytemplate.html", context, RequestContext(request)) This behaves as expected. Also if I remove the RequestConext() it seems to behave correctly as well. Any ideas? I think the problem lies in how I'm using RequestContext.
[ "The return statement will indeed terminate the function. So if you see other code being executed, you either\n\ndon't execute the return statement, and thus produce the output somehow differently, or\nhave other code (before the function is called, or in a middleware) that makes the database changes.\n\n", "You are correct, assuming your conditions are met, the view will exit on your return statement. The only other thing I can think of that hasn't already been mentioned is indentation -- double-check that you do not have a mix of tabs and spaces. That can sometimes result in the unexpected.\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python", "views" ]
stackoverflow_0001473543_django_python_views.txt
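As an aside for anyone debugging the same thing, here is the view restructured so the early return is unambiguous, using the DoesNotExist exception instead of relying on the truthiness of get(). This is an illustrative rewrite under those assumptions, not the poster's final code.

from django.contrib.auth.models import User
from django.shortcuts import render_to_response
from django.template import RequestContext

def add_user(request):
    username = request.POST['username']
    try:
        User.objects.get(username__exact=username)
    except User.DoesNotExist:
        pass    # the name is free; fall through and create the user
    else:
        context = {'message': "Username already taken"}
        return render_to_response("mytemplate.html", context,
                                  context_instance=RequestContext(request))

    new_user = User(username=username)
    new_user.save()
    # ...the rest of the setup code only runs when the early return was not taken
    context = {'message': "Username is great!!!!!"}
    return render_to_response("mytemplate.html", context,
                              context_instance=RequestContext(request))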
Q: How should I handle software packages? I am trying to install pysqlite and have troubles with that. I found out that the most probable reason of that is missing sqlite headers and I have to install them. My platform: CentOS release 5.3 (Final). I have Python-2.6.2. I also found out that I need .rpm files. As far as I have them I execute: rpm -i sqlite3-devel-3.n.n.n.rpm and everything should be fine. However, I do not know where to find sqlite3-devel-3.n.n.n.rpm file. Should it already be on my system? I could not locate it with "locate sqlite3-devel-3". Should I download this file? If yes where I can find it and which version should I use? I mean, the .rpm file should be, probably, consistent with the version of sqlite that I have on my computer? If it is the case, how can I find out the version of my sqlite? If I type "from pysqlite2 import dbapi2 as sqlite" I get: Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named pysqlite2 "yum search pysqlite" gives me the following: Loaded plugins: fastestmirror Excluding Packages in global exclude list Finished ==== Matched: pysqlite ==== python-sqlite.x86_64 : Python bindings for sqlite. By the way, I have the following directory: /home/myname/opt/lib/python2.6/sqlite3 and there I have the following files: dbapi2.py dbapi2.pyc dbapi2.pyo dump.py dump.pyc dump.pyo __init__.py __init__.pyc __init__.pyo test If I type "import unittest" and then "import sqlite3 as sqlite" I get: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/myname/opt/lib/python2.6/sqlite3/__init__.py", line 24, in <module> from dbapi2 import * File "/home/myname/opt/lib/python2.6/sqlite3/dbapi2.py", line 27, in <module> from _sqlite3 import * ImportError: No module named _sqlite3 Thank you in advance. A: Python 2.6 (and some earlier) include sqlite Python org library ref so you should not need to do this. Just import it and run A: You can use buildout to create localized version of your project. This will install all necessary packages without having sudo access to the server. To give it try, do the following: mkdir tmp cd tmp wget http://svn.zope.org/*checkout*/zc.buildout/trunk/bootstrap/bootstrap.py python bootstrap.py init vim buildout.cfg edit buildout.cfg and replace it with following: [buildout] parts = sqlite [sqlite] recipe = zc.recipe.egg eggs = pysqlite interpreter = mypython Now, run ./bin/buildout to rebuild the project. This will download all of the necessary packages and create a new interpreter for you that you can use test that you can access sqlite. ./bin/buildout ./bin/mypython >>> import sqlite3 This gives you a controlled environment that you can use to develop inside of. To learn more about buildout, you can watch videos from pycon 2009 on Setuptools, Distutils and Buildout. Eggs and Buildout Deployment in Python - Part 1 Eggs and Buildout Deployment in Python - Part 2 Eggs and Buildout Deployment in Python - Part 3 Good luck A: Typically, you should install the python sqlite module through yum, something like: yum install python-sqlite and then edit your code changing sqlite2 references to sqlite3. By the way, whenever you read directions to install sqlite3-devel-3.n.n.n.rpm, the n parts are not literal; they're supposed to be replaced with numbers specifying a version of the rpm package.
How should I handle software packages?
I am trying to install pysqlite and have troubles with that. I found out that the most probable reason of that is missing sqlite headers and I have to install them. My platform: CentOS release 5.3 (Final). I have Python-2.6.2. I also found out that I need .rpm files. As far as I have them I execute: rpm -i sqlite3-devel-3.n.n.n.rpm and everything should be fine. However, I do not know where to find sqlite3-devel-3.n.n.n.rpm file. Should it already be on my system? I could not locate it with "locate sqlite3-devel-3". Should I download this file? If yes where I can find it and which version should I use? I mean, the .rpm file should be, probably, consistent with the version of sqlite that I have on my computer? If it is the case, how can I find out the version of my sqlite? If I type "from pysqlite2 import dbapi2 as sqlite" I get: Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named pysqlite2 "yum search pysqlite" gives me the following: Loaded plugins: fastestmirror Excluding Packages in global exclude list Finished ==== Matched: pysqlite ==== python-sqlite.x86_64 : Python bindings for sqlite. By the way, I have the following directory: /home/myname/opt/lib/python2.6/sqlite3 and there I have the following files: dbapi2.py dbapi2.pyc dbapi2.pyo dump.py dump.pyc dump.pyo __init__.py __init__.pyc __init__.pyo test If I type "import unittest" and then "import sqlite3 as sqlite" I get: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/myname/opt/lib/python2.6/sqlite3/__init__.py", line 24, in <module> from dbapi2 import * File "/home/myname/opt/lib/python2.6/sqlite3/dbapi2.py", line 27, in <module> from _sqlite3 import * ImportError: No module named _sqlite3 Thank you in advance.
[ "Python 2.6 (and some earlier) include sqlite Python org library ref so you should not need to do this. Just import it and run\n", "You can use buildout to create localized version of your project. This will install all necessary packages without having sudo access to the server.\nTo give it try, do the following:\nmkdir tmp\ncd tmp\nwget http://svn.zope.org/*checkout*/zc.buildout/trunk/bootstrap/bootstrap.py\npython bootstrap.py init\nvim buildout.cfg\n\nedit buildout.cfg and replace it with following:\n[buildout]\nparts = sqlite \n\n[sqlite]\nrecipe = zc.recipe.egg\neggs = pysqlite\ninterpreter = mypython\n\nNow, run ./bin/buildout to rebuild the project. This will download all of the necessary packages and create a new interpreter for you that you can use test that you can access sqlite.\n./bin/buildout\n./bin/mypython\n>>> import sqlite3\n\nThis gives you a controlled environment that you can use to develop inside of.\nTo learn more about buildout, you can watch videos from pycon 2009 on Setuptools, Distutils and Buildout.\nEggs and Buildout Deployment in Python - Part 1\nEggs and Buildout Deployment in Python - Part 2 \nEggs and Buildout Deployment in Python - Part 3\nGood luck\n", "Typically, you should install the python sqlite module through yum, something like:\nyum install python-sqlite\n\nand then edit your code changing sqlite2 references to sqlite3.\nBy the way, whenever you read directions to install sqlite3-devel-3.n.n.n.rpm, the n parts are not literal; they're supposed to be replaced with numbers specifying a version of the rpm package.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "pysqlite", "python", "rpm", "sqlite" ]
stackoverflow_0001471567_pysqlite_python_rpm_sqlite.txt
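A two-line check that goes with the first answer: if the interpreter was built with the _sqlite3 extension, the standard-library module imports cleanly and pysqlite2 isn't needed at all. If this snippet fails the way the question's traceback does, the extension is missing from that Python build.

import sqlite3
print("sqlite3 module OK, linked against SQLite %s" % sqlite3.sqlite_version)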
Q: Django ORM - assigning a raw value to DecimalField EDIT!!! - The casting of the value to a string seems to work fine when I create a new object, but when I try to edit an existing object, it does not allow it. So I have a decimal field in one of my models of Decimal(3,2) When I query up all these objects and try to set this field: fieldName = 0.85 OR fieldName = .85 It will throw a hissy fit, "Cannot convert float to DecimalField, try converting to a string first"... So then I do: fieldName = str(0.85) same error. I even tried: fieldName = "0.85" Same error. Am I running into some sort of framework bug here, or what? Note that when I actually go into Django Admin and manually edit the objects, it works fine. I am running Django 1.1 on Python 2.6 A: from decimal import Decimal object.fieldName = Decimal("0.85") or f = 0.85 object.fieldName = Decimal(str(f)) A: The Django DecimalField is "...represented in by a python Decimal instance." You might try: >>> obj.fieldName = Decimal("0.85") Behavior may also vary depending on the database backend you are using. With sqlite, I am able to assign string values to DecimalFields in new and existing objects without error.
Django ORM - assigning a raw value to DecimalField
EDIT!!! - The casting of the value to a string seems to work fine when I create a new object, but when I try to edit an existing object, it does not allow it. So I have a decimal field in one of my models of Decimal(3,2) When I query up all these objects and try to set this field: fieldName = 0.85 OR fieldName = .85 It will throw a hissy fit, "Cannot convert float to DecimalField, try converting to a string first"... So then I do: fieldName = str(0.85) same error. I even tried: fieldName = "0.85" Same error. Am I running into some sort of framework bug here, or what? Note that when I actually go into Django Admin and manually edit the objects, it works fine. I am running Django 1.1 on Python 2.6
[ "from decimal import Decimal\nobject.fieldName = Decimal(\"0.85\")\n\nor\nf = 0.85\nobject.fieldName = Decimal(str(f))\n\n", "The Django DecimalField is \"...represented in by a python Decimal instance.\" You might try:\n>>> obj.fieldName = Decimal(\"0.85\")\n\nBehavior may also vary depending on the database backend you are using. With sqlite, I am able to assign string values to DecimalFields in new and existing objects without error.\n" ]
[ 11, 3 ]
[]
[]
[ "casting", "django", "django_models", "django_orm", "python" ]
stackoverflow_0001473332_casting_django_django_models_django_orm_python.txt
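A compact sketch of the conversion described in the answers, applied to an existing row; the model and field names are placeholders, since the question doesn't show the real model.

from decimal import Decimal
from django.db import models

class Rate(models.Model):    # hypothetical model standing in for the question's
    factor = models.DecimalField(max_digits=3, decimal_places=2)

rate = Rate.objects.get(pk=1)          # editing an existing object
rate.factor = Decimal(str(0.85))       # float -> str -> Decimal before assigning
rate.save()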
Q: Can Python's MiniMock create mock of functions defined in the same file? I'm using the Python MiniMock library for unit testing. I'd like to mock out a function defined in the same Python file as my doctest. Can MiniMock handle that? The naive approach fails: def foo(): raise ValueError, "Don't call me during testing!" def bar(): """ Returns twice the value of foo() >>> from minimock import mock >>> mock('foo',returns=5) >>> bar() Called foo() 10 """ return foo() * 2 if __name__ == "__main__": import doctest doctest.testmod() Here's what happens if I try to run this code: ********************************************************************** File "test.py", line 9, in __main__.bar Failed example: bar() Exception raised: Traceback (most recent call last): File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/doctest.py", line 1212, in __run compileflags, 1) in test.globs File "<doctest __main__.bar[2]>", line 1, in <module> bar() File "test.py", line 13, in bar return foo() * 2 File "test.py", line 2, in foo raise ValueError, "Don't call me!" ValueError: Don't call me! ********************************************************************** 1 items had failures: 1 of 3 in __main__.bar ***Test Failed*** 1 failures. Edit: As per the answers below, this has been identified as a bug, and has been fixed in MiniMock. A: I just replied on the mailing list with a MiniMock patch that fixes this. Until that's applied, instead of the following two lines in itsadok's snippet: >>> mock('foo',returns=5) >>> bar.func_globals['foo'] = foo you could also use >>> mock('foo', nsdicts=(bar.func_globals,), returns=5) A: This works: def foo(): raise ValueError, "Don't call me during testing!" def bar(): """ Returns twice the value of foo() >>> from minimock import mock >>> mock('foo',returns=5) >>> bar.func_globals['foo'] = foo >>> bar() Called foo() 10 """ return foo() * 2 if __name__ == "__main__": import doctest doctest.testmod() It seems that the foo in bar is already bound to the original function by the time the mocking takes place. This happens because when running the doctests, the doctest module runs in the context of a copy of the module's global name space, but bar's globals remain their original self. So the mock function changes the foo that is in the copied namespace, but bar is still looking at the original. I don't know if there's a better way to do this. EDIT 2: I take it back. MiniMock was specifically designed to be used in doctests. I suspect you found a bug. EDIT: I guess the recommended way to do this is to set up the mocking before starting the tests, like so: def foo(): raise ValueError, "Don't call me during testing!" def bar(): """ Returns twice the value of foo() >>> bar() 10 """ return foo() * 2 if __name__ == "__main__": from minimock import mock mock('foo',returns=5) import doctest doctest.testmod() This way the "Called foo()" message is also not in the doctest.
Can Python's MiniMock create mock of functions defined in the same file?
I'm using the Python MiniMock library for unit testing. I'd like to mock out a function defined in the same Python file as my doctest. Can MiniMock handle that? The naive approach fails: def foo(): raise ValueError, "Don't call me during testing!" def bar(): """ Returns twice the value of foo() >>> from minimock import mock >>> mock('foo',returns=5) >>> bar() Called foo() 10 """ return foo() * 2 if __name__ == "__main__": import doctest doctest.testmod() Here's what happens if I try to run this code: ********************************************************************** File "test.py", line 9, in __main__.bar Failed example: bar() Exception raised: Traceback (most recent call last): File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/doctest.py", line 1212, in __run compileflags, 1) in test.globs File "<doctest __main__.bar[2]>", line 1, in <module> bar() File "test.py", line 13, in bar return foo() * 2 File "test.py", line 2, in foo raise ValueError, "Don't call me!" ValueError: Don't call me! ********************************************************************** 1 items had failures: 1 of 3 in __main__.bar ***Test Failed*** 1 failures. Edit: As per the answers below, this has been identified as a bug, and has been fixed in MiniMock.
[ "I just replied on the mailing list with a MiniMock patch that fixes this.\nUntil that's applied, instead of the following two lines in itsadok's snippet:\n>>> mock('foo',returns=5)\n>>> bar.func_globals['foo'] = foo\n\nyou could also use\n>>> mock('foo', nsdicts=(bar.func_globals,), returns=5)\n\n", "This works:\ndef foo():\n raise ValueError, \"Don't call me during testing!\"\n\ndef bar():\n \"\"\"\n Returns twice the value of foo()\n\n >>> from minimock import mock\n >>> mock('foo',returns=5)\n >>> bar.func_globals['foo'] = foo\n >>> bar()\n Called foo()\n 10\n\n \"\"\"\n return foo() * 2\n\nif __name__ == \"__main__\":\n import doctest\n doctest.testmod()\n\nIt seems that the foo in bar is already bound to the original function by the time the mocking takes place.\nThis happens because when running the doctests, the doctest module runs in the context of a copy of the module's global name space, but bar's globals remain their original self. So the mock function changes the foo that is in the copied namespace, but bar is still looking at the original.\nI don't know if there's a better way to do this.\nEDIT 2: I take it back. MiniMock was specifically designed to be used in doctests. I suspect you found a bug.\nEDIT: I guess the recommended way to do this is to set up the mocking before starting the tests, like so:\ndef foo():\n raise ValueError, \"Don't call me during testing!\"\n\ndef bar():\n \"\"\"\n Returns twice the value of foo()\n\n >>> bar()\n 10\n\n \"\"\"\n return foo() * 2\n\nif __name__ == \"__main__\":\n from minimock import mock\n mock('foo',returns=5)\n import doctest\n doctest.testmod()\n\nThis way the \"Called foo()\" message is also not in the doctest.\n" ]
[ 5, 1 ]
[]
[]
[ "mocking", "python" ]
stackoverflow_0001462381_mocking_python.txt
Q: global counter in Django Application? I was wondering if there is "global counter" in Django application, like the way I store "global counter" in Servlet Context scope in Tomcat. something like getServletContext().getAttribute("counter"); counter++; A: When you write a django application (or any wsgi application, for that matter), you don't know beforehand if your application will end up running standalone on a single server, or multithreaded, or multiprocessed, or even in multiple separate machines as part of a load balancing strategy. If you're going to make the constraint "my application only works on single-process servers" then you can use something like this: from django import settings settings.counter += 1 However that constraint is often not feasible. So you must use external storage to your counter. If you want to keep it on memory, maybe a memcached Maybe you just log the requests to this view. So when you want the counter just count the number of entries in the log. The log could be file-based, or it could be a table in the database, just define a new model on your models.py.
global counter in Django Application?
I was wondering if there is a "global counter" in a Django application, like the way I store a "global counter" in Servlet Context scope in Tomcat. Something like: getServletContext().getAttribute("counter"); counter++;
[ "When you write a django application (or any wsgi application, for that matter), you don't know beforehand if your application will end up running standalone on a single server, or multithreaded, or multiprocessed, or even in multiple separate machines as part of a load balancing strategy.\nIf you're going to make the constraint \"my application only works on single-process servers\" then you can use something like this:\nfrom django import settings\nsettings.counter += 1\n\nHowever that constraint is often not feasible. So you must use external storage to your counter. \nIf you want to keep it on memory, maybe a memcached \nMaybe you just log the requests to this view. So when you want the counter just count the number of entries in the log.\nThe log could be file-based, or it could be a table in the database, just define a new model on your models.py.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001475264_django_python.txt
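The database-backed option from the answer, sketched as a small model plus an increment helper; the names are invented for the example, and the F() expression is one way (an assumption on my part, not something the answer specifies) to keep the increment atomic across processes.

from django.db import models
from django.db.models import F

class GlobalCounter(models.Model):     # hypothetical model for a site-wide counter
    name = models.CharField(max_length=50, unique=True)
    value = models.IntegerField(default=0)

def bump(name="hits"):
    counter, _created = GlobalCounter.objects.get_or_create(name=name)
    # do the addition in SQL so concurrent processes don't clobber each other
    GlobalCounter.objects.filter(pk=counter.pk).update(value=F("value") + 1)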
Q: Google App Engine get PolyModel as child class When I run Google App Engine likeso: from google.appengine.ext import db from google.appengine.ext.db import polymodel class Father(polymodel.PolyModel): def hello(self): print "Father says hi" class Son(Father): def hello(self): print "Spawn says hi" When I run, e.g. s = Son() s.put() son_from_father = Father.get_by_id(s.key().id()) son_from_father.hello() This prints "Father says hi". I would expect this to print "Son says hi". Does anyone know how to make this do what's expected, here? EDIT: The problem was, ultimately, that I was saving Spawn objects as Father objects. GAE was happy to do even though the Father objects (in my application) have fewer properties. GAE didn't complain because I (silently) removed any values not in Model.properties() from the data being saved. I've fixed the improper type saving and added a check for extra values not being saved (which was helpfully a TODO comment right where that check should happen). The check I do for data when saving is basically: def save_obj(obj, data, Model): for prop in Model.properties(): # checks/other things happen in this loop setattr(obj, prop, data.get(prop)) extra_data = set(data).difference(Model.properties()) if extra_data: logging.debug("Extra data!") The posts here were helpful - thank you. GAE is working as expected, now that I'm using it as directed. :) A: I can't reproduce your problem -- indeed, your code just dies with an import error (PolyModel is not in module db) on my GAE (version 1.2.5). Once I've fixed things enough to let the code run...: import wsgiref.handlers from google.appengine.ext import webapp from google.appengine.ext.db import polymodel class Father(polymodel.PolyModel): def hello(self): return "Father says hi" class Son(Father): def hello(self): return "Spawn says hi" class MainHandler(webapp.RequestHandler): def get(self): s = Son() s.put() son_from_father = Father.get_by_id(s.key().id()) x = son_from_father.hello() self.response.out.write(x) def main(): application = webapp.WSGIApplication([('/', MainHandler)], debug=True) wsgiref.handlers.CGIHandler().run(application) if __name__ == '__main__': main() ...I see "Spawn says hi" as expected. What App Engine release do you have? What happen if you use exactly the code I'm giving?
Google App Engine get PolyModel as child class
When I run Google App Engine likeso: from google.appengine.ext import db from google.appengine.ext.db import polymodel class Father(polymodel.PolyModel): def hello(self): print "Father says hi" class Son(Father): def hello(self): print "Spawn says hi" When I run, e.g. s = Son() s.put() son_from_father = Father.get_by_id(s.key().id()) son_from_father.hello() This prints "Father says hi". I would expect this to print "Son says hi". Does anyone know how to make this do what's expected, here? EDIT: The problem was, ultimately, that I was saving Spawn objects as Father objects. GAE was happy to do even though the Father objects (in my application) have fewer properties. GAE didn't complain because I (silently) removed any values not in Model.properties() from the data being saved. I've fixed the improper type saving and added a check for extra values not being saved (which was helpfully a TODO comment right where that check should happen). The check I do for data when saving is basically: def save_obj(obj, data, Model): for prop in Model.properties(): # checks/other things happen in this loop setattr(obj, prop, data.get(prop)) extra_data = set(data).difference(Model.properties()) if extra_data: logging.debug("Extra data!") The posts here were helpful - thank you. GAE is working as expected, now that I'm using it as directed. :)
[ "I can't reproduce your problem -- indeed, your code just dies with an import error (PolyModel is not in module db) on my GAE (version 1.2.5). Once I've fixed things enough to let the code run...:\nimport wsgiref.handlers\nfrom google.appengine.ext import webapp\nfrom google.appengine.ext.db import polymodel\n\nclass Father(polymodel.PolyModel):\n def hello(self):\n return \"Father says hi\"\n\nclass Son(Father):\n def hello(self):\n return \"Spawn says hi\"\n\nclass MainHandler(webapp.RequestHandler):\n\n def get(self):\n s = Son()\n s.put()\n son_from_father = Father.get_by_id(s.key().id())\n x = son_from_father.hello()\n self.response.out.write(x)\n\ndef main():\n application = webapp.WSGIApplication([('/', MainHandler)],\n debug=True)\n wsgiref.handlers.CGIHandler().run(application)\n\n\nif __name__ == '__main__':\n main()\n\n...I see \"Spawn says hi\" as expected. What App Engine release do you have? What happen if you use exactly the code I'm giving?\n" ]
[ 1 ]
[ "You did a \"Father.get...\" so you created an object from the Father class. \nSo why wouldn't it say \"Father says hi\". \nIf you Father class had lastname and firstname, and your Son class had middle name, you won't get the middle name unless you specifically retrieve the 'Son' record. \nIf you want to do a polymorphic type query, here's one way to do it. I know it works with attributes, but haven't tried it with methods. \n fatherList = Father.all().fetch(1000) \n counter = 0 \n #I'm using lower case father for object and upper case Father for your class...\n for father in fatherList:\n counter += 1 \n if isinstance(father,Son):\n self.response.out.write(\"display a Son field or do a Son method\") \n if isinstance(father,Daughter):\n self.response.out.write(\"display a Daughter field or do a Daughter method\")\n\nNeal Walters\n" ]
[ -1 ]
[ "google_app_engine", "polymodel", "python" ]
stackoverflow_0001474868_google_app_engine_polymodel_python.txt
Q: What are the best benefits of using Pinax? I recently discovered Pinax that appear to be an django stack with added most-used apps so easy and speed up development. I never used or heard of Pinax before and like to know if you have feedback about it. I love Django and would like to understand what are to parts of web dev Pinax helps with and using what tools. A: Pinax is a collection of Django-Apps that have already been glued together for you with some code and sample templates. It's not plug&play, because Django is not a CMS and Apps are not plugins, but you can get your site going really fast. You just have to remove the stuff you don't need, add other Django Apps that you'd like to use from around the web and write the stuff that nobody has written before and that makes your site special. I worked on a site with Pinax and had to remove quite a lot, to make it more simple, but it was still totally worth it. It's a great example (probably the best) of how Django Apps are reusable and how to make them work together best. Concrete example, here you go: Pinax comes with all the "User" Part of an online community: login, registration, OpenID, E-Mail-Confirmation. That's an example of what you don't have to write. A: I'm about to start using Pinax, and I'm glad I discovered it. Our todo list for the site has a lot of things on it, such as new user sign-up with email verification, discussions, and a news feed for users that blends site-wide updates and updates for that user. We can code all of this up, but it'll take a while. It'd daunting. Luckily, I discovered Pinax. Instead of coding all those features I'll only need to learn the Pinax structure and write some glue. I bet it will take 1/50th of the time that would have been required to write the features we need. A: As the two other posts said, it comes with a lot of pre-packaged apps that take care of common tasks in modern websites. Here's a list of the external apps that come packaged: https://github.com/pinax/pinax/blob/master/requirements/pinax.txt It also gives you project templates to start from, which you can see here: https://github.com/pinax/pinax/tree/master/pinax/projects/ The projects have working default settings in place so that you can run syncdb then runserver to get going immediately, unlike default Django. Its design also encourages you to write your own apps in such a way that they are more reusable. As they put it, "By integrating numerous reusable Django apps to take care of the things that many sites have in common, it lets you focus on what makes your site different." It does have a small learning curve of its own but I've personally been very happy with it and learned a lot more about Django (and git and virtualenv) by using Pinax.
What are the best benefits of using Pinax?
I recently discovered Pinax, which appears to be a Django stack with the most-used apps already added to ease and speed up development. I had never used or heard of Pinax before and would like to know if you have feedback about it. I love Django and would like to understand what parts of web development Pinax helps with, and with what tools.
[ "Pinax is a collection of Django-Apps that have already been glued together for you with some code and sample templates.\nIt's not plug&play, because Django is not a CMS and Apps are not plugins, but you can get your site going really fast. You just have to remove the stuff you don't need, add other Django Apps that you'd like to use from around the web and write the stuff that nobody has written before and that makes your site special.\nI worked on a site with Pinax and had to remove quite a lot, to make it more simple, but it was still totally worth it.\nIt's a great example (probably the best) of how Django Apps are reusable and how to make them work together best.\nConcrete example, here you go:\nPinax comes with all the \"User\" Part of an online community: login, registration, OpenID, E-Mail-Confirmation. That's an example of what you don't have to write.\n", "I'm about to start using Pinax, and I'm glad I discovered it.\nOur todo list for the site has a lot of things on it, such as new user sign-up with email verification, discussions, and a news feed for users that blends site-wide updates and updates for that user. We can code all of this up, but it'll take a while. It'd daunting.\nLuckily, I discovered Pinax. Instead of coding all those features I'll only need to learn the Pinax structure and write some glue. I bet it will take 1/50th of the time that would have been required to write the features we need.\n", "As the two other posts said, it comes with a lot of pre-packaged apps that take care of common tasks in modern websites. Here's a list of the external apps that come packaged: https://github.com/pinax/pinax/blob/master/requirements/pinax.txt\nIt also gives you project templates to start from, which you can see here: https://github.com/pinax/pinax/tree/master/pinax/projects/\nThe projects have working default settings in place so that you can run syncdb then runserver to get going immediately, unlike default Django. Its design also encourages you to write your own apps in such a way that they are more reusable. As they put it, \"By integrating numerous reusable Django apps to take care of the things that many sites have in common, it lets you focus on what makes your site different.\"\nIt does have a small learning curve of its own but I've personally been very happy with it and learned a lot more about Django (and git and virtualenv) by using Pinax.\n" ]
[ 14, 8, 5 ]
[]
[]
[ "django", "pinax", "python" ]
stackoverflow_0001448292_django_pinax_python.txt
Q: BitString error on Windows XP? Scott, I'd like to thank you for your BitString program. I am working on interpreting data from a neutron detector, and I've found that this module is just the tool I need. Unfortunately, I have yet to get the module to successfully pass test-bitstring.py. I'm running Windows XP and Python 3.1. I've downloaded your file bitstring-0.4.1.zip from your website and extracted both bitstring.py and test-bitstring.py into the \lib folder of my Python directory. Upon running test-bitstring.py, I get 11 errors. :( I've triple-checked that I have downloaded the correct version, and that both of the .py files successfully made it to me \lib folder. Is there a known complication using Windows with BitString? It is probably something I am doing, but I'm at a loss as to where to go from here. In your documentation, you explicitly say to contact you if the version is correct and the errors persist. I'm fairly certain that I'm missing something obvious, but I wanted to check that this is not some sort of compatibility issue? Thank you for taking the time to read this. Sorry to bother you, as I'm sure you get questions about this quite a lot. If you get the chance at all to get back to me, I'm very interested in why you think it might fail the test. Thanks again! A: I just tested that bitstring-0.4.1 's test-bitstring.py works flawlessly on both Python 3.0 and Python 3.1, on a Windows XP host. The 3.1 version, specifically, this is what happens. '3.1.1 (r311:74483, Aug 17 2009, 17:02:12) [MSC v.1500 32 bit (Intel)]' c:\python31\python test_bitstring.py ................................................................................ ................................................................................ .................................................... ---------------------------------------------------------------------- Ran 212 tests in 0.297s OK OP should provide more details, in particular the list of the 11 failed tests (or at least a few of them, as they probably fail for similar reasons. A: feel free to email me queries like this (that's what I meant when I said contact me in the documentation) - I'm somewhat surprised to find a direct question to me on S.O., but I just happened to see it! You should update to the latest version for Python 3 (1.0.1). I think the problem was a strange platform dependent issue with struct.unpack that was fixed in rev. 445.
BitString error on Windows XP?
Scott, I'd like to thank you for your BitString program. I am working on interpreting data from a neutron detector, and I've found that this module is just the tool I need. Unfortunately, I have yet to get the module to successfully pass test-bitstring.py. I'm running Windows XP and Python 3.1. I've downloaded your file bitstring-0.4.1.zip from your website and extracted both bitstring.py and test-bitstring.py into the \lib folder of my Python directory. Upon running test-bitstring.py, I get 11 errors. :( I've triple-checked that I have downloaded the correct version, and that both of the .py files successfully made it to me \lib folder. Is there a known complication using Windows with BitString? It is probably something I am doing, but I'm at a loss as to where to go from here. In your documentation, you explicitly say to contact you if the version is correct and the errors persist. I'm fairly certain that I'm missing something obvious, but I wanted to check that this is not some sort of compatibility issue? Thank you for taking the time to read this. Sorry to bother you, as I'm sure you get questions about this quite a lot. If you get the chance at all to get back to me, I'm very interested in why you think it might fail the test. Thanks again!
[ "I just tested that bitstring-0.4.1 's test-bitstring.py works flawlessly on both Python 3.0 and Python 3.1, on a Windows XP host.\nThe 3.1 version, specifically, this is what happens.\n'3.1.1 (r311:74483, Aug 17 2009, 17:02:12) [MSC v.1500 32 bit (Intel)]'\n\nc:\\python31\\python test_bitstring.py\n................................................................................\n................................................................................\n....................................................\n----------------------------------------------------------------------\nRan 212 tests in 0.297s\n\nOK\n\nOP should provide more details, in particular the list of the 11 failed tests (or at least a few of them, as they probably fail for similar reasons.\n", "feel free to email me queries like this (that's what I meant when I said contact me in the documentation) - I'm somewhat surprised to find a direct question to me on S.O., but I just happened to see it!\nYou should update to the latest version for Python 3 (1.0.1). I think the problem was a strange platform dependent issue with struct.unpack that was fixed in rev. 445.\n" ]
[ 2, 1 ]
[]
[]
[ "bitstring", "python", "python_3.x", "windows_xp" ]
stackoverflow_0001475033_bitstring_python_python_3.x_windows_xp.txt
Q: Why is host aborting connection? I'm teaching myself Python networking, and I recalled that back when I was teaching myself threading, I came across this page, so I copied the scripts, updated them for Python 3.1.1 and ran them. They worked perfectly. Then I made a few modifications. My goal is to do something simple: The client pickles an integer and sends it to the server. The server receives the pickled integer, unpickles it, doubles it, then pickles it and sends it back to the client. The client receives the pickled (and doubled) integer, unpickles it, and outputs it. Here's the server: import pickle import socket import threading class ClientThread(threading.Thread): def __init__(self, channel, details): self.channel = channel self.details = details threading.Thread.__init__ ( self ) def run(self): print('Received connection:', self.details[0]) request = self.channel.recv(1024) response = pickle.dumps(pickle.loads(request) * 2) self.channel.send(response) self.channel.close() print('Closed connection:', self.details [ 0 ]) server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.bind(('', 2727)) server.listen(5) while True: channel, details = server.accept() ClientThread(channel, details).start() And here is the client: import pickle import socket import threading class ConnectionThread(threading.Thread): def run(self): client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('localhost', 2727)) for x in range(10): client.send(pickle.dumps(x)) print('Sent:',str(x)) print('Received:',repr(pickle.loads(client.recv(1024)))) client.close() for x in range(5): ConnectionThread().start() The server runs fine, and when I run the client it successfully connects and starts sending integers and receiving them back doubled as expected. However, very quickly it exceptions out: Exception in thread Thread-2: Traceback (most recent call last): File "C:\Python30\lib\threading.py", line 507, in _bootstrap_inner self.run() File "C:\Users\Imagist\Desktop\server\client.py", line 13, in run print('Received:',repr(pickle.loads(client.recv(1024)))) socket.error: [Errno 10053] An established connection was aborted by the softwar e in your host machine The server continues to run and receives connections just fine; only the client crashes. What's causing this? EDIT: I got the client working with the following code: import pickle import socket import threading class ConnectionThread(threading.Thread): def run(self): for x in range(10): client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('localhost', 2727)) client.send(pickle.dumps(x)) print('Sent:',str(x)) print('Received:',repr(pickle.loads(client.recv(1024)))) client.close() for x in range(5): ConnectionThread().start() However, I still don't understand what's going on. Isn't this just opening and closing the socket a bunch of times? Shouldn't there be time limitations to that (you shouldn't be able to open a socket so soon after closing it)? A: Your client is now correct - you want to open the socket send the data, receive the reply and then close the socket. The error original error was caused by the server closing the socket after it sent the first response which caused the client to receive a connection closed message when it tried to send the second message on the same connection. However, I still don't understand what's going on. Isn't this just opening and closing the socket a bunch of times? Yes. This is acceptable, if not the highest performance way of doing things. 
Shouldn't there be time limitations to that (you shouldn't be able to open a socket so soon after closing it)? You can open a client socket as quickly as you like, because every time you open a socket you will get a new local port number, meaning that the connections won't interfere. In the server code above, it will start a new thread for each incoming connection. There are 4 parts to every IP connection (source_address, source_port, destination_address, destination_port) and this quad (as it is known) must change for every connection. Everything except source_port is fixed for a client socket, so that is what the OS changes for you. Opening server sockets is more troublesome: if you want to reopen a server socket like the server.bind(('', 2727)) above quickly, then you need to read up on SO_REUSEADDR.
Why is host aborting connection?
I'm teaching myself Python networking, and I recalled that back when I was teaching myself threading, I came across this page, so I copied the scripts, updated them for Python 3.1.1 and ran them. They worked perfectly. Then I made a few modifications. My goal is to do something simple: The client pickles an integer and sends it to the server. The server receives the pickled integer, unpickles it, doubles it, then pickles it and sends it back to the client. The client receives the pickled (and doubled) integer, unpickles it, and outputs it. Here's the server: import pickle import socket import threading class ClientThread(threading.Thread): def __init__(self, channel, details): self.channel = channel self.details = details threading.Thread.__init__ ( self ) def run(self): print('Received connection:', self.details[0]) request = self.channel.recv(1024) response = pickle.dumps(pickle.loads(request) * 2) self.channel.send(response) self.channel.close() print('Closed connection:', self.details [ 0 ]) server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.bind(('', 2727)) server.listen(5) while True: channel, details = server.accept() ClientThread(channel, details).start() And here is the client: import pickle import socket import threading class ConnectionThread(threading.Thread): def run(self): client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('localhost', 2727)) for x in range(10): client.send(pickle.dumps(x)) print('Sent:',str(x)) print('Received:',repr(pickle.loads(client.recv(1024)))) client.close() for x in range(5): ConnectionThread().start() The server runs fine, and when I run the client it successfully connects and starts sending integers and receiving them back doubled as expected. However, very quickly it exceptions out: Exception in thread Thread-2: Traceback (most recent call last): File "C:\Python30\lib\threading.py", line 507, in _bootstrap_inner self.run() File "C:\Users\Imagist\Desktop\server\client.py", line 13, in run print('Received:',repr(pickle.loads(client.recv(1024)))) socket.error: [Errno 10053] An established connection was aborted by the softwar e in your host machine The server continues to run and receives connections just fine; only the client crashes. What's causing this? EDIT: I got the client working with the following code: import pickle import socket import threading class ConnectionThread(threading.Thread): def run(self): for x in range(10): client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect(('localhost', 2727)) client.send(pickle.dumps(x)) print('Sent:',str(x)) print('Received:',repr(pickle.loads(client.recv(1024)))) client.close() for x in range(5): ConnectionThread().start() However, I still don't understand what's going on. Isn't this just opening and closing the socket a bunch of times? Shouldn't there be time limitations to that (you shouldn't be able to open a socket so soon after closing it)?
[ "Your client is now correct - you want to open the socket send the data, receive the reply and then close the socket.\nThe error original error was caused by the server closing the socket after it sent the first response which caused the client to receive a connection closed message when it tried to send the second message on the same connection.\n\nHowever, I still don't understand\n what's going on. Isn't this just\n opening and closing the socket a bunch\n of times?\n\nYes. This is acceptable, if not the highest performance way of doing things.\n\nShouldn't there be time\n limitations to that (you shouldn't be\n able to open a socket so soon after\n closing it)?\n\nYou can open a client socket as quickly as you like as every time you open a socket you will get a new local port number, meaning that the connections won't interfere. In the server code above, it will start a new thread for each incoming connection.\nThere are 4 parts to every IP connection (source_address, source_port, destination_address, destination_port) and this quad (as it is known) must change for ever connection. Everything except source_port is fixed for a client socket so that is what the OS changes for you.\nOpening server sockets is more troublesome - if you want to open a new server socket quickly, your\nserver.bind(('', 2727))\n\nAbove then you need to read up on SO_REUSEADDR.\n" ]
[ 11 ]
[]
[]
[ "networking", "python" ]
stackoverflow_0001472876_networking_python.txt
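To make the SO_REUSEADDR remark at the end of the answer concrete, the option is set on the listening socket before bind(); this generic sketch is not part of the original scripts.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)   # must happen before bind()
server.bind(('', 2727))
server.listen(5)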
Q: Install CherryPy on Linux hosting provider without command line access I have a linux based web hosting provider (fatcow.com) that doesn't give any command line access and won't run the setup script for CherryPy (python web server) for me. Is there any way to run get around this limitation so that I have a working install of CherryPy? This might be more or a serverfault.com question, but maybe someone here has dealt with this before. A: If CherryPy is pure Python, then you may be able to simply put the cherrypy folder in the same place your project resides. This will enable you to import the necessary things from CherryPy without needing to copy it to the official install directory. I've personally never used CherryPy, so I don't know precisely what's being installed and how it's used, but I've done this same thing with Django without a hitch. OK, I just downloaded CherryPy 3.1.2, unzipped it, and copied the contents of ./cherrypy/tutorial to ., ran the suggested tut101_helloworld.py and it seems to work. As far as hooking it up to Apache, it depends on what's available on your host. I think the most common Python interface is mod_python. When following these instructions, it's important to set the sys.path right in order for mod_python to be able to see cherrypy. A: An alternative to mod_python is mod_wsgi - http://code.google.com/p/modwsgi/wiki/IntegrationWithCherryPy But as Kyle mentioned, youll need to be able to edit your apache conf.
Install CherryPy on Linux hosting provider without command line access
I have a linux based web hosting provider (fatcow.com) that doesn't give any command line access and won't run the setup script for CherryPy (python web server) for me. Is there any way to run get around this limitation so that I have a working install of CherryPy? This might be more or a serverfault.com question, but maybe someone here has dealt with this before.
[ "If CherryPy is pure Python, then you may be able to simply put the cherrypy folder in the same place your project resides. This will enable you to import the necessary things from CherryPy without needing to copy it to the official install directory. I've personally never used CherryPy, so I don't know precisely what's being installed and how it's used, but I've done this same thing with Django without a hitch.\nOK, I just downloaded CherryPy 3.1.2, unzipped it, and copied the contents of ./cherrypy/tutorial to ., ran the suggested tut101_helloworld.py and it seems to work. \nAs far as hooking it up to Apache, it depends on what's available on your host. I think the most common Python interface is mod_python. When following these instructions, it's important to set the sys.path right in order for mod_python to be able to see cherrypy.\n", "An alternative to mod_python is mod_wsgi - http://code.google.com/p/modwsgi/wiki/IntegrationWithCherryPy\nBut as Kyle mentioned, youll need to be able to edit your apache conf.\n" ]
[ 2, 0 ]
[]
[]
[ "cherrypy", "linux", "python" ]
stackoverflow_0000938185_cherrypy_linux_python.txt
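A minimal sketch of the "copy the cherrypy folder next to your project" approach from the first answer above, combined with the sys.path adjustment the mod_python instructions rely on. The 'vendor' directory name and the hello-world app are assumptions added for illustration; the class style follows the CherryPy 3.1 tutorial referred to in that answer.

import os
import sys

# Assumed layout: the unpacked CherryPy-3.1.2/cherrypy package was uploaded
# into a 'vendor' directory that sits next to this script.
VENDOR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'vendor')
if VENDOR not in sys.path:
    sys.path.insert(0, VENDOR)

import cherrypy

class HelloWorld(object):
    def index(self):
        return "Hello world!"
    index.exposed = True

if __name__ == '__main__':
    cherrypy.quickstart(HelloWorld())

The same two sys.path lines are what an Apache-embedded interpreter (mod_python or mod_wsgi, both mentioned above) needs in order to find the vendored package, since it does not see your shell environment.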
Q: what happens to a python object when you throw an exception from it My class contains a socket that connects to a server. Some of the methods of the class can throw an exception. The script I'm running contains an outer loop that catches the exception, logs an error, and creates a new class instance that tries to reconnect to the server. Problem is that the server only handles one connection at a time (by design) and the "old" socket is still connected. So the new connection attempt hangs the script. I can work around this by forcing the old socket closed, but I wonder: why doesn't the socket automatically close? When it is "stuck", netstat shows two sockets connected to the port. The server is waiting for input from the first socket though, it isn't handling the new one yet. I run this against a dummy server that replies "error\n" to every incoming line. EDIT: see my comment on Mark Rushakoff's answer below. An assert(False) [that I subsequently catch] from within the exception handler seems to force the socket closed. import socket class MyException(Exception): pass class MyClient(object): def __init__(self, port): self.sock = socket.create_connection(('localhost', port)) self.sockfile = self.sock.makefile() def do_stuff(self): self._send("do_stuff\n") response = self._receive() if response != "ok\n": raise MyException() return response def _send(self, cmd): self.sockfile.write(cmd) self.sockfile.flush() def _receive(self): return self.sockfile.readline() def connect(): c = MyClient(9989) # On the second iteration, do_stuff() tries to send data and # hangs indefinitely. print c.do_stuff() if __name__ == '__main__': for _ in xrange(3): try: connect() except MyException, e: print 'Caught:', e # This would be the workaround if I had access to the # MyClient object: #c.sock.close() #c.sockfile.close() EDIT: Here's the (ugly) server code: import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) s.bind(('localhost', 9989)) s.listen(5) (c,a) = s.accept() f = c.makefile() print f.readline() f.write('error\n') f.flush() (c2,a) = s.accept() f = c.makefile() print f.readline() s.close() A: This is an artifact of garbage collection. Even though the object is out of scope, it is not necessarily collected and therefore destroyed until a garbage collection run occurs -- this is not like C++ where a destructor is called as soon as an object loses scope. You can probably work around this particular issue by changing connect to def connect(): try: c = MyClient(9989) # On the second iteration, do_stuff() tries to send data and # hangs indefinitely. print c.do_stuff() finally: c.sock.close() c.sockfile.close() Alternatively, you could define __enter__ and __exit__ for MyClient, and do a with statement: def connect(): with MyClient(9989) as c: print c.do_stuff() Which is effectively the same as a try-finally. A: Can you really handle the Exception in connect()? I think you should provide a MyClient.close() method, and write connect() like this: def connect():     try:         c = MyClient(9989)         print c.do_stuff()     finally:         c.close() This is in complete analogy with file-like objects (and the with statement) A: Ok, here's the final version. Explicitly close the socket objects when something gets borked. 
import socket class MyException(Exception): pass class MyClient(object): def __init__(self, port): self.sock = socket.create_connection(('localhost', port)) self.sockfile = self.sock.makefile() def close(self): self.sock.close() self.sockfile.close() def do_stuff(self): self._send("do_stuff\n") response = self._receive() if response != "ok\n": raise MyException() return response def _send(self, cmd): self.sockfile.write(cmd) self.sockfile.flush() def _receive(self): return self.sockfile.readline() def connect(): try: c = MyClient(9989) print c.do_stuff() except MyException: print 'Caught MyException' finally: c.close() if __name__ == '__main__': for _ in xrange(2): connect() A: The garbage collector can be flagged to clean up by setting the relevant object to None.
what happens to a python object when you throw an exception from it
My class contains a socket that connects to a server. Some of the methods of the class can throw an exception. The script I'm running contains an outer loop that catches the exception, logs an error, and creates a new class instance that tries to reconnect to the server. Problem is that the server only handles one connection at a time (by design) and the "old" socket is still connected. So the new connection attempt hangs the script. I can work around this by forcing the old socket closed, but I wonder: why doesn't the socket automatically close? When it is "stuck", netstat shows two sockets connected to the port. The server is waiting for input from the first socket though, it isn't handling the new one yet. I run this against a dummy server that replies "error\n" to every incoming line. EDIT: see my comment on Mark Rushakoff's answer below. An assert(False) [that I subsequently catch] from within the exception handler seems to force the socket closed. import socket class MyException(Exception): pass class MyClient(object): def __init__(self, port): self.sock = socket.create_connection(('localhost', port)) self.sockfile = self.sock.makefile() def do_stuff(self): self._send("do_stuff\n") response = self._receive() if response != "ok\n": raise MyException() return response def _send(self, cmd): self.sockfile.write(cmd) self.sockfile.flush() def _receive(self): return self.sockfile.readline() def connect(): c = MyClient(9989) # On the second iteration, do_stuff() tries to send data and # hangs indefinitely. print c.do_stuff() if __name__ == '__main__': for _ in xrange(3): try: connect() except MyException, e: print 'Caught:', e # This would be the workaround if I had access to the # MyClient object: #c.sock.close() #c.sockfile.close() EDIT: Here's the (ugly) server code: import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) s.bind(('localhost', 9989)) s.listen(5) (c,a) = s.accept() f = c.makefile() print f.readline() f.write('error\n') f.flush() (c2,a) = s.accept() f = c.makefile() print f.readline() s.close()
[ "This is an artifact of garbage collection. Even though the object is out of scope, it is not necessarily collected and therefore destroyed until a garbage collection run occurs -- this is not like C++ where a destructor is called as soon as an object loses scope.\nYou can probably work around this particular issue by changing connect to \ndef connect():\n try:\n c = MyClient(9989)\n # On the second iteration, do_stuff() tries to send data and\n # hangs indefinitely.\n print c.do_stuff()\n finally:\n c.sock.close()\n c.sockfile.close()\n\nAlternatively, you could define __enter__ and __exit__ for MyClient, and do a with statement:\ndef connect():\n with MyClient(9989) as c:\n print c.do_stuff()\n\nWhich is effectively the same as a try-finally.\n", "Can you really handle the Exception in connect()?\nI think you should provide a MyClient.close() method, and write connect() like this:\ndef connect():\n    try:\n        c = MyClient(9989)\n        print c.do_stuff()\n    finally:\n        c.close()\n\nThis is in complete analogy with file-like objects (and the with statement)\n", "Ok, here's the final version. Explicitly close the socket objects when something gets borked.\nimport socket\n\nclass MyException(Exception):\n pass\n\nclass MyClient(object):\n\n def __init__(self, port):\n self.sock = socket.create_connection(('localhost', port))\n self.sockfile = self.sock.makefile()\n\n def close(self):\n self.sock.close()\n self.sockfile.close()\n\n def do_stuff(self):\n self._send(\"do_stuff\\n\")\n response = self._receive()\n if response != \"ok\\n\":\n raise MyException()\n return response\n\n def _send(self, cmd):\n self.sockfile.write(cmd)\n self.sockfile.flush()\n\n def _receive(self):\n return self.sockfile.readline()\n\ndef connect():\n try:\n c = MyClient(9989)\n print c.do_stuff()\n except MyException:\n print 'Caught MyException'\n finally:\n c.close()\n\nif __name__ == '__main__':\n for _ in xrange(2):\n connect()\n\n", "The garbage collector can be flagged to clean up by setting the relevant object to None.\n" ]
[ 5, 1, 0, 0 ]
[]
[]
[ "exception", "garbage_collection", "python", "sockets" ]
stackoverflow_0001475193_exception_garbage_collection_python_sockets.txt
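A filled-in sketch of the __enter__/__exit__ suggestion from the accepted answer above, trimmed to the parts needed to show the point: the socket is released deterministically when the with-block exits, whether do_stuff() raised or not, instead of waiting for the garbage collector. The class body is reduced from the question's version; otherwise it stays in the thread's Python 2 style.

import socket

class MyClient(object):

    def __init__(self, port):
        self.sock = socket.create_connection(('localhost', port))
        self.sockfile = self.sock.makefile()

    def close(self):
        self.sockfile.close()
        self.sock.close()

    # Context-manager protocol: __exit__ runs even while an exception is
    # propagating, so the old connection is gone before any reconnect attempt.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()
        return False            # do not swallow the exception

    def do_stuff(self):
        self.sockfile.write("do_stuff\n")
        self.sockfile.flush()
        return self.sockfile.readline()

def connect():
    with MyClient(9989) as c:
        print c.do_stuff()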
Q: Python: Referencing another project I want to be able to run my Python project from the command line. I am referencing other projects, so I need to be able run modules in other folders. One method of making this work would be to modify the Pythonpath environment variable, but I think this is an abuse. Another hack would be to copy all the files I want into a single directory and then run Python. Is there a better method of doing this? Note: I am actually programming in Eclipse, but I want to be able to run the program remotely. Similar questions: Referencing another project: This question is basically asking how to import A: If you import sys, it contains a list of the directories in PYTHONPATH as sys.path Adding directories to this list (sys.path.append("my/path")) allows you to import from those locations in the current module as normal without changing the global settings on your system. A: Take a look at tools like virtualenv, to set up a virtual python, in which you can install your modules without getting them globally. http://pypi.python.org/pypi/virtualenv Setuptools, which allows you to specify (and automatically install) dependencies for your modules. http://pypi.python.org/pypi/setuptools (If you have problems with setuptools, take a look at Distribute, a maintained fork. http://pypi.python.org/pypi/distribute ) Buildout, which allows you deploy a complete application environment, including third-party software such as MySQL or anything else. http://pypi.python.org/pypi/zc.buildout/ A: First, I make sure that the module I want to include hasn't been installed globally. Then I add a symlink within the includee's directory: # With pwd == module to which I want to add functionality. ln -s /path/to/some_other_module_to_include . and then I can do a standard import. This allows multiple versions etc. It does not require changing any global settings, and you don't need to change the program code if you work on different machines (just change the symlink). A: If by "run modules" you mean importing them, you might be interested in this question I asked a while ago. A: I just realised that I have actually solved this problem before. Here is the approach I used - much more complex than mavnn, but I was also solving the problem of running a Python2.x program from a Python 3.0 import os import subprocess env=os.environ.copy() env['PYTHONPATH']=my_libraries kwargs={"stdin":subprocess.PIPE, "env":env} subprocess.Popen(["python","-u",program_path],**kwargs)
Python: Referencing another project
I want to be able to run my Python project from the command line. I am referencing other projects, so I need to be able run modules in other folders. One method of making this work would be to modify the Pythonpath environment variable, but I think this is an abuse. Another hack would be to copy all the files I want into a single directory and then run Python. Is there a better method of doing this? Note: I am actually programming in Eclipse, but I want to be able to run the program remotely. Similar questions: Referencing another project: This question is basically asking how to import
[ "If you import sys, it contains a list of the directories in PYTHONPATH as sys.path\nAdding directories to this list (sys.path.append(\"my/path\")) allows you to import from those locations in the current module as normal without changing the global settings on your system.\n", "Take a look at tools like\n\nvirtualenv, to set up a virtual python, in which you can install your modules without getting them globally. http://pypi.python.org/pypi/virtualenv\nSetuptools, which allows you to specify (and automatically install) dependencies for your modules. http://pypi.python.org/pypi/setuptools (If you have problems with setuptools, take a look at Distribute, a maintained fork. http://pypi.python.org/pypi/distribute )\nBuildout, which allows you deploy a complete application environment, including third-party software such as MySQL or anything else. http://pypi.python.org/pypi/zc.buildout/\n\n", "First, I make sure that the module I want to include hasn't been installed globally. Then I add a symlink within the includee's directory:\n# With pwd == module to which I want to add functionality.\nln -s /path/to/some_other_module_to_include .\n\nand then I can do a standard import. This allows multiple versions etc. It does not require changing any global settings, and you don't need to change the program code if you work on different machines (just change the symlink).\n", "If by \"run modules\" you mean importing them, you might be interested in this question I asked a while ago.\n", "I just realised that I have actually solved this problem before. Here is the approach I used - much more complex than mavnn, but I was also solving the problem of running a Python2.x program from a Python 3.0\nimport os\nimport subprocess\nenv=os.environ.copy()\nenv['PYTHONPATH']=my_libraries\nkwargs={\"stdin\":subprocess.PIPE, \"env\":env}\nsubprocess.Popen([\"python\",\"-u\",program_path],**kwargs)\n\n" ]
[ 10, 5, 1, 0, 0 ]
[]
[]
[ "command_line", "python" ]
stackoverflow_0001476111_command_line_python.txt
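Two small sketches of the approaches discussed above, with hypothetical names ('project_b', 'somemodule', 'somescript.py') -- substitute your own. The first mirrors the sys.path.append() answer but computes the path relative to the current file, so it keeps working when the program is run from another directory; the second is the subprocess/PYTHONPATH variant from the last answer, using os.pathsep so any existing entries are preserved.

import os
import sys

HERE = os.path.dirname(os.path.abspath(__file__))
OTHER_PROJECT = os.path.normpath(os.path.join(HERE, '..', 'project_b'))

if OTHER_PROJECT not in sys.path:
    sys.path.insert(0, OTHER_PROJECT)

# import somemodule   # hypothetical module living in project_b

Launching a script with an augmented PYTHONPATH instead:

import os
import subprocess

env = os.environ.copy()
env['PYTHONPATH'] = os.pathsep.join(
    [OTHER_PROJECT] + ([env['PYTHONPATH']] if env.get('PYTHONPATH') else []))
subprocess.call(['python', os.path.join(OTHER_PROJECT, 'somescript.py')], env=env)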
Q: python multiprocessing manager & composite pattern sharing I'm trying to share a composite structure through a multiprocessing manager but I felt in trouble with a "RuntimeError: maximum recursion depth exceeded" when trying to use just one of the Composite class methods. The class is token from code.activestate and tested by me before inclusion into the manager. When retrieving the class into a process and invoking its addChild() method I kept the RunTimeError, while outside the process it works. The composite class inheritates from a SpecialDict class, that implements a ** ____getattr()____ ** method. Could be possible that while calling addChild() the interpreter of python looks for a different ** ____getattr()____ ** because the right one is not proxied by the manager? If so It's not clear to me the right way to make a proxy to that class/method The following code reproduce exactly this condition: 1) this is the manager.py: from multiprocessing.managers import BaseManager from CompositeDict import * class PlantPurchaser(): def __init__(self): self.comp = CompositeDict('Comp') def get_cp(self): return self.comp class Manager(): def __init__(self): self.comp = QueuePurchaser().get_cp() BaseManager.register('get_comp', callable=lambda:self.comp) self.m = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra') self.s = self.m.get_server() self.s.serve_forever() 2) I want to use the composite into this consumer.py: from multiprocessing.managers import BaseManager class Consumer(): def __init__(self): BaseManager.register('get_comp') self.m = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra') self.m.connect() self.comp = self.m.get_comp() ret = self.comp.addChild('consumer') 3) run all launching by a controller.py: from multiprocessing import Process class Controller(): def __init__(self): for child in _run_children(): child.join() def _run_children(): from manager import Manager from consumer import Consumer as Consumer procs = ( Process(target=Manager, name='Manager' ), Process(target=Consumer, name='Consumer'), ) for proc in procs: proc.daemon = 1 proc.start() return procs c = Controller() Take a look this related questions on how to do a proxy for CompositeDict() class as suggested by AlberT. The solution given by tgray works but cannot avoid race conditions A: Python has a default maximum recursion depth of 1000 (or 999, I forget...). But you can change the default behavior thusly: import sys sys.setrecursionlimit(n) Where n is the number of recursions you wish to allow. Edit: The above answer does nothing to solve the root cause of this problem (as pointed out in the comments). It only needs to be used if you are intentionally recursing more than 1000 times. If you are in an infinite loop (like in this problem), you will eventually hit whatever limit you set. 
To address your actual problem, I re-wrote your code from scratch starting as simply as I could make it and built it up to what I believe is what you want: import sys from multiprocessing import Process from multiprocessing.managers import BaseManager from CompositDict import * class Shared(): def __init__(self): self.comp = CompositeDict('Comp') def get_comp(self): return self.comp def set_comp(self, c): self.comp = c class Manager(): def __init__(self): shared = Shared() BaseManager.register('get_shared', callable=lambda:shared) mgr = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra') srv = mgr.get_server() srv.serve_forever() class Consumer(): def __init__(self, child_name): BaseManager.register('get_shared') mgr = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra') mgr.connect() shared = mgr.get_shared() comp = shared.get_comp() child = comp.addChild(child_name) shared.set_comp(comp) print comp class Controller(): def __init__(self): pass def main(self): m = Process(target=Manager, name='Manager') m.daemon = True m.start() consumers = [] for i in xrange(3): p = Process(target=Consumer, name='Consumer', args=('Consumer_' + str(i),)) p.daemon = True consumers.append(p) for c in consumers: c.start() for c in consumers: c.join() return 0 if __name__ == '__main__': con = Controller() sys.exit(con.main()) I did this all in one file, but you shouldn't have any trouble breaking it up. I added a child_name argument to your consumer so that I could check that the CompositDict was getting updated. Note that there is both a getter and a setter for your CompositDict object. When I only had a getter, each Consumer was overwriting the CompositDict when it added a child. This is why I also changed your registered method to get_shared instead of get_comp, as you will want access to the setter as well as the getter within your Consumer class. Also, I don't think you want to try joining your manager process, as it will "serve forever". If you look at the source for the BaseManager (./Lib/multiprocessing/managers.py:Line 144) you'll notice that the serve_forever() function puts you into an infinite loop that is only broken by KeyboardInterrupt or SystemExit. Bottom line is that this code works without any recursive looping (as far as I can tell), but let me know if you still experience your error. A: Is it possible there is a circular reference between the classes? For example, the outer class has a reference to the composite class, and the composite class has a reference back to the outer class. The multiprocessing manager works well, but when you have large, complicated class structures, then you are likely to run into an error where a type/reference can not be serialized correctly. The other problem is that errors from multiprocessing manager are very cryptic. This makes debugging failure conditions even more difficult. A: I think the problem is that you have to instruct the Manager on how to manage you object, which is not a standard python type. In other worlds you have to create a proxy for you CompositeDict You could look at this doc for an example: http://ruffus.googlecode.com/svn/trunk/doc/html/sharing_data_across_jobs_example.html
python multiprocessing manager & composite pattern sharing
I'm trying to share a composite structure through a multiprocessing manager but I felt in trouble with a "RuntimeError: maximum recursion depth exceeded" when trying to use just one of the Composite class methods. The class is token from code.activestate and tested by me before inclusion into the manager. When retrieving the class into a process and invoking its addChild() method I kept the RunTimeError, while outside the process it works. The composite class inheritates from a SpecialDict class, that implements a ** ____getattr()____ ** method. Could be possible that while calling addChild() the interpreter of python looks for a different ** ____getattr()____ ** because the right one is not proxied by the manager? If so It's not clear to me the right way to make a proxy to that class/method The following code reproduce exactly this condition: 1) this is the manager.py: from multiprocessing.managers import BaseManager from CompositeDict import * class PlantPurchaser(): def __init__(self): self.comp = CompositeDict('Comp') def get_cp(self): return self.comp class Manager(): def __init__(self): self.comp = QueuePurchaser().get_cp() BaseManager.register('get_comp', callable=lambda:self.comp) self.m = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra') self.s = self.m.get_server() self.s.serve_forever() 2) I want to use the composite into this consumer.py: from multiprocessing.managers import BaseManager class Consumer(): def __init__(self): BaseManager.register('get_comp') self.m = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra') self.m.connect() self.comp = self.m.get_comp() ret = self.comp.addChild('consumer') 3) run all launching by a controller.py: from multiprocessing import Process class Controller(): def __init__(self): for child in _run_children(): child.join() def _run_children(): from manager import Manager from consumer import Consumer as Consumer procs = ( Process(target=Manager, name='Manager' ), Process(target=Consumer, name='Consumer'), ) for proc in procs: proc.daemon = 1 proc.start() return procs c = Controller() Take a look this related questions on how to do a proxy for CompositeDict() class as suggested by AlberT. The solution given by tgray works but cannot avoid race conditions
[ "Python has a default maximum recursion depth of 1000 (or 999, I forget...). But you can change the default behavior thusly:\nimport sys\nsys.setrecursionlimit(n)\n\nWhere n is the number of recursions you wish to allow. \nEdit:\nThe above answer does nothing to solve the root cause of this problem (as pointed out in the comments). It only needs to be used if you are intentionally recursing more than 1000 times. If you are in an infinite loop (like in this problem), you will eventually hit whatever limit you set.\nTo address your actual problem, I re-wrote your code from scratch starting as simply as I could make it and built it up to what I believe is what you want:\nimport sys\nfrom multiprocessing import Process\nfrom multiprocessing.managers import BaseManager\nfrom CompositDict import *\n\nclass Shared():\n def __init__(self):\n self.comp = CompositeDict('Comp')\n\n def get_comp(self):\n return self.comp\n\n def set_comp(self, c):\n self.comp = c\n\nclass Manager():\n def __init__(self):\n shared = Shared()\n BaseManager.register('get_shared', callable=lambda:shared)\n mgr = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra')\n srv = mgr.get_server()\n srv.serve_forever()\n\nclass Consumer():\n def __init__(self, child_name):\n BaseManager.register('get_shared')\n mgr = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra')\n mgr.connect()\n\n shared = mgr.get_shared()\n comp = shared.get_comp()\n child = comp.addChild(child_name)\n shared.set_comp(comp)\n print comp\n\nclass Controller():\n def __init__(self):\n pass\n\n def main(self):\n m = Process(target=Manager, name='Manager')\n m.daemon = True\n m.start()\n\n consumers = []\n for i in xrange(3):\n p = Process(target=Consumer, name='Consumer', args=('Consumer_' + str(i),))\n p.daemon = True\n consumers.append(p)\n\n for c in consumers:\n c.start()\n for c in consumers:\n c.join()\n return 0\n\n\nif __name__ == '__main__':\n con = Controller()\n sys.exit(con.main())\n\nI did this all in one file, but you shouldn't have any trouble breaking it up.\nI added a child_name argument to your consumer so that I could check that the CompositDict was getting updated.\nNote that there is both a getter and a setter for your CompositDict object. When I only had a getter, each Consumer was overwriting the CompositDict when it added a child.\nThis is why I also changed your registered method to get_shared instead of get_comp, as you will want access to the setter as well as the getter within your Consumer class.\nAlso, I don't think you want to try joining your manager process, as it will \"serve forever\". If you look at the source for the BaseManager (./Lib/multiprocessing/managers.py:Line 144) you'll notice that the serve_forever() function puts you into an infinite loop that is only broken by KeyboardInterrupt or SystemExit.\nBottom line is that this code works without any recursive looping (as far as I can tell), but let me know if you still experience your error.\n", "Is it possible there is a circular reference between the classes? For example, the outer class has a reference to the composite class, and the composite class has a reference back to the outer class.\nThe multiprocessing manager works well, but when you have large, complicated class structures, then you are likely to run into an error where a type/reference can not be serialized correctly. The other problem is that errors from multiprocessing manager are very cryptic. 
This makes debugging failure conditions even more difficult.\n", "I think the problem is that you have to instruct the Manager on how to manage you object, which is not a standard python type.\nIn other worlds you have to create a proxy for you CompositeDict\nYou could look at this doc for an example: http://ruffus.googlecode.com/svn/trunk/doc/html/sharing_data_across_jobs_example.html\n" ]
[ 0, 0, 0 ]
[]
[]
[ "multiprocessing", "python", "python_multiprocessing" ]
stackoverflow_0001473625_multiprocessing_python_python_multiprocessing.txt
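A stripped-down sketch of the proxy pattern the answers above point to, using a plain stand-in class instead of the CompositeDict (so it does not touch the __getattr__/recursion problem from the question, only the sharing mechanics). The key detail: a method call made through the manager proxy executes inside the manager process, so concurrent consumers do not clobber each other the way the separate get()/modify/set() round trip in the long answer can. The address, port and authkey follow the thread; ChildStore is an assumption.

from multiprocessing.managers import BaseManager

class ChildStore(object):
    """Stand-in for the shared object; kept deliberately simple."""
    def __init__(self):
        self._children = []
    def addChild(self, name):
        self._children.append(name)
        return name
    def children(self):
        return list(self._children)

# --- manager process (plays the role of manager.py above) ---
def run_server():
    store = ChildStore()
    BaseManager.register('get_store', callable=lambda: store)
    m = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra')
    m.get_server().serve_forever()

# --- consumer process (plays the role of consumer.py above) ---
def run_consumer(name):
    BaseManager.register('get_store')
    m = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra')
    m.connect()
    store = m.get_store()          # a proxy, not a local copy
    store.addChild(name)           # the call runs in the manager process
    print store.children()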
Q: python arbitrarily incrementing an iterator inside a loop I am probably going about this in the wrong manner, but I was wondering how to handle this in python. First some c code: int i; for(i=0;i<100;i++){ if(i == 50) i = i + 10; printf("%i\n", i); } Ok so we never see the 50's... My question is, how can I do something similar in python? For instance: for line in cdata.split('\n'): if exp.match(line): #increment the position of the iterator by 5? pass print line With my limited experience in python, I only have one solution, introduce a counter and another if statement. break the loop until the counter reaches 5 after exp.match(line) is true. There has got to be a better way to do this, hopefully one that does not involve importing another module. Thanks in advance! A: There is a fantastic package in Python called itertools. But before I get into that, it'd serve well to explain how the iteration protocol is implemented in Python. When you want to provide iteration over your container, you specify the __iter__() class method that provides an iterator type. "Understanding Python's 'for' statement" is a nice article covering how the for-in statement actually works in Python and provides a nice overview on how the iterator types work. Take a look at the following: >>> sequence = [1, 2, 3, 4, 5] >>> iterator = sequence.__iter__() >>> iterator.next() 1 >>> iterator.next() 2 >>> for number in iterator: print number 3 4 5 Now back to itertools. The package contains functions for various iteration purposes. If you ever need to do special sequencing, this is the first place to look into. At the bottom you can find the Recipes section that contain recipes for creating an extended toolset using the existing itertools as building blocks. And there's an interesting function that does exactly what you need: def consume(iterator, n): '''Advance the iterator n-steps ahead. If n is none, consume entirely.''' collections.deque(itertools.islice(iterator, n), maxlen=0) Here's a quick, readable, example on how it works (Python 2.5): >>> import itertools, collections >>> def consume(iterator, n): collections.deque(itertools.islice(iterator, n)) >>> iterator = range(1, 16).__iter__() >>> for number in iterator: if (number == 5): # Disregard 6, 7, 8, 9 (5 doesn't get printed just as well) consume(iterator, 4) else: print number 1 2 3 4 10 11 12 13 14 15 A: itertools.islice: lines = iter(cdata.splitlines()) for line in lines: if exp.match(line): #increment the position of the iterator by 5 for _ in itertools.islice(lines, 4): pass continue # skip 1+4 lines print line For example, if exp, cdata are: exp = re.compile(r"skip5") cdata = """ before skip skip5 1 never see it 2 ditto 3 .. 4 .. 5 after skip 6 """ Then the output is: before skip 5 after skip 6 Python implementation of the C example i = 0 while i < 100: if i == 50: i += 10 print i i += 1 As @[Glenn Maynard] pointed out in the comment if you need to do a very large jumps such as i += 100000000 then you should use explicit while loop instead of just skipping steps in a for loop. Here's the example that uses explicit while loop instead islice: lines = cdata.splitlines() i = 0 while i < len(lines): if exp.match(lines[i]): #increment the position of the iterator by 5 i += 5 else: print lines[i] i += 1 This example produces the same output as the above islice example. A: If you're doing it with numbers a list comprehension can work: for i in [x for x in range(0, 99) if x < 50 and x > 59]: print i Moving an iterator forward is a bit more difficult though. 
I'd suggest setting your list up beforehand if you don't want to do the counter approach, probably by splitting cdata, then working out the indexes of the matching line and removing that line and the following ones. Apart from that you're stuck with the counter approach which isn't nearly as unpleasant as you make it out to be to be honest. Another option is this: iterator = iter(cdata.split('\n')) for line in iterator: if exp.match(line): for i in range(0, 5): try: iterator.next() except StopIteration: break else: print line A: Not exactly sure I follow your thought process but here is something to feed on.. for i in range(len(cdata.split('\n'))): if i in range(50,60): continue line = cdata[i] if exp.match(line): #increment the position of the iterator by 5? pass print line Not sure what you are really after but the range(len(..)) should help you. A: You can drop values from an iterator def dropvalues(iterator, vals): for i in xrange(vals): iterator.next() Now just make sure you have an iterator object to work on with lines = iter(cdata.split('\n')); and loop over it. A: Maybe with genexps. Not pretty but... Something like that: >>> gx = (line for line in '1 2 x 3 4 5 6 7 x 9 10 11 12 x 1'.split('\n')) >>> for line in gx: ... if line == 'x': ... for i in range(2): ... line = gx.next() ... print line The only problem is to ensure that gx can be next()-ed. The above example purposely generates an exception due to the last x. A: for your example, as you're working with lists (indexable sequences) and not with iterators, I would recommend the following: lines = cdata.split("\n") for line in lines[:50]+lines[60:]: print line that's not the most efficient since it potentially constructs 3 new lists (but if the skipped part is bigger that the processed part, it could be more efficient than the other options), but it's quite clean and explicit. If you don't mind to use the itertools module, you can convert the lists to sequences easily: from itertools import chain, islice for line in chain(islice(lines, None, 50), islice(lines, 60,None)): print line
python arbitrarily incrementing an iterator inside a loop
I am probably going about this in the wrong manner, but I was wondering how to handle this in python. First some c code: int i; for(i=0;i<100;i++){ if(i == 50) i = i + 10; printf("%i\n", i); } Ok so we never see the 50's... My question is, how can I do something similar in python? For instance: for line in cdata.split('\n'): if exp.match(line): #increment the position of the iterator by 5? pass print line With my limited experience in python, I only have one solution, introduce a counter and another if statement. break the loop until the counter reaches 5 after exp.match(line) is true. There has got to be a better way to do this, hopefully one that does not involve importing another module. Thanks in advance!
[ "There is a fantastic package in Python called itertools.\nBut before I get into that, it'd serve well to explain how the iteration protocol is implemented in Python. When you want to provide iteration over your container, you specify the __iter__() class method that provides an iterator type. \"Understanding Python's 'for' statement\" is a nice article covering how the for-in statement actually works in Python and provides a nice overview on how the iterator types work.\nTake a look at the following:\n>>> sequence = [1, 2, 3, 4, 5]\n>>> iterator = sequence.__iter__()\n>>> iterator.next()\n1\n>>> iterator.next()\n2\n>>> for number in iterator:\n print number \n3\n4\n5\n\nNow back to itertools. The package contains functions for various iteration purposes. If you ever need to do special sequencing, this is the first place to look into.\nAt the bottom you can find the Recipes section that contain recipes for creating an extended toolset using the existing itertools as building blocks.\nAnd there's an interesting function that does exactly what you need:\ndef consume(iterator, n):\n '''Advance the iterator n-steps ahead. If n is none, consume entirely.'''\n collections.deque(itertools.islice(iterator, n), maxlen=0)\n\nHere's a quick, readable, example on how it works (Python 2.5):\n>>> import itertools, collections\n>>> def consume(iterator, n):\n collections.deque(itertools.islice(iterator, n))\n>>> iterator = range(1, 16).__iter__()\n>>> for number in iterator:\n if (number == 5):\n # Disregard 6, 7, 8, 9 (5 doesn't get printed just as well)\n consume(iterator, 4)\n else:\n print number\n\n1\n2\n3\n4\n10\n11\n12\n13\n14\n15\n\n", "itertools.islice:\nlines = iter(cdata.splitlines())\nfor line in lines:\n if exp.match(line):\n #increment the position of the iterator by 5\n for _ in itertools.islice(lines, 4):\n pass\n continue # skip 1+4 lines\n print line\n\nFor example, if exp, cdata are:\nexp = re.compile(r\"skip5\")\ncdata = \"\"\"\nbefore skip\nskip5\n1 never see it\n2 ditto\n3 ..\n4 ..\n5 after skip\n6 \n\"\"\"\n\nThen the output is:\n\n\nbefore skip\n5 after skip\n6 \n\nPython implementation of the C example\ni = 0\nwhile i < 100:\n if i == 50:\n i += 10\n print i\n i += 1\n\nAs @[Glenn Maynard] pointed out in the comment if you need to do a very large jumps such as i += 100000000 then you should use explicit while loop instead of just skipping steps in a for loop.\nHere's the example that uses explicit while loop instead islice:\nlines = cdata.splitlines()\ni = 0\nwhile i < len(lines):\n if exp.match(lines[i]):\n #increment the position of the iterator by 5\n i += 5\n else:\n print lines[i]\n i += 1\n\nThis example produces the same output as the above islice example.\n", "If you're doing it with numbers a list comprehension can work:\nfor i in [x for x in range(0, 99) if x < 50 and x > 59]:\n print i\n\nMoving an iterator forward is a bit more difficult though. I'd suggest setting your list up beforehand if you don't want to do the counter approach, probably by splitting cdata, then working out the indexes of the matching line and removing that line and the following ones. 
Apart from that you're stuck with the counter approach which isn't nearly as unpleasant as you make it out to be to be honest.\nAnother option is this:\niterator = iter(cdata.split('\\n'))\nfor line in iterator:\n if exp.match(line):\n for i in range(0, 5):\n try:\n iterator.next()\n except StopIteration:\n break\n else:\n print line\n\n", "Not exactly sure I follow your thought process but here is something to feed on..\nfor i in range(len(cdata.split('\\n'))):\n if i in range(50,60): continue\n line = cdata[i]\n if exp.match(line):\n #increment the position of the iterator by 5?\n pass\n print line\n\nNot sure what you are really after but the range(len(..)) should help you.\n", "You can drop values from an iterator\ndef dropvalues(iterator, vals):\n for i in xrange(vals): iterator.next()\n\nNow just make sure you have an iterator object to work on with lines = iter(cdata.split('\\n')); and loop over it.\n", "Maybe with genexps. Not pretty but...\nSomething like that:\n>>> gx = (line for line in '1 2 x 3 4 5 6 7 x 9 10 11 12 x 1'.split('\\n'))\n>>> for line in gx:\n... if line == 'x':\n... for i in range(2):\n... line = gx.next()\n... print line\n\nThe only problem is to ensure that gx can be next()-ed. The above example purposely generates an exception due to the last x.\n", "for your example, as you're working with lists (indexable sequences) and not with iterators, I would recommend the following:\nlines = cdata.split(\"\\n\")\nfor line in lines[:50]+lines[60:]:\n print line\n\nthat's not the most efficient since it potentially constructs 3 new lists (but if the skipped part is bigger that the processed part, it could be more efficient than the other options), but it's quite clean and explicit.\nIf you don't mind to use the itertools module, you can convert the lists to sequences easily:\nfrom itertools import chain, islice\nfor line in chain(islice(lines, None, 50), islice(lines, 60,None)):\n print line\n\n" ]
[ 46, 17, 2, 1, 1, 1, 1 ]
[ "I can't parse the question vary well because there's this block of confusing and irrelevant C code. Please delete it.\nFocusing on just the Python code and the question about how to skip 5 lines...\nlineIter= iter( cdata.splitlines() )\nfor line in lineIter:\n if exp.match(line):\n for count in range(5):\n line = lineIter.next()\n print line\n\n" ]
[ -6 ]
[ "iterator", "python" ]
stackoverflow_0001474646_iterator_python.txt
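A runnable miniature of the two accepted ideas above combined: the consume() recipe (with the maxlen=0 argument from the itertools recipes, so skipped items are discarded rather than stored) driving the "skip the next few lines after a match" loop. The pattern and sample text are made up for the demonstration; Python 2 print statements to match the thread.

import collections
import itertools
import re

def consume(iterator, n):
    # Advance the iterator n steps ahead; maxlen=0 means the deque keeps nothing.
    collections.deque(itertools.islice(iterator, n), maxlen=0)

exp = re.compile(r'skip5')
cdata = "before skip\nskip5\n1\n2\n3\n4\nafter skip\n"

lines = iter(cdata.splitlines())
for line in lines:
    if exp.match(line):
        consume(lines, 4)    # the matched line plus four more never get printed
        continue
    print line

This prints only "before skip" and "after skip": the matched line is dropped by the continue, and consume() quietly discards the four lines that follow it.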
Q: How to find and run my old (global) version of Python? I have locally installed a newer version of Python. For that I did the following: $ cd $ mkdir opt $ mkdir downloads $ cd downloads $ wget http://www.python.org/ftp/python/2.6.2/Python-2.6.2.tgz $ tar xvzf Python-2.6.2.tgz $ cd Python-2.6.2 $ ./configure --prefix=$HOME/opt/ --enable-unicode=ucs4 $ make $ make install In .bash_profile I put the following: export PATH=$HOME/opt/bin/:$PATH export PYTHONPATH=$HOME/opt/lib:$HOME/opt/lib/site-packages:$PYTHONPATH And than I executed: $ cd $ source .bash_profile $ python -V It worked. I got a new working versions of Python. However, now I would like to try something with my old version, which is a "global" version installed by root for all users. Can anybody, pleas, tell me how I can do it? P.S. I tired to remove changes in .bash_profile. I have commented the 2 last lines which were added when I installed the new version. So, now I have the following .bash_profile file: # .bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs PATH=$PATH:$HOME/bin export PATH #export PATH=$HOME/opt/bin/:$PATH #export PYTHONPATH=$HOME/opt/lib:$HOME/opt/lib/site-packages:$PYTHONPATH And I source the new version of the file (source .bash_profile). But I still get the old version of Python. When I type "Python -V" I get "Python 2.6.2". A: You can directly call the program with something like "/usr/local/bin/python myscript.py". You just need to know where your standard installation of python is. If you don't know, you can undo your changes and then type "which python" to find out what actually gets executed when you type "python" on the command line". For example: $ /usr/bin/python -V Python 2.3.4 $ /usr/bin/python2.4 -V Python 2.4.4 $ /opt/local/bin/python2.7 -V Python 2.7a0 $ python -V Python 2.5.2 $ which python /usr/bin/python To make things easier you can also create aliases: $ alias python2.4=/usr/bin/python2.4 $ alias python2.5=/usr/bin/python2.5 $ python2.4 -V Python 2.4.4 $ python2.5 -V Python 2.5.2 Aliases make it pretty easy for you to run different versions of python. Place them in your .bashrc file so they are always defined. A: Undo your changes to .bash_profile and execute source .bash_profile.
How to find and run my old (global) version of Python?
I have locally installed a newer version of Python. For that I did the following: $ cd $ mkdir opt $ mkdir downloads $ cd downloads $ wget http://www.python.org/ftp/python/2.6.2/Python-2.6.2.tgz $ tar xvzf Python-2.6.2.tgz $ cd Python-2.6.2 $ ./configure --prefix=$HOME/opt/ --enable-unicode=ucs4 $ make $ make install In .bash_profile I put the following: export PATH=$HOME/opt/bin/:$PATH export PYTHONPATH=$HOME/opt/lib:$HOME/opt/lib/site-packages:$PYTHONPATH And than I executed: $ cd $ source .bash_profile $ python -V It worked. I got a new working versions of Python. However, now I would like to try something with my old version, which is a "global" version installed by root for all users. Can anybody, pleas, tell me how I can do it? P.S. I tired to remove changes in .bash_profile. I have commented the 2 last lines which were added when I installed the new version. So, now I have the following .bash_profile file: # .bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs PATH=$PATH:$HOME/bin export PATH #export PATH=$HOME/opt/bin/:$PATH #export PYTHONPATH=$HOME/opt/lib:$HOME/opt/lib/site-packages:$PYTHONPATH And I source the new version of the file (source .bash_profile). But I still get the old version of Python. When I type "Python -V" I get "Python 2.6.2".
[ "You can directly call the program with something like \"/usr/local/bin/python myscript.py\". You just need to know where your standard installation of python is. If you don't know, you can undo your changes and then type \"which python\" to find out what actually gets executed when you type \"python\" on the command line\". \nFor example:\n$ /usr/bin/python -V\nPython 2.3.4\n$ /usr/bin/python2.4 -V\nPython 2.4.4\n$ /opt/local/bin/python2.7 -V\nPython 2.7a0\n$ python -V\nPython 2.5.2\n$ which python\n/usr/bin/python\n\nTo make things easier you can also create aliases:\n$ alias python2.4=/usr/bin/python2.4\n$ alias python2.5=/usr/bin/python2.5\n$ python2.4 -V\nPython 2.4.4\n$ python2.5 -V\nPython 2.5.2\n\nAliases make it pretty easy for you to run different versions of python. Place them in your .bashrc file so they are always defined.\n", "Undo your changes to .bash_profile and execute source .bash_profile.\n" ]
[ 2, 0 ]
[]
[]
[ "installation", "python" ]
stackoverflow_0001477300_installation_python.txt
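One point worth adding to the exchange above: commenting out the export lines and re-running source .bash_profile cannot remove entries that are already in the running shell's PATH -- sourcing only adds to the current environment, which is why python -V kept reporting 2.6.2. Opening a fresh login shell (or explicitly resetting PATH) brings the system interpreter back to the front. From inside Python, the sketch below shows which binary is actually executing and asks another interpreter for its version; /usr/bin/python is an assumed location for the system install (use whatever `which python` reported before the local build).

import subprocess
import sys

print 'running under:', sys.executable, sys.version.split()[0]

# Ask a specific interpreter for its version (Python 2 prints this to stderr).
subprocess.call(['/usr/bin/python', '-V'])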
Q: Form Validation in Admin with Inline formset and Model form I have a model, OrderedList, which is intended to be a listing of content objects ordered by the user. The OrderedList has several attributes, including a site which it belongs to. The content objects are attached to it via an OrderedListRow class, which is brought into OrderedList's admin via an inline formset in the admin. class OrderedList(GenericList): objects = models.Manager() published = GenericListManager() class OrderedListRow(models.Model): list = models.ForeignKey(OrderedList) content_type = models.ForeignKey(ContentType) object_id = models.PositiveSmallIntegerField() content_object = generic.GenericForeignKey("content_type", "object_id") order = models.IntegerField('order', blank = True, null = True) (OrderedList inherits the site field from the larger GenericList abstract). Here's my problem; when the user saves the admin form, I want to verify that each content object mapped to by each OrderedListRow belongs to the same site that the OrderedList does (the list can only belong to 1 site; the content objects can belong to multiple). I can override OrderedList's admin form's clean(), but it doesn't include the inline formset which contains the OrderedListRows, so it can't reach that data. I can override the OrderedListRows' inline formset's clean, but it can't reach the list. I need some way within the context of form validation to reach both the OrderedList's form data and the formset's form data so I can check all the sites of the OrderedListRow's content objects against the site of the OrderedList, and throw a validation error if there's a problem. So far I haven't found a function that the cleaned data for both OrderedRow and the OrderedListRows are contained in. A: In the inline formset, self.instance should refer to the parent object, ie the OrderedList. A: I am dealing with the same issue. And unfortunately I don't think the answer above covers things entirely. If there are changes in both the inline formset and the admin form, accessing self.instance will not give accurate data, since you will base the validation on the database and then save the formset which overwrites that data you just used to validate things. Basically this makes your validation one save behind. I suppose the real question here is which gets saved first. After digging int he source code, it seems like the admin site saved the form first. This means that, logically, doing validation on the formset and from there accessing the 'parent' instance should get consistent values.
Form Validation in Admin with Inline formset and Model form
I have a model, OrderedList, which is intended to be a listing of content objects ordered by the user. The OrderedList has several attributes, including a site which it belongs to. The content objects are attached to it via an OrderedListRow class, which is brought into OrderedList's admin via an inline formset in the admin. class OrderedList(GenericList): objects = models.Manager() published = GenericListManager() class OrderedListRow(models.Model): list = models.ForeignKey(OrderedList) content_type = models.ForeignKey(ContentType) object_id = models.PositiveSmallIntegerField() content_object = generic.GenericForeignKey("content_type", "object_id") order = models.IntegerField('order', blank = True, null = True) (OrderedList inherits the site field from the larger GenericList abstract). Here's my problem; when the user saves the admin form, I want to verify that each content object mapped to by each OrderedListRow belongs to the same site that the OrderedList does (the list can only belong to 1 site; the content objects can belong to multiple). I can override OrderedList's admin form's clean(), but it doesn't include the inline formset which contains the OrderedListRows, so it can't reach that data. I can override the OrderedListRows' inline formset's clean, but it can't reach the list. I need some way within the context of form validation to reach both the OrderedList's form data and the formset's form data so I can check all the sites of the OrderedListRow's content objects against the site of the OrderedList, and throw a validation error if there's a problem. So far I haven't found a function that the cleaned data for both OrderedRow and the OrderedListRows are contained in.
[ "In the inline formset, self.instance should refer to the parent object, ie the OrderedList.\n", "I am dealing with the same issue. And unfortunately I don't think the answer above covers things entirely.\nIf there are changes in both the inline formset and the admin form, accessing self.instance will not give accurate data, since you will base the validation on the database and then save the formset which overwrites that data you just used to validate things. Basically this makes your validation one save behind. \nI suppose the real question here is which gets saved first. After digging int he source code, it seems like the admin site saved the form first. This means that, logically, doing validation on the formset and from there accessing the 'parent' instance should get consistent values.\n" ]
[ 6, 1 ]
[]
[]
[ "django", "django_admin", "django_forms", "python" ]
stackoverflow_0000967045_django_django_admin_django_forms_python.txt
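A hedged sketch of how the self.instance pointer from the first answer above is typically used: a custom inline formset whose clean() compares each row's target object against the parent OrderedList's site. The OrderedListRow fields (content_type, object_id) and the single site on the list come from the question; the assumption added here is that the content models expose their sites through a many-to-many field called "sites" -- adjust the lookup to whatever your content models actually use. Written in the Django 1.x style of the thread.

from django import forms
from django.contrib import admin
from django.forms.models import BaseInlineFormSet

class OrderedListRowFormSet(BaseInlineFormSet):
    def clean(self):
        super(OrderedListRowFormSet, self).clean()
        if any(self.errors):
            return                      # per-row problems are already reported
        parent = self.instance          # the OrderedList being edited
        for form in self.forms:
            data = getattr(form, 'cleaned_data', None)
            if not data or data.get('DELETE'):
                continue
            ct, obj_id = data.get('content_type'), data.get('object_id')
            if ct is None or obj_id is None:
                continue
            target = ct.get_object_for_this_type(pk=obj_id)
            # "sites" is the assumed M2M field on the content models.
            if parent.site and parent.site not in target.sites.all():
                raise forms.ValidationError(
                    "%s does not belong to the list's site." % target)

class OrderedListRowInline(admin.TabularInline):
    model = OrderedListRow          # imported from your app's models
    formset = OrderedListRowFormSet

As the second answer notes, whether self.instance reflects pending edits to the parent form depends on the admin's save order; that answer concluded the parent form is handled first, so the formset's clean() should see consistent values.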
Q: How do you bind a language (python, for example) to another (say, C++)? I'm far from a python expert but I hear this one all the time, about its C/C++ bindings. How does this concept work, and how does Python (and Java) bind to C-based APIs like OpenGL? This stuff has always been a mystery to me. A: Interpreters Written in C89 with Reflection, Who Knew? I have a feeling you are looking for an explanation of the mechanism and not a link to the API or instructions on how to code it. So, as I understand it . . . The main interpreter is typically written in C and is dynamically linked. In a dynamically linked environment, even C89 has a certain amount of reflective behavior. In particular, the dlopen(3) and dlsym(3) calls will load a dynamic (typically ELF) library and look up the address of a symbol named by a string. Give that address, the interpreter can call a function. Even if statically linked, the interpreter can know the address of C functions whose names are compiled into it. So then, it's just a simple matter of having the interpreted code tell the interpreter to call a particular native function in a particular native library. The mechanism can be modular. An extension library for the interpreter, written in the script, can itself invoke the bare hooks for dlopen(3) and dlsym(3) and hook up to a new library that the interpreter never knew about. For passing simple objects by value, a few prototype functions will typically allow various calls. But for structured data objects (imagine stat(2)) the wrapper module needs to know the layout of the data. At some point, either when packaging the extension module or when installing it, a C interface module includes the appropriate header files and in conjunction with handwritten code constructs an interface object. This is why you may need to install something like libsqlite3-dev even if you already had sqlite3 on your system; only the -dev package has the .h files needed to recompile the linkage code. I suppose we could sum this up by saying: "it's done with brute force and ignorance". :-) A: The main general concept is known as FFI, "Foreign Function Interface" -- for Java it's JNI, for Python it's the "Python C API", for Perl it's XS, etc, etc, but I think it's important to give you the general term of art to help you research it more thoroughly. Given a FFI, you can write (e.g.) C programs that respect it directly, and/or you can have code generators that produce such C code from metainformation they receive and/or introspect from code written in other languages (often with some help, e.g., to drive the SWIG code generator you typically decorate the info that's in a .h C header file with extra info that's SWIG-specific to get a better wrapper). There are also special languages such as Cython, an "extended subset" of Python that's geared towards easy generation of FFI code while matching much of Python's syntax and semantics -- may often be the easiest way for mostly-Python programmers to write a Python extension module that compiles down to speedy machine code and maybe uses some existing C-callable libraries. 
The ctypes approach is different from the traditional FFI approaches, though it self-describes as a "foreign function library for Python" -- it relies on the foreign code being available in a DLL (or equivalent, such as an .so dynamic library in Linux), and generates and executes code at run-time to reach into such dynamically loaded C code (typically all done via explicit programming in Python -- I don't know of ctypes wrappers based on introspection and ctypes-code generation, yet). Handy to avoid having to install anything special for simple tasks of accessing existing DLLs with Python, but I think it doesn't scale up as well as the FFI "linker-based" approaches (as it requires more runtime exertion, etc, etc). I don't know of any other implementation of such an approach, targeting other languages, beyond ctypes for Python (I imagine some do exist, given today's prevalence of DLL and .so packaging, and would be curious to learn about them). A: Generally these languages have a way to load extensions written in C. The Java interface is called JNI (Java Native Interface). Python has comprehensive documentation about its extension interface. Another option for Python is the ctypes module which allows you to work with dynamically loadable C libraries without having to write custom extension code. A: The concepts below can be generalized relatively easily, however I'm going to refer specifically to C and Python a lot for clarity. Calling C from Python This can work because most lower level languages/architectures/operating systems have well-defined Application Binary Interfaces which specify all the low-level details of how applications interact with each other and the operating system. As an example here is the ABI for x86-64(AMD64): AMD64 System V Application Binary Interface . It specifies all the details of things like calling conventions for functions and linking against C object files. With this information, it's up to the language implementors to Implement the ABI of the language you wish to call into Provide an interface via the language/library to access the implementation (1) is actually almost gotten for free in most languages due to the sole fact their interpreters/compilers are coded in C, which obviously supports the C ABI :). This is also why there is difficulty in calling C code from implementations of languages not coded in C, for example IronPython (Python implementation in C#) and PyPy (Python implementation in Python) do not have particularly good support for calling C code, though I believe there has been some work in regard to this in IronPython. So to make this concrete, let's assume we have CPython (The standard implementation of Python, done in C). We get (1) for free since our interpreter is written in C and we can access C libraries from our interpreter in the same way we would from any other C program (dlopen,LoadLibrary, whatever). Now we need to offer a way for people writing in our language to access these facilities. Python does this via The Python C/C++ API or ctypes. Whenever a programmer writes code using these APIs, we can execute the appropriate library loading/calling code to call into the libraries. Calling Python from C This direction is actually a bit simpler to explain. Continuing from the previous example, our interpreter, CPython is nothing more than a program written in C, so it can export functions and be compiled as a library/linked against by any program we want to write in C. 
CPython exports a set of C functions for accessing/running Python program and we can just call these functions to run Python code from our application. For example one of the functions exported by the CPython library is: PyObject* PyRun_StringFlags(const char *str, int start, PyObject *globals, PyObject *locals, PyCompilerFlags *flags)¶ Return value: New reference. Execute Python source code from str in the context specified by the dictionaries globals and locals with the compiler flags specified by flags. The parameter start specifies the start token that should be used to parse the source code. We can literally execute Python code by passing this function a string containing valid Python code (and some other details necessary for execution.) See Embedding Python in another application for details. A: For Perl, there are two ways to call C++ subroutines: Perl XS (eXternal Subroutine) (See also Wiki) - allows calling subroutines from other languages (mainly, but not exclusively, C) from Perl by compiling C code into modules usable from Perl. SWIG (Simplified wrapper and interface generator) is a software development tool that connects programs written in C and C++ with a variety of high-level / scripting languages including Perl, PHP, Python, Tcl and Ruby (though it seems SWIG's origins are bindings with Python). This is a paper that goes into details of how SWIG works, if it was your interest to understand what happens under the hood. A: There are basically two ways of integrating c/c++ with python: extending: accessing c/c++ from python embedding: accessing the python interpreter from c/c++ What you mention is the first case. Its usually achieved by writing wrapper functions that serves as glue code between the different languages that converts the function arguments and data types to match the needed language. Usually a tool called SWIG is used to generate this glue code. For an extensive explanation, see this tutorial.
How do you bind a language (python, for example) to another (say, C++)?
I'm far from a Python expert, but I hear about this all the time in connection with its C/C++ bindings. How does this concept work, and how does Python (and Java) bind to C-based APIs like OpenGL? This has always been a mystery to me.
[ "Interpreters Written in C89 with Reflection, Who Knew?\n\nI have a feeling you are looking for an explanation of the mechanism and not a link to the API or instructions on how to code it. So, as I understand it . . .\nThe main interpreter is typically written in C and is dynamically linked. In a dynamically linked environment, even C89 has a certain amount of reflective behavior. In particular, the dlopen(3) and dlsym(3) calls will load a dynamic (typically ELF) library and look up the address of a symbol named by a string. Give that address, the interpreter can call a function. Even if statically linked, the interpreter can know the address of C functions whose names are compiled into it.\nSo then, it's just a simple matter of having the interpreted code tell the interpreter to call a particular native function in a particular native library.\nThe mechanism can be modular. An extension library for the interpreter, written in the script, can itself invoke the bare hooks for dlopen(3) and dlsym(3) and hook up to a new library that the interpreter never knew about.\nFor passing simple objects by value, a few prototype functions will typically allow various calls. But for structured data objects (imagine stat(2)) the wrapper module needs to know the layout of the data. At some point, either when packaging the extension module or when installing it, a C interface module includes the appropriate header files and in conjunction with handwritten code constructs an interface object. This is why you may need to install something like libsqlite3-dev even if you already had sqlite3 on your system; only the -dev package has the .h files needed to recompile the linkage code.\nI suppose we could sum this up by saying: \"it's done with brute force and ignorance\". :-)\n", "The main general concept is known as FFI, \"Foreign Function Interface\" -- for Java it's JNI, for Python it's the \"Python C API\", for Perl it's XS, etc, etc, but I think it's important to give you the general term of art to help you research it more thoroughly.\nGiven a FFI, you can write (e.g.) C programs that respect it directly, and/or you can have code generators that produce such C code from metainformation they receive and/or introspect from code written in other languages (often with some help, e.g., to drive the SWIG code generator you typically decorate the info that's in a .h C header file with extra info that's SWIG-specific to get a better wrapper).\nThere are also special languages such as Cython, an \"extended subset\" of Python that's geared towards easy generation of FFI code while matching much of Python's syntax and semantics -- may often be the easiest way for mostly-Python programmers to write a Python extension module that compiles down to speedy machine code and maybe uses some existing C-callable libraries.\nThe ctypes approach is different from the traditional FFI approaches, though it self-describes as a \"foreign function library for Python\" -- it relies on the foreign code being available in a DLL (or equivalent, such as an .so dynamic library in Linux), and generates and executes code at run-time to reach into such dynamically loaded C code (typically all done via explicit programming in Python -- I don't know of ctypes wrappers based on introspection and ctypes-code generation, yet). 
Handy to avoid having to install anything special for simple tasks of accessing existing DLLs with Python, but I think it doesn't scale up as well as the FFI \"linker-based\" approaches (as it requires more runtime exertion, etc, etc). I don't know of any other implementation of such an approach, targeting other languages, beyond ctypes for Python (I imagine some do exist, given today's prevalence of DLL and .so packaging, and would be curious to learn about them).\n", "Generally these languages have a way to load extensions written in C. The Java interface is called JNI (Java Native Interface). Python has comprehensive documentation about its extension interface.\nAnother option for Python is the ctypes module which allows you to work with dynamically loadable C libraries without having to write custom extension code.\n", "The concepts below can be generalized relatively easily, however I'm going to refer specifically to C and Python a lot for clarity.\nCalling C from Python\nThis can work because most lower level languages/architectures/operating systems have well-defined Application Binary Interfaces which specify all the low-level details of how applications interact with each other and the operating system. As an example here is the ABI for x86-64(AMD64): AMD64 System V Application Binary Interface . It specifies all the details of things like calling conventions for functions and linking against C object files.\nWith this information, it's up to the language implementors to \n\nImplement the ABI of the language\nyou wish to call into \nProvide an interface via the\nlanguage/library to access the\nimplementation\n\n(1) is actually almost gotten for free in most languages due to the sole fact their interpreters/compilers are coded in C, which obviously supports the C ABI :). This is also why there is difficulty in calling C code from implementations of languages not coded in C, for example IronPython (Python implementation in C#) and PyPy (Python implementation in Python) do not have particularly good support for calling C code, though I believe there has been some work in regard to this in IronPython.\nSo to make this concrete, let's assume we have CPython (The standard implementation of Python, done in C). We get (1) for free since our interpreter is written in C and we can access C libraries from our interpreter in the same way we would from any other C program (dlopen,LoadLibrary, whatever). Now we need to offer a way for people writing in our language to access these facilities. Python does this via The Python C/C++ API or ctypes. Whenever a programmer writes code using these APIs, we can execute the appropriate library loading/calling code to call into the libraries.\nCalling Python from C\nThis direction is actually a bit simpler to explain. Continuing from the previous example, our interpreter, CPython is nothing more than a program written in C, so it can export functions and be compiled as a library/linked against by any program we want to write in C. CPython exports a set of C functions for accessing/running Python program and we can just call these functions to run Python code from our application. 
For example one of the functions exported by the CPython library is:\nPyObject* PyRun_StringFlags(const char *str, int start, PyObject *globals, PyObject *locals, PyCompilerFlags *flags)¶\n\n\nReturn value: New reference.\nExecute Python source code from str in\n the context specified by the\n dictionaries globals and locals with\n the compiler flags specified by flags.\n The parameter start specifies the\n start token that should be used to\n parse the source code.\n\nWe can literally execute Python code by passing this function a string containing valid Python code (and some other details necessary for execution.) See Embedding Python in another application for details.\n", "For Perl, there are two ways to call C++ subroutines:\n\nPerl XS (eXternal Subroutine) (See also Wiki) - allows calling subroutines from other languages (mainly, but not exclusively, C) from Perl by compiling C code into modules usable from Perl.\nSWIG (Simplified wrapper and interface generator) is a software development tool that connects programs written in C and C++ with a variety of high-level / scripting languages including Perl, PHP, Python, Tcl and Ruby (though it seems SWIG's origins are bindings with Python). \nThis is a paper that goes into details of how SWIG works, if it was your interest to understand what happens under the hood. \n\n", "There are basically two ways of integrating c/c++ with python:\n\nextending: accessing c/c++ from python\nembedding: accessing the python interpreter from c/c++\n\nWhat you mention is the first case. Its usually achieved by writing wrapper functions that serves as glue code between the different languages that converts the function arguments and data types to match the needed language. Usually a tool called SWIG is used to generate this glue code.\nFor an extensive explanation, see this tutorial.\n" ]
[ 12, 7, 4, 3, 2, 2 ]
[]
[]
[ "binding", "java", "python" ]
stackoverflow_0001475637_binding_java_python.txt
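A minimal sketch of the ctypes approach discussed in the entry above: calling a function from an already-compiled C library at run time, with the signature declared from Python rather than generated by a tool like SWIG. The choice of the C math library is only an example and the library lookup is platform-dependent; nothing here is specific to any particular project.

    import ctypes
    import ctypes.util

    # Locate the C math library (e.g. libm.so on Linux) and load it dynamically.
    libm = ctypes.CDLL(ctypes.util.find_library("m"))

    # Declare the C signature so ctypes converts arguments and results correctly:
    # double cos(double x);
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    print(libm.cos(0.0))  # 1.0, computed by the C implementation

FFI wrapper generators such as SWIG do the equivalent declaration work ahead of time from header files instead of at run time, which is the trade-off the entry above describes.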
Q: Python chat : delete variables to clean memory in functions? I'm creating a chat daemon in python and twisted framework. And I'm wondering if I have to delete every variable create in my functions to save memory in the long run when multiple users are connected, or are those variable automatically clear?. Here's a strip down version of my code to illustrate my point: class Chat(LineOnlyReceiver): LineOnlyReceiver.MAX_LENGTH = 500 def lineReceived(self, data): self.sendMessage(data) def sendMessage(self, data): try: message = data.split(None,1)[1] except IndexError: return self.factory.sendAll(message) #QUESTION : do i have to delete message and date?????????????????? del message del data class ChatFactory(Factory): protocol = Chat def __init__(self): self.clients = [] def addClient(self, newclient): self.clients.append(newclient) def delClient(self, client): self.clients.remove(client) def sendAll(self, message): for client in self.clients: client.transport.write(message + "\n") A: C Python (the reference implementation) uses reference counting and garbage collection. When count of references to object decrease to 0, it is automatically reclaimed. The garbage collection normally reclaims only those objects that refer to each other (or other objects from them) and thus cannot be reclaimed by reference counting. Thus, in most cases, local variables are reclaimed at the end of the function, because at the exit from the function, the objects cease being referenced from anywhere. So your "del" statements are completely unnecessary, because Python does that anyway. A: Python objects are never explicitly deleted. The only way to truly reclaim memory from unreferenced Python objects is via the garbage collector. The del keyword simply unbinds a name from an object, but the object still needs to be garbage collected. If you really think you have to, you can force the garbage collector to run using the gc module, but this is almost certainly a premature optimization, and you are quite likely to garbage collect at inopportune times or otherwise inefficiently unless you really know what you're doing. Using del as you have above has no real effect, since those names would have been deleted as they went out of scope anyway. You would need to follow up with an explicit garbage collection to be sure(r). A: Python uses garbage collection. This means you don't have to care about memory as it's freed automatically when it's not used anymore.
Python chat: delete variables to clean memory in functions?
I'm creating a chat daemon in Python with the Twisted framework, and I'm wondering whether I have to delete every variable created in my functions to save memory in the long run when multiple users are connected, or whether those variables are cleared automatically. Here's a stripped-down version of my code to illustrate my point: class Chat(LineOnlyReceiver): LineOnlyReceiver.MAX_LENGTH = 500 def lineReceived(self, data): self.sendMessage(data) def sendMessage(self, data): try: message = data.split(None,1)[1] except IndexError: return self.factory.sendAll(message) #QUESTION : do I have to delete message and data?????????????????? del message del data class ChatFactory(Factory): protocol = Chat def __init__(self): self.clients = [] def addClient(self, newclient): self.clients.append(newclient) def delClient(self, client): self.clients.remove(client) def sendAll(self, message): for client in self.clients: client.transport.write(message + "\n")
[ "C Python (the reference implementation) uses reference counting and garbage collection. When count of references to object decrease to 0, it is automatically reclaimed. The garbage collection normally reclaims only those objects that refer to each other (or other objects from them) and thus cannot be reclaimed by reference counting. \nThus, in most cases, local variables are reclaimed at the end of the function, because at the exit from the function, the objects cease being referenced from anywhere. So your \"del\" statements are completely unnecessary, because Python does that anyway.\n", "Python objects are never explicitly deleted. The only way to truly reclaim memory from unreferenced Python objects is via the garbage collector. The del keyword simply unbinds a name from an object, but the object still needs to be garbage collected. \nIf you really think you have to, you can force the garbage collector to run using the gc module, but this is almost certainly a premature optimization, and you are quite likely to garbage collect at inopportune times or otherwise inefficiently unless you really know what you're doing.\nUsing del as you have above has no real effect, since those names would have been deleted as they went out of scope anyway. You would need to follow up with an explicit garbage collection to be sure(r).\n", "Python uses garbage collection. This means you don't have to care about memory as it's freed automatically when it's not used anymore.\n" ]
[ 16, 7, 4 ]
[]
[]
[ "class", "function", "python", "twisted" ]
stackoverflow_0001477980_class_function_python_twisted.txt
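A small, hypothetical illustration of the reference-counting behaviour described in the answers above: the local name disappears when the function returns, and CPython reclaims the string right away if nothing else refers to it, so explicit del statements add nothing. sys.getrefcount is only used here for inspection (it reports one extra reference for its own argument); the function and its input string are made up for the example.

    import sys

    def send_message(data):
        message = data.split(None, 1)[1]   # local binding to a new string
        print(sys.getrefcount(message))    # live references while the function runs
        return len(message)
        # no `del message` needed: the local name goes away on return, and the
        # string is freed immediately unless something else still references it

    send_message("user hello world")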
Q: How was the syntax chosen for static methods in Python? I've been working with Python for a while and I find the syntax for declaring methods as static to be peculiar. A regular method would be declared: def mymethod(self, params) ... return A static method is declared: def mystaticethod(params) ... return mystaticmethod = staticmethod(mystaticmethod) If you don't add the static method line, the compiler complains about self missing. This is a very complex way of doing something very simple that in other languages simply use a keyword and a declaration grammar to. Can anyone tell me about the evolution of this syntax? Is this merely because classes were added into the existing language? Since I can move the staticmethod line to later in the class, it also suggests that the parser is working extra hard on bookkeeping. Note that I'm aware of the decorator syntax that was added later, I'm interested to know how the original syntax came about from a language design perspective. The only think I can think of is that the staticmethod application invokes an operation that transforms the function object into a static method. A: Static methods were added to Python long after classes were (classes were added very early on, possibly even before 1.0; static methods didn't show up until sometime about 2.0). They were implemented as a modification of normal methods — you create a static method object from a function to get a static method, whereas the compiler generates instance methods by default. As with many things in Python, static methods got introduced and then refined as people used them and wanted better syntax. The initial round was a way of introducing the semantics without adding new syntax to the language (and Python is quite resistant to syntax changes). I'm not Guido, so I'm not exactly sure what was going on in his head and this is somewhat speculative, but Python tends to move slowly, develop incrementally, and refine things as they gain more experience with them (in particular, they don't like adding something until they've figured out the right way to do it. This could have been why there wasn't special syntax for static methods from the beginning). As mjv indicated, though, there is an easier way now, through some syntax sugar added in 2.2 or 2.3 called "decorators": @staticmethod def mystaticmethod(params) ... return The @staticmethod syntax is sugar for putting mystaticmethod = staticmethod(mystaticmethod) after the method definition. A: voyager and adurdin do a good job, between them, of explaining what happened: with the introduction of new-style classes and descriptors in Python 2.2, new and deep semantic possibilities arose -- and the most obviously useful examples (static methods, class methods, properties) were supported by built-in descriptor types without any new syntax (the syntax @foo for decorators was added a couple releases later, once the new descriptors had amply proven their real-world usefulness). I'm not really qualified to channel Guido (where's Tim Peters when you need him!-), but I was already a Python committer at the time and participated in these developments, and I can confirm that's indeed what happened. voyager's observation on this reminding him of C is right on-target: I've long claimed that Python captures more of the "Spirit of C" than any of the languages who have mimicked the syntax of C (braces, parentheses after if/while, etc). 
"The spirit of C" is actually described in the (non-normative) Rationale part of the ISO C Standard, and comprises five principles (none of which requires braces!-) of which I claim Python matches 4.5 (there are several videos on the web of presentations of mine on "Python for Programmers" where I cover this if you're curious). In particular, the Spirit of C's "provide only one way to do an operation" matches the Zen of Python's "There should be one-- and preferably only one --obvious way to do it" -- and C and Python, I believe, are the only two widespread languages to explicitly adopt such a design ideal of uniformity and non-redundancy (it's an ideal, and can't be sensibly reached 100% -- e.g. if a and b are integers, a+b and b+a had BETTER be two identically-obvious ways to get their sum!-) -- but it's a goal to aim for!-). A: Static methods in python date back to the introduction of the so-called “new-style classes” in Python 2.2. Previous to this, methods on classes were just ordinary functions, stored as attributes on the class: class OldStyleClass: def method(self): print "'self' is just the first argument of this function" instance = OldStyleClass() OldStyleClass.method(instance) # Just an ordinary function call print repr(OldStyleClass.method) # "unbound method..." Method calls on instances were specially handled to automatically bind the instance to the first argument of the function: instance.method() # 'instance' is automatically passed in as the first parameter print repr(instance.method) # "bound method..." In Python 2.2, much of the class system was rethought and reengineered as “new-style classes”—classes that inherit from object. One of the features of new-style classes was “descriptors”, essentially an object in a class that is responsible for describing, getting, and setting the class's attributes. A descriptor has a __get__ method, that gets passed the class and the instance, and should return the requested attribute of the class or instance. Descriptors made it possible to use a single API to implement complex behaviour for class attributes, like properties, class methods, and static methods. For example, the staticmethod descriptor could be implemented like this: class staticmethod(object): """Create a static method from a function.""" def __init__(self, func): self.func = func def __get__(self, instance, cls=None): return self.func Compare this with a hypothetical pure-python descriptor for an ordinary method, which is used by default for all plain functions in the classes attributes (this is not exactly what happens with method lookup from an instance, but it does handle the automatic 'self' argument): class method(object): """Create a method from a function--it will get the instance passed in as its first argument.""" def __init__(self, func): self.func = func def __get__(self, instance, cls=None): # Create a wrapper function that passes the instance as first argument # to the original function def boundmethod(*args, **kwargs): return self.func(self, *args, **kwargs) return boundmethod So when you write method = staticmethod(method), you are actually creating a new descriptor whose job it is to return the original function unchanged, and storing this descriptor in the class's "method" attribute. If that seems like a lot of work to go to just to get the original function back—you’re right, it is. 
But since normal method calls are the default case, static methods and class methods need to be implemented separately, and descriptors give a way of enabling these and other complex behaviours with one simple API. As others have already pointed out, the decorator syntax introduced in Python 2.4 gives a more convenient way of declaring static methods, but it is just a syntactic convenience, and doesn't change anything of how static methods work. See http://www.python.org/doc/2.2.3/whatsnew/sect-rellinks.html and http://users.rcn.com/python/download/Descriptor.htm for more details on the new-style classes and descriptors. A: Guido has always been wary of adding new constructs to the language. When static methods were proposed, it was showed that you could already do it (there is a staticmethod() decorator since 2.2), you just didn't have the syntactic sugar for it. If you read the PEP you can see all the discussion that goes into adding something. I, for one, like that approach. It reminds me of C in that there are no unnecessary keywords. When the new syntax for decorators where added to Python 2.4, you could use the existing decorators with a cleaner syntax. Anyway, they aren't so different if you have to maintain an old system. #>2.4 class MyClass(object): @staticmethod def mystaticmethod(params) pass return #<2.4 class MyClass(object): def mystaticmethod(params) '''Static Method''' pass return staticmethod(mystaticmethod) I'd recommend you add a comment or docstring to the static method screaming that is a static method. A: Starting in Python 2.4, one can also use a decorator as in: @staticmethod def mystaticethod(params) ... return But I do not have any insight as to the genesis of this feature, as implemented orginially, in the language. But then again, I'm not Dutch :-) Do see Michael E's response in this post, regarding the late arrival of static methods in the evolution of Python. BTW, for all their simplicity, as @MyDeco someObject is merely "syntactic sugar" for MyDeco(someObject) decorators can be used for many other cool things! A: The static method situation in Python is a rather direct consequence of the design decisions of first-class everything and everything is an executable statement. As others have stated, staticmethod only became available with new semantics allowed by the Python 2.2 descriptor protocol and made syntactically sweeter by function decorators in Python 2.4. There's a simple reason why static methods have gotten so little attention - they don't extend the power of the language in any way and make syntax only slightly better. Semantically they are the equivalent of plain old functions. That's why they were only implemented when the power of the language grew enough to make them implementable in terms of other language features. A: Guido writes the blog The History of Python. I think there is a way to contact him with request to expand on this particular topic. A: Perhaps, the design did not initially think static method is needed when function can be used. Since python didn't have data hiding so there is really no need static methods rather than using classes as name spaces.
How was the syntax chosen for static methods in Python?
I've been working with Python for a while and I find the syntax for declaring methods as static to be peculiar. A regular method would be declared: def mymethod(self, params) ... return A static method is declared: def mystaticethod(params) ... return mystaticmethod = staticmethod(mystaticmethod) If you don't add the staticmethod line, the compiler complains about self missing. This is a very complex way of doing something very simple that other languages handle with a keyword and a declaration grammar. Can anyone tell me about the evolution of this syntax? Is this merely because classes were added into the existing language? Since I can move the staticmethod line to later in the class, it also suggests that the parser is working extra hard on bookkeeping. Note that I'm aware of the decorator syntax that was added later; I'm interested in how the original syntax came about from a language design perspective. The only thing I can think of is that the staticmethod application invokes an operation that transforms the function object into a static method.
[ "Static methods were added to Python long after classes were (classes were added very early on, possibly even before 1.0; static methods didn't show up until sometime about 2.0). They were implemented as a modification of normal methods — you create a static method object from a function to get a static method, whereas the compiler generates instance methods by default.\nAs with many things in Python, static methods got introduced and then refined as people used them and wanted better syntax. The initial round was a way of introducing the semantics without adding new syntax to the language (and Python is quite resistant to syntax changes). I'm not Guido, so I'm not exactly sure what was going on in his head and this is somewhat speculative, but Python tends to move slowly, develop incrementally, and refine things as they gain more experience with them (in particular, they don't like adding something until they've figured out the right way to do it. This could have been why there wasn't special syntax for static methods from the beginning).\nAs mjv indicated, though, there is an easier way now, through some syntax sugar added in 2.2 or 2.3 called \"decorators\":\n@staticmethod\ndef mystaticmethod(params)\n ...\n return\n\nThe @staticmethod syntax is sugar for putting mystaticmethod = staticmethod(mystaticmethod) after the method definition.\n", "voyager and adurdin do a good job, between them, of explaining what happened: with the introduction of new-style classes and descriptors in Python 2.2, new and deep semantic possibilities arose -- and the most obviously useful examples (static methods, class methods, properties) were supported by built-in descriptor types without any new syntax (the syntax @foo for decorators was added a couple releases later, once the new descriptors had amply proven their real-world usefulness). I'm not really qualified to channel Guido (where's Tim Peters when you need him!-), but I was already a Python committer at the time and participated in these developments, and I can confirm that's indeed what happened.\nvoyager's observation on this reminding him of C is right on-target: I've long claimed that Python captures more of the \"Spirit of C\" than any of the languages who have mimicked the syntax of C (braces, parentheses after if/while, etc). \"The spirit of C\" is actually described in the (non-normative) Rationale part of the ISO C Standard, and comprises five principles (none of which requires braces!-) of which I claim Python matches 4.5 (there are several videos on the web of presentations of mine on \"Python for Programmers\" where I cover this if you're curious).\nIn particular, the Spirit of C's \"provide only one way to do an operation\" matches the Zen of Python's \"There should be one-- and preferably only one --obvious way to do it\" -- and C and Python, I believe, are the only two widespread languages to explicitly adopt such a design ideal of uniformity and non-redundancy (it's an ideal, and can't be sensibly reached 100% -- e.g. if a and b are integers, a+b and b+a had BETTER be two identically-obvious ways to get their sum!-) -- but it's a goal to aim for!-).\n", "Static methods in python date back to the introduction of the so-called “new-style classes” in Python 2.2. 
Previous to this, methods on classes were just ordinary functions, stored as attributes on the class:\nclass OldStyleClass:\n def method(self):\n print \"'self' is just the first argument of this function\"\n\ninstance = OldStyleClass()\nOldStyleClass.method(instance) # Just an ordinary function call\nprint repr(OldStyleClass.method) # \"unbound method...\"\n\nMethod calls on instances were specially handled to automatically bind the instance to the first argument of the function:\ninstance.method() # 'instance' is automatically passed in as the first parameter\nprint repr(instance.method) # \"bound method...\"\n\nIn Python 2.2, much of the class system was rethought and reengineered as “new-style classes”—classes that inherit from object. One of the features of new-style classes was “descriptors”, essentially an object in a class that is responsible for describing, getting, and setting the class's attributes. A descriptor has a __get__ method, that gets passed the class and the instance, and should return the requested attribute of the class or instance.\nDescriptors made it possible to use a single API to implement complex behaviour for class attributes, like properties, class methods, and static methods. For example, the staticmethod descriptor could be implemented like this:\nclass staticmethod(object):\n \"\"\"Create a static method from a function.\"\"\"\n\n def __init__(self, func):\n self.func = func\n\n def __get__(self, instance, cls=None):\n return self.func\n\nCompare this with a hypothetical pure-python descriptor for an ordinary method, which is used by default for all plain functions in the classes attributes (this is not exactly what happens with method lookup from an instance, but it does handle the automatic 'self' argument):\nclass method(object):\n \"\"\"Create a method from a function--it will get the instance\n passed in as its first argument.\"\"\"\n\n def __init__(self, func):\n self.func = func\n\n def __get__(self, instance, cls=None):\n # Create a wrapper function that passes the instance as first argument\n # to the original function\n def boundmethod(*args, **kwargs):\n return self.func(self, *args, **kwargs)\n return boundmethod\n\nSo when you write method = staticmethod(method), you are actually creating a new descriptor whose job it is to return the original function unchanged, and storing this descriptor in the class's \"method\" attribute.\nIf that seems like a lot of work to go to just to get the original function back—you’re right, it is. But since normal method calls are the default case, static methods and class methods need to be implemented separately, and descriptors give a way of enabling these and other complex behaviours with one simple API.\nAs others have already pointed out, the decorator syntax introduced in Python 2.4 gives a more convenient way of declaring static methods, but it is just a syntactic convenience, and doesn't change anything of how static methods work.\nSee http://www.python.org/doc/2.2.3/whatsnew/sect-rellinks.html and http://users.rcn.com/python/download/Descriptor.htm for more details on the new-style classes and descriptors.\n", "Guido has always been wary of adding new constructs to the language. When static methods were proposed, it was showed that you could already do it (there is a staticmethod() decorator since 2.2), you just didn't have the syntactic sugar for it.\nIf you read the PEP you can see all the discussion that goes into adding something. I, for one, like that approach. 
It reminds me of C in that there are no unnecessary keywords.\nWhen the new syntax for decorators where added to Python 2.4, you could use the existing decorators with a cleaner syntax.\nAnyway, they aren't so different if you have to maintain an old system.\n#>2.4\nclass MyClass(object):\n @staticmethod\n def mystaticmethod(params)\n pass\n return\n\n#<2.4\nclass MyClass(object):\n\n def mystaticmethod(params)\n '''Static Method'''\n pass\n return\n staticmethod(mystaticmethod)\n\nI'd recommend you add a comment or docstring to the static method screaming that is a static method.\n", "Starting in Python 2.4, one can also use a decorator as in:\n @staticmethod\n def mystaticethod(params)\n ...\n return\n\nBut I do not have any insight as to the genesis of this feature, as implemented orginially, in the language. But then again, I'm not Dutch :-) Do see Michael E's response in this post, regarding the late arrival of static methods in the evolution of Python.\nBTW, for all their simplicity, as \n\n @MyDeco\n someObject\n\n is merely \"syntactic sugar\" for\n\n MyDeco(someObject)\n\ndecorators can be used for many other cool things!\n", "The static method situation in Python is a rather direct consequence of the design decisions of first-class everything and everything is an executable statement. As others have stated, staticmethod only became available with new semantics allowed by the Python 2.2 descriptor protocol and made syntactically sweeter by function decorators in Python 2.4. There's a simple reason why static methods have gotten so little attention - they don't extend the power of the language in any way and make syntax only slightly better. Semantically they are the equivalent of plain old functions. That's why they were only implemented when the power of the language grew enough to make them implementable in terms of other language features.\n", "Guido writes the blog The History of Python. I think there is a way to contact him with request to expand on this particular topic.\n", "Perhaps, the design did not initially think static method is needed when function can be used. Since python didn't have data hiding so there is really no need static methods rather than using classes as name spaces.\n" ]
[ 12, 11, 4, 3, 2, 2, 1, 0 ]
[]
[]
[ "language_history", "python", "static", "syntax" ]
stackoverflow_0001477545_language_history_python_static_syntax.txt
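A short sketch tying the history in the entry above together: the decorator spelling added in Python 2.4 and the older rebinding spelling produce the same staticmethod descriptor, so they are interchangeable. The class and method names are made up for illustration.

    class GreeterOld(object):
        def shout(text):                  # note: no `self`
            return text.upper()
        shout = staticmethod(shout)       # pre-2.4 spelling

    class GreeterNew(object):
        @staticmethod                     # 2.4+ syntactic sugar for the same call
        def shout(text):
            return text.upper()

    assert GreeterOld.shout("hi") == GreeterNew.shout("hi") == "HI"
    # both classes store an identical kind of descriptor in their __dict__
    assert type(GreeterOld.__dict__["shout"]) is staticmethod
    assert type(GreeterNew.__dict__["shout"]) is staticmethod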
Q: To do RegEx, what are the advantages/disadvantages to use UTF-8 string instead of unicode? Usually, the best practice in python, when using international languages, is to use unicode and to convert early any input to unicode and to convert late to a string encoding (UTF-8 most of the times). But when I need to do RegEx on unicode I don't find the process really friendly. For example, if I need to find the 'é' character follow by one ore more spaces I have to write (Note: my shell or python file are set to UTF-8): re.match('(?u)\xe9\s+', unicode) So I have to write the unicode code of 'é'. That's not really convenient and if I need to built the RegEx from a variable, things start to come ugly. Example: word_to_match = 'Élisa™'.decode('utf-8') # that return a unicode object regex = '(?u)%s\s+' % word_to_match re.match(regex, unicode) And this is a simple example. So if you have a lot of Regexs to do one after another with special characters in it, I found more easy and natural to do the RegEx on a string encoded in UTF-8. Example: re.match('Élisa\s+', string) re.match('Geneviève\s+', string) re.match('DrØshtit\s+', string) Is there's something I'm missing ? What are the drawbacks of the UTF-8 approach ? UPDATE Ok, I find the problem. I was doing my tests in ipython but unfortunately it seems to mess the encoding. Example: In the python shell >>> string_utf8 = 'Test « with theses » quotes Éléments' >>> string_utf8 'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments' >>> print string_utf8 Test « with theses » quotes Éléments >>> >>> unicode_string = u'Test « with theses » quotes Éléments' >>> unicode_string u'Test \xab with theses \xbb quotes \xc9l\xe9ments' >>> print unicode_string Test « with theses » quotes Éléments >>> >>> unicode_decoded_from_utf8 = string_utf8.decode('utf-8') >>> unicode_decoded_from_utf8 u'Test \xab with theses \xbb quotes \xc9l\xe9ments' >>> print unicode_decoded_from_utf8 Test « with theses » quotes Éléments In ipython In [1]: string_utf8 = 'Test « with theses » quotes Éléments' In [2]: string_utf8 Out[2]: 'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments' In [3]: print string_utf8 Test « with theses » quotes Éléments In [4]: unicode_string = u'Test « with theses » quotes Éléments' In [5]: unicode_string Out[5]: u'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments' In [6]: print unicode_string Test « with theses » quotes Ãléments In [7]: unicode_decoded_from_utf8 = string_utf8.decode('utf-8') In [8]: unicode_decoded_from_utf8 Out[8]: u'Test \xab with theses \xbb quotes \xc9l\xe9ments' In [9]: print unicode_decoded_from_utf8 Test « with theses » quotes Éléments As you can see, ipython is messing with encoding when using the u'' notation. That was the source of my problems. The bug is mentionned here: https://bugs.launchpad.net/ipython/+bug/339642 A: If you're using utf-8 in your python source, you can just write: u'Élisa' and that would be a unicode string, equivalent to writing: u'\xc9lisa' So the 'u' prefix makes the decode thing unneeded. If you leave out the 'u' and write: 'Élisa' Then you'd have a (utf-8 encoded) bytestring, equivalent to: '\xc3\x89lisa' A: You're using Python 2.x? If so, it's generally considered rather bad form to leave your non-ASCII characters in byte strings. Just use Unicode strings the whole way through: re.match(u'Élisa™\\s+', unicodestring) It may look a bit funny writing ‘u’ at the start of your string literals, but that goes away in Python 3.x, and it's really not that bad. 
Matching UTF-8 strings with regex works for a limited subset of expressions. But if you want to use case-insensitive matches, or non-ASCII characters in a [group], or length-sensitive expressions, it'll go wrong. Best stick with Unicode. (You probably don't especially need the (?u) if you are only using \s, that only brings in some of the more unusual spaces that you may not want to match anyway. Useful for case-insensitive matching on Unicode strings though.)
To do RegEx, what are the advantages/disadvantages of using a UTF-8 string instead of unicode?
Usually, the best practice in python, when using international languages, is to use unicode and to convert early any input to unicode and to convert late to a string encoding (UTF-8 most of the times). But when I need to do RegEx on unicode I don't find the process really friendly. For example, if I need to find the 'é' character follow by one ore more spaces I have to write (Note: my shell or python file are set to UTF-8): re.match('(?u)\xe9\s+', unicode) So I have to write the unicode code of 'é'. That's not really convenient and if I need to built the RegEx from a variable, things start to come ugly. Example: word_to_match = 'Élisa™'.decode('utf-8') # that return a unicode object regex = '(?u)%s\s+' % word_to_match re.match(regex, unicode) And this is a simple example. So if you have a lot of Regexs to do one after another with special characters in it, I found more easy and natural to do the RegEx on a string encoded in UTF-8. Example: re.match('Élisa\s+', string) re.match('Geneviève\s+', string) re.match('DrØshtit\s+', string) Is there's something I'm missing ? What are the drawbacks of the UTF-8 approach ? UPDATE Ok, I find the problem. I was doing my tests in ipython but unfortunately it seems to mess the encoding. Example: In the python shell >>> string_utf8 = 'Test « with theses » quotes Éléments' >>> string_utf8 'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments' >>> print string_utf8 Test « with theses » quotes Éléments >>> >>> unicode_string = u'Test « with theses » quotes Éléments' >>> unicode_string u'Test \xab with theses \xbb quotes \xc9l\xe9ments' >>> print unicode_string Test « with theses » quotes Éléments >>> >>> unicode_decoded_from_utf8 = string_utf8.decode('utf-8') >>> unicode_decoded_from_utf8 u'Test \xab with theses \xbb quotes \xc9l\xe9ments' >>> print unicode_decoded_from_utf8 Test « with theses » quotes Éléments In ipython In [1]: string_utf8 = 'Test « with theses » quotes Éléments' In [2]: string_utf8 Out[2]: 'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments' In [3]: print string_utf8 Test « with theses » quotes Éléments In [4]: unicode_string = u'Test « with theses » quotes Éléments' In [5]: unicode_string Out[5]: u'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments' In [6]: print unicode_string Test « with theses » quotes Ãléments In [7]: unicode_decoded_from_utf8 = string_utf8.decode('utf-8') In [8]: unicode_decoded_from_utf8 Out[8]: u'Test \xab with theses \xbb quotes \xc9l\xe9ments' In [9]: print unicode_decoded_from_utf8 Test « with theses » quotes Éléments As you can see, ipython is messing with encoding when using the u'' notation. That was the source of my problems. The bug is mentionned here: https://bugs.launchpad.net/ipython/+bug/339642
[ "If you're using utf-8 in your python source, you can just write:\nu'Élisa'\n\nand that would be a unicode string, equivalent to writing:\nu'\\xc9lisa'\n\nSo the 'u' prefix makes the decode thing unneeded. If you leave out the 'u' and write:\n'Élisa'\n\nThen you'd have a (utf-8 encoded) bytestring, equivalent to:\n'\\xc3\\x89lisa'\n\n", "You're using Python 2.x? If so, it's generally considered rather bad form to leave your non-ASCII characters in byte strings. Just use Unicode strings the whole way through:\nre.match(u'Élisa™\\\\s+', unicodestring)\n\nIt may look a bit funny writing ‘u’ at the start of your string literals, but that goes away in Python 3.x, and it's really not that bad.\nMatching UTF-8 strings with regex works for a limited subset of expressions. But if you want to use case-insensitive matches, or non-ASCII characters in a [group], or length-sensitive expressions, it'll go wrong. Best stick with Unicode.\n(You probably don't especially need the (?u) if you are only using \\s, that only brings in some of the more unusual spaces that you may not want to match anyway. Useful for case-insensitive matching on Unicode strings though.)\n" ]
[ 3, 3 ]
[]
[]
[ "python", "regex", "unicode", "utf_8" ]
stackoverflow_0001478178_python_regex_unicode_utf_8.txt
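A small sketch of the all-unicode approach recommended in the answers above, building the pattern from a variable without writing escape codes by hand. It assumes the source file is saved as UTF-8 (hence the coding declaration), and the words being matched are only examples; re.escape is used so that characters like ™ in the word cannot be misread as regex syntax.

    # -*- coding: utf-8 -*-
    import re

    word = u'Élisa'                       # already a unicode literal; no .decode() needed
    pattern = re.compile(re.escape(word) + u'\\s+', re.UNICODE)

    print(pattern.match(u'Élisa   chante') is not None)    # True
    print(pattern.match(u'Geneviève chante') is not None)  # False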
Q: How to import bookmarks from users' web browsers using Python? I am working on an RSS reader type program and I would like it to be able to automatically import RSS feeds from the user's browser bookmarks. I assume different browsers use different methods to store bookmarks. Is there any library out there just for this purpose? I only need it to work on Linux, so I don't care about Windows- or Mac-only browsers. A: Take a look at: XBEL
How to import bookmarks from users' web browsers using Python?
I am working on an RSS reader type program and I would like it to be able to automatically import RSS feeds from the user's browser bookmarks. I assume different browsers use different methods to store bookmarks. Is there any library out there just for this purpose? I only need it to work on Linux, so I don't care about Windows- or Mac-only browsers.
[ "Take a look at: XBEL\n" ]
[ 1 ]
[]
[]
[ "browser", "linux", "python", "xbel" ]
stackoverflow_0001478375_browser_linux_python_xbel.txt
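Since the only answer in the entry above points at XBEL without showing it, here is a hedged sketch of pulling bookmark titles and URLs out of an XBEL file with the standard library (Python 2.7+/3, where Element.iter is available). The file path is a placeholder; real browsers keep bookmarks in application-specific locations that may first need exporting or converting to XBEL.

    import xml.etree.ElementTree as ET

    def xbel_bookmarks(path):
        """Yield (title, href) pairs from an XBEL bookmark file."""
        tree = ET.parse(path)
        # <bookmark href="..."> elements can be nested inside <folder> elements,
        # so walk the whole tree rather than only the top level.
        for node in tree.getroot().iter('bookmark'):
            yield node.findtext('title', default=''), node.get('href')

    for title, href in xbel_bookmarks('bookmarks.xbel'):
        print(title, href)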
Q: Why my python does not see pysqlite? I would like to have an interface between Python and sqlite. Both are installed on the machine. I had an old version of Python (2.4.3). So, pysqlite was not included by default. First, I tried to solve this problem by installing pysqlite but I did not succeed in this direction. My second attempt to solve the problem was to install a new version of Python. I do not have the root permissions on the machine. So, I installed it locally. The new version of Python is (2.6.2). As far as I know this version should contain pysqlite by default (and now it is called "sqlite3", not "pysqlite2", as before). However, if I type: from sqlite3 import * I get: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/verrtex/opt/lib/python2.6/sqlite3/__init__.py", line 24, in <module> from dbapi2 import * File "/home/verrtex/opt/lib/python2.6/sqlite3/dbapi2.py", line 27, in <module> from _sqlite3 import * ImportError: No module named _sqlite3 It has to be noted, that the above error message is different from those which I get if I type "from blablabla import *": Traceback (most recent call last): File "", line 1, in ImportError: No module named blablabla So, python see something related with pysqlite but still has some problems. Can anybody help me, pleas, with that issue? P.S. I use CentOS release 5.3 (Final). A: On Windows, _sqlite3.pyd resides in C:\Python26\DLLs. On *nix, it should be under a path similar to /usr/lib/python2.6/lib-dynload/_sqlite3.so. Chances are that either you are missing that shared library or your PYTHONPATH is set up incorrectly. Since you said you did not install as a superuser, it's probably a malformed path; you can manually have Python search a path for _sqlite3.so by doing import sys sys.path.append("/path/to/my/libs") but the preferred approach would probably be to change PYTHONPATH in your .bashrc or other login file. A: You have a "slite3.py" (actually its equivalent for a package, sqlite3/__init__.py, so import sqlite3 per se is fine, BUT that module in turns tries to import _sqlite3 and fails, so it's not finding _sqlite3.so. It should be in python2.6/lib-dynload under your local Python root, AND ld should be instructed that it has permission to load dynamic libraries from that directory as well (typically by setting appropriate environment variables e.g. in your .bashrc). Do you have that lib-dynload directory? What's in it? What environment variables do you have which contain the string LD (uppercase), i.e. env|grep LD at your shell prompt?
Why does my Python not see pysqlite?
I would like to have an interface between Python and sqlite. Both are installed on the machine. I had an old version of Python (2.4.3). So, pysqlite was not included by default. First, I tried to solve this problem by installing pysqlite but I did not succeed in this direction. My second attempt to solve the problem was to install a new version of Python. I do not have root permissions on the machine. So, I installed it locally. The new version of Python is (2.6.2). As far as I know this version should contain pysqlite by default (and now it is called "sqlite3", not "pysqlite2", as before). However, if I type: from sqlite3 import * I get: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/verrtex/opt/lib/python2.6/sqlite3/__init__.py", line 24, in <module> from dbapi2 import * File "/home/verrtex/opt/lib/python2.6/sqlite3/dbapi2.py", line 27, in <module> from _sqlite3 import * ImportError: No module named _sqlite3 It has to be noted that the above error message is different from the one I get if I type "from blablabla import *": Traceback (most recent call last): File "", line 1, in ImportError: No module named blablabla So, Python sees something related to pysqlite but still has some problems. Can anybody help me, please, with that issue? P.S. I use CentOS release 5.3 (Final).
[ "On Windows, _sqlite3.pyd resides in C:\\Python26\\DLLs. On *nix, it should be under a path similar to /usr/lib/python2.6/lib-dynload/_sqlite3.so. Chances are that either you are missing that shared library or your PYTHONPATH is set up incorrectly. \nSince you said you did not install as a superuser, it's probably a malformed path; you can manually have Python search a path for _sqlite3.so by doing\nimport sys\nsys.path.append(\"/path/to/my/libs\")\n\nbut the preferred approach would probably be to change PYTHONPATH in your .bashrc or other login file.\n", "You have a \"slite3.py\" (actually its equivalent for a package, sqlite3/__init__.py, so import sqlite3 per se is fine, BUT that module in turns tries to import _sqlite3 and fails, so it's not finding _sqlite3.so. It should be in python2.6/lib-dynload under your local Python root, AND ld should be instructed that it has permission to load dynamic libraries from that directory as well (typically by setting appropriate environment variables e.g. in your .bashrc). Do you have that lib-dynload directory? What's in it? What environment variables do you have which contain the string LD (uppercase), i.e. env|grep LD at your shell prompt?\n" ]
[ 1, 1 ]
[]
[]
[ "pysqlite", "python", "sqlite" ]
stackoverflow_0001478479_pysqlite_python_sqlite.txt
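A quick diagnostic sketch related to the entry above: it confirms which interpreter is actually running and searches sys.path for the compiled _sqlite3 extension, which is what the failing import is looking for. The file name and directories are examples; on a locally built 2.6 the module should normally live under lib-dynload.

    import os
    import sys

    print(sys.executable)   # make sure this really is the locally installed 2.6
    print(sys.version)

    found = False
    for d in sys.path:
        candidate = os.path.join(d, '_sqlite3.so')
        if os.path.isfile(candidate):
            print('found compiled module:', candidate)
            found = True
            break
    if not found:
        print('_sqlite3.so is not on sys.path; check lib-dynload and the LD environment')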
Q: for line in open(filename) I frequently see python code similar to for line in open(filename): do_something(line) When does filename get closed with this code? Would it be better to write with open(filename) as f: for line in f.readlines(): do_something(line) A: filename would be closed when it falls out of scope. That normally would be the end of the method. Yes, it's better to use with. Once you have a file object, you perform all file I/O by calling methods of this object. [...] When you are done with the file, you should finish by calling the close method on the object, to close the connection to the file: input.close() In short scripts, people often omit this step, as Python automatically closes the file when a file object is reclaimed during garbage collection (which in mainstream Python means the file is closed just about at once, although other important Python implementations, such as Jython and IronPython, have other, more relaxed garbage collection strategies). Nevertheless, it is good programming practice to close your files as soon as possible, and it is especially a good idea in larger programs, which otherwise may be at more risk of having excessive numbers of uselessly open files lying about. Note that try/finally is particularly well suited to ensuing that a file gets closed, even when a function terminates due to an uncaught exception. Python Cookbook, Page 59. A: Drop .readlines(). It is redundant and undesirable for large files (due to memory consumption). The variant with 'with' block always closes file. with open(filename) as file_: for line in file_: do_something(line) When file will be closed in the bare 'for'-loop variant depends on Python implementation. A: The with part is better because it close the file afterwards. You don't even have to use readlines(). for line in file is enough. I don't think the first one closes it. A: python is garbage-collected - cpython has reference counting and a backup cycle detecting garbage collector. File objects close their file handle when the are deleted/finalized. Thus the file will be eventually closed, and in cpython will closed as soon as the for loop finishes.
for line in open(filename)
I frequently see Python code similar to for line in open(filename): do_something(line) When does the file get closed with this code? Would it be better to write with open(filename) as f: for line in f.readlines(): do_something(line)
[ "filename would be closed when it falls out of scope. That normally would be the end of the method.\nYes, it's better to use with.\n\nOnce you have a file object, you perform all file I/O by calling methods of this object. [...] When you are done with the file, you should finish by calling the close method on the object, to close the connection to the file:\ninput.close()\n\nIn short scripts, people often omit this step, as Python automatically closes the file when a file object is reclaimed during garbage collection (which in mainstream Python means the file is closed just about at once, although other important Python implementations, such as Jython and IronPython, have other, more relaxed garbage collection strategies). Nevertheless, it is good programming practice to close your files as soon as possible, and it is especially a good idea in larger programs, which otherwise may be at more risk of having excessive numbers of uselessly open files lying about. Note that try/finally is particularly well suited to ensuing that a file gets closed, even when a function terminates due to an uncaught exception.\n\nPython Cookbook, Page 59.\n", "Drop .readlines(). It is redundant and undesirable for large files (due to memory consumption). The variant with 'with' block always closes file. \nwith open(filename) as file_:\n for line in file_:\n do_something(line)\n\nWhen file will be closed in the bare 'for'-loop variant depends on Python implementation.\n", "The with part is better because it close the file afterwards.\nYou don't even have to use readlines(). for line in file is enough.\nI don't think the first one closes it.\n", "python is garbage-collected - cpython has reference counting and a backup cycle detecting garbage collector.\nFile objects close their file handle when the are deleted/finalized. \nThus the file will be eventually closed, and in cpython will closed as soon as the for loop finishes. \n" ]
[ 40, 9, 8, 3 ]
[]
[]
[ "file", "garbage_collection", "python" ]
stackoverflow_0001478697_file_garbage_collection_python.txt
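A compact sketch contrasting the two idioms from the entry above; the file name and the do_something helper are placeholders. The with form closes the file deterministically even if processing raises, and iterating the file object directly avoids the memory cost of readlines().

    def do_something(line):
        print(line.rstrip())

    # relies on garbage collection to close the file (prompt in CPython,
    # less predictable in Jython/IronPython/PyPy):
    for line in open('data.txt'):
        do_something(line)

    # closes the file as soon as the block exits, even on exceptions:
    with open('data.txt') as f:
        for line in f:              # streams lines; no readlines() needed
            do_something(line)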
Q: Python: "1-2-3-4" to [1, 2, 3, 4] What is the best way to convert a string on the format "1-2-3-4" to a list [1, 2, 3, 4]? The string may also be empty, in which case the conversion should return an empty list []. This is what I have: map(lambda x: int(x), filter(lambda x: x != '', "1-2-3-4".split('-'))) EDIT: Sorry all of those who answered before I corrected my question, it was unclear for the first minute or so. A: You can use a list comprehension to make it shorter. Use the if to account for the empty string. the_string = '1-2-3-4' [int(x) for x in the_string.split('-') if x != ''] A: >>> for s in ["", "0", "-0-0", "1-2-3-4"]: ... print(map(int, filter(None, s.split('-')))) ... [] [0] [0, 0] [1, 2, 3, 4] A: Convert the higher-order functions to a more readable list-comprehension [ int(n) for n in "1-2-3-4".split('-') if n != '' ] The rest is fine. A: From the format of your example, you want int's in the list. If so, then you will need to convert the string numbers to int's. If not, then you are done after the string split. text="1-2-3-4" numlist=[int(ith) for ith in text.split('-')] print numlist [1, 2, 3, 4] textlist=text.split('-') print textlist ['1', '2', '3', '4'] EDIT: Revising my answer to reflect the update in the question. If the list can be malformed then "try...catch" if your friend. This will enforce that the list is either well formed, or you get an empty list. >>> def convert(input): ... try: ... templist=[int(ith) for ith in input.split('-')] ... except: ... templist=[] ... return templist ... >>> convert('1-2-3-4') [1, 2, 3, 4] >>> convert('') [] >>> convert('----1-2--3--4---') [] >>> convert('Explicit is better than implicit.') [] >>> convert('1-1 = 0') [] A: I'd go with this: >>> the_string = '1-2-3-4- -5- 6-' >>> >>> [int(x.strip()) for x in the_string.split('-') if len(x)] [1, 2, 3, 4, 5, 6] A: def convert(s): if s: return map(int, s.split("-")) else: return [] A: you don't need the lambda, and split won't give you empty elements: map(int, filter(None,x.split("-")))
Python: "1-2-3-4" to [1, 2, 3, 4]
What is the best way to convert a string in the format "1-2-3-4" to a list [1, 2, 3, 4]? The string may also be empty, in which case the conversion should return an empty list []. This is what I have: map(lambda x: int(x), filter(lambda x: x != '', "1-2-3-4".split('-'))) EDIT: Sorry to all of those who answered before I corrected my question; it was unclear for the first minute or so.
[ "You can use a list comprehension to make it shorter. Use the if to account for the empty string.\nthe_string = '1-2-3-4'\n\n[int(x) for x in the_string.split('-') if x != '']\n\n", ">>> for s in [\"\", \"0\", \"-0-0\", \"1-2-3-4\"]:\n... print(map(int, filter(None, s.split('-'))))\n... \n[]\n[0]\n[0, 0]\n[1, 2, 3, 4]\n\n", "Convert the higher-order functions to a more readable list-comprehension\n[ int(n) for n in \"1-2-3-4\".split('-') if n != '' ]\n\nThe rest is fine.\n", "From the format of your example, you want int's in the list. If so, then you will need to convert the string numbers to int's. If not, then you are done after the string split.\ntext=\"1-2-3-4\"\n\nnumlist=[int(ith) for ith in text.split('-')]\nprint numlist\n[1, 2, 3, 4]\n\ntextlist=text.split('-')\nprint textlist\n['1', '2', '3', '4']\n\nEDIT: Revising my answer to reflect the update in the question.\nIf the list can be malformed then \"try...catch\" if your friend. This will enforce that the list is either well formed, or you get an empty list. \n>>> def convert(input):\n... try:\n... templist=[int(ith) for ith in input.split('-')]\n... except:\n... templist=[]\n... return templist\n... \n>>> convert('1-2-3-4')\n[1, 2, 3, 4]\n>>> convert('')\n[]\n>>> convert('----1-2--3--4---')\n[]\n>>> convert('Explicit is better than implicit.')\n[]\n>>> convert('1-1 = 0')\n[]\n\n", "I'd go with this:\n>>> the_string = '1-2-3-4- -5- 6-'\n>>>\n>>> [int(x.strip()) for x in the_string.split('-') if len(x)]\n[1, 2, 3, 4, 5, 6]\n\n", "def convert(s):\n if s:\n return map(int, s.split(\"-\"))\n else:\n return []\n\n", "you don't need the lambda, and split won't give you empty elements:\nmap(int, filter(None,x.split(\"-\")))\n\n" ]
[ 12, 8, 4, 2, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001478908_python.txt
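A minimal consolidated sketch of the approach the answers above converge on (a list comprehension with empty pieces filtered out). The function name dash_string_to_ints is illustrative only and does not come from the original thread:

def dash_string_to_ints(s):
    # Split on '-' and keep only non-empty pieces, so an empty string
    # (or stray separators such as "1--2-") still yields a list of ints.
    return [int(piece) for piece in s.split('-') if piece]

# Assumed usage:
#   dash_string_to_ints("1-2-3-4")  -> [1, 2, 3, 4]
#   dash_string_to_ints("")         -> []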
Q: Build failure during install py25-gtk on Mac OS X 10.6 using MacPorts 1.8 When I do this command :
sudo port clean py25-gtk
sudo port install py25-gtk

I get this error :
---> Computing dependencies for py25-gtk
---> Building getopt
Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_sysutils_getopt/work/getopt-1.1.4" && /usr/bin/make -j2 all LIBCGETOPT=0 prefix=/opt/local mandir=/opt/local/share/man CC=/usr/bin/gcc-4.2 " returned error 2
Command output: _print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_parse_error in getopt.o
_our_realloc in getopt.o
_our_malloc in getopt.o
_set_shell in getopt.o
_set_shell in getopt.o
_add_longopt in getopt.o
_add_long_options in getopt.o
_add_long_options in getopt.o
_normalize in getopt.o
_main in getopt.o
_main in getopt.o
_main in getopt.o
_main in getopt.o
_main in getopt.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make: *** [getopt] Error 1

Error: The following dependencies failed to build: atk gtk-doc gnome-doc-utils rarian getopt intltool gnome-common p5-pathtools p5-scalar-list-utils gtk2 cairo libpixman pango shared-mime-info xorg-libXcursor xorg-libXrandr libglade2 py25-cairo py25-numpy fftw-3 py25-nose py25-gobject
Error: Status 1 encountered during processing.

For information getopt isn't installed with macports, it's in /usr/bin/getopt
A: The solution is to reinstall all ports because I upgraded to a new OS version (10.5 -> 10.6).
To reinstall your ports, save the list of your installed ports:
port installed > myports.txt

Clean any partially completed builds, and uninstall all installed ports:
sudo port clean installed
sudo port -f uninstall installed

Browse myports.txt and install the ports that you actually want to use (as opposed to those that are only needed as dependencies) one by one, remembering to specify the appropriate variants:
sudo port install portname +variant1 +variant2 ...

To resolve my problem, i can do and :
sudo port install py25-gtk

Now it's work !
Read the complete documentation to reinstall ports at http://trac.macports.org/wiki/Migration
Build failure during install py25-gtk on Mac OS X 10.6 using MacPorts 1.8
When I do this command :
sudo port clean py25-gtk
sudo port install py25-gtk

I get this error :
---> Computing dependencies for py25-gtk
---> Building getopt
Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_sysutils_getopt/work/getopt-1.1.4" && /usr/bin/make -j2 all LIBCGETOPT=0 prefix=/opt/local mandir=/opt/local/share/man CC=/usr/bin/gcc-4.2 " returned error 2
Command output: _print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_print_help in getopt.o
_parse_error in getopt.o
_our_realloc in getopt.o
_our_malloc in getopt.o
_set_shell in getopt.o
_set_shell in getopt.o
_add_longopt in getopt.o
_add_long_options in getopt.o
_add_long_options in getopt.o
_normalize in getopt.o
_main in getopt.o
_main in getopt.o
_main in getopt.o
_main in getopt.o
_main in getopt.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make: *** [getopt] Error 1

Error: The following dependencies failed to build: atk gtk-doc gnome-doc-utils rarian getopt intltool gnome-common p5-pathtools p5-scalar-list-utils gtk2 cairo libpixman pango shared-mime-info xorg-libXcursor xorg-libXrandr libglade2 py25-cairo py25-numpy fftw-3 py25-nose py25-gobject
Error: Status 1 encountered during processing.

For information getopt isn't installed with macports, it's in /usr/bin/getopt
[ "The solution is to reinstall all ports because I upgraded to a new OS version (10.5 -> 10.6).\nTo reinstall your ports, save the list of your installed ports:\nport installed > myports.txt\n\nClean any partially completed builds, and uninstall all installed ports:\nsudo port clean installed\nsudo port -f uninstall installed\n\nBrowse myports.txt and install the ports that you actually want to use (as opposed to those that are only needed as dependencies) one by one, remembering to specify the appropriate variants:\nsudo port install portname +variant1 +variant2 ...\n\nTo resolve my problem, i can do and :\nsudo port install py25-gtk\n\nNow it's work !\nRead the complete documentation to reinstall ports at http://trac.macports.org/wiki/Migration\n" ]
[ 1 ]
[]
[]
[ "getopt", "gtk", "macos", "macports", "python" ]
stackoverflow_0001478263_getopt_gtk_macos_macports_python.txt
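As an illustrative companion to the migration steps above, a small Python sketch (not part of the original answer) that turns a saved myports.txt into reinstall commands. It assumes the usual `port installed` layout of one "name @version (active)" entry per line, and variants still have to be added by hand as the answer recommends:

def reinstall_commands(path="myports.txt"):
    # Read the saved `port installed` dump and emit one install command per port.
    commands = []
    for line in open(path):
        line = line.strip()
        if not line or line.endswith(":"):  # skip the "currently installed:" header
            continue
        name = line.split()[0]
        commands.append("sudo port install %s" % name)
    return commands

if __name__ == "__main__":
    for cmd in reinstall_commands():
        print(cmd)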