Dataset schema:
content: string (85 to 101k chars)
title: string (0 to 150 chars)
question: string (15 to 48k chars)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (35 to 137 chars)
Q: paths not being consistent python django

I'm trying to import sorl-thumbnail into my Django app. The way I have the site set up, using mod_wsgi on CentOS 5 with cPanel, the import path for apps must be prefixed with the project name, which is a pain and an obvious concern for the portability of the app. In previous apps I've just added sorl.thumbnail to the installed apps and it worked, but now it causes errors unless I put the project name, www., in front of the import path. It's never done this before and I can't seem to get around the path issue. I've also tried www.sorl.thumbnail, but then the import paths inside the sorl files themselves break. Any ideas on how to remedy this or find a workaround?

A: You shouldn't need to use the project name when importing - just make sure that the apps are somewhere on your Python path. Something along the lines of:

sys.path.append('/etc/django/domains/mydomain.com/myproject/')

... in your .wsgi file should do it (with the path to your own project, of course).

Ideally, reusable apps should live outside your project directory anyway, so consider creating a folder such as '/etc/django/lib/' to contain reusable apps and appending that to sys.path in your WSGI handler too.

Or, if you don't like that, use virtualenv and add your reusable apps directly to site-packages.

Or, if you don't like that either, put your reusable apps somewhere else and symlink them into site-packages or somewhere on your Python path.

In short, just make sure the package/module you're importing is on your Python path. If you find yourself adding the project name or 'www' to a bunch of import paths, then you're probably doing something wrong.
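To make the answer's suggestion concrete, here is a minimal mod_wsgi handler sketch for Django 1.x; the paths and the settings module name are examples, not the asker's actual layout:

import os
import sys

# Make the project and a shared folder of reusable apps importable, so
# INSTALLED_APPS can say 'sorl.thumbnail' instead of 'www.sorl.thumbnail'.
sys.path.append('/etc/django/domains/mydomain.com/myproject/')
sys.path.append('/etc/django/lib/')

os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'  # adjust to your project

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()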
answers_scores: [ 3 ]
non_answers: []
non_answers_scores: []
tags: [ "django", "import", "python", "sorl_thumbnail" ]
name: stackoverflow_0001628129_django_import_python_sorl_thumbnail.txt
Q: Naming Python loggers

In Django, I've got loggers all over the place, currently with hard-coded names. For module-level logging (i.e., in a module of view functions) I have the urge to do this:

log = logging.getLogger(__name__)

For class-level logging (i.e., in a class __init__ method) I have the urge to do this:

self.log = logging.getLogger("%s.%s" % (
    self.__module__, self.__class__.__name__))

I'm looking for second opinions before I tackle several dozen occurrences of getLogger("hard.coded.name"). Will this work? Is anyone else naming their loggers in the same unimaginative way? Further, should I break down and write a class decorator for this log definition?

A: I typically don't use or find a need for class-level loggers, but I keep my modules to a few classes at most. A simple:

import logging
LOG = logging.getLogger(__name__)

at the top of the module, and a subsequent:

LOG.info('Spam and eggs are tasty!')

from anywhere in the file, typically gets me to where I want to be. This avoids the need for self.log all over the place, which tends to bother me from a put-it-in-every-class perspective, and it also keeps me 5 characters closer to 79-character lines that fit.

You could always use a pseudo-class-decorator:

>>> import logging
>>> class Foo(object):
...     def __init__(self):
...         self.log.info('Meh')
...
>>> def logged_class(cls):
...     cls.log = logging.getLogger('{0}.{1}'.format(__name__, cls.__name__))
...
>>> logged_class(Foo)
>>> logging.basicConfig(level=logging.DEBUG)
>>> f = Foo()
INFO:__main__.Foo:Meh

A: For class-level logging, as an alternative to a pseudo-class decorator, you could use a metaclass to make the logger for you at class creation time...

import logging

class Foo(object):
    class __metaclass__(type):
        def __init__(cls, name, bases, attrs):
            # type.__init__ needs the class as its first argument;
            # omitting cls here raises a TypeError.
            type.__init__(cls, name, bases, attrs)
            cls.log = logging.getLogger('%s.%s' % (attrs['__module__'], name))
    def __init__(self):
        self.log.info('here I am, a %s!' % type(self).__name__)

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    foo = Foo()

A: That looks like it will work. One nitpick: __module__ is really an attribute of the class, not the instance (instance lookup just falls back to the class), so the class-level logger call is clearer written as:

self.log = logging.getLogger("%s.%s" % (
    self.__class__.__module__, self.__class__.__name__))
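On the asker's last question: with Python 2.6+ syntax, a real class decorator is only a few lines. A minimal sketch using the same naming scheme as the answers above:

import logging

def logged(cls):
    # Attach a logger named 'module.ClassName' to the decorated class.
    cls.log = logging.getLogger('%s.%s' % (cls.__module__, cls.__name__))
    return cls

@logged
class Widget(object):
    def __init__(self):
        self.log.info('created')

logging.basicConfig(level=logging.DEBUG)
Widget()  # prints: INFO:__main__.Widget:created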
answers_scores: [ 68, 3, 2 ]
non_answers: []
non_answers_scores: []
tags: [ "django", "logging", "python" ]
name: stackoverflow_0000401277_django_logging_python.txt
Q: Is it possible to access other webpages from within another page

Basically, what I'm trying to do is make a small script that accesses a forum, finds the most recent post, and pulls some text or an image out of it. I have this working in Python, using the htmllib module and some regex. But the script still isn't very convenient as is; it would be much nicer if I could somehow put it into an HTML document. It appears that simply embedding Python scripts is not possible, so I'm looking to see if there's a feature similar to Python's htmllib that can be used to access some other webpage and extract some information from it. (Essentially, if I could get this script going in the form of an HTML document, I could just open one HTML document rather than navigate to several different pages to get the information I want to check.) I'm pretty sure that plain JavaScript doesn't have the functionality I need, but I was wondering about libraries such as jQuery, or even something like AJAX?

A: As Greg mentions, an Ajax solution will not work "out of the box" when trying to load from remote servers.

If, however, you are trying to load from the same server, it should be fairly straightforward. I'm presenting this answer to show how this could be done using jQuery in just a few lines of code.

<div id="placeholder">Please wait, loading...</div>

<script type="text/javascript" src="/path/to/jquery.js">
</script>
<script type="text/javascript">
$(document).ready(function() {
    $('#placeholder').load('/path/to/my/locally-served/page.html');
});
</script>

If you are trying to load a resource from a different server than the one you're on, one way around the security limitations would be to offer a proxy script, which could fetch the remote content on the server and make it seem like it's coming from your own domain.

Here are the docs on jQuery's load method: http://docs.jquery.com/Ajax/load

There is one other nice feature to note, which is partial page loading. For example, let's say your remote page is a full HTML document, but you only want the content of a single div in that page. You can pass a selector to the load method, as in my example above, and this will further simplify your task. For example,

$('#placeholder').load('/path/to/my/locally-served/page.html #someTargetDiv');

Best of luck! -Mike

A: There are two general approaches:

1. Modify your Python code so that it runs as a CGI (or WSGI or whatever) module, and generate the page of interest by running some server-side code.
2. Use JavaScript with jQuery to load the content of interest by running some client-side code.

The difference between these two approaches is where the third-party server sees the requests coming from. In the first case, it's from your web server. In the second case, it's from the browser of the user accessing your page.

Some browsers may not handle loading content from third-party servers very gracefully (that is, they might pop up warning boxes or something).

A: You can embed Python. The most straightforward way would be to use the cgi module. If the script will be run often and you're using Apache, it would be more efficient to use mod_python or mod_wsgi. You could even use a Python framework like Django and code the entire site in Python.

You could also code this in JavaScript, but it would be much trickier. There are a lot of security concerns with cross-site requests (ah, the unsafe internet), and so it tends to be a tricky domain when you try to do it through the browser.
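To make the server-side option concrete, here is a minimal Python 2 CGI sketch; the forum URL and the regex are placeholders, not the asker's actual ones:

#!/usr/bin/env python
import re
import urllib2

FORUM_URL = 'http://example.com/forum/latest'  # placeholder URL

def latest_post_snippet():
    html = urllib2.urlopen(FORUM_URL).read()
    # Placeholder pattern; adjust to the forum's real markup.
    match = re.search(r'<div class="post">(.*?)</div>', html, re.DOTALL)
    return match.group(1) if match else 'No post found.'

print 'Content-Type: text/html'
print
print '<html><body>%s</body></html>' % latest_post_snippet()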
answers_scores: [ 3, 1, 0 ]
non_answers: []
non_answers_scores: []
tags: [ "javascript", "jquery", "python" ]
name: stackoverflow_0001628564_javascript_jquery_python.txt
Q: python script to match C function signature in multiple lines

I am reading a .c file to look for the functions defined in it and count the number of lines in each function. My problem is that I am unable to match a function name/signature that is spanned across multiple lines. I have a list of the function names of the .c file, and I am matching the function names of this list with the functions in the .c file to process further. E.g., my .c file is:

1. int main(
2. void
3. )

Here the main signature is spanned over three lines, and my list of functions contains:

int main(void);

How can I match "int main(void)" with the multiline main in the .c file? I want to start counting lines once the function is matched.

A: I suggest you write a simple parser for the C language.

One of the examples in the ANTLR book does something similar to what you're after.

Pyparsing is a very nice Python library for writing parsers.

Here is a parser for ANSI C: http://code.google.com/p/pycparser/ (written using another Python parser library, Ply).

A: If you have a definition that matches "exactly", then you can use a regex:

int\s+main\s*\(\s*void\s*\)\s*;

where \s* means zero or more whitespace characters, and \s+ means one or more whitespace characters.

To use it with a multiline search you can define it like:

RE_MAIN = re.compile(r'int\s+main\s*\(\s*void\s*\)\s*;', re.MULTILINE | re.DOTALL)

By "exactly" I mean that it does not match a function definition like

int main();

(with void omitted).

This way you can find where the function begins; then do a simple character scan counting { and }, remembering to ignore comments and character and string constants.
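A short runnable sketch of the regex approach on the asker's example; note the trailing ';' from the answer's pattern is dropped here so it matches a definition rather than a prototype:

import re

source = """int main(
void
)
{
    return 0;
}
"""

RE_MAIN = re.compile(r'int\s+main\s*\(\s*void\s*\)', re.MULTILINE | re.DOTALL)

m = RE_MAIN.search(source)
if m:
    # Count newlines before the match to find the starting line.
    line_no = source.count('\n', 0, m.start()) + 1
    print 'main() signature starts on line', line_no  # -> 1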
answers_scores: [ 1, 0 ]
non_answers: []
non_answers_scores: []
tags: [ "python" ]
name: stackoverflow_0001629611_python.txt
Q: python: something like parametrized inheritance

I want my classes X and Y to have a method f(x) which calls a function func(x, y), so that X.f(x) always calls func(x, 1) and Y.f(x) always calls func(x, 2):

class X(object):
    def f(self, x):
        func(x, 1)

class Y(object):
    def f(self, x):
        func(x, 2)

But I want to place f in a common base class B for X and Y. How can I pass that value (1 or 2) when I inherit X and Y from B? Can I have something like this (C++-like pseudocode)?

class B(object)<y>:  # y is something like an inheritance parameter
    def f(self, x):
        func(x, y)

class X(B<1>):
    pass

class Y(B<2>):
    pass

What techniques are used in Python for such tasks?

A: Python is more flexible than you give it credit for.

I think you want something like this:

class B(object):
    def f(self, x):
        func(x, self.param)

class X(B):
    param = 1

class Y(B):
    param = 2

NB:

- note the method f has self as the first parameter.
- the param = lines are class variables.

A: You could use a class decorator (Python 2.6 and up) if you just want to add a common function to several classes (instead of using inheritance):

def addF(y):
    def f(self, x):
        return "Hello", x, "and", y

    def decorate(cls):
        cls.f = f
        return cls

    return decorate

@addF(1)
class X(object):
    pass

@addF(2)
class Y(object):
    pass

print X().f("X")
print Y().f("Y")

which prints the returned tuples:

('Hello', 'X', 'and', 1)
('Hello', 'Y', 'and', 2)
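A runnable version of the class-attribute approach, with a stand-in func (the question never shows the real one):

def func(x, y):
    # Stand-in for the question's func(x, y).
    print 'func(%r, %r)' % (x, y)

class B(object):
    param = None  # each subclass overrides this "inheritance parameter"

    def f(self, x):
        func(x, self.param)

class X(B):
    param = 1

class Y(B):
    param = 2

X().f('a')  # prints: func('a', 1)
Y().f('b')  # prints: func('b', 2)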
answers_scores: [ 6, 1 ]
non_answers: []
non_answers_scores: []
tags: [ "inheritance", "python" ]
name: stackoverflow_0001629597_inheritance_python.txt
Q: Python string formatting special characters

How do you make the following code work?

example = "%%(test)%" % {'test':'name',}
print example

where the desired output is "%name%". Thanks.

A: An alternative is to use the new Advanced String Formatting:

>>> example = "%{test}%".format(test="name")
>>> print example
%name%

A:

example = "%%%(test)s%%" % {'test':'name',}
print example

%(key)s is a placeholder for a string identified by key, and %% escapes % when using the % operator.
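Both answers in one runnable snippet (str.format needs Python 2.6 or later):

values = {'test': 'name'}

# Old-style: %% is a literal percent sign, %(test)s pulls 'test' from the dict.
print "%%%(test)s%%" % values        # -> %name%

# New-style: {test} is the placeholder, and a bare % needs no escaping.
print "%{test}%".format(**values)    # -> %name%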
answers_scores: [ 7, 5 ]
non_answers: []
non_answers_scores: []
tags: [ "python", "string_formatting" ]
name: stackoverflow_0001629755_python_string_formatting.txt
Q: Are there any known issues with django and multithreading?

I need to develop an app that runs side by side with a Django app. This will be the first time I develop a multithreaded app that runs next to a Django app, so are there any 'gotchas' and 'traps' I should be aware of?

A: Generally, your Django app already is multi-threaded. That's the way most of the standard Django servers operate: they can tolerate multiple WSGI threads sending requests to them.

Further, you'll almost always have Django running under Apache, which is also multi-threaded.

If you use mod_wsgi, then Django may be part of the Apache process or a separate process.

Anything that is running "side by side" (whatever that means) will be outside Apache, outside Django, and in a separate process.

So any multi-threading considerations don't apply between your Apache process (which contains Django) and your other process.

A: What do you mean by side by side with a Django app? Could you please elaborate a bit on what you are planning to achieve? Then helping out/answering should be easier.

Answer on OP's 1st edit:

Ah, okay. I have encountered an application which does exactly the thing you want. It's called feedjack and you can find it at http://www.feedjack.org . I had tried to do something similar. Generally, I think you would be okay with such a case (a separate process using Django's ORM to populate the DB with data). At least, I had no such problems when I was using their script along with a similar Django app of mine.

A: If you want to expose your Django app to some external software, you need to create an API for your application.

You should look at REST http://code.google.com/p/django-rest-interface/ and XML-RPC http://code.google.com/p/django-xmlrpc/

The multi-threaded nature of the external app is not a problem for Django served by a production webserver (Apache, for example), because by design Django is able to serve many requests in parallel.

I hope it helps.
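The feedjack-style approach from the second answer (a separate process that talks to the database through Django's ORM) boils down to a few lines. A minimal sketch for Django 1.x; the settings module and model are hypothetical stand-ins:

import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'  # hypothetical project

from myapp.models import Entry  # hypothetical model in the Django app

def run():
    # Runs in its own process; Django opens its own DB connection here,
    # so there is no thread interaction with the web-serving processes.
    Entry.objects.create(title='written from the side-by-side app')

if __name__ == '__main__':
    run()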
answers_scores: [ 2, 0, 0 ]
non_answers: []
non_answers_scores: []
tags: [ "django", "multithreading", "python" ]
name: stackoverflow_0001629800_django_multithreading_python.txt
Q: django flatpage redirects

I want to make sure all of my flatpages have the www subdomain, and redirect to it if they don't. I've looked at some middlewares that redirect to www, but (1) they usually redirect all URLs to www, and (2) the ones I've found don't work with flatpages. I don't want all of my site's URLs to redirect to include the www subdomain, just the flatpages. Anyone know how I should go about doing this?

A: One option is to modify a middleware so that it only redirects if response.status_code == 404. Put the middleware just before the flatpage middleware in settings.py. This would redirect

http://example.com/flatpage/ -> http://www.example.com/flatpage/

but also

http://example.com/invalidurl/ -> http://www.example.com/invalidurl/

before returning a 404 error.

Another option would be to write your own flatpage middleware based on the official one. You can see the code for the FlatpageFallbackMiddleware class on the Django website.

In the try/except block, check to see if a flatpage exists, then redirect if appropriate. If you don't redirect, return the flatpage.

...
try:
    fp = flatpage(request, request.path_info)

    # Code to redirect to www goes here

    return fp
except Http404:
...

A: In your urls.py file, do something like this:

urlpatterns = patterns('',
    (r'^flat/(?P<static>.*)$', 'django.views.generic.simple.redirect_to', {'url': 'http://www.mysite.com/flat/%(static)s'}),
    # other stuff
)
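Filling in the second suggestion, a sketch of such a middleware, modeled on Django's FlatpageFallbackMiddleware (circa 1.1); the real fallback middleware has extra error handling worth copying:

from django.contrib.flatpages.views import flatpage
from django.http import Http404, HttpResponsePermanentRedirect

class FlatpageWwwRedirectMiddleware(object):
    # List this just before FlatpageFallbackMiddleware in MIDDLEWARE_CLASSES.
    def process_response(self, request, response):
        if response.status_code != 404:
            return response  # only act where the flatpage fallback would
        try:
            fp = flatpage(request, request.path_info)
        except Http404:
            return response  # no flatpage either; keep the original 404
        host = request.get_host()
        if not host.startswith('www.'):
            return HttpResponsePermanentRedirect(
                'http://www.%s%s' % (host, request.get_full_path()))
        return fp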
answers_scores: [ 0, 0 ]
non_answers: []
non_answers_scores: []
tags: [ "django", "django_flatpages", "python" ]
name: stackoverflow_0001627146_django_django_flatpages_python.txt
Q: Alter XML while preserving layout

What would you use to alter an XML file while preserving as much of the layout as possible, including indentation and comments?

My problem is that I have a couple of massive hand-edited XML files describing a user interface, and now I need to translate several attributes to another language. I've tried doing this using Python + ElementTree, but it preserved neither whitespace nor comments.

I've seen XSLT suggested for similar questions, but I don't think it is an alternative in this case, since I need to do some logic and lookups for each attribute.

It would be preferable if the attribute order in each element were preserved as well, but I can tolerate a changed order.

A: Any DOM manipulation module should suit your needs. Layout is just text data, so it's represented as text nodes in the DOM:

>>> from xml.dom.minidom import parseString
>>> dom = parseString('''\
... <message>
...   <text>
...     Hello!
...   </text>
... </message>''')
>>> dom.childNodes[0].childNodes
[<DOM Text node "u'\n  '">, <DOM Element: text at 0xb765782c>, <DOM Text node "u'\n'">]
>>> text = dom.getElementsByTagName('text')[0].childNodes[0]
>>> text.data = text.data.replace(u'Hello', u'Hello world')
>>> print dom.toxml()
<?xml version="1.0" ?><message>
  <text>
    Hello world!
  </text>
</message>

A: If you use an XSLT processor such as xt, then you can write extension methods in Java that can perform any arbitrary transformation you need.

Having said that, I have used Python's xml.dom.minidom module successfully for this sort of transformation. It does preserve whitespace and layout.
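Applied to the asker's task, a minidom sketch might look like this; the filename, element and attribute names, and the lookup table are all stand-ins. Comments and whitespace survive the round trip, though Python 2's minidom re-serializes attributes in alphabetical order, which the question says is tolerable:

from xml.dom.minidom import parse

TRANSLATIONS = {'Hello': 'Hallo', 'Quit': 'Beenden'}  # stand-in lookup table

dom = parse('ui.xml')
for node in dom.getElementsByTagName('label'):
    text = node.getAttribute('text')
    if text in TRANSLATIONS:
        node.setAttribute('text', TRANSLATIONS[text])

out = open('ui.translated.xml', 'w')
out.write(dom.toxml('utf-8'))
out.close()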
answers_scores: [ 2, 1 ]
non_answers: []
non_answers_scores: []
tags: [ "elementtree", "python", "xml" ]
name: stackoverflow_0001629687_elementtree_python_xml.txt
Q: virtualenv confusion

So I open a terminal, cd to my desktop, and run:

virtualenv test_env

I then create the following file in my normal environment:

/home/jesse/.local/lib/python2.6/site-packages/foo_package/__init__.py

This file contains one line:

print "importing from normal env"

In the test_env I create:

/home/jesse/Desktop/test_env/lib/python2.6/site-packages/foo_package/__init__.py

containing:

print "importing from test env"

Now I open a terminal and run:

$ /home/jesse/Desktop/test_env/bin/python

and then do:

>>> import foo_package

which outputs:

importing from normal env

Why doesn't it import the file from test_env? I thought that was the whole point of virtualenv. Am I missing something here?

Edit: Jon H informed me that I need to activate the environment. But this doesn't seem to fix the problem...

jesse@jesse-laptop:~/Desktop/test_env$ source bin/activate
(test_env)jesse@jesse-laptop:~/Desktop/test_env$ bin/python
Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo_package
importing from normal env
>>>

Using Ubuntu 9.04 / Python 2.6.2 / virtualenv 1.33 in case that's relevant.

Edit 2: Haes asked me what sys.path was in my virtualenv...

jesse@jesse-laptop:~/Desktop/test_env$ source bin/activate
(test_env)jesse@jesse-laptop:~/Desktop/test_env$ bin/python
Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path

Output:

['', '/home/jesse/Desktop/test_env/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/enum-0.4.3-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/clonedigger-1.0.9_beta-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/ETS-3.2.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/TraitsGUI-3.0.4-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/TraitsBackendWX-3.1.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/TraitsBackendQt-3.1.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/yolk-0.4.1-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pylint-0.18.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/logilab_astng-0.19.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/logilab_common-0.39.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pudb-0.92.7-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/Pygments-1.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/ETSProjectTools-0.5.1-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pydee-0.4.24-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/visionegg-1.2.1-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/dist-packages/PyOpenGL-3.0.0c1-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/Whoosh-0.2.6-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pyinotify-0.8.6-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/svgbatch-0.1.9-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pyglet-1.1.3-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/lepton-1.0b2-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/dist-packages/rope-0.9.2-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/simplejson-2.0.9-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/dist-packages/pymunk-0.8.4-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/cssutils-0.9.6-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/Shapely-1.0.14-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/sympy-0.6.5-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/virtualenvwrapper-1.20-py2.6.egg', '/home/jesse/Desktop/test_env/lib/python2.6', '/home/jesse/Desktop/test_env/lib/python2.6/plat-linux2', '/home/jesse/Desktop/test_env/lib/python2.6/lib-tk', '/home/jesse/Desktop/test_env/lib/python2.6/lib-old', '/home/jesse/Desktop/test_env/lib/python2.6/lib-dynload', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/home/jesse/.local/lib/python2.6/site-packages', '/home/jesse/Desktop/test_env/lib/python2.6/site-packages', '/usr/local/lib/python2.6/dist-packages', '/usr/lib/python2.6/dist-packages', '/usr/lib/python2.6/dist-packages/Numeric', '/usr/lib/python2.6/dist-packages/PIL', '/usr/lib/python2.6/dist-packages/gst-0.10', '/var/lib/python-support/python2.6', '/usr/lib/python2.6/dist-packages/gtk-2.0', '/var/lib/python-support/python2.6/gtk-2.0', '/usr/lib/python2.6/dist-packages/wx-2.8-gtk2-unicode']

Edit 3: I found this: https://bugs.launchpad.net/ubuntu/+source/python-virtualenv/+bug/339904

Apparently there are some issues with virtualenv + Python 2.6 + Ubuntu 9.04. Not sure if that's related to my issue... I tried uninstalling the python-virtualenv package via Synaptic and then installing version 1.3.4 of virtualenv via easy_install, but I still have the same problem...

jesse@jesse-laptop:~/Desktop/test_env$ source bin/activate
(test_env)jesse@jesse-laptop:~/Desktop/test_env$ bin/python
Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo_package
importing from normal env
>>> import sys
>>> print sys.path

['', '/home/jesse/Desktop/test_env/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg', '/home/jesse/Desktop/test_env/lib/python2.6', '/home/jesse/Desktop/test_env/lib/python2.6/plat-linux2', '/home/jesse/Desktop/test_env/lib/python2.6/lib-tk', '/home/jesse/Desktop/test_env/lib/python2.6/lib-old', '/home/jesse/Desktop/test_env/lib/python2.6/lib-dynload', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/home/jesse/.local/lib/python2.6/site-packages', '/home/jesse/Desktop/test_env/lib/python2.6/site-packages', '/usr/local/lib/python2.6/dist-packages/enum-0.4.3-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/clonedigger-1.0.9_beta-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/ETS-3.2.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/TraitsGUI-3.0.4-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/TraitsBackendWX-3.1.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/TraitsBackendQt-3.1.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/yolk-0.4.1-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pylint-0.18.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/logilab_astng-0.19.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/logilab_common-0.39.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pudb-0.92.7-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/Pygments-1.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/ETSProjectTools-0.5.1-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pydee-0.4.24-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/visionegg-1.2.1-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/dist-packages/PyOpenGL-3.0.0c1-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/Whoosh-0.2.6-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pyinotify-0.8.6-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/svgbatch-0.1.9-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pyglet-1.1.3-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/lepton-1.0b2-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/dist-packages/rope-0.9.2-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/simplejson-2.0.9-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/dist-packages/pymunk-0.8.4-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/cssutils-0.9.6-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/Shapely-1.0.14-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/sympy-0.6.5-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/virtualenvwrapper-1.20-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/virtualenv-1.3.4-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/enum-0.4.3-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/ETS-3.2.0-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/yolk-0.4.1-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/Whoosh-0.2.6-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pyinotify-0.8.6-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/pyglet-1.1.3-py2.6.egg', '/usr/local/lib/python2.6/dist-packages/simplejson-2.0.9-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages', '/usr/local/lib/python2.6/site-packages/gtk-2.0', '/usr/local/lib/python2.6/dist-packages', '/usr/lib/python2.6/dist-packages', '/usr/lib/python2.6/dist-packages/Numeric', '/usr/lib/python2.6/dist-packages/PIL', '/usr/lib/python2.6/dist-packages/gst-0.10', '/var/lib/python-support/python2.6', '/usr/lib/python2.6/dist-packages/gtk-2.0', '/var/lib/python-support/python2.6/gtk-2.0', '/usr/lib/python2.6/dist-packages/wx-2.8-gtk2-unicode']

This looks like a step forward, because test_env stuff is appearing in the path, but it's still not working. I think my current problem is that '/home/jesse/.local/lib/python2.6/site-packages' occurs in the path before '/home/jesse/Desktop/test_env/lib/python2.6/site-packages'.

Edit 4: Roger suggested creating the env with the --no-site-packages option. I tried that. Same problem.

jesse@jesse-laptop:~/Desktop/test_env$ source bin/activate
(test_env)jesse@jesse-laptop:~/Desktop/test_env$ bin/python
Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo_package
importing from normal env
>>> import sys
>>> sys.path

['', '/home/jesse/Desktop/test_env/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg', '/home/jesse/Desktop/test_env/lib/python2.6', '/home/jesse/Desktop/test_env/lib/python2.6/plat-linux2', '/home/jesse/Desktop/test_env/lib/python2.6/lib-tk', '/home/jesse/Desktop/test_env/lib/python2.6/lib-old', '/home/jesse/Desktop/test_env/lib/python2.6/lib-dynload', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/home/jesse/.local/lib/python2.6/site-packages', '/home/jesse/Desktop/test_env/lib/python2.6/site-packages']

Again, it looks like the problem is that the site-packages in my ".local" appears earlier in the path than the site-packages in "test_env".

A: You're running into a bug in virtualenv. It has not yet been updated to handle .local directories properly. I've filed an issue for this at the bug tracker.

UPDATE: this bug is now fixed in virtualenv 1.4.2 and later.

A: From the steps you mentioned, it seems you haven't activated the virtual env. Do:

source bin/activate

... within the virtualenv you created. You should see something like:

(test_env)computername:foldername$

Running python from here should get your virtualenv version. Without this step it will still use your default Python installation.

A: Edit: The post above me is correct; you forgot to activate. Using virtualenvwrapper I've never really done that step, so my bad :)

Looking at that, it looks like you're doing everything right, but I would like to make a suggestion in case you've never heard of it: virtualenvwrapper makes working with virtualenv so much quicker and easier. Might be fun to try it out and see if you still get the same issue; maybe you'll find what you were missing.

A: And you need to create the virtual environment with the option --no-site-packages.
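While waiting on the virtualenv fix, a quick way to see which copy wins from inside the activated interpreter (a trivial check, not a fix):

import sys
import foo_package

print foo_package.__file__   # shows which __init__.py was actually imported
print [p for p in sys.path if p.endswith('site-packages')]  # search order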
answers_scores: [ 7, 2, 0, 0 ]
non_answers: []
non_answers_scores: []
tags: [ "python", "virtualenv" ]
name: stackoverflow_0001624245_python_virtualenv.txt
Q: SAS and Web Data

I have been taking a few graduate classes with a professor I like a lot, and she raves about SAS all of the time. I "grew up" learning stats using SPSS, and with their recent decision to integrate their stats engine with R and Python, I find it difficult to muster up the desire to learn anything else. I am not that strong in Python, but I can get by with most tasks that I want to accomplish.

Admittedly, I do see the upside to SAS, but I have learned to do some pretty cool things combining SPSS and Python, like grabbing data from the web and analyzing it in real time. Plus, I really like that I can use the GUI to generate the base for my code before I add my final modifications. In SAS, it looks like I would have to program everything by hand (ignoring Enterprise Guide).

My question is this: can you grab data from the web and parse it into SAS datasets? This is a deal-breaker for me. What about interfacing with APIs like Google Analytics, Twitter, etc.? Are there external IDEs that you can use to write and execute SAS programs?

Any help will be greatly appreciated.

Brock

A: Incidentally, SAS is now offering integration with R.

http://support.sas.com/rnd/app/studio/Rinterface2.html

There are all sorts of ways to get data off the web. One example is to use the url access method on filename statements to pull XML data off the web.

For example:

filename cmap "yldmap.map"; /* an xml map I created to parse the data */
filename curyld
    url "http://www.ustreas.gov/offices/domestic-finance/debt-management/interest-rate/yield.xml";

libname curyld xml xmlmap=cmap;

A: Yes. SAS 9.2 can interact with SOAP and RESTful APIs. I haven't had much success with Twitter. I have had some success with Google Spreadsheets (in SAS 9.1.3), and I've seen code to pull Google Analytics (in SAS 9.2).

As with Python and R, you can write the code in any text editor, but you'll need SAS to actually execute it. Lately I've been bouncing between Eclipse, PSPad, and SAS's enhanced editor for writing code, but I always have to submit in SAS.
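For readers coming from the asker's SPSS+Python side, the same Treasury feed can be pulled with the Python standard library. A rough Python 2 sketch; the feed's element names aren't shown in the thread, so this just walks the tree and prints what it finds:

import urllib2
from xml.etree import ElementTree

url = ('http://www.ustreas.gov/offices/domestic-finance/'
       'debt-management/interest-rate/yield.xml')

tree = ElementTree.parse(urllib2.urlopen(url))
for node in tree.getiterator():
    if node.text and node.text.strip():
        print node.tag, node.text.strip()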
SAS and Web Data
I have been taking a few graduate classes with a professor I like a lot and she raves about SAS all of the time. I "grew up" learning stats using SPSS, and with their recent decisions to integrate their stats engine with R and Python, I find it difficult to muster up the desire to learn anything else. I am not that strong in Python, but I can get by with most tasks that I want to accomplish. Admittedly, I do see the upside to SAS, but I have learned to do some pretty cool things combining SPSS and Python, like grabbing data from the web and analyzing it in real time. Plus, I really like that I can use the GUI to generate the base for my code before I add my final modifications. In SAS, it looks like I would have to program everything by hand (ignoring Enterprise Guide). My question is this. Can you grab data from the web and parse it into SAS datasets? This is a deal-breaker for me. What about interfacing with APIs like Google Analytics, Twitter, etc? Are there external IDEs that you can use to write and execute SAS programs? Any help will be greatly appreciated. Brock
[ "Incidentally, SAS is now offering integration with R. \nhttp://support.sas.com/rnd/app/studio/Rinterface2.html\nThere are all sorts of ways to get data off the web. One example is to use the url access methods on filename statements to pull in xml data off the web. \nFor example:\nfilename cmap \"yldmap.map\"; /* an xml map I created to parse the data */\nfilename curyld\n url \"http://www.ustreas.gov/offices/domestic-finance/debt-management/interest-rate/yield.xml\";\n\nlibname curyld xml xmlmap=cmap;\n\n", "yes. sas 9.2 can interact with soap and restful apis. i haven't had much success with twitter. i have had some success with google spreadsheets (in sas 9.1.3) and i've seen code to pull google analytics (in sas 9.2).\nas with python and r, you can write the code in any text editor, but you'll need to have sas to actually execute it. lately, i've been bouncing between eclipse, pspad, and sas's enhanced editor for writing code, but i always have to submit in sas.\n" ]
[ 6, 5 ]
[]
[]
[ "python", "sas", "statistics" ]
stackoverflow_0001628372_python_sas_statistics.txt
Q: Django Zip upload permission problem I have a few uploads in this app, and uploading csv files is working fine. I have a model that has zip upload in it. The zip file is uploaded and can be viewed, but I am having issues extracting it. class Message(models.Model): uploadFile = models.FileField(_(u'images file (.zip)'), upload_to='message/', storage=FileSystemStorage(), help_text=_('')) The error is IOError at /backend/media/new (13, 'Permission denied') A: It's not really an issue with the zip file, it's probably an issue with your directory's permissions. Take a look at the permissions for /backend/media/new. Is new a folder being created by the zip or is that where you're trying to unzip to? Make sure the groups for the folders also match. Here's a great tutorial on chmod and permissions in general. A: it works with ZipFile.extractall
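A minimal sketch combining the permission fix from the first answer with the ZipFile.extractall suggestion from the second; extract_upload and the 0775 mode are illustrative and assume the web-server user belongs to the directory's group:

import os
import zipfile

def extract_upload(message, target_dir):
    # make sure the destination exists and is group-writable,
    # so the Apache/mod_wsgi user can create files inside it
    if not os.path.isdir(target_dir):
        os.makedirs(target_dir, 0775)
    archive = zipfile.ZipFile(message.uploadFile.path)
    archive.extractall(target_dir)   # available from Python 2.6 on
    archive.close()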
Django Zip upload permission problem
I have a few uploads in this app, and uploading csv files is working fine. I have a model that has zip upload in it. The zip file is uploaded and can be viewed, but I am having issues extracting it. class Message(models.Model): uploadFile = models.FileField(_(u'images file (.zip)'), upload_to='message/', storage=FileSystemStorage(), help_text=_('')) The error is IOError at /backend/media/new (13, 'Permission denied')
[ "It's not really an issue with the zip file, it's probably an issue with your directory's permissions.\nTake a look at the permissions for /backend/media/new. Is new a folder being created by the zip or is that where you're trying to unzip too? Make sure the groups for the folders also match.\nHere's a great tutorial on chmod and permissions in general.\n", "it works with ZipFile.extractall \n" ]
[ 1, 0 ]
[]
[]
[ "django", "python", "zip" ]
stackoverflow_0001630427_django_python_zip.txt
Q: In django, how to write a query that selects all possible combinations of four integers? I'm writing a Game Website, where the draw is a series of four digits. e.g. 1234 I'm trying to write a query in django that will select all winners based on the four digits entered. Winners are any combination of the same numbers or the same combination; 1 2 3 4, 2 3 1 4, 4 1 3 2 are all winners. What is the most efficient way to write this query? --------------------- Edit: sorry for not providing model samples; here they are below: ----------- class Draw(models.Model): digit1 = models.PositiveSmallIntegerField(null=True,blank=True) digit2 = models.PositiveSmallIntegerField(null=True,blank=True) digit3 = models.PositiveSmallIntegerField(null=True,blank=True) digit4 = models.PositiveSmallIntegerField(null=True,blank=True) draw_date = models.DateTimeField() closed = models.BooleanField() winner = models.BooleanField() def __unicode__(self): return "Draw For Week Ending %s" %(self.draw_date) def get_absolute_url(self): return "/draw/%s/" % (self.draw_date) def save(self, force_insert=False, force_update=False): if self.digit1 and self.digit2 and self.digit3 and self.digit4: #check if there are winners try: winners = Ticket.objects.filter(draw=self.id,digit1=self.digit1,digit2=self.digit2,digit3=self.digit3,digit4=self.digit4) self.winner = True except Ticket.DoesNotExist: self.winner = False #close & save draw/winners self.closed = True # Add new Draw for following week. new_date = self.draw_date + datetime.timedelta(hours=168) new_draw= Draw(draw_date=new_date) new_draw.save() super(Draw, self).save(force_insert, force_update) # Call the "real" save() method. class Serial(models.Model): serial = models.CharField(max_length=4) closed = models.BooleanField(unique=False) def __unicode__(self): return "%s" %(self.serial) def get_absolute_url(self): return "/draw/serial/%s/" % (self.serial) class Ticket(models.Model): draw = models.ForeignKey(Draw) digit1 = models.PositiveSmallIntegerField() digit2 = models.PositiveSmallIntegerField() digit3 = models.PositiveSmallIntegerField() digit4 = models.PositiveSmallIntegerField() date = models.DateField(auto_now_add=True,editable=False) active = models.BooleanField(default=True) serial_used = models.ForeignKey(Serial,related_name="ticket_serial_used") def __unicode__(self): return "#: %s - %s" %(self.id,self.draw) def get_absolute_url(self): return "/ticket/%s/" % (self.id) def save(self, force_insert=False, force_update=False): if self.serial_used: serial = Serial.objects.get(pk=self.serial_used.id) serial.closed = True serial.save() super(Ticket, self).save(force_insert, force_update) # Call the "real" save() method. A: I'd advise adjusting the code to save the digits so that they are saved in sorted order. E.g. if the user puts in "5262" then it should store that as "2256". Then, when you select a winning set of digits, you can sort those, and filter by simple equality. This will perform much, much better than trying to check for all possible combinations. If you need the unsorted selection for other purposes, then add a new field to the model sortedDigits or something so that you have it to compare against. 
A: Code: from itertools import permutations winning_numbers = "1234" winning_combinations = map(lambda v: "".join(v), list(permutations(winning_numbers, 4))) winners = GamesPlayed.objects.filter(numbers__in=winning_combinations) Assuming GamesPlayed is the model object for all games played, with a text field numbers containing the four selected numbers in the format NNNN. If you're on Python 2.5, itertools does not have permutations. The docs have an implementation you can use: http://docs.python.org/library/itertools.html#itertools.permutations A: Is the order of the numbers important? If not, you could sort the digits for tickets and draws in ascending order, then use your code winners = Ticket.objects.filter(draw=self.id,digit1=self.digit1,digit2=self.digit2,digit3=self.digit3,digit4=self.digit4) As an aside, your try... except block won't catch the situation when there are no winners. The DoesNotExist exception is thrown by the get method (see docs). If there isn't a winning ticket, the filter method will return an empty queryset, but not raise an error. You can then check whether there are winners using an if statement. if winners: # there are winners self.winner = True else: # there are not winners self.winner = False
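A sketch of the sorted-digits idea from the first answer; canonical is a hypothetical helper, and only Ticket.save() needs to change:

def canonical(d1, d2, d3, d4):
    """Return the digits in ascending order: 5262 and 2256 both
    become (2, 2, 5, 6), so winners match with a plain equality filter."""
    return tuple(sorted((d1, d2, d3, d4)))

# In Ticket.save(), before calling super():
#   (self.digit1, self.digit2, self.digit3, self.digit4) = canonical(
#       self.digit1, self.digit2, self.digit3, self.digit4)
# Draw.save() can then keep its existing equality filter unchanged,
# provided the draw's own digits are stored sorted the same way.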
In django, how to write a query that selects all possible combinations of four integers?
I'm writing a Game Website, where the draw is a series of four digits. e.g. 1234 I'm trying to write a query in django that will select all winners based on the four digits entered. Winners are any combination of the same numbers or the same combination; 1 2 3 4, 2 3 1 4, 4 1 3 2 are all winners. What is the most efficient way to write this query? --------------------- Edit: sorry for not providing model samples; here they are below: ----------- class Draw(models.Model): digit1 = models.PositiveSmallIntegerField(null=True,blank=True) digit2 = models.PositiveSmallIntegerField(null=True,blank=True) digit3 = models.PositiveSmallIntegerField(null=True,blank=True) digit4 = models.PositiveSmallIntegerField(null=True,blank=True) draw_date = models.DateTimeField() closed = models.BooleanField() winner = models.BooleanField() def __unicode__(self): return "Draw For Week Ending %s" %(self.draw_date) def get_absolute_url(self): return "/draw/%s/" % (self.draw_date) def save(self, force_insert=False, force_update=False): if self.digit1 and self.digit2 and self.digit3 and self.digit4: #check if there are winners try: winners = Ticket.objects.filter(draw=self.id,digit1=self.digit1,digit2=self.digit2,digit3=self.digit3,digit4=self.digit4) self.winner = True except Ticket.DoesNotExist: self.winner = False #close & save draw/winners self.closed = True # Add new Draw for following week. new_date = self.draw_date + datetime.timedelta(hours=168) new_draw= Draw(draw_date=new_date) new_draw.save() super(Draw, self).save(force_insert, force_update) # Call the "real" save() method. class Serial(models.Model): serial = models.CharField(max_length=4) closed = models.BooleanField(unique=False) def __unicode__(self): return "%s" %(self.serial) def get_absolute_url(self): return "/draw/serial/%s/" % (self.serial) class Ticket(models.Model): draw = models.ForeignKey(Draw) digit1 = models.PositiveSmallIntegerField() digit2 = models.PositiveSmallIntegerField() digit3 = models.PositiveSmallIntegerField() digit4 = models.PositiveSmallIntegerField() date = models.DateField(auto_now_add=True,editable=False) active = models.BooleanField(default=True) serial_used = models.ForeignKey(Serial,related_name="ticket_serial_used") def __unicode__(self): return "#: %s - %s" %(self.id,self.draw) def get_absolute_url(self): return "/ticket/%s/" % (self.id) def save(self, force_insert=False, force_update=False): if self.serial_used: serial = Serial.objects.get(pk=self.serial_used.id) serial.closed = True serial.save() super(Ticket, self).save(force_insert, force_update) # Call the "real" save() method.
[ "I'd advise adjusting the code to save the digits so that they are saved in sorted order. E.g. if the user puts in \"5262\" then it should store that as \"2256\". Then, when you select a winning set of digits, you can sort those, and filter by simple equality. This will perform much, much better than trying to check for all possible combinations.\nIf you need the unsorted selection for other purposes, then add a new field to the model sortedDigits or something so that you have it to compare against.\n", "Code:\nfrom itertools import permutations\nwinning_numbers = \"1234\"\nwinning_combinations = map(lambda v: \"\".join(v), list(permutations(winning_numbers, 4)))\n\nwinners = GamesPlayed.objects.filter(numbers__in=winning_combinations)\n\nAssuming GamesPlayed is the model object for all games played, with a text field numbers containing the four selected numbers in the format NNNN.\nIf you're on Python 2.5 itertools does not have permutations. The docs have an implementation you can use: http://docs.python.org/library/itertools.html#itertools.permutations\n", "Is the order of the numbers important? \nIf not, you could sort the digits for tickets and draws in ascending order, then use your code\nwinners = Ticket.objects.filter(draw=self.id,digit1=self.digit1,digit2=self.digit2,digit3=self.digit3,digit4=self.digit4)\n\nAs an aside, Your try... except block won't catch the situation when there are no winners. The DoesNotExist exception is thrown by the get method (see docs).\nIf there isn't a winning ticket, the filter method will return an empty queryset, but not raise an error. You can then check whether there are winners using an if statement.\nif winners\n # there are winners\n self.winner = True\nelse:\n # there are not winners\n self.winner = False\n\n" ]
[ 5, 0, 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001631333_django_django_models_python.txt
Q: Python: Public methods calling their 'brother' private methods I have been writing Python code for only a couple of weeks, so I'm still figuring out the lay of the land. But let's say I have a method that MAY be called on by a 'user' on occasion as well as used HEAVILY internally (ie, the arguments have already been checked before the call). Here is what I am currently doing: #The method the 'user' should call: def do_something(self, arg1, arg2, arg3): #write code to do error checking on arg1, agr2, arg3 #raise exceptions, return codes, etc: depends on whether you are an explicit lover #or an implicit lover, it seems. :-) ... error checking code here... #Now call the 'brother' method that does the real work. return self._do_something(self, arg1, arg2, arg3, arg3) #The method other private methods should call with already validated parameters def _do_something(self, arg1, arg2, arg3, arg3): #don't do error checking on the parameters. get to work... ... do what you do... return whatever you're supposed to return This seems logical to me. Is there a better Python-ish way to do this? Paul A: That is fine. The call to the "brother" method is wrong in your code, though. You should do it like this: # Now call the 'brother' method that does the real work. return self._do_something(arg1, arg2, arg3, arg3) That is, you should call it "through" the self reference, since it's an object method and not a global function. A: There is no "true" support for private members in python, but the pythonic way to indicate a member as private is to use two leading underscores. In your case, __do_something. For further details, see python.org - classes A: well, unless the error checking code is very expensive, I would have only one method, which always does the error checking. It may repeat some checks, but it does offer you more security, and it may come in handy if someone inherits from your class. If later on you need performance you can always cache results or do something else.
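A standalone sketch of the validate-then-delegate pattern under discussion, with the call made through self as the first answer describes (Worker and the doubling body are made up for illustration):

class Worker(object):
    def do_something(self, arg1):
        # public entry point: validate, then hand off
        if not isinstance(arg1, int):
            raise TypeError('arg1 must be an int')
        return self._do_something(arg1)

    def _do_something(self, arg1):
        # trusted internal path: callers have already validated arg1
        return arg1 * 2

print Worker().do_something(21)   # prints 42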
Python: Public methods calling their 'brother' private methods
I have been writing Python code for only a couple of weeks, so I'm still figuring out the lay of the land. But let's say I have a method that MAY be called on by a 'user' on occasion as well as used HEAVILY internally (ie, the arguments have already been checked before the call). Here is what I am currently doing: #The method the 'user' should call: def do_something(self, arg1, arg2, arg3): #write code to do error checking on arg1, agr2, arg3 #raise exceptions, return codes, etc: depends on whether you are an explicit lover #or an implicit lover, it seems. :-) ... error checking code here... #Now call the 'brother' method that does the real work. return self._do_something(self, arg1, arg2, arg3, arg3) #The method other private methods should call with already validated parameters def _do_something(self, arg1, arg2, arg3, arg3): #don't do error checking on the parameters. get to work... ... do what you do... return whatever you're supposed to return This seems logical to me. Is there a better Python-ish way to do this? Paul
[ "That is fine. The call to the \"brother\" method is wrong in your code, though. You should do it like this:\n# Now call the 'brother' method that does the real work.\nreturn self._do_something(arg1, arg2, arg3, arg3)\n\nThat is, you should call it \"through\" the self reference, since it's an object method and not a global function.\n", "There is no \"true\" support for private members in python, but the pythonic way to indicate a member as private is to use two leading underscores. In your case, __do_something.\nFor further details, see python.org - classes\n", "well, unless the error checking code is very expensive, I would have only one method, which always does the error checking. It may repeat some checks, but it does offer you more security, and it may come in handy if someone inherits from your class.\nIf later on you need performance you can always cache results or do something else.\n" ]
[ 2, 0, 0 ]
[ "I'm just learning python myself (and enjoying it) but I think that's the way to do it.\nHowever, the private method should have two underscores and called like 'self.__do_something()'.\n" ]
[ -1 ]
[ "methods", "public_method", "python" ]
stackoverflow_0001631855_methods_public_method_python.txt
Q: How to check if a FileField has been modified in the Admin of Django? I am trying to do a model with a file that shouldn't be modified. But the comment of the file can be. Here is what I did, but we cannot modify the comment. How can I test if a new file (using the browse button) has been sent and, in this case only, create a new instance of the model? If no new file is uploaded, just update the comment. admin.py class CGUAdminForm(forms.ModelForm): class Meta: model = ConditionsUtilisation def clean_file(self): if self.instance and self.instance.pk is not None: raise forms.ValidationError(_(u'You cannot modify the file. Thank you to create a new instance.')) # do something that validates your data return self.cleaned_data["file"] class CGUAdmin(admin.ModelAdmin): form = CGUAdminForm admin.site.register(ConditionsUtilisation, CGUAdmin) models.py class ConditionsUtilisation(models.Model): date = models.DateField(_(u'Date d\'upload'), editable=False, auto_now_add=True) comment = models.TextField(_(u'Commentaire de modification')) file = models.FileField(_(u'CGU'), upload_to='subscription/cgu/', storage=CGUFileSystemStorage()) A: if 'file' in form.changed_data: """ File is changed """ raise forms.ValidationError("No, don't change the file because blah blah") else: """ File is not changed """
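One way to fold the answer's changed_data check into the original clean_file; inside a bound form, changed_data is available on self, so this is a sketch of one possible wiring rather than the only one:

class CGUAdminForm(forms.ModelForm):
    class Meta:
        model = ConditionsUtilisation

    def clean_file(self):
        # only reject when an existing record's file was actually replaced
        if self.instance.pk is not None and 'file' in self.changed_data:
            raise forms.ValidationError(
                _(u'You cannot modify the file. Please create a new instance.'))
        return self.cleaned_data['file']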
How to check if a FileField has been modified in the Admin of Django?
I am trying to do a model with a file that shouldn't be modified. But the comment of the file can be. Here is what I did, but we cannot modify the comment. How can I test if a new file (using the browse button) has been sent and, in this case only, create a new instance of the model? If no new file is uploaded, just update the comment. admin.py class CGUAdminForm(forms.ModelForm): class Meta: model = ConditionsUtilisation def clean_file(self): if self.instance and self.instance.pk is not None: raise forms.ValidationError(_(u'You cannot modify the file. Thank you to create a new instance.')) # do something that validates your data return self.cleaned_data["file"] class CGUAdmin(admin.ModelAdmin): form = CGUAdminForm admin.site.register(ConditionsUtilisation, CGUAdmin) models.py class ConditionsUtilisation(models.Model): date = models.DateField(_(u'Date d\'upload'), editable=False, auto_now_add=True) comment = models.TextField(_(u'Commentaire de modification')) file = models.FileField(_(u'CGU'), upload_to='subscription/cgu/', storage=CGUFileSystemStorage())
[ "if 'file' in form.changed_data:\n \"\"\"\n File is changed\n \"\"\"\n raise forms.ValidationError(\"No, don't change the file because blah blah\")\nelse:\n \"\"\"\n File is not changed\n \"\"\"\n\n" ]
[ 8 ]
[]
[]
[ "django", "django_admin", "django_file_upload", "django_forms", "python" ]
stackoverflow_0001628676_django_django_admin_django_file_upload_django_forms_python.txt
Q: How to allow scaling with uniform aspect ratio in (Py)Qt? If you have a QImage wrapped inside a QLabel, is it possible to scale it up or down when you resize the window and maintain the aspect ratio (so the image doesn't become distorted)? I figured out that it can scale using setScaledContents(), and you can set a minimum and maximum size, but the image still loses its aspect. It would be great if this could be explained using Python (since I don't know c++), but I'll take what I can get. :-) Thanks in advance! A: I'm showing this as C++, which is what the documentation I'm looking at is in. It shouldn't be too difficult to convert to python. You need to create a custom derivative of QLayoutItem, which overrides bool hasHeightForWidth() and int heightForWidth( int width) to preserve the aspect ratio somehow. You could either pass the image in and query it, or you could just set the ratio directly. You'll also need to make sure the widget() function returns a pointer to the proper label. Once that is done, you can add a layout item to a layout in the same manner you would a widget. So when your label gets added, change it to use your custom layout item class. I haven't actually tested any of this, so it is a theoretical solution at this point. I don't know of any way to do this solution through designer, if that was desired. A: Convert the image to a QPixmap (use QPixmap.fromImage(img)) and then, you can use scaledToHeight().
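Neither answer spells out a complete widget, so here is one common PyQt4 pattern, offered as a sketch: keep the original pixmap around and set a freshly scaled copy whenever the label is resized, letting Qt.KeepAspectRatio do the work (leave setScaledContents off):

from PyQt4 import QtCore, QtGui

class ScaledLabel(QtGui.QLabel):
    def setImage(self, image):
        # remember the full-resolution pixmap; only scaled copies are shown
        self._pixmap = QtGui.QPixmap.fromImage(image)
        self._rescale()

    def resizeEvent(self, event):
        QtGui.QLabel.resizeEvent(self, event)
        if hasattr(self, '_pixmap'):
            self._rescale()

    def _rescale(self):
        self.setPixmap(self._pixmap.scaled(
            self.size(), QtCore.Qt.KeepAspectRatio,
            QtCore.Qt.SmoothTransformation))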
How to allow scaling with uniform aspect ratio in (Py)Qt?
If you have a QImage wrapped inside a QLabel, is it possible to scale it up or down when you resize the window and maintain the aspect ratio (so the image doesn't become distorted)? I figured out that it can scale using setScaledContents(), and you can set a minimum and maximum size, but the image still loses its aspect. It would be great if this could be explained using Python (since I don't know c++), but I'll take what I can get. :-) Thanks in advance!
[ "I'm showing this as C++, which is what the documentation I'm looking at is in. It shouldn't be too difficult to convert to python.\nYou need to create a custom derivative of QLayoutItem, which overrides bool hasHeightForWidth() and int heightForWidth( int width) to preserve the aspect ratio somehow. You could either pass the image in and query it, or you could just set the ratio directly. You'll also need to make sure the widget() function returns a pointer to the proper label.\nOnce that is done, you can add a layout item to a layout in the same manner you would a widget. So when your label gets added, change it to use your custom layout item class.\nI haven't actually tested any of this, so it is a theoretical solution at this point. I don't know of any way to do this solution through designer, if that was desired.\n", "Convert the image to a QPixmap (use QPixmap.fromImage(img)) and then, you can use scaledToHeight().\n" ]
[ 2, 0 ]
[]
[]
[ "pyqt", "python", "qt", "user_interface" ]
stackoverflow_0001631574_pyqt_python_qt_user_interface.txt
Q: Python LDAP old password still working I have the LDAP python module installed to authorise logins via active directory, but if I change the password, both the new and the old one still work. Does anyone know how to resolve this issue? A: I strongly suspect that it is active directory that is caching the credentials (at least for some time after the change); have a look at this link.
Python LDAP old password still working
I have the LDAP python module installed to authorise logins via active directory, but if I change the password, both the new and the old one still work. Does anyone know how to resolve this issue?
[ "I strongly suspect that it is active directory who is caching the credentials -at least for some time after the change- ; have a look at this link.\n" ]
[ 2 ]
[]
[]
[ "authentication", "ldap", "python" ]
stackoverflow_0001631739_authentication_ldap_python.txt
Q: How to access a forms instance in a modelformset django In my view I create a formset of photos belonging to a specific article; this works brilliantly, and I am able to render and process the forms. However, for the image field I would like to display the already uploaded image. Normally I would access the path through the instance form.instance.image.get_thumbnail_url; however, that doesn't work for the forms in my modelformset - how can I access the instance? ... article = get_object_or_404(Article, pk=article_id) PhotoFormSet = modelformset_factory(Photo) formset = PhotoFormSet(queryset=Photo.objects.filter(articles=article_id)) ... ... {% for form in formset.forms %} {{ form.instance.image.get_thumbnail_url }} {% endfor %} ... A: Not sure, but it may be that you don't need instance: {% for form in formset.forms %} {{ form.image.get_thumbnail_url }} {% endfor %}
How to access a forms instance in a modelformset django
In my view I create a formset of photos belonging to a specific article; this works brilliantly, and I am able to render and process the forms. However, for the image field I would like to display the already uploaded image. Normally I would access the path through the instance form.instance.image.get_thumbnail_url; however, that doesn't work for the forms in my modelformset - how can I access the instance? ... article = get_object_or_404(Article, pk=article_id) PhotoFormSet = modelformset_factory(Photo) formset = PhotoFormSet(queryset=Photo.objects.filter(articles=article_id)) ... ... {% for form in formset.forms %} {{ form.instance.image.get_thumbnail_url }} {% endfor %} ...
[ "Not sure, but it may be that you don't need instance:\n{% for form in formset.forms %}\n {{ form.image.get_thumbnail_url }}\n{% endfor %}\n\n" ]
[ 2 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0001632043_django_django_forms_python.txt
Q: Real time update of relative leaderboard for each user among friends I've been working on a feature of my application to implement a leaderboard - basically stack rank users according to their score. I'm currently tracking the score on an individual basis. My thought is that this leaderboard should be relative instead of absolute i.e. instead of having the top 10 highest scoring users across the site, it's a top 10 among a user's friend network. This seems better because everyone has a chance to be #1 in their network and there is a form of friendly competition for those that are interested in this sort of thing. I'm already storing the score for each user so the challenge is how to compute the rank of that score in real time in an efficient way. I'm using Google App Engine so there are some benefits and limitations (e.g., IN [array] queries perform a sub-query for every element of the array and are also limited to 30 elements per statement). For example: 1st Jack 100 2nd John 50 Here are the approaches I came up with but they all seem to be inefficient and I thought that this community could come up with something more elegant. My sense is that any solution will likely be done with a cron and that I will store a daily rank and list order to optimize read operations but it would be cool if there is something more lightweight and real time 1. Pull the list of all users of the site ordered by score. For each user pick their friends out of that list and create new rankings. Store the rank and list order. Update daily. Cons - If I get a lot of users this will take forever 2a. For each user pick their friends and for each friend pick score. Sort that list. Store the rank and list order. Update daily. Record the last position of each user so that the pre-existing list can be used for re-ordering for the next update in order to make it more efficient (may save sorting time) 2b. Same as above except only compute the rank and list order for people whose profiles have been viewed in the last day Cons - rank is only up to date for the 2nd person that views the profile A: If writes are very rare compared to reads (a key assumption in most key-value stores, and not just in those;-), then you might prefer to take a time hit when you need to update scores (a write) rather than to get the relative leaderboards (a read). Specifically, when a user's score changes, queue up tasks for each of their friends to update their "relative leaderboards" and keep those leaderboards as list attributes (which do keep order!-) suitably sorted (yep, the latter's a denormalization -- it's often necessary to denormalize, i.e., duplicate information appropriately, to exploit key-value stores at their best!-). Of course you'll also update the relative leaderboards when a friendship (user to user connection) disappears or appears, but those should (I imagine) be even rarer than score updates;-). If writes are pretty frequent, since you don't need perfectly precise up-to-the-second info (i.e., it's not financials/accounting stuff;-), you still have many viable approaches to try. E.g., big score changes (rarer) might trigger the relative-leaderboards recomputes, while smaller ones (more frequent) get stashed away and only applied once in a while "when you get around to it". It's hard to be more specific without ballpark numbers about frequency of updates of various magnitude, typical network-friendship cluster sizes, etc, etc. 
I know, like everybody else, you want a perfect approach that applies no matter how different the sizes and frequencies in question... but, you just won't find one!-) A: There is a python library available for storing rankings: http://code.google.com/p/google-app-engine-ranklist/
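A schematic of the write-time denormalization described in the first answer, in deliberately generic Python; no real App Engine API is assumed, and friends_of plus the stored leaderboard attribute are placeholders:

def on_score_change(user, new_score, friends_of):
    # pay at write time: rebuild each friend's cached top 10
    # so profile reads never have to rank anything
    user.score = new_score
    for friend in friends_of(user):
        board = sorted(((f.score, f.name) for f in friends_of(friend)),
                       reverse=True)
        friend.leaderboard = board[:10]   # persist this, already ordered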
Real time update of relative leaderboard for each user among friends
I've been working on a feature of my application to implement a leaderboard - basically stack rank users according to their score. I'm currently tracking the score on an individual basis. My thought is that this leaderboard should be relative instead of absolute i.e. instead of having the top 10 highest scoring users across the site, it's a top 10 among a user's friend network. This seems better because everyone has a chance to be #1 in their network and there is a form of friendly competition for those that are interested in this sort of thing. I'm already storing the score for each user so the challenge is how to compute the rank of that score in real time in an efficient way. I'm using Google App Engine so there are some benefits and limitations (e.g., IN [array] queries perform a sub-query for every element of the array and are also limited to 30 elements per statement). For example: 1st Jack 100 2nd John 50 Here are the approaches I came up with but they all seem to be inefficient and I thought that this community could come up with something more elegant. My sense is that any solution will likely be done with a cron and that I will store a daily rank and list order to optimize read operations but it would be cool if there is something more lightweight and real time 1. Pull the list of all users of the site ordered by score. For each user pick their friends out of that list and create new rankings. Store the rank and list order. Update daily. Cons - If I get a lot of users this will take forever 2a. For each user pick their friends and for each friend pick score. Sort that list. Store the rank and list order. Update daily. Record the last position of each user so that the pre-existing list can be used for re-ordering for the next update in order to make it more efficient (may save sorting time) 2b. Same as above except only compute the rank and list order for people whose profiles have been viewed in the last day Cons - rank is only up to date for the 2nd person that views the profile
[ "If writes are very rare compared to reads (a key assumption in most key-value stores, and not just in those;-), then you might prefer to take a time hit when you need to update scores (a write) rather than to get the relative leaderboards (a read). Specifically, when a user's score change, queue up tasks for each of their friends to update their \"relative leaderboards\" and keep those leaderboards as list attributes (which do keep order!-) suitably sorted (yep, the latter's a denormalization -- it's often necessary to denormalize, i.e., duplicate information appropriately, to exploit key-value stores at their best!-).\nOf course you'll also update the relative leaderboards when a friendship (user to user connection) disappears or appears, but those should (I imagine) be even rarer than score updates;-).\nIf writes are pretty frequent, since you don't need perfectly precise up-to-the-second info (i.e., it's not financials/accounting stuff;-), you still have many viable approaches to try.\nE.g., big score changes (rarer) might trigger the relative-leaderboards recomputes, while smaller ones (more frequent) get stashed away and only applied once in a while \"when you get around to it\". It's hard to be more specific without ballpark numbers about frequency of updates of various magnitude, typical network-friendship cluster sizes, etc, etc. I know, like everybody else, you want a perfect approach that applies no matter how different the sizes and frequencies in question... but, you just won't find one!-)\n", "There is a python library available for storing rankings:\nhttp://code.google.com/p/google-app-engine-ranklist/\n" ]
[ 4, 1 ]
[]
[]
[ "google_app_engine", "leaderboard", "python" ]
stackoverflow_0001628562_google_app_engine_leaderboard_python.txt
Q: Parsing XML - right scripting languages / packages for the job? I know that any language is capable of parsing XML; I'm really just looking for advantages or drawbacks that you may have come across in your own experiences. Perl would be my standard go to here, but I'm open to suggestions. Thanks! UPDATE: I ended up going with XML::Simple which did a nice job, but I have one piece of advice if you plan to use it--research the forcearray option first. I had to rewrite a bunch of statements after learning that it is usually best practice to set forcearray. This page had the clearest explanation that I could find. Frankly, I'm surprised this isn't the default behavior. A: XML::Twig is very nice, especially because it’s not as awfully verbose as some of the other options. A: If you are using Perl then I would recommend XML::Simple: As more and more Web sites begin using XML for their content, it's increasingly important for Web developers to know how to parse XML data and convert it into different formats. That's where the Perl module called XML::Simple comes in. It takes away the drudgery of parsing XML data, making the process easier than you ever thought possible. A: For pure XML parsing, I wouldn't use Java, C#, C++, C, etc. They tend to overcomplicate things, as in you want a banana and get the gorilla with it as well. Higher-level and interpreted languages such as Perl, PHP, Python, Groovy are more suitable. Perl is included in virtually every Linux distro, as is PHP for the most part. I've used Groovy recently for especially this and found it very easy. Mind you though that a C parser will be orders of magnitude faster than Groovy for instance. A: It's all going to be in the libraries. Python has great libraries for XML. My preference is lxml. It uses libxml/libxslt so it's fast, but the Python binding make it really easy to use. Perl may very well have equally awesome OO libraries. A: I saw that people recommend XML::Simple if you decide on Perl. While XML::Simple is, indeed, very simple to use and great, is a DOM parser. As such, it is, sadly, completely unsuitable to processing large XML files as your process would run out of memory (it's a common problem for any DOM parser, not limited to XML::Simple or Perl). So, for large files, you must pick a SAX parser in whichever language you choose (there are many XML SAX parsers in Perl, or use another stream parser like XML::Twig that is even better than standard SAX parser. Can't speak for other languages). A: Not exactly a scripting language, but you could also consider Scala. You can start from here. A: Scala's XML support is rather good, especially as XML can just be typed directly into Scala programs. Microsoft also did some cool integrated stuff with their LINQ for XML But I really like Elementtree and just that package alone is a good reason to use Python instead of Perl ;) Here's an example: import elementtree.ElementTree as ET # build a tree structure root = ET.Element("html") head = ET.SubElement(root, "head") title = ET.SubElement(head, "title") title.text = "Page Title" body = ET.SubElement(root, "body") body.set("bgcolor", "#ffffff") body.text = "Hello, World!" # wrap it in an ElementTree instance, and save as XML tree = ET.ElementTree(root) tree.write("page.xhtml") A: It's not a scripting language, but Scala is great for working with XML natively. Also, see this book (draft) by Burak. A: Python has some pretty good support for XML. 
From the standard library DOM packages to much more 'pythonic' libraries that parse XML directly into more usable object structures. There isn't really a 'right' language though... there are good XML packages for most languages nowadays. A: If you're going to use Ruby to do it then you're going to want to take a look at Nokogiri or Hpricot. Both have their strengths and weaknesses. The language and package selection really comes down to what you want to do with the data after you've parsed it. A: Reading Data out of XML files is dead easy with C# and LINQ to XML! Somehow, although I really love python, I found it hard to parse XML with the standard libraries. A: I would say it depends like everything else. VB.NET 2008 uses XML literals, has IntelliSense for LINQ to XML, and a few power toys that help turn XML into XSD. So personally, if you are working in a .NET environment I think this is the best choice.
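As a parsing counterpart to the ElementTree writing example above, reading that same page.xhtml back with lxml (the binding one answer prefers) could look like this; only standard lxml.etree calls are used:

from lxml import etree

tree = etree.parse('page.xhtml')
print tree.findtext('head/title')        # Page Title
print tree.find('body').get('bgcolor')   # #ffffff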
Parsing XML - right scripting languages / packages for the job?
I know that any language is capable of parsing XML; I'm really just looking for advantages or drawbacks that you may have come across in your own experiences. Perl would be my standard go to here, but I'm open to suggestions. Thanks! UPDATE: I ended up going with XML::Simple which did a nice job, but I have one piece of advice if you plan to use it--research the forcearray option first. I had to rewrite a bunch of statements after learning that it is usually best practice to set forcearray. This page had the clearest explanation that I could find. Frankly, I'm surprised this isn't the default behavior.
[ "XML::Twig is very nice, especially because it’s not as awfully verbose as some of the other options.\n", "If you are using Perl then I would recommend XML::Simple:\n\nAs more and more Web sites begin using\n XML for their content, it's\n increasingly important for Web\n developers to know how to parse XML\n data and convert it into different\n formats. That's where the Perl module\n called XML::Simple comes in. It takes\n away the drudgery of parsing XML data,\n making the process easier than you\n ever thought possible.\n\n", "For pure XML parsing, I wouldn't use Java, C#, C++, C, etc. They tend to overcomplicate things, as in you want a banana and get the gorilla with it as well.\nHigher-level and interpreted languages such as Perl, PHP, Python, Groovy are more suitable. Perl is included in virtually every Linux distro, as is PHP for the most part.\nI've used Groovy recently for especially this and found it very easy. Mind you though that a C parser will be orders of magnitude faster than Groovy for instance.\n", "It's all going to be in the libraries.\nPython has great libraries for XML. My preference is lxml. It uses libxml/libxslt so it's fast, but the Python binding make it really easy to use. Perl may very well have equally awesome OO libraries.\n", "I saw that people recommend XML::Simple if you decide on Perl.\nWhile XML::Simple is, indeed, very simple to use and great, is a DOM parser. As such, it is, sadly, completely unsuitable to processing large XML files as your process would run out of memory (it's a common problem for any DOM parser, not limited to XML::Simple or Perl).\nSo, for large files, you must pick a SAX parser in whichever language you choose (there are many XML SAX parsers in Perl, or use another stream parser like XML::Twig that is even better than standard SAX parser. Can't speak for other languages).\n", "Not exactly a scripting language, but you could also consider Scala. You can start from here.\n", "Scala's XML support is rather good, especially as XML can just be typed directly into Scala programs.\nMicrosoft also did some cool integrated stuff with their LINQ for XML\nBut I really like Elementtree and just that package alone is a good reason to use Python instead of Perl ;) \nHere's an example:\nimport elementtree.ElementTree as ET\n\n# build a tree structure\nroot = ET.Element(\"html\")\n\nhead = ET.SubElement(root, \"head\")\n\ntitle = ET.SubElement(head, \"title\")\ntitle.text = \"Page Title\"\n\nbody = ET.SubElement(root, \"body\")\nbody.set(\"bgcolor\", \"#ffffff\")\n\nbody.text = \"Hello, World!\"\n\n# wrap it in an ElementTree instance, and save as XML\ntree = ET.ElementTree(root)\ntree.write(\"page.xhtml\")\n\n", "It's not a scripting language, but Scala is great for working with XML natively. Also, see this book (draft) by Burak.\n", "Python has some pretty good support for XML. From the standard library DOM packages to much more 'pythonic' libraries that parse XML directly into more usable object structures.\nThere isn't really a 'right' language though... there are good XML packages for most languages nowadays.\n", "If you're going to use Ruby to do it then you're going to want to take a look at Nokogiri or Hpricot. Both have their strengths and weaknesses. 
The language and package selection really comes down to what you want to do with the data after you've parsed it.\n", "Reading Data out of XML files is dead easy with C# and LINQ to XML!\nSomehow, although I really love python, I found it hard to parse XML with the standard libraries.\n", "I would say it depends like everything else. VB.NET 2008 uses XML literals, has IntelliSense for LINQ to XML, and a few power toys that help turn XML into XSD. So personally, if you are working in a .NET environment I think this is the best choice. \n" ]
[ 10, 10, 7, 6, 4, 3, 3, 2, 1, 1, 0, 0 ]
[]
[]
[ "perl", "python", "ruby", "xml" ]
stackoverflow_0001613042_perl_python_ruby_xml.txt
Q: Getting a Python library listed in easy_setup and pip? Every Python developer is familiar with easy_install and setuptools. If I want to install a library that's well known, all I have to do is this: sudo easy_install django Now I have a library that I've written and would love to see widespread. How do you get added to this library list? A: Upload it to PyPI. See the tutorial.
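To make the answer concrete with that era's tooling, a bare-bones distutils setup.py and the two commands that register and upload a release might look like this (all metadata values are placeholders):

# setup.py
from distutils.core import setup

setup(name='mylib',
      version='0.1',
      description='One-line summary of mylib',
      author='Your Name',
      author_email='you@example.com',
      url='http://example.com/mylib',
      py_modules=['mylib'])

Then, from the project directory, run python setup.py register followed by python setup.py sdist upload.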
Getting a Python library listed in easy_setup and pip?
Every Python developer is familiar with easy_install and setuptools. If I want to install a library that's well known, all I have to do is this: sudo easy_install django Now I have a library that I've written and would love to see widespread. How do you get added to this library list?
[ "Upload it to PyPI. See the tutorial.\n" ]
[ 9 ]
[]
[]
[ "python" ]
stackoverflow_0001633180_python.txt
Q: python re: no such group I'm a newbie in Python. I can't understand why this code does not work: reOptions = re.search( "[\s+@twitter\s+(?P<login>\w+):(?P<password>.*?)\s+]", document_text) if reOptions: login = reOptions.group('login') password = reOptions.group('password') I'm getting an error: IndexError: no such group With document_text Blah-blah [ @twitter va1en0k:somepass ] A: You need to escape the brackets [ and ] as \[ and \]. \[\s+@twitter\s+(?P<login>\w+):(?P<password>.*?)\s+\] A: The [ and ] are special regular expression characters. Escape them to match literal [ and ]. See Regular Expression Syntax.
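Putting both answers together as a runnable check, with a raw string so the backslashes survive (expected output in the comment):

import re

document_text = "Blah-blah [ @twitter va1en0k:somepass ]"
pattern = r"\[\s+@twitter\s+(?P<login>\w+):(?P<password>.*?)\s+\]"
m = re.search(pattern, document_text)
if m:
    print m.group('login'), m.group('password')   # va1en0k somepass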
python re: no such group
I'm a newbie in Python. I can't understand why this code does not work: reOptions = re.search( "[\s+@twitter\s+(?P<login>\w+):(?P<password>.*?)\s+]", document_text) if reOptions: login = reOptions.group('login') password = reOptions.group('password') I'm getting an error: IndexError: no such group With document_text Blah-blah [ @twitter va1en0k:somepass ]
[ "You need to escape the brackets [ and ] as \\[ and \\].\n\\[\\s+@twitter\\s+(?P<login>\\w+):(?P<password>.*?)\\s+\\]\n\n", "The [ and ] are special regular expression characters. Escape them to match literal [ and ].\nSee Regular Expression Syntax.\n" ]
[ 4, 3 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001633181_python_regex.txt
Q: Using properties defined at a per-instance level not per-class What I am trying to achieve is something like this: class object: def __init__(self): WidthVariable(self) print self.width #Imagine I did this 60frames/1second later print self.width #output: >>0 >>25 What I want happening (as above): When WidthVariable - a class - is created it adds the variable width to the object instance. This variable acts like a normal property, but in this particular case it is read-only (only the fget variable is set). Additionally, the fget calls a function defined in WidthVariable which decides what width to return. However, I have no idea how to do this! I tried it using normal properties but I found that they only work on classes and not per instance - please note that the code I use should be as similar as possible to the above (i.e. only code within the __init__ of WidthVariable should set the width variable, nowhere else). Also, self.width cannot be a function, because I do not want to call it like self.width(), I want self.width (because it keeps with the rest of the design I have). To clarify, the complete code would be something like this: class MyObject: def __init__(self) WidthVariable(self) print self.width class WidthVariable: def __init__(self, object) object.width = property(self.get_width) def get_width(self): value = #do_stuff return value #The Value #output: >>25 #Whatever the Value was A: Since, as @Jonathan says, descriptors (including properties) are per-class, not per-instance, the only way to get different per-instance descriptors is to have each instance individualize its own class. That's pretty shallow and easy as far as metaprogramming goes;-)...: class Individualistic(object_or_whatever_bases): def __init__(self, whatever_args): self.__class__ = type('GottaBeMe', (self.__class__, object), {}) # keep rocking...!-) I'm also adding object explicitly because it's needed (in Python 2.*, and you do say that's what you're using!!!) to make classes new-type. Never use legacy classes any more, they don't behave correctly with respect to properties and much else besides (and for backwards compatibility they can't -- in Python 3, legacy classes have finally been annihilated so EVERY class is new-style without the requirement to explicitly inherit from object any more!). Now, any descriptor that's placed in self.__class__.__dict__ will only affect this one instance, none other. There's a bit of overhead (each GottaBeMe class and therefore each instance has its own __dict__, etc), but nothing too terrible. Now, all that's needed to satisfy the original request is to change: class WidthVariable: def __init__(self, object) object.width = property(self.get_width) (also renaming the object arg sensibly to avoid trampling on a built-in, and making the class new-style because you should ALWAYS make EVERY class new-style;-), to: class WidthVariable(object): def __init__(self, obj) obj.__class__.width = property(self.get_width) Nothing too black-magicky, as you can see!-) A: I believe I now understand your question, and I also believe you're out of luck. For new-style classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary. Descriptors (which are used to implement properties) must appear in the class's __dict__, and cannot appear in the instance's __dict__. In other words, Python is not Ruby! I happily await correction from a godly Python metaprogrammer, but I think I'm right. 
A: I don't understand the way you've constructed your example, nor do I understand what you mean about "normal properties" only working "on classes". Here's how you create a read-only property: class Foo(object): # create read-only property "rop" rop = property(lambda self: self._x) def __init__(self): self._x = 0 def tick(self): self._x += 1 f = Foo() print f.rop # prints 0 f.tick() f.tick() print f.rop # prints 2 f.rop = 4 # this will raise AttributeError A: Not very clear what you want, but I assume you want obj.width to be customized for each instance Here is an easy way using plain properties; the width property returns the value returned by a per-instance callback class MyClass(object): def __init__(self, callback): self.callback = callback def get_width(self): return self.callback() width = property(get_width) def w1(): return 0 def w2(): return 25 o1 = MyClass(w1) o2 = MyClass(w2) print o1.width print o2.width If a callback cannot be passed we can assign it to WidthVariable which returns width based on the instance class MyClass(object): def __init__(self): self.callback = WidthVariable(self) def get_width(self): return self.callback() width = property(get_width) class WidthVariable(object): def __init__(self, obj): self.obj = obj def __call__(self): return hash(self.obj) o1 = MyClass() o2 = MyClass() print o1.width print o2.width
Using properties defined at a per-instance level not per-class
What I am trying to achieve is something like this: class object: def __init__(self): WidthVariable(self) print self.width #Imagine I did this 60frames/1second later print self.width #output: >>0 >>25 What I want happening (as above): When WidthVariable - a class - is created it adds the variable width to the object instance. This variable acts like a normal property, but in this particular case it is read-only (only the fget variable is set). Additionally, the fget calls a function defined in WidthVariable which decides what width to return. However, I have no idea how to do this! I tried it using normal properties but I found that they only work on classes and not per instance - please note that the code I use should be as similar as possible to the above (i.e. only code within the __init__ of WidthVariable should set the width variable, nowhere else). Also, self.width cannot be a function, because I do not want to call it like self.width(), I want self.width (because it keeps with the rest of the design I have). To clarify, the complete code would be something like this: class MyObject: def __init__(self) WidthVariable(self) print self.width class WidthVariable: def __init__(self, object) object.width = property(self.get_width) def get_width(self): value = #do_stuff return value #The Value #output: >>25 #Whatever the Value was
[ "Since, as @Jonathan says, descriptors (including properties) are per-class, not per-instance, the only way to get different per-instance descriptors is to have each instance individualize its own class. That's pretty shallow and easy as far as metaprogramming goes;-)...:\nclass Individualistic(object_or_whatever_bases):\n def __init__(self, whatever_args):\n self.__class__ = type('GottaBeMe', (self.__class__, object), {})\n # keep rocking...!-)\n\nI'm also adding object explicitly because it's needed (in Python 2.*, and you do say that's what you're using!!!) to make classes new-type. Never use legacy classes any more, they don't behave correctly with respect to properties and much else besides (and for backwards compatibility they can't -- in Python 3, legacy classes have finally been annihilated so EVERY class is new-style without the requirement to explicitly inherit from object any more!).\nNow, any descriptor that's placed in self.__class__.__dict__ will only affect this one instance, none other. There's a bit of overhead (each GottaBeMe class and therefore each instance has its own __dict__, etc), but nothing too terrible.\nNow, all that's needed to satisfy the original request is to change:\nclass WidthVariable:\n def __init__(self, object)\n object.width = property(self.get_width)\n\n(also renaming the object arg sensibly to avoid trampling on a built-in, and making the class new-style because you should ALWAYS make EVERY class new-style;-), to:\nclass WidthVariable(object):\n def __init__(self, obj)\n obj.__class__.width = property(self.get_width)\n\nNothing too black-magicky, as you can see!-)\n", "I believe I now understand your question, and I also believe you're out of luck.\n\nFor new-style classes, implicit\n invocations of special methods are\n only guaranteed to work correctly if\n defined on an object’s type, not in\n the object’s instance dictionary.\n\nDescriptors (which are used to implement properties) must appear in the class's __dict__, and cannot appear in the instance's __dict__. In other words, Python is not Ruby!\nI happily await correction from a godly Python metaprogrammer, but I think I'm right.\n", "I don't understand the way you've constructed your example, nor do I understand what you mean about \"normal properties\" only working \"on classes\". Here's how you create a read-only property:\nclass Foo(object):\n # create read-only property \"rop\"\n rop = property(lambda self: self._x)\n\n def __init__(self):\n self._x = 0\n\n def tick(self):\n self._x += 1 \n\nf = Foo()\nprint f.rop # prints 0\nf.tick()\nf.tick()\nprint f.rop # prints 2\nf.rop = 4 # this will raise AtributeError\n\n", "Not very clear what you want? 
but I assume you want obj.width to be customized for each instance\nHere is an easy way using plain properties; the width property returns the value returned by a per-instance callback\nclass MyClass(object):\n def __init__(self, callback):\n self.callback = callback\n\n def get_width(self):\n return self.callback()\n\n width = property(get_width)\n\ndef w1(): return 0\ndef w2(): return 25\n\no1 = MyClass(w1)\no2 = MyClass(w2)\n\nprint o1.width\nprint o2.width\n\nIf a callback cannot be passed we can assign it to WidthVariable which returns width based on the instance\nclass MyClass(object):\n def __init__(self):\n self.callback = WidthVariable(self)\n\n def get_width(self):\n return self.callback()\n\n width = property(get_width)\n\nclass WidthVariable(object):\n def __init__(self, obj):\n self.obj = obj\n\n def __call__(self):\n return hash(self.obj)\n\no1 = MyClass()\no2 = MyClass()\n\nprint o1.width\nprint o2.width\n\n" ]
[ 6, 1, 0, 0 ]
[]
[]
[ "properties", "python", "python_2.6" ]
stackoverflow_0001632170_properties_python_python_2.6.txt
Q: Python File Slurp w/ endian conversion It was recently asked how to do a file slurp in python, and the accepted answer suggested something like: with open('x.txt') as x: f = x.read() How would I go about doing this to read the file in and convert the endian representation of the data? For example, I have a 1GB binary file that's just a bunch of single precision floats packed as big endian and I want to convert it to little endian and dump into a numpy array. Below is the function I wrote to accomplish this and some real code that calls it. I use struct.unpack to do the endian conversion and tried to speed everything up by using mmap. My question then is, am I using the slurp correctly with mmap and struct.unpack? Is there a cleaner, faster way to do this? Right now what I have works, but I'd really like to learn how to do this better. Thanks in advance! #!/usr/bin/python from struct import unpack import mmap import numpy as np def mmapChannel(arrayName, fileName, channelNo, line_count, sample_count): """ We need to read in the asf internal file and convert it into a numpy array. It is stored as a single row, and is binary. The number of lines (rows), samples (columns), and channels all come from the .meta text file Also, internal format files are packed big endian, but most systems use little endian, so we need to make that conversion as well. Memory mapping seemed to improve the ingestion speed a bit """ # memory-map the file, size 0 means whole file # length = line_count * sample_count * arrayName.itemsize print "\tMemory Mapping..." with open(fileName, "rb") as f: map = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) map.seek(channelNo*line_count*sample_count*arrayName.itemsize) for i in xrange(line_count*sample_count): arrayName[0, i] = unpack('>f', map.read(arrayName.itemsize) )[0] # Same method as above, just more verbose for the maintenance programmer. # for i in xrange(line_count*sample_count): #row # be_float = map.read(arrayName.itemsize) # arrayName.itemsize should be 4 for float32 # le_float = unpack('>f', be_float)[0] # > for big endian, < for little endian # arrayName[0, i]= le_float map.close() return arrayName print "Initializing the Amp HH HV, and Phase HH HV arrays..." HHamp = np.ones((1, line_count*sample_count), dtype='float32') HHphase = np.ones((1, line_count*sample_count), dtype='float32') HVamp = np.ones((1, line_count*sample_count), dtype='float32') HVphase = np.ones((1, line_count*sample_count), dtype='float32') print "Ingesting HH_Amp..." HHamp = mmapChannel(HHamp, 'ALPSRP042301700-P1.1__A.img', 0, line_count, sample_count) print "Ingesting HH_phase..." HHphase = mmapChannel(HHphase, 'ALPSRP042301700-P1.1__A.img', 1, line_count, sample_count) print "Ingesting HV_AMP..." HVamp = mmapChannel(HVamp, 'ALPSRP042301700-P1.1__A.img', 2, line_count, sample_count) print "Ingesting HV_phase..." HVphase = mmapChannel(HVphase, 'ALPSRP042301700-P1.1__A.img', 3, line_count, sample_count) print "Reshaping...." HHamp_orig = HHamp.reshape(line_count, -1) HHphase_orig = HHphase.reshape(line_count, -1) HVamp_orig = HVamp.reshape(line_count, -1) HVphase_orig = HVphase.reshape(line_count, -1) A: Slightly modified @Alex Martelli's answer: arr = numpy.fromfile(filename, numpy.dtype('>f4')) # no byteswap is needed regardless of endianness of the machine A:
For byteswap see here (the True argument means, "do it in place"); for fromfile see here. This works as is on little-endian machines (since the data are big-endian, the byteswap is needed). You can test whether that is the case and do the byteswap conditionally: change the last line from an unconditional call to byteswap into, for example: if struct.pack('=f', 2.3) == struct.pack('<f', 2.3): arrayName.byteswap(True) i.e., a call to byteswap conditional on a test of little-endianness. A: You could cobble together an ASM-based solution using CorePy. I wonder, though, if you might be able to gain enough performance from some other part of your algorithm. I/O and manipulations on 1GB chunks of data are going to take a while whichever way you slice it. One other thing you might find helpful would be to switch to C once you have prototyped the algorithm in Python. I did this for manipulations on a whole-world DEM (height) data set one time. The whole thing was much more tolerable once I got away from the interpreted script. A: I'd expect something like this to be faster: arrayName[0] = unpack('>'+'f'*line_count*sample_count, map.read(arrayName.itemsize*line_count*sample_count)) Please don't use map as a variable name.
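Pulling the numpy answers above into one runnable shape: a minimal sketch that reads a single big-endian float32 channel and reshapes it in one step. The names and file layout are the question's own; the seek arithmetic assumes the channels are stored back to back, as in the original code.

    import numpy as np

    def read_channel(file_name, channel_no, line_count, sample_count):
        count = line_count * sample_count
        with open(file_name, 'rb') as f:
            f.seek(channel_no * count * 4)                    # 4 bytes per float32
            data = np.fromfile(f, dtype='>f4', count=count)   # '>f4' = big-endian float32
        # astype() yields a native-endian float32 array, so no explicit byteswap
        return data.astype(np.float32).reshape(line_count, sample_count)

On a big-endian host the astype() call is simply a copy, so the same code runs everywhere.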
Python File Slurp w/ endian conversion
It was recently asked how to do a file slurp in python, and the accepted answer suggested something like: with open('x.txt') as x: f = x.read() How would I go about doing this to read the file in and convert the endian representation of the data? For example, I have a 1GB binary file that's just a bunch of single precision floats packed as a big endian and I want to convert it to little endian and dump into a numpy array. Below is the function I wrote to accomplish this and some real code that calls it. I use struct.unpack do the endian conversion and tried to speed everything up by using mmap. My question then is, am I using the slurp correctly with mmap and struct.unpack? Is there a cleaner, faster way to do this? Right now what I have works, but I'd really like to learn how to do this better. Thanks in advance! #!/usr/bin/python from struct import unpack import mmap import numpy as np def mmapChannel(arrayName, fileName, channelNo, line_count, sample_count): """ We need to read in the asf internal file and convert it into a numpy array. It is stored as a single row, and is binary. Thenumber of lines (rows), samples (columns), and channels all come from the .meta text file Also, internal format files are packed big endian, but most systems use little endian, so we need to make that conversion as well. Memory mapping seemed to improve the ingestion speed a bit """ # memory-map the file, size 0 means whole file # length = line_count * sample_count * arrayName.itemsize print "\tMemory Mapping..." with open(fileName, "rb") as f: map = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) map.seek(channelNo*line_count*sample_count*arrayName.itemsize) for i in xrange(line_count*sample_count): arrayName[0, i] = unpack('>f', map.read(arrayName.itemsize) )[0] # Same method as above, just more verbose for the maintenance programmer. # for i in xrange(line_count*sample_count): #row # be_float = map.read(arrayName.itemsize) # arrayName.itemsize should be 4 for float32 # le_float = unpack('>f', be_float)[0] # > for big endian, < for little endian # arrayName[0, i]= le_float map.close() return arrayName print "Initializing the Amp HH HV, and Phase HH HV arrays..." HHamp = np.ones((1, line_count*sample_count), dtype='float32') HHphase = np.ones((1, line_count*sample_count), dtype='float32') HVamp = np.ones((1, line_count*sample_count), dtype='float32') HVphase = np.ones((1, line_count*sample_count), dtype='float32') print "Ingesting HH_Amp..." HHamp = mmapChannel(HHamp, 'ALPSRP042301700-P1.1__A.img', 0, line_count, sample_count) print "Ingesting HH_phase..." HHphase = mmapChannel(HHphase, 'ALPSRP042301700-P1.1__A.img', 1, line_count, sample_count) print "Ingesting HV_AMP..." HVamp = mmapChannel(HVamp, 'ALPSRP042301700-P1.1__A.img', 2, line_count, sample_count) print "Ingesting HV_phase..." HVphase = mmapChannel(HVphase, 'ALPSRP042301700-P1.1__A.img', 3, line_count, sample_count) print "Reshaping...." HHamp_orig = HHamp.reshape(line_count, -1) HHphase_orig = HHphase.reshape(line_count, -1) HVamp_orig = HVamp.reshape(line_count, -1) HVphase_orig = HVphase.reshape(line_count, -1)
[ "Slightly modified @Alex Martelli's answer:\narr = numpy.fromfile(filename, numpy.dtype('>f4'))\n# no byteswap is needed regardless of endianess of the machine\n\n", "with open(fileName, \"rb\") as f:\n arrayName = numpy.fromfile(f, numpy.float32)\narrayName.byteswap(True)\n\nPretty hard to beat for speed AND conciseness;-). For byteswap see here (the True argument means, \"do it in place\"); for fromfile see here.\nThis works as is on little-endian machines (since the data are big-endian, the byteswap is needed). You can test if that is the case to do the byteswap conditionally, change the last line from an unconditional call to byteswap into, for example:\nif struct.pack('=f', 2.3) == struct.pack('<f', 2.3):\n arrayName.byteswap(True)\n\ni.e., a call to byteswap conditional on a test of little-endianness.\n", "You could coble together an ASM based solution using CorePy. I wonder though, if you might be able to gain enough performance from the some other part of your algorithm. I/O and manipulations on 1GB chunks of data are going to take a while which ever way you slice it.\nOne other thing you might find helpful would be to switch to C once you have prototyped the algorithm in python. I did this for manipulations on a whole-world DEM (height) data set one time. The whole thing was much more tolerable once I got away from the interpreted script. \n", "I'd expect something like this to be faster\narrayName[0] = unpack('>'+'f'*line_count*sample_count, map.read(arrayName.itemsize*line_count*sample_count))\n\nPlease don't use map as a variable name\n" ]
[ 7, 6, 0, 0 ]
[]
[]
[ "endianness", "mmap", "numpy", "python", "struct" ]
stackoverflow_0001632673_endianness_mmap_numpy_python_struct.txt
Q: Is a Python closure a good replacement for `__all__`? Is it a good idea to use a closure instead of __all__ to limit the names exposed by a Python module? This would prevent programmers from accidentally using the wrong name for a module (import urllib; urllib.os.getlogin()) as well as avoiding "from x import *" namespace pollution as __all__. def _init_module(): global foo import bar def foo(): return bar.baz.operation() class Quux(bar.baz.Splort): pass _init_module(); del _init_module vs. the same module using __all__: __all__ = ['foo'] import bar def foo(): return bar.baz.operation() class Quux(bar.baz.Splort): pass Functions could just adopt this style to avoid polluting the module namespace: def foo(): import bar bar.baz.operation() This might be helpful for a large package that wants to help users distinguish its API from the package's use of its and other modules' API during interactive introspection. On the other hand, maybe IPython should simply distinguish names in __all__ during tab completion, and more users should use an IDE that allows them to jump between files to see the definition of each name. A: I am a fan of writing code that is absolutely as brain-dead simple as it can be. __all__ is a feature of Python, added explicitly to solve the problem of limiting what names are made visible by a module. When you use it, people immediately understand what you are doing with it. Your closure trick is very nonstandard, and if I encountered it, I would not immediately understand it. You would need to put in a long comment to explain it, and then you would need to put in another long comment to explain why you did it that way instead of using __all__. EDIT: Now that I understand the problem a little better, here is an alternate answer. In Python it is considered good practice to prefix private names with an underscore in a module. If you do from the_module_name import * you will get all the names that do not start with an underscore. So, rather than the closure trick, I would prefer to see correct use of the initial-underscore idiom. Note that if you use the initial underscore names, you don't even need to use __all__. A: The problem with from x import * is that it can hide NameErrors which makes trivial bugs hard to track down. "namespace pollution" means adding stuff to the namespace that you have no idea where it came from. Which is kind of what your closure does too. Plus it might confuse IDEs, outlines, pylint and the like. Using the "wrong" name for a module is not a real problem either. Module objects are the same from wherever you import them. If the "wrong" name disappears (after a update) it should be clear why and motivate the programmer to do it properly next time. But it doesn't cause bugs. A: Okay, I'm beginning to understand this issue a bit more. The closure really does allow for hiding private stuff. Here's a simple example. Without the closure: # module named "foo.py" def _bar(): return 5 def foo(): return _bar() - 2 With the closure: # module named "fooclosure.py" def _init_module(): global foo def _bar(): return 5 def foo(): return _bar() - 2 _init_module(); del _init_module Sample of usage: >>> import foo >>> dir(foo) ['__builtins__', '__doc__', '__file__', '__name__', '__package__', '_bar', 'foo'] >>> >>> import fooclosure >>> dir(fooclosure) ['__builtins__', '__doc__', '__file__', '__name__', '__package__', 'foo'] >>> This is actually disturbingly subtle. 
In the first case, function foo() simply has a reference to the name _bar(), and if you were to remove _bar() from the namespace, foo() would stop working. foo() looks up _bar() each and every time it runs. In contrast, the closure version of foo() works without _bar() existing in the namespace. I'm not even certain how it works... is it holding a reference to the function object created for _bar(), or is it holding a reference to a namespace that still exists, such that it can look up the name _bar() and find it?
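To settle the closing question: the closure version holds a reference to the _bar function object itself, stored in a cell attached to foo, not a reference to any namespace. A short sketch (func_closure is the Python 2 spelling; Python 3 calls it __closure__):

    import fooclosure

    foo = fooclosure.foo
    # The cell keeps _bar alive even though no namespace names it anymore:
    print foo.func_closure[0].cell_contents   # <function _bar at 0x...>
    print foo()                               # 3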
Is a Python closure a good replacement for `__all__`?
Is it a good idea to use a closure instead of __all__ to limit the names exposed by a Python module? This would prevent programmers from accidentally using the wrong name for a module (import urllib; urllib.os.getlogin()) as well as avoiding "from x import *" namespace pollution as __all__. def _init_module(): global foo import bar def foo(): return bar.baz.operation() class Quux(bar.baz.Splort): pass _init_module(); del _init_module vs. the same module using __all__: __all__ = ['foo'] import bar def foo(): return bar.baz.operation() class Quux(bar.baz.Splort): pass Functions could just adopt this style to avoid polluting the module namespace: def foo(): import bar bar.baz.operation() This might be helpful for a large package that wants to help users distinguish its API from the package's use of its and other modules' API during interactive introspection. On the other hand, maybe IPython should simply distinguish names in __all__ during tab completion, and more users should use an IDE that allows them to jump between files to see the definition of each name.
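For comparison, the leading-underscore convention recommended in the answers needs no machinery at all: from module import * already skips underscore names when no __all__ is defined. A sketch using the question's own names (_helper is a hypothetical private function):

    # module foo.py
    import bar as _bar        # the underscore hides the import itself

    def _helper():
        return _bar.baz.operation()

    def foo():
        return _helper()

    # client code: `from foo import *` binds only foo, not _helper or _bar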
[ "I am a fan of writing code that is absolutely as brain-dead simple as it can be.\n__all__ is a feature of Python, added explicitly to solve the problem of limiting what names are made visible by a module. When you use it, people immediately understand what you are doing with it.\nYour closure trick is very nonstandard, and if I encountered it, I would not immediately understand it. You would need to put in a long comment to explain it, and then you would need to put in another long comment to explain why you did it that way instead of using __all__.\nEDIT: Now that I understand the problem a little better, here is an alternate answer.\nIn Python it is considered good practice to prefix private names with an underscore in a module. If you do from the_module_name import * you will get all the names that do not start with an underscore. So, rather than the closure trick, I would prefer to see correct use of the initial-underscore idiom.\nNote that if you use the initial underscore names, you don't even need to use __all__.\n", "The problem with from x import * is that it can hide NameErrors which makes trivial bugs hard to track down. \"namespace pollution\" means adding stuff to the namespace that you have no idea where it came from.\nWhich is kind of what your closure does too. Plus it might confuse IDEs, outlines, pylint and the like.\nUsing the \"wrong\" name for a module is not a real problem either. Module objects are the same from wherever you import them. If the \"wrong\" name disappears (after a update) it should be clear why and motivate the programmer to do it properly next time. But it doesn't cause bugs.\n", "Okay, I'm beginning to understand this issue a bit more. The closure really does allow for hiding private stuff. Here's a simple example.\nWithout the closure:\n# module named \"foo.py\"\ndef _bar():\n return 5\n\ndef foo():\n return _bar() - 2 \n\nWith the closure:\n# module named \"fooclosure.py\"\ndef _init_module():\n global foo\n def _bar():\n return 5\n\n def foo():\n return _bar() - 2\n\n_init_module(); del _init_module\n\nSample of usage:\n>>> import foo\n>>> dir(foo)\n['__builtins__', '__doc__', '__file__', '__name__', '__package__', '_bar', 'foo']\n>>>\n>>> import fooclosure\n>>> dir(fooclosure)\n['__builtins__', '__doc__', '__file__', '__name__', '__package__', 'foo']\n>>>\n\nThis is actually disturbingly subtle. In the first case, function foo() simply has a reference to the name _bar(), and if you were to remove _bar() from the name space, foo() would stop working. foo() looks up _bar() each and every time it runs.\nIn contrast, the closure version of foo() works without _bar() existing in the name space. I'm not even certain how it works... is it holding a reference to the function object created for _bar(), or is it holding a reference to a name space that still exists, such that it can look up the name _bar() and find it?\n" ]
[ 7, 4, 1 ]
[]
[]
[ "closures", "python" ]
stackoverflow_0001632739_closures_python.txt
Q: Amazon Web Service ItemSearch DetailPageURL's with Associate IDs? DetailPageURL's returned by ItemSearch seem to include an incorrect ID/tag rather than the associate ID I requested the search with. I'm getting: http://www.amazon.co.uk/gp/product/1590595009?SubscriptionId=XXX&tag=foo-12&linkCode=as2&camp=1634&creative=19450&creativeASIN=1590595009 When I expect: http://www.amazon.co.uk/gp/product/1590595009?SubscriptionId=XXX&tag=wwwmydomain-12&linkCode=as2&camp=1634&creative=19450&creativeASIN=1590595009 How do I get the correct tag? (Note that SO rewrites the above links to their own Associate ID if you click either of the above) I'm using Python and PyAWS 0.3.0, although I think the problem is with my request, rather than with the API wrapper. (As an aside, The Amazon Associates Link Checker (U.K. store)/U.S. store is invaluable in testing these links) A: Simple error in the end..... I was including the tag in the initial search: for searchResult in ecs.ItemSearch(item, SearchIndex=index, AssociateTag='wwwmydomain-12') But not in the secondary loop that steps through each result getting more details: for item in ecs.ItemSearch(searchResult.ASIN, ResponseGroup='Medium'): should be: for item in ecs.ItemSearch(searchResult.ASIN, ResponseGroup='Medium', AssociateTag='wwwodbodycom-21'): The tag is needed in both - it seems it's not carried over.
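One way to make the fix above harder to forget is to bind the tag once, for example with functools.partial over the same PyAWS call used in the question. A sketch; item, index, the tag value, and the DetailPageURL attribute on the result are all taken or assumed from the question, not verified against PyAWS itself:

    from functools import partial

    search = partial(ecs.ItemSearch, AssociateTag='wwwmydomain-12')

    for searchResult in search(item, SearchIndex=index):
        for detail in search(searchResult.ASIN, ResponseGroup='Medium'):
            print detail.DetailPageURL   # should now carry the right tag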
Amazon Web Service ItemSearch DetailPageURL's with Associate IDs?
DetailPageURL's returned by ItemSearch seem to include an incorrect ID/tag rather than the associate ID I requested the search with. I'm getting: http://www.amazon.co.uk/gp/product/1590595009?SubscriptionId=XXX&tag=foo-12&linkCode=as2&camp=1634&creative=19450&creativeASIN=1590595009 When I expect: http://www.amazon.co.uk/gp/product/1590595009?SubscriptionId=XXX&tag=wwwmydomain-12&linkCode=as2&camp=1634&creative=19450&creativeASIN=1590595009 How do I get the correct tag? (Note that SO rewrites the above links to their own Associate ID if you click either of the above) I'm using Python and PyAWS 0.3.0, although I think the problem is with my request, rather than with the API wrapper. (As an aside, The Amazon Associates Link Checker (U.K. store)/U.S. store is invaluable in testing these links)
[ "Simple error in the end..... I was including the tag in the initial search:\n\nfor searchResult in\n ecs.ItemSearch(item,\n SearchIndex=index,\n AssociateTag='wwwmydomain-12')\n\nBut not in the secondary loop that steps through each result getting more details:\n\nfor item in\n ecs.ItemSearch(searchResult.ASIN,\n ResponseGroup='Medium'):\n\nshould be:\n\nfor item in\n ecs.ItemSearch(searchResult.ASIN,\n ResponseGroup='Medium',\n AssociateTag='wwwodbodycom-21'):\n\nThe tag is needed in both - it seems it's not carried over.\n" ]
[ 2 ]
[]
[]
[ "amazon", "amazon_web_services", "python" ]
stackoverflow_0001633357_amazon_amazon_web_services_python.txt
Q: lambda versus list comprehension performance I recently posted a question using a lambda function and in a reply someone had mentioned lambda is going out of favor, to use list comprehensions instead. I am relatively new to Python. I ran a simple test: import time S=[x for x in range(1000000)] T=[y**2 for y in range(300)] # # time1 = time.time() N=[x for x in S for y in T if x==y] time2 = time.time() print 'time diff [x for x in S for y in T if x==y]=', time2-time1 #print N # # time1 = time.time() N=filter(lambda x:x in S,T) time2 = time.time() print 'time diff filter(lambda x:x in S,T)=', time2-time1 #print N # # #http://snipt.net/voyeg3r/python-intersect-lists/ time1 = time.time() N = [val for val in S if val in T] time2 = time.time() print 'time diff [val for val in S if val in T]=', time2-time1 #print N # # time1 = time.time() N= list(set(S) & set(T)) time2 = time.time() print 'time diff list(set(S) & set(T))=', time2-time1 #print N #the results will be unordered as compared to the other ways!!! # # time1 = time.time() N=[] for x in S: for y in T: if x==y: N.append(x) time2 = time.time() print 'time diff using traditional for loop', time2-time1 #print N They all print the same N so I commented that print stmt out (except the last way it's unordered), but the resulting time differences were interesting over repeated tests as seen in this one example: time diff [x for x in S for y in T if x==y]= 54.875 time diff filter(lambda x:x in S,T)= 0.391000032425 time diff [val for val in S if val in T]= 12.6089999676 time diff list(set(S) & set(T))= 0.125 time diff using traditional for loop 54.7970001698 So while I find list comprehensions on the whole easier to read, there seems to be some performance issues at least in this example. So, two questions: Why is lambda etc being pushed aside? For the list comprehension ways, is there a more efficient implementation and how would you KNOW it's more efficient without testing? I mean, lambda/map/filter was supposed to be less efficient because of the extra function calls, but it seems to be MORE efficient. Paul A: Your tests are doing very different things. With S being 1M elements and T being 300: [x for x in S for y in T if x==y]= 54.875 This option does 300M equality comparisons.   filter(lambda x:x in S,T)= 0.391000032425 This option does 300 linear searches through S.   [val for val in S if val in T]= 12.6089999676 This option does 1M linear searches through T.   list(set(S) & set(T))= 0.125 This option does two set constructions and one set intersection. The differences in performance between these options is much more related to the algorithms each one is using, rather than any difference between list comprehensions and lambda. A: When I fix your code so that the list comprehension and the call to filter are actually doing the same work things change a whole lot: import time S=[x for x in range(1000000)] T=[y**2 for y in range(300)] # # time1 = time.time() N=[x for x in T if x in S] time2 = time.time() print 'time diff [x for x in T if x in S]=', time2-time1 #print N # # time1 = time.time() N=filter(lambda x:x in S,T) time2 = time.time() print 'time diff filter(lambda x:x in S,T)=', time2-time1 #print N Then the output is more like: time diff [x for x in T if x in S]= 0.414485931396 time diff filter(lambda x:x in S,T)= 0.466315984726 So the list comprehension has a time that's generally pretty close to and usually less than the lambda expression. 
The reason lambda expressions are being phased out is that many people think they are a lot less readable than list comprehensions. I sort of reluctantly agree. A: Q: Why is lambda etc being pushed aside? A: List comprehensions and generator expressions are generally considered to be a nice mix of power and readability. The pure functional-programming style where you use map(), reduce(), and filter() with functions (often lambda functions) is considered not as clear. Also, Python has added built-in functions that nicely handle all the major uses for reduce(). Suppose you wanted to sum a list. Here are two ways of doing it. lst = range(10) print reduce(lambda x, y: x + y, lst) print sum(lst) Sign me up as a fan of sum() and not a fan of reduce() to solve this problem. Here's another, similar problem: lst = range(10) print reduce(lambda x, y: bool(x or y), lst) print any(lst) Not only is the any() solution easier to understand, but it's also much faster; it has short-circuit evaluation, such that it will stop evaluating as soon as it has found any true value. The reduce() has to crank through the entire list. This performance difference would be stark if the list was a million items long, and the first item evaluated true. By the way, any() was added in Python 2.5; if you don't have it, here is a version for older versions of Python: def any(iterable): for x in iterable: if x: return True return False Suppose you wanted to make a list of squares of even numbers from some list. lst = range(10) print map(lambda x: x**2, filter(lambda x: x % 2 == 0, lst)) print [x**2 for x in lst if x % 2 == 0] Now suppose you wanted to sum that list of squares. lst = range(10) print sum(map(lambda x: x**2, filter(lambda x: x % 2 == 0, lst))) # list comprehension version of the above print sum([x**2 for x in lst if x % 2 == 0]) # generator expression version; note the lack of '[' and ']' print sum(x**2 for x in lst if x % 2 == 0) The generator expression actually just returns an iterable object. sum() takes the iterable and pulls values from it, one by one, summing as it goes, until all the values are consumed. This is the most efficient way you can solve this problem in Python. In contrast, the map() solution, and the equivalent solution with a list comprehension inside the call to sum(), must first build a list; this list is then passed to sum(), used once, and discarded. The time to build the list and then delete it again is just wasted. (EDIT: and note that the version with both map and filter must build two lists, one built by filter and one built by map; both lists are discarded.) (EDIT: But in Python 3.0 and newer, map() and filter() are now both "lazy" and produce an iterator instead of a list; so this point is less true than it used to be. Also, in Python 2.x you were able to use itertools.imap() and itertools.ifilter() for iterator-based map and filter. But I continue to prefer the generator expression solutions over any map/filter solutions.) By composing map(), filter(), and reduce() in combination with lambda functions, you can do many powerful things. But Python has idiomatic ways to solve the same problems which are simultaneously better performing and easier to read and understand. A: Many people have already pointed out that you're comparing apples with oranges, etc, etc. 
But I think nobody's shown how to do a really simple comparison -- list comprehension vs map plus lambda with little else to get in the way -- and that might be: $ python -mtimeit -s'L=range(1000)' 'map(lambda x: x+1, L)' 1000 loops, best of 3: 328 usec per loop $ python -mtimeit -s'L=range(1000)' '[x+1 for x in L]' 10000 loops, best of 3: 129 usec per loop Here, you can see very sharply the cost of lambda -- about 200 microseconds, which in the case of a sufficiently simple operation such as this one swamps the operation itself. Numbers are very similar with filter of course, since the problem is not filter or map, but rather the lambda itself: $ python -mtimeit -s'L=range(1000)' '[x for x in L if not x%7]' 10000 loops, best of 3: 162 usec per loop $ python -mtimeit -s'L=range(1000)' 'filter(lambda x: not x%7, L)' 1000 loops, best of 3: 334 usec per loop No doubt the fact that lambda can be less clear, or its weird connection with Sparta (Spartans had a Lambda, for "Lakedaimon", painted on their shields -- this suggests that lambda is pretty dictatorial and bloody;-) have at least as much to do with its slowly falling out of fashion, as its performance costs. But the latter are quite real. A: First of all, test like this: import timeit S=[x for x in range(10000)] T=[y**2 for y in range(30)] print "v1", timeit.Timer('[x for x in S for y in T if x==y]', 'from __main__ import S,T').timeit(100) print "v2", timeit.Timer('filter(lambda x:x in S,T)', 'from __main__ import S,T').timeit(100) print "v3", timeit.Timer('[val for val in T if val in S]', 'from __main__ import S,T').timeit(100) print "v4", timeit.Timer('list(set(S) & set(T))', 'from __main__ import S,T').timeit(100) And basically you are doing different things each time you test. If you rewrite the list comprehension, for example, as [val for val in T if val in S] performance is on par with the 'lambda/filter' construct. A: Sets are the correct solution for this. However try swapping S and T and see how long it takes! filter(lambda x:x in T,S) $ python -m timeit -s'S=[x for x in range(1000000)];T=[y**2 for y in range(300)]' 'filter(lambda x:x in S,T)' 10 loops, best of 3: 485 msec per loop $ python -m timeit -r1 -n1 -s'S=[x for x in range(1000000)];T=[y**2 for y in range(300)]' 'filter(lambda x:x in T,S)' 1 loops, best of 1: 19.6 sec per loop So you see that the order of S and T is quite important. Changing the order of the list comprehension to match the filter gives $ python -m timeit -s'S=[x for x in range(1000000)];T=[y**2 for y in range(300)]' '[x for x in T if x in S]' 10 loops, best of 3: 441 msec per loop So in fact the list comprehension is slightly faster than the lambda on my computer. A: Your list comprehension and lambda are doing different things; the list comprehension matching the lambda would be [val for val in T if val in S]. Efficiency isn't the reason why list comprehensions are preferred (while they actually are slightly faster in almost all cases). The reason why they are preferred is readability. Try it with a smaller loop body and larger loops, like making T a set and iterating over S. In that case on my machine the list comprehension is nearly twice as fast. A: Your profiling is done wrong. Take a look at the timeit module and try again. lambda defines anonymous functions. Their main problem is that many people don't know the whole Python library and use them to re-implement functions that are already in the operator, functools, etc. modules (and are much faster). List comprehensions have nothing to do with lambda. 
They are equivalent to the standard filter and map functions from functional languages. LCs are preferred because they can be used as generators too, not to mention readability. A: This is pretty fast: def binary_search(a, x, lo=0, hi=None): if hi is None: hi = len(a) while lo < hi: mid = (lo+hi)//2 midval = a[mid] if midval < x: lo = mid+1 elif midval > x: hi = mid else: return mid return -1 time1 = time.time() N = [x for x in T if binary_search(S, x) >= 0] time2 = time.time() print 'time diff binary search=', time2-time1 Simply: fewer comparisons, less time. A: List comprehensions can make a bigger difference if you have to process your filtered results. In your case you just build a list, but if you had to do something like this: n = [f(i) for i in S if some_condition(i)] you would gain from LC optimization over this: n = map(f, filter(some_condition, S)) simply because the latter has to build an intermediate list (or tuple, or string, depending on the nature of S). As a consequence you will also notice a different impact on the memory used by each method; the LC will stay lower. The lambda in itself does not matter.
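The hand-rolled binary_search above can also come from the standard library: the bisect module does the same halving search in C. A sketch; it requires S to be sorted, which holds here since S came from range():

    import bisect

    def contains(sorted_seq, x):
        # x is present iff it already sits at its insertion point
        i = bisect.bisect_left(sorted_seq, x)
        return i < len(sorted_seq) and sorted_seq[i] == x

    N = [x for x in T if contains(S, x)]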
lambda versus list comprehension performance
I recently posted a question using a lambda function and in a reply someone had mentioned lambda is going out of favor, to use list comprehensions instead. I am relatively new to Python. I ran a simple test: import time S=[x for x in range(1000000)] T=[y**2 for y in range(300)] # # time1 = time.time() N=[x for x in S for y in T if x==y] time2 = time.time() print 'time diff [x for x in S for y in T if x==y]=', time2-time1 #print N # # time1 = time.time() N=filter(lambda x:x in S,T) time2 = time.time() print 'time diff filter(lambda x:x in S,T)=', time2-time1 #print N # # #http://snipt.net/voyeg3r/python-intersect-lists/ time1 = time.time() N = [val for val in S if val in T] time2 = time.time() print 'time diff [val for val in S if val in T]=', time2-time1 #print N # # time1 = time.time() N= list(set(S) & set(T)) time2 = time.time() print 'time diff list(set(S) & set(T))=', time2-time1 #print N #the results will be unordered as compared to the other ways!!! # # time1 = time.time() N=[] for x in S: for y in T: if x==y: N.append(x) time2 = time.time() print 'time diff using traditional for loop', time2-time1 #print N They all print the same N so I commented that print stmt out (except the last way it's unordered), but the resulting time differences were interesting over repeated tests as seen in this one example: time diff [x for x in S for y in T if x==y]= 54.875 time diff filter(lambda x:x in S,T)= 0.391000032425 time diff [val for val in S if val in T]= 12.6089999676 time diff list(set(S) & set(T))= 0.125 time diff using traditional for loop 54.7970001698 So while I find list comprehensions on the whole easier to read, there seems to be some performance issues at least in this example. So, two questions: Why is lambda etc being pushed aside? For the list comprehension ways, is there a more efficient implementation and how would you KNOW it's more efficient without testing? I mean, lambda/map/filter was supposed to be less efficient because of the extra function calls, but it seems to be MORE efficient. Paul
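As several answers note, timing single runs with time.time() is noisy; the timeit module repeats each statement and reports a best-of run. A sketch of a fairer harness for the variants above (S is built as a set here, per the fastest answer):

    import timeit

    setup = 'S = set(range(1000000)); T = [y**2 for y in range(300)]'
    for stmt in ['[x for x in T if x in S]',
                 'filter(lambda x: x in S, T)',
                 'list(S & set(T))']:
        print stmt, timeit.Timer(stmt, setup).timeit(number=100)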
[ "Your tests are doing very different things. With S being 1M elements and T being 300:\n[x for x in S for y in T if x==y]= 54.875\n\nThis option does 300M equality comparisons.\n \nfilter(lambda x:x in S,T)= 0.391000032425\n\nThis option does 300 linear searches through S.\n \n[val for val in S if val in T]= 12.6089999676\n\nThis option does 1M linear searches through T.\n \nlist(set(S) & set(T))= 0.125\n\nThis option does two set constructions and one set intersection.\n\nThe differences in performance between these options is much more related to the algorithms each one is using, rather than any difference between list comprehensions and lambda.\n", "When I fix your code so that the list comprehension and the call to filter are actually doing the same work things change a whole lot:\nimport time\n\nS=[x for x in range(1000000)]\nT=[y**2 for y in range(300)]\n#\n#\ntime1 = time.time()\nN=[x for x in T if x in S]\ntime2 = time.time()\nprint 'time diff [x for x in T if x in S]=', time2-time1\n#print N\n#\n#\ntime1 = time.time()\nN=filter(lambda x:x in S,T)\ntime2 = time.time()\nprint 'time diff filter(lambda x:x in S,T)=', time2-time1\n#print N\n\nThen the output is more like:\ntime diff [x for x in T if x in S]= 0.414485931396\ntime diff filter(lambda x:x in S,T)= 0.466315984726\n\nSo the list comprehension has a time that's generally pretty close to and usually less than the lambda expression.\nThe reason lambda expressions are being phased out is that many people think they are a lot less readable than list comprehensions. I sort of reluctantly agree.\n", "Q: Why is lambda etc being pushed aside?\nA: List comprehensions and generator expressions are generally considered to be a nice mix of power and readability. The pure functional-programming style where you use map(), reduce(), and filter() with functions (often lambda functions) is considered not as clear. Also, Python has added built-in functions that nicely handle all the major uses for reduce().\nSuppose you wanted to sum a list. Here are two ways of doing it.\nlst = range(10)\nprint reduce(lambda x, y: x + y, lst)\n\nprint sum(lst)\n\nSign me up as a fan of sum() and not a fan of reduce() to solve this problem. Here's another, similar problem:\nlst = range(10)\nprint reduce(lambda x, y: bool(x or y), lst)\n\nprint any(lst)\n\nNot only is the any() solution easier to understand, but it's also much faster; it has short-circuit evaluation, such that it will stop evaluating as soon as it has found any true value. The reduce() has to crank through the entire list. This performance difference would be stark if the list was a million items long, and the first item evaluated true. By the way, any() was added in Python 2.5; if you don't have it, here is a version for older versions of Python:\ndef any(iterable):\n for x in iterable:\n if x:\n return True\n return False\n\nSuppose you wanted to make a list of squares of even numbers from some list.\nlst = range(10)\nprint map(lambda x: x**2, filter(lambda x: x % 2 == 0, lst))\n\nprint [x**2 for x in lst if x % 2 == 0]\n\nNow suppose you wanted to sum that list of squares.\nlst = range(10)\nprint sum(map(lambda x: x**2, filter(lambda x: x % 2 == 0, lst)))\n\n# list comprehension version of the above\nprint sum([x**2 for x in lst if x % 2 == 0])\n\n# generator expression version; note the lack of '[' and ']'\nprint sum(x**2 for x in lst if x % 2 == 0)\n\nThe generator expression actually just returns an iterable object. 
sum() takes the iterable and pulls values from it, one by one, summing as it goes, until all the values are consumed. This is the most efficient way you can solve this problem in Python. In contrast, the map() solution, and the equivalent solution with a list comprehension inside the call to sum(), must first build a list; this list is then passed to sum(), used once, and discarded. The time to build the list and then delete it again is just wasted. (EDIT: and note that the version with both map and filter must build two lists, one built by filter and one built by map; both lists are discarded.) (EDIT: But in Python 3.0 and newer, map() and filter() are now both \"lazy\" and produce an iterator instead of a list; so this point is less true than it used to be. Also, in Python 2.x you were able to use itertools.imap() and itertools.ifilter() for iterator-based map and filter. But I continue to prefer the generator expression solutions over any map/filter solutions.)\nBy composing map(), filter(), and reduce() in combination with lambda functions, you can do many powerful things. But Python has idiomatic ways to solve the same problems which are simultaneously better performing and easier to read and understand.\n", "Many people have already pointed out that you're comparing apples with oranges, etc, etc. But I think nobody's shown how to a really simple comparison -- list comprehension vs map plus lambda with little else to get in the way -- and that might be:\n$ python -mtimeit -s'L=range(1000)' 'map(lambda x: x+1, L)'\n1000 loops, best of 3: 328 usec per loop\n$ python -mtimeit -s'L=range(1000)' '[x+1 for x in L]'\n10000 loops, best of 3: 129 usec per loop\n\nHere, you can see very sharply the cost of lambda -- about 200 microseconds, which in the case of a sufficiently simple operation such as this one swamps the operation itself.\nNumbers are very similar with filter of course, since the problem is not filter or map, but rather the lambda itself:\n$ python -mtimeit -s'L=range(1000)' '[x for x in L if not x%7]'\n10000 loops, best of 3: 162 usec per loop\n$ python -mtimeit -s'L=range(1000)' 'filter(lambda x: not x%7, L)'\n1000 loops, best of 3: 334 usec per loop\n\nNo doubt the fact that lambda can be less clear, or its weird connection with Sparta (Spartans had a Lambda, for \"Lakedaimon\", painted on their shields -- this suggests that lambda is pretty dictatorial and bloody;-) have at least as much to do with its slowly falling out of fashion, as its performance costs. But the latter are quite real.\n", "First of all, test like this:\nimport timeit\n\nS=[x for x in range(10000)]\nT=[y**2 for y in range(30)]\n\nprint \"v1\", timeit.Timer('[x for x in S for y in T if x==y]',\n 'from __main__ import S,T').timeit(100)\nprint \"v2\", timeit.Timer('filter(lambda x:x in S,T)',\n 'from __main__ import S,T').timeit(100)\nprint \"v3\", timeit.Timer('[val for val in T if val in S]',\n 'from __main__ import S,T').timeit(100)\nprint \"v4\", timeit.Timer('list(set(S) & set(T))',\n 'from __main__ import S,T').timeit(100)\n\nAnd basically you are doing different things each time you test. When you would rewrite the list-comprehension for example as\n[val for val in T if val in S]\n\nperformance would be on par with the 'lambda/filter' construct.\n", "Sets are the correct solution for this. 
However try swapping S and T and see how long it takes!\nfilter(lambda x:x in T,S)\n\n$ python -m timeit -s'S=[x for x in range(1000000)];T=[y**2 for y in range(300)]' 'filter(lambda x:x in S,T)'\n10 loops, best of 3: 485 msec per loop\n$ python -m timeit -r1 -n1 -s'S=[x for x in range(1000000)];T=[y**2 for y in range(300)]' 'filter(lambda x:x in T,S)'\n1 loops, best of 1: 19.6 sec per loop\n\nSo you see that the order of S and T are quite important\nChanging the order of the list comprehension to match the filter gives \n$ python -m timeit -s'S=[x for x in range(1000000)];T=[y**2 for y in range(300)]' '[x for x in T if x in S]'\n10 loops, best of 3: 441 msec per loop\n\nSo if fact the list comprehension is slightly faster than the lambda on my computer\n", "Your list comprehension and lambda are doing different things, the list comprehension matching the lambda would be [val for val in T if val in S].\nEfficiency isn't the reason why list comprehension are preferred (while they actually are slightly faster in almost all cases). The reason why they are preferred is readability.\nTry it with smaller loop body and larger loops, like make T a set, and iterate over S. In that case on my machine the list comprehension is nearly twice as fast.\n", "Your profiling is done wrong. Take a look the timeit module and try again.\nlambda defines anonymous functions. Their main problem is that many people don't know the whole python library and use them to re-implement functions that are already in the operator, functools etc module ( and much faster ). \nList comprehensions have nothing to do with lambda. They are equivalent to the standard filter and map functions from functional languages. LCs are preferred because they can be used as generators too, not to mention readability.\n", "This is pretty fast:\ndef binary_search(a, x, lo=0, hi=None):\n if hi is None:\n hi = len(a)\n while lo < hi:\n mid = (lo+hi)//2\n midval = a[mid]\n if midval < x:\n lo = mid+1\n elif midval > x: \n hi = mid\n else:\n return mid\n return -1\n\ntime1 = time.time()\nN = [x for x in T if binary_search(S, x) >= 0]\ntime2 = time.time()\nprint 'time diff binary search=', time2-time1\n\nSimply: less comparisions, less time.\n", "List comprehensions can make a bigger difference if you have to process your filtered results. In your case you just build a list, but if you had to do something like this:\nn = [f(i) for i in S if some_condition(i)]\n\nyou would gain from LC optimization over this:\nn = map(f, filter(some_condition(i), S))\n\nsimply because the latter has to build an intermediate list (or tuple, or string, depending on the nature of S). As a consequence you will also notice a different impact on the memory used by each method, the LC will keep lower.\nThe lambda in itself does not matter.\n" ]
[ 30, 25, 19, 8, 4, 2, 1, 1, 0, 0 ]
[]
[]
[ "algorithm", "lambda", "list_comprehension", "python", "set" ]
stackoverflow_0001632902_algorithm_lambda_list_comprehension_python_set.txt
Q: Unix paths that work for any platform in Python? Can all paths in a Python program use ".." (for the parent directory) and / (for separating path components), and still work whatever the platform? On one hand, I have never seen such a claim in the documentation (I may have missed it), and the os and os.path modules do provide facilities for handling paths in a platform agnostic way (os.pardir, os.path.join,…), which lets me think that they are here for a reason. On the other hand, you can read on StackOverflow that "../path/to/file" works on all platforms… So, should os.pardir, os.path.join and friends always be used, for portability purposes, or are Unix path names always safe (up to possible character encoding issues)? or maybe "almost always" safe (i.e. working under Windows, OS X, and Linux)? A: I've never had any problems with using .., although it might be a good idea to convert it to an absolute path using os.path.abspath. Secondly, I would recommend always using os.path.join whereever possible. There are a lot of corner cases (aside from portability issues) in joining paths, and it's good not to have to worry about them. For instance: >>> '/foo/bar/' + 'qux' '/foo/bar/qux' >>> '/foo/bar' + 'qux' '/foo/barqux' >>> from os.path import join >>> join('/foo/bar/', 'qux') '/foo/bar/qux' >>> join('/foo/bar', 'qux') '/foo/bar/qux' You may run into problems with using .. if you're on some obscure platforms, but I can't name any (Windows, *nix, and OS X all support that notation). A: "Almost always safe" is right. All of the platforms you care about probably work ok today and I don't think they will be changing their conventions any time soon. However Python is very portable and runs on a lot more than the usual platforms. The reason for the os module is to help smooth things over it a platform does have different requirements. Is there a good reason for you to not use the os functions? os.pardir is self documenting whereas ".." isn't, and os.pardir might be easier to grep for Here is some docs from python 1.6 when Mac was still different for everything OS routines for Mac, DOS, NT, or Posix depending on what system we're on. This exports: - all functions from posix, nt, dos, os2, mac, or ce, e.g. unlink, stat, etc. - os.path is one of the modules posixpath, ntpath, macpath, or dospath - os.name is 'posix', 'nt', 'dos', 'os2', 'mac', or 'ce' - os.curdir is a string representing the current directory ('.' or ':') - os.pardir is a string representing the parent directory ('..' or '::') - os.sep is the (or a most common) pathname separator ('/' or ':' or '\') - os.altsep is the alternate pathname separator (None or '/') - os.pathsep is the component separator used in $PATH etc - os.linesep is the line separator in text files (' ' or ' ' or ' ') - os.defpath is the default search path for executables Programs that import and use 'os' stand a better chance of being portable between different platforms. Of course, they must then only use functions that are defined by all platforms (e.g., unlink and opendir), and leave all pathname manipulation to os.path (e.g., split and join). A: Within python, using / will always work. You will need to be aware of the OS convention if you want to execute a command in a subshell myprog = "/path/to/my/program" os.system([myprog, "-n"]) # 1 os.system([myprog, "C:/input/file/to/myprog"]) # 2 Command #1 will probably work as expected. 
Command #2 might not work if myprog is a Windows command and expects to parse its command line arguments to get a Windows file name. A: Windows supports / as a path separator. The only incompatibilities between Unix filenames and Windows filenames are: the allowed characters in filenames the special names and case sensitivity Windows is more restrictive in the first two accounts (this is, it has more forbidden characters and more special names), while Unix is typically case sensitive. There are some answers here listing exactly what are these characters and names. I'll see if I can find them. Now, if your development environment comes with a function to create or manipulate paths, you should use it, it's there for a reason, y'know. Especially given that there are a lot more platforms than Windows and Unix. Answering your first question, yes ../dir/file will work, unless they hit some of the above mentioned incompatibilities. A: It works on Windows, so if you define "whatever the platform" to be Unix and Windows, you're fine. On the other hand, Python also runs on VMS, RISC OS, and other odd platforms that use completely different filename conventions. However, it's probable that trying to get your application to run on VMS, blind, is kind of silly anyway - "premature portability is the root of some relatively minor evil" I like using the os.path functions anyway because they are good for expressing intent - instead of just a string concatenation, which might be done for any of a million purposes, it reads very explicitly as a path manipulation. A: OS/X and Linux are both Unix compatible, so by definition they use the format you gave at the beginning of the question. Windows allows "/" in addition to "\" so that programs could be interchangeable with Xenix, a Unix variant that Microsoft was trying out a long time ago, and that compatibility has been carried forward to the present. Thus it works too. I don't know how many other platforms Python has been ported to, and I can't speak for them. A: As others have said, a forward slash will work in all cases, but you're better off creating a list of path segments and os.path.join()-ing them.
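A small sketch of the belt-and-braces approach the answers converge on: write the path pieces once, let os.path assemble and normalize them, and ".." comes out right on every platform:

    import os.path

    p = os.path.join('..', 'path', 'to', 'file')
    print os.path.normpath(p)   # '../path/to/file' on Unix, '..\\path\\to\\file' on Windows
    print os.path.abspath(p)    # resolved against the current working directory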
Unix paths that work for any platform in Python?
Can all paths in a Python program use ".." (for the parent directory) and / (for separating path components), and still work whatever the platform? On one hand, I have never seen such a claim in the documentation (I may have missed it), and the os and os.path modules do provide facilities for handling paths in a platform agnostic way (os.pardir, os.path.join,…), which lets me think that they are here for a reason. On the other hand, you can read on StackOverflow that "../path/to/file" works on all platforms… So, should os.pardir, os.path.join and friends always be used, for portability purposes, or are Unix path names always safe (up to possible character encoding issues)? or maybe "almost always" safe (i.e. working under Windows, OS X, and Linux)?
[ "I've never had any problems with using .., although it might be a good idea to convert it to an absolute path using os.path.abspath. Secondly, I would recommend always using os.path.join whereever possible. There are a lot of corner cases (aside from portability issues) in joining paths, and it's good not to have to worry about them. For instance:\n>>> '/foo/bar/' + 'qux'\n'/foo/bar/qux'\n>>> '/foo/bar' + 'qux'\n'/foo/barqux'\n>>> from os.path import join\n>>> join('/foo/bar/', 'qux')\n'/foo/bar/qux'\n>>> join('/foo/bar', 'qux')\n'/foo/bar/qux'\n\nYou may run into problems with using .. if you're on some obscure platforms, but I can't name any (Windows, *nix, and OS X all support that notation).\n", "\"Almost always safe\" is right. All of the platforms you care about probably work ok today and I don't think they will be changing their conventions any time soon.\nHowever Python is very portable and runs on a lot more than the usual platforms. The reason for the os module is to help smooth things over it a platform does have different requirements.\nIs there a good reason for you to not use the os functions?\nos.pardir is self documenting whereas \"..\" isn't, and os.pardir might be easier to grep for\nHere is some docs from python 1.6 when Mac was still different for everything\n\nOS routines for Mac, DOS, NT, or Posix depending on what system we're\n on.\nThis exports:\n - all functions from posix, nt, dos, os2, mac, or ce, e.g. unlink, stat, etc.\n - os.path is one of the modules posixpath, ntpath, macpath, or dospath\n - os.name is 'posix', 'nt', 'dos', 'os2', 'mac', or 'ce'\n - os.curdir is a string representing the current directory ('.' or ':')\n - os.pardir is a string representing the parent directory ('..' or '::')\n - os.sep is the (or a most common) pathname separator ('/' or ':' or '\\')\n - os.altsep is the alternate pathname separator (None or '/')\n - os.pathsep is the component separator used in $PATH etc\n - os.linesep is the line separator in text files (' ' or ' ' or ' ')\n - os.defpath is the default search path for executables\nPrograms that import and use 'os' stand a better chance of being\n portable between different platforms. Of course, they must then only\n use functions that are defined by all platforms (e.g., unlink and\n opendir), and leave all pathname manipulation to os.path (e.g., split\n and join).\n\n", "Within python, using / will always work. You will need to be aware of the OS convention if you want to execute a command in a subshell\nmyprog = \"/path/to/my/program\"\nos.system([myprog, \"-n\"]) # 1\nos.system([myprog, \"C:/input/file/to/myprog\"]) # 2\n\nCommand #1 will probably work as expected.\nCommand #2 might not work if myprog is a Windows command and expects to parse its command line arguments to get a Windows file name.\n", "Windows supports / as a path separator. The only incompatibilities between Unix filenames and Windows filenames are:\n\nthe allowed characters in filenames\nthe special names and\ncase sensitivity\n\nWindows is more restrictive in the first two accounts (this is, it has more forbidden characters and more special names), while Unix is typically case sensitive. There are some answers here listing exactly what are these characters and names. I'll see if I can find them.\nNow, if your development environment comes with a function to create or manipulate paths, you should use it, it's there for a reason, y'know. 
Especially given that there are a lot more platforms than Windows and Unix.\nAnswering your first question, yes ../dir/file will work, unless they hit some of the above mentioned incompatibilities.\n", "It works on Windows, so if you define \"whatever the platform\" to be Unix and Windows, you're fine.\nOn the other hand, Python also runs on VMS, RISC OS, and other odd platforms that use completely different filename conventions. However, it's probable that trying to get your application to run on VMS, blind, is kind of silly anyway - \"premature portability is the root of some relatively minor evil\"\nI like using the os.path functions anyway because they are good for expressing intent - instead of just a string concatenation, which might be done for any of a million purposes, it reads very explicitly as a path manipulation.\n", "OS/X and Linux are both Unix compatible, so by definition they use the format you gave at the beginning of the question. Windows allows \"/\" in addition to \"\\\" so that programs could be interchangeable with Xenix, a Unix variant that Microsoft was trying out a long time ago, and that compatibility has been carried forward to the present. Thus it works too.\nI don't know how many other platforms Python has been ported to, and I can't speak for them.\n", "As others have said, a forward slash will work in all cases, but you're better off creating a list of path segments and os.path.join()-ing them.\n" ]
[ 11, 6, 3, 3, 3, 1, 0 ]
[]
[]
[ "path", "portability", "python", "relative_path" ]
stackoverflow_0001633643_path_portability_python_relative_path.txt
Q: GUI + multithreading support + regex support. Which language? JAVA / Python / Ruby? I'm interested in learning a programming language with support for GUI, multithreading and easy test manipulation (support for regex). Mainly on Windows but preferably cross-platform. What does the Stack Overflow community suggest? A: My suggestion would be Java. You can do all of that and much more. A: I am a fan of Erlang: Wx GUI tool Regex (module regexp) Cross-platform Multi-threading (of course!) EUnit testing Of course Python is really appropriate too! A: If you really like typing go for Java, if you really like whitespace go for Python, if you like programming more than you like high performance go for Ruby. Seriously, Java is very complete and very cross-platform. I don't know how Python adds up for GUI stuff, but when I was looking at Ruby in detail a couple of years back it seemed a trifle complex (or at least, nothing is hard to write in Ruby, but it didn't look easy to produce a nice, modern-looking UI). Still, I much prefer what I can achieve with a scripting language in terms of lines of code compared with Java's painful verbosity. Erlang, which I see recommended above, I've never tried, but it's a language I would be very interested to learn. Possibly well worth looking into if you're learning something new anyway, especially if multi-threading is important to you. A: Python is nice but has major problems with multithreading (unless you are using Stackless). It has nice support for multiprocessing, though. There are bindings to Tk (out of the box), Qt, GTK and WxWidgets. Ruby 1.8 has only green threads, and Ruby 1.9 uses native threads, but as @James Cunningham noted, it still has a global VM lock, so it is limited in its concurrency too. It's the only one of the mentioned languages to have syntax-level regex support. AFAIK, it supports the same UI toolkits as Python. Java supports native threads. It has two standard UI toolkits out of the box (the obsolete AWT and the more modern Swing). There's also the very popular SWT toolkit (Eclipse is developed with it). If not for your requirement of a portable UI, I would recommend C#. It has nicer syntax than Java and is generally less frustrating. But the current state of its UI on Windows and Linux is sad, unfortunately. A: I would call Tcl/Tk a natural choice for the features you listed, though it might be called old-fashioned. GUI - Tk, well integrated, looks okay if you know what you're doing; you can also use Qt or GTK, but that's a bit less common. Multithreading - Tcl has both coroutines for cooperative multitasking and message-passing-based threads that do not have a global interpreter lock and scale nicely, plus a built-in event loop that makes you need threads less often. Test manipulation: you get a really flexible language to do test automation; add in the expect package and you have the tool of choice for testing things like routers, or, within the framework of DejaGnu, the GCC test suite, for example. It's usually easy to learn and has some cool features, but you won't usually find a job with it. The Tcl'ers Wiki is a good starting point; other points of interest might be the tkdocs homepage or the official language page at www.tcl.tk
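If Python is the pick, the three requirements map onto three standard-library modules: Tkinter, threading, and re. A minimal sketch wiring them together; the worker hands its result back through after(), the usual idiom since Tk widgets should only be touched from the GUI thread:

    import re
    import threading
    import Tkinter as tk

    def worker(root, label):
        # regex work happens off the GUI thread
        m = re.search(r'\d+', 'order 42 shipped')
        text = m.group(0) if m else 'no match'
        root.after(0, lambda: label.config(text=text))  # marshal back to GUI thread

    root = tk.Tk()
    label = tk.Label(root, text='searching...')
    label.pack()
    threading.Thread(target=worker, args=(root, label)).start()
    root.mainloop()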
GUI + multithreading support + regex support. Which language? JAVA / Python / Ruby?
I'm interested in learning a programming language with support for GUI, multithreading and easy test manipulation (support for regex). Mainly on Windows but preferably cross-platform. What does the Stack Overflow community suggest?
[ "My suggestion would be Java. You can do all of that and much more.\n", "I am a fan of Erlang: \n\nWx GUI tool\nRegex (module regexp)\nCross-platform\nMulti-threading (of course !)\nEUnit testing\n\nOf course Python is really appropriate too!\n", "If you really like typing go for Java, if you really like whitespace go for python, if you like programming more than you like high performance go for Ruby.\nSeriously, Java is very complete and very cross-platform. I don't know how Python adds up for GUI stuff but when I was looking at Ruby in detail a couple of years back it seemed a trifle complex ( or at least, nothing is hard to write in ruby but it didn't look easy to produce a nice, modern-looking UI ) but I much prefer what I can achieve with a scripting language in terms of lines of code compared with Java's painful verbosity. \nErlang, which I see recommended above, I've never tried but it's a language I would be very interested to learn. Possibly well worth looking into if you're learning something new anyway, especially if multi-threading is important to you.\n", "\nPython is nice but has major problems with multithreading (unless you are using Stackless). it has nice support of multiprocessing, though. There're bindings to Tk (out-of-box), Qt, GTK and WxWidgets.\nRuby 1.8 has only green threads, and Ruby 1.9 uses native threads, but as @James Cunningham noted, it still has global VM lock, so it is limited in its concurrency too. It's the only of mentioned languages to have syntax-level regex support. AFAIK, it supports the same UI toolkits as Python.\nJava supports native threads. It has two standard UI toolkits out-of-box (obsolete AWT and more modern Swing). There's also very popular SWT toolkit (Eclipse is developed with it). \n\nIf not you requirement of portable UI I would recommend you to use C#. It has more nice syntax then Java and generally less frustrating. But current state of UI on Windows and Linux is sad, unfortunately.\n", "I would call Tcl/Tk a natural choice for the features you listed.\nBut might be called old fashioned.\n\nGUI - Tk, well integrated, looks okay if you know what your doing, you can also use Qt or gtk but thats a bit less common.\nMultithreading - Tcl has both CoRoutines for cooperative multitasking, message passing based threads that do not have a global interpreter lock and scale nicely and in addition a built in event loop to make you need threads less\nTest manipulation: you get a really flexible language to do test automation, add in the expect package and you have the tool of choice used for testing things like lots of routers, or with in the frame of DejaGnu the GCC testsuite for example.\n\nIts usually easy to learn and has some cool features, but you won't usually find a job with it.\nThe Tcl'ers Wiki is a good starting point, other points of interest might be The tkdocs homepage or the official language page at www.tcl.tk\n" ]
[ 5, 1, 1, 0, 0 ]
[]
[]
[ "java", "multithreading", "python", "user_interface" ]
stackoverflow_0001624570_java_multithreading_python_user_interface.txt
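For readers weighing the Python option above, here is a minimal sketch combining all three requirements -- a GUI, a worker thread and a regex -- using only the standard library. It is written in Python 3 syntax (on the Python 2 of this question's era the modules are Tkinter and Queue), and the regex, sample data and widget layout are purely illustrative.

import re
import threading
import tkinter as tk
from queue import Queue, Empty

PATTERN = re.compile(r'\d+')          # the regex: runs of digits
results = Queue()                      # thread-safe channel to the GUI

def worker(lines):
    # Runs off the GUI thread; Tkinter widgets must not be touched here.
    for line in lines:
        for match in PATTERN.finditer(line):
            results.put(match.group())
    results.put(None)                  # sentinel: no more results

def poll():
    # The GUI thread drains the queue periodically instead of blocking.
    try:
        while True:
            item = results.get_nowait()
            if item is None:
                return                 # worker finished; stop polling
            listbox.insert(tk.END, item)
    except Empty:
        pass
    root.after(100, poll)

root = tk.Tk()
listbox = tk.Listbox(root)
listbox.pack()
threading.Thread(target=worker, args=(['a1 b22', 'c333'],), daemon=True).start()
poll()
root.mainloop()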
Q: What is the difference between "is" and "==" in python? Possible Duplicate: Python ‘==’ vs ‘is’ comparing strings, ‘is’ fails sometimes, why? Is a == b the same as a is b ? If not, what is the difference? Edit: Why does a = 1 a is 1 return True, but a = 100.5 a is 100.5 return False? A: No, these aren't the same. is is a check for object identity - ie, checking if a and b are exactly the same object. Example: a = 100.5 a is 100.5 # => False a == 100.5 # => True a = [1,2,3] b = [1,2,3] a == b # => True a is b # => False a = b a == b # => True a is b # => True, because if we change a, b changes too. So: use == if you mean the objects should represent the same thing (most common usage) and is if you mean the objects should be in identical pieces of memory (you'd know if you needed the latter). Also, you can overload == via the __eq__ operator, but you can't overload is. A: As already very clearly explained above. is : used for identity testing (identical 'objects') == : used for equality testing (~~ identical value) Also keep in mind that Python uses string interning (as an optimisation) so you can get the following strange side-effects: >>> a = "test" >>> b = "test" >>> a is b True >>> "test_string" is "test" + "_" + "string" True >>> a = 5; b = 6; c = 5; d = a >>> d is a True # --> expected >>> b is a False # --> expected >>> c is a True # --> unexpected
What is the difference between "is" and "==" in python?
Possible Duplicate: Python ‘==’ vs ‘is’ comparing strings, ‘is’ fails sometimes, why? Is a == b the same as a is b ? If not, what is the difference? Edit: Why does a = 1 a is 1 return True, but a = 100.5 a is 100.5 return False?
[ "No, these aren't the same. is is a check for object identity - ie, checking if a and b are exactly the same object. Example:\na = 100.5\na is 100.5 # => False\na == 100.5 # => True\n\na = [1,2,3]\nb = [1,2,3]\na == b # => True\na is b # => False\na = b\na == b # => True\na is b # => True, because if we change a, b changes too.\n\nSo: use == if you mean the objects should represent the same thing (most common usage) and is if you mean the objects should be in identical pieces of memory (you'd know if you needed the latter).\nAlso, you can overload == via the __eq__ operator, but you can't overload is.\n", "As already very clearly explained above.\n\nis : used for identity testing (identical 'objects')\n== : used for equality testing (~~ identical value)\n\nAlso keep in mind that Python uses string interning (as an optimisation) so you can get the following strange side-effects:\n>>> a = \"test\"\n>>> b = \"test\"\n>>> a is b\nTrue\n>>> \"test_string\" is \"test\" + \"_\" + \"string\"\nTrue\n\n>>> a = 5; b = 6; c = 5; d = a\n>>> d is a\nTrue # --> expected\n>>> b is a\nFalse # --> expected\n>>> c is a\nTrue # --> unexpected\n\n" ]
[ 13, 5 ]
[]
[]
[ "operators", "python" ]
stackoverflow_0001634352_operators_python.txt
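To make the __eq__ point in the first answer concrete, here is a small runnable sketch (class and values invented for illustration) showing that == can be customised per class while is cannot:

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        # == now compares values instead of identity
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

a = Point(1, 2)
b = Point(1, 2)
print(a == b)   # True  -- __eq__ decides the objects are equivalent
print(a is b)   # False -- they are still two distinct objects in memory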
Q: Inconsistency in ability to access data member of loaded JSON object I'm loading a JSON string in Django using simplejson, thus: obj = json.loads('{"name": "joe"}') person = obj.name This throws an error: 'dict' object has no attribute 'name' but when I pass obj down to the view template and print it out via {{ obj.name }}, it works! Why? A: I'm not sure how the Django aspect of it works, but the object you get from json.loads is a Python dict object. That means it doesn't have attributes of its keys, but you can access them like you would any other dictionary: obj = json.loads('{"name": "joe"}') person = obj['name'] A: json.loads loads json into a python dictionary. So you must access it like a dictionary, i.e. data['key']. Now, on the django template side of things, check the official django templates documentation. Directly quoting: Technically, when the template system encounters a dot, it tries the following lookups, in this order: Dictionary lookup Attribute lookup Method call List-index lookup So basically, django templates allow you to access dictionary items using data.key syntax.
Inconsistency in ability to access data member of loaded JSON object
I'm loading a JSON string in Django using simplejson, thus: obj = json.loads('{"name": "joe"}') person = obj.name This throws an error: 'dict' object has no attribute 'name' but when I pass obj down to the view template and print it out via {{ obj.name }}, it works! Why?
[ "I'm not sure how the Django aspect of it works, but the object you get from json.loads is a Python dict object. That means it doesn't have attributes of its keys, but you can access them like you would any other dictionary:\nobj = json.loads('{\"name\": \"joe\"}')\nperson = obj['name']\n\n", "json.loads loads json into a python dictionary. So you must access it like a dictionary, i.e. data['key'].\nNow, on the django template side of things, check the official django templates documentation.\nDirectly quoting:\n\nTechnically, when the template system encounters a dot, it tries the following lookups, in this order:\n\nDictionary lookup\nAttribute lookup\nMethod call\nList-index lookup\n\n\nSo basically, django templates allow you to access dictionary items using data.key syntax.\n" ]
[ 7, 4 ]
[]
[]
[ "django", "python", "simplejson" ]
stackoverflow_0001634347_django_python_simplejson.txt
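A minimal runnable illustration of the two lookup styles discussed above -- dictionary access in Python code versus the dotted lookup the Django template engine performs on your behalf:

import json   # 'import simplejson as json' on installs that predate the stdlib module

obj = json.loads('{"name": "joe"}')

print(obj['name'])       # dictionary lookup: 'joe'
print(obj.get('name'))   # same lookup, but returns None instead of raising KeyError

# {{ obj.name }} works in a template because Django tries obj['name'] first
# when it resolves the dot -- the dotted form is template syntax, not Python.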
Q: Please help me in understanding Class in Python I am a newbie and finding it very hard to grasp the syntax of classes in Python. I have a background in C/C++, Java and Objective-C. A very big difference I am noticing in Python is that you don't explicitly declare the "data members" in the class; you just add them at will, and it leads to quite a bit of confusion. Let's say I have a class class MyClass: def __init__(self, a, b): self.a = a self.b = b And then I instantiate the object. myobject = MyClass(10,10) Some time later I realize that I need another attribute in this class, but I don't want to initialize it in the constructor because it will be set by another function depending on some particular condition, so in the whole mess of code that will be the only point where the variable is actually born. Won't that be confusing when I check the code while debugging or reviewing it for some other reason? A: In short, Yes. You're right. Python lets you add (and remove!) members from objects at will, at any time. There's nothing special about a constructor that allows it to do anything that other functions can't. If you want to be sure that all instances of your class have the same members at all times, then by all means assign them all in the constructor, using a sentinel value like None for ones that don't have a meaningful value yet, and avoid adding new members outside the constructor. It's up to you how you manipulate your objects, and if you want to do that in a static fashion then that's fine, or if you want to take advantage of the ability to add and remove members at arbitrary times, that's fine too. Python itself doesn't impose (m)any rules. A: You should really use some . in your text :p Could you mean: class MyClass: def __init__(self, a, b, c=None): self.a = a self.b = b self.c = c one = MyClass(1,2) one.c # None two = MyClass(1,2,3) two.c # 3 A: class MyClass: def __init__(self, a, b): self.a = a self.b = b self.c = None #This line is optional def set_c(self, c): self.c = c Some people prefer to list all the attributes in the __init__. You don't have to, but there are any number of reasons you might choose to. Maybe it improves your editor's ability to understand your code for highlighting or completion. Maybe it is just a style that you prefer.
Please help me in understanding Class in Python
I am a newbie and finding it very hard to grasp the syntax of classes in Python. I have a background in C/C++, Java and Objective-C. A very big difference I am noticing in Python is that you don't explicitly declare the "data members" in the class; you just add them at will, and it leads to quite a bit of confusion. Let's say I have a class class MyClass: def __init__(self, a, b): self.a = a self.b = b And then I instantiate the object. myobject = MyClass(10,10) Some time later I realize that I need another attribute in this class, but I don't want to initialize it in the constructor because it will be set by another function depending on some particular condition, so in the whole mess of code that will be the only point where the variable is actually born. Won't that be confusing when I check the code while debugging or reviewing it for some other reason?
[ "In short, Yes.\nYou're right. Python lets you add (and remove!) members from objects at will, at any time. There's nothing special about a constructor that allows it to do anything that other functions can't.\nIf you want to be sure that all instances of your class have the same members at all times, then by all means assign them all in the constructor, using a sentinel value like None for ones that don't have a meaningful value yet, and avoid adding new members outside the constructor.\nIt's up to you how you manipulate your objects, and if you want to do that in a static fashion then that's fine, or if you want to take advantage of the ability to add and remove members at arbitrary times, that's fine too. Python itself doesn't impose (m)any rules.\n", "You should really use some . in your text :p\nCould you mean:\nclass MyClass:\n def __int__(self, a, b, c=None):\n self.a = a\n self.b = b\n self.c = c\n\none = MyClass(1,2)\none.c # None\ntwo = MyClass(1,2,3)\ntwo.c # 3\n\n", "class MyClass:\n def __int__(self, a, b):\n self.a = a\n self.b = b\n self.c = None #This line is optional\n\n def set_c(self, c):\n self.c = c\n\nSome people prefer to list all the attributes in the __init__. You don't have to, but there are any number of reasons you might choose to.\nMaybe it improves your editor's ability to understand your code for highlighting or completion.\nMaybe it is just a style that you prefer.\n" ]
[ 5, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001634502_python.txt
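A short sketch of the sentinel approach the first answer recommends, alongside the add-an-attribute-later flexibility the question asks about (names and values are illustrative):

class MyClass(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b
        self.c = None     # declared up front as a sentinel for "not set yet"

obj = MyClass(10, 10)
print(obj.c)              # None -- the attribute exists but has no value yet

obj.c = 42                # set later, e.g. by condition-dependent code
print(obj.c)              # 42

obj.d = 'surprise'        # Python also accepts brand-new attributes at any time,
print(obj.d)              # which is exactly the behaviour the question describes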
Q: Best way to construct a "complex" data structure in Python I need to construct a tool that will be used to create field mappings (between tables) in the most automated manner possible. Here is the deal: imagine a table being appended to another. (Let's ignore field type, just for a second...) CREATE OR REPLACE TABLE fooA( id, name, type, foo) CREATE OR REPLACE TABLE otherFooTable( idFoo, nameFoo, spam) I am thinking of creating a structure like this: fieldMap = {'otherFooTable': [('idFoo','id'),('nameFoo','name'),('spam','foo')]} I would be able to access this using (for example) print fieldMap['tabelax'][0][1] It's not a very complex structure, but could I run into problems using it? Are there any suggestions on how to handle this sort of issue? I need to store (for now) at least inputTable (I don't want to repeat it for each field mapped), inputField and outputField. There is no reason to store outputTable, because that is always known beforehand. Suggestions and past experiences are deeply appreciated. PS: perhaps a formal structure (like a class) would be better? Thanks A: I'd honestly just take hints from (or use) SQLAlchemy or Django Models. These are tried and true data representation methods. A: Here is a little wrapper class for FooB's to mimic FooA's, but still retain their FooB-ishness. from collections import namedtuple # use namedtuple to define some simple classes (requires Py2.6 or later) FooA = namedtuple('FooA', 'id name type foo') FooB = namedtuple('FooB', 'idfoo namefoo spam') # create a wrapper class for FooB's to look like a FooA class FooAMimic(object): attrMap = dict(zip(FooA._fields, FooB._fields)) # or if the fields aren't nicely ordered, declare this mapping explicitly #~ attrMap = { 'id' : 'idfoo', 'name' : 'namefoo', 'foo' : 'spam' } def __init__(self, obj): self.obj = obj def __getattr__(self, aname): ob = self.obj if aname in self.attrMap: return getattr(ob, self.attrMap[aname]) elif hasattr(ob, aname): return getattr(ob, aname) else: raise AttributeError("no such attribute " + aname) def __dir__(self): return sorted(set(dir(super(FooAMimic,self)) + dir(self.obj) + list(FooA._fields))) Use it like this: # make some objects, some FooA, some FooB fa = FooA('a', 'b', 'c','d') fb = FooB('xx', 'yy', 'zz') fc = FooA('e', 'f', 'g','h') # create list of items that are FooA's, or FooA lookalikes coll = [fa, FooAMimic(fb), fc] # access objects like FooA's, but notice that the wrapped FooB # attributes are still available too for f in sorted(coll, key=lambda k : k.id): print f.id, '=', try: print f.namefoo, "(really a namefoo)" except AttributeError: print f.name Prints: a = b e = f xx = yy (really a namefoo) A: Think about this class Column( object ): def __init__( self, name, type_information=None ): self.name = name self.type_information = type_information self.pk = None self.fk_ref = None def fk( self, column ): self.fk_ref = column class Table( object ): def __init__( self, name, *columns ): self.name = name self.columns = dict( (c.name, c) for c in columns ) def column( self, name ): return self.columns[ name ] Table( "FOOA", Column( "id" ), Column( "name" ), Column( "type" ), Column( "foo" ) ) Table( "otherFooTable", Column( "idFoo" ), Column( "nameFoo" ), Column( "spam" ) ) It's not clear at all what you're trying to do or why, so this is as good as anything, since it seems to represent the information you actually have. A: Try to avoid accessing your data through fixed numerical indexes as in fieldMap['tabelax'][0][1]. After a year of not looking at your code, it may take you (or others) a while to figure out what it all means in human terms (e.g. "the value of idFoo in table tabelax"). Also, if you ever need to change your data structure (e.g. add another field) then some/all your numerical indexes may need fixing. Your code becomes ossified when the risk of breaking the logic prevents you from modifying the data structure. It is much better to use a class and use class (accessor) methods to access the data structure. That way, the code outside of your class can be preserved even if you need to change your data structure (inside the class) at some future date.
Best way to construct a "complex" data structure in Python
I need to construct a tool that will be used to create field mappings (between tables) in the most automated manner possible. Here is the deal: imagine a table being appended to another. (Let's ignore field type, just for a second...) CREATE OR REPLACE TABLE fooA( id, name, type, foo) CREATE OR REPLACE TABLE otherFooTable( idFoo, nameFoo, spam) I am thinking of creating a structure like this: fieldMap = {'otherFooTable': [('idFoo','id'),('nameFoo','name'),('spam','foo')]} I would be able to access this using (for example) print fieldMap['tabelax'][0][1] It's not a very complex structure, but could I run into problems using it? Are there any suggestions on how to handle this sort of issue? I need to store (for now) at least inputTable (I don't want to repeat it for each field mapped), inputField and outputField. There is no reason to store outputTable, because that is always known beforehand. Suggestions and past experiences are deeply appreciated. PS: perhaps a formal structure (like a class) would be better? Thanks
[ "I'd honestly just take hints from (or use) SQLAlchemy or Django Models. These are tried and true data representation methods.\n", "Here is a little wrapper class for FooB's to mimic FooA's, but still retain their FooB-ishness.\nfrom collections import namedtuple\n\n# use namedtuple to define some simple classes (requires Py2.6 or later)\nFooA = namedtuple('FooA', 'id name type foo')\nFooB = namedtuple('FooB', 'idfoo namefoo spam')\n\n# create a wrapper class for FooB's to look like a FooA\nclass FooAMimic(object):\n attrMap = dict(zip(FooA._fields, FooB._fields))\n # or if the fields aren't nicely ordered, declare this mapping explicitly\n #~ attrMap = { 'id' : 'idfoo', 'name' : 'namefoo', 'foo' : 'spam' }\n def __init__(self, obj):\n self.obj = obj\n def __getattr__(self, aname):\n ob = self.obj\n if aname in self.attrMap:\n return getattr(ob, self.attrMap[aname])\n elif hasattr(ob, aname):\n return getattr(ob, aname)\n else:\n raise AttributeError(\"no such attribute \" + aname)\n def __dir__(self):\n return sorted(set(dir(super(FooAMimic,self)) \n + dir(self.obj) \n + list(FooA._fields)))\n\nUse it like this:\n# make some objects, some FooA, some FooB\nfa = FooA('a', 'b', 'c','d')\nfb = FooB('xx', 'yy', 'zz')\nfc = FooA('e', 'f', 'g','h')\n\n# create list of items that are FooA's, or FooA lookalikes\ncoll = [fa, FooAMimic(fb), fc]\n\n# access objects like FooA's, but notice that the wrapped FooB\n# attributes are still available too\nfor f in sorted(coll, key=lambda k : k.id):\n print f.id, '=', \n try:\n print f.namefoo, \"(really a namefoo)\"\n except AttributeError:\n print f.name\n\nPrints:\na = b\ne = f\nxx = yy (really a namefoo)\n\n", "Think about this\nclass Column( object ):\n def __init__( self, name, type_information=None ):\n self.name = name\n self.type_information = type_information\n self.pk = None\n self.fk_ref = None\n def fk( self, column ): \n self.fk_ref = column\n\nclass Table( object ):\n def __init__( self, name, *columns ):\n self.name = name\n self.columns = dict( (c.name, c) for c in columns )\n def column( self, name ):\n return self.columns[ name ]\n\nTable( \"FOOA\", Column( \"id\" ), Column( \"name\" ), Column( \"type\" ), Column( \"foo\" ) )\n\nTable( \"otherFooTable\", Column( \"idFoo\" ), Column( \"nameFoo\" ), Column( \"spam\" ) )\n\nIt's not clear at all what you're tying to do or why, so this is as good as anything, since it seems to represent the information you actually have.\n", "Try to avoid accessing your data through fixed numerical indexes as in fieldMap['tabelax'][0][1]. After a year of not looking at your code, it may take you (or others) a while to figure out what it all means in human terms (e.g. \"the value of idFoo in table tabelax\"). Also, if you ever need to change your data structure (e.g. add another field) then some/all your numerical indexes may need fixing. Your code becomes ossified when the risk of breaking the logic prevents you from modifying the data structure.\nIt is much better to use a class and use class (accessor) methods to access the data structure. That way, the code outside of your class can be preserved even if you need to change your data structure (inside the class) at some future date.\n" ]
[ 6, 4, 2, 2 ]
[]
[]
[ "data_structures", "list", "python" ]
stackoverflow_0001632304_data_structures_list_python.txt
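Building on the last answer, here is one possible accessor-class sketch (the class and method names are invented for illustration) that hides the tuple indexes behind named lookups:

class FieldMap(object):
    """Maps input-table fields to output fields, keyed by name."""
    def __init__(self):
        self._maps = {}   # {input_table: {input_field: output_field}}

    def add(self, table, input_field, output_field):
        self._maps.setdefault(table, {})[input_field] = output_field

    def output_field(self, table, input_field):
        return self._maps[table][input_field]

fm = FieldMap()
fm.add('otherFooTable', 'idFoo', 'id')
fm.add('otherFooTable', 'nameFoo', 'name')
fm.add('otherFooTable', 'spam', 'foo')

print(fm.output_field('otherFooTable', 'idFoo'))   # 'id' -- no magic indexes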
Q: Advice on set-up/management of the WSGI stack? After looking through the many useful and shiny Python frameworks, I find none of them get close to what I need or provide way more than my needs. I'm looking to put something together myself; could define it as a framework, but not full-stack. However, I can't find online what the Python community sees as the correct/standard way to manage WSGI middleware in an application. I'm not looking for framework suggestions, unless it's to provide an example of ways to manage WSGI middleware. Nor am I looking for information on how to get a webserver to talk to Python -- that bit I understand. Rather, I'm looking for advice on how one tells Python what components/middleware to put into the stack, and in which order. For instance, if I wanted to use: Spawning-->memento-->AuthKit-->(?)-->MyApp how would I get those components into the right order, and how would I configure an additional item (say Routes) before MyApp? So; Can you advise on the common/correct/standard way of managing what middleware is included in a WSGI stack for a Python application? Edit Thanks to Michael Dillon for recommending A Do-It-Yourself Framework, which helps highlight my problem. The middleware section of that document states that one should wrap middleware A in middleware B, B in C, and so-on: app = ObjectPublisher(Root()) wrapped_app = AuthMiddleware(app) from paste.evalexception import EvalException exc_wrapped_app = EvalException(wrapped_app) Which shows how to do it in a very simple way. I understand how this works, however it seems too simple when working with a number of middleware packages. Is there a better way to manage how these middleware components are added to the stack? Maybe a common design pattern which reads from a config file? A: That is what a framework does. Some frameworks like Django are fairly rigid and others like Pylons make it easier to mix and match. Since you will likely be using some of the WSGI components from the Paste project sooner or later, you might as well read this article from the Paste folks about a Do-It-Yourself Framework. I'm not suggesting that you should go and build your own framework, but that the article gives a good explanation of how the WSGI stack works and how things go together. A: I'd have to say that Apache/mod_wsgi is probably the most "manageable" of the setups I've used. nginx/fcgi is the fastest, but it's a bit of a headache. A: What middleware do you think you need? You may very well not need to include any WSGI ‘middleware’-like components at all. You can perfectly well put together a loose ‘pseudo-framework’ of standalone libraries without needing to ‘wrap’ the application in middleware at all. (Personally I use a separate form-reading library, data access layer and template engine, none of which know about each other or need to start fiddling with the WSGI environ.) A: My advice is to read the PEP on WSGI, specifically the part on middleware. If you have a question about anything with the words "standard" and "WSGI" in it, the answer is either there, or you're asking the wrong question. A: If you liked the Do-It-Yourself-Framework tutorial mentioned before, but you want to manage these things in a config file, Paste Deploy would be the obvious answer. (It is mentioned in the tutorial, but only very briefly in the very last paragraph). This is what the Pylons framework uses, by the way (and Turbogears 2, which is built upon Pylons, also). A: "it seems too simple when working with a number of middleware packages." How big a number? You won't be working with hundreds or thousands. It will be (a) a small number (under a dozen) and (b) the "right" order isn't magical. Each piece of middleware will have a very, very specific job and very specific requirements for what must come before it. It's much less confusing than you're assuming.
Advice on set-up/management of the WSGI stack?
After looking through the many useful and shiny Python frameworks, I find none of them get close to what I need or provide way more than my needs. I'm looking to put something together myself; could define it as a framework, but not full-stack. However, I can't find online what the Python community sees as the correct/standard way to manage WSGI middleware in an application. I'm not looking for framework suggestions, unless it's to provide an example of ways to manage WSGI middleware. Nor am I looking for information on how to get a webserver to talk to Python -- that bit I understand. Rather, I'm looking for advice on how one tells Python what components/middleware to put into the stack, and in which order. For instance, if I wanted to use: Spawning-->memento-->AuthKit-->(?)-->MyApp how would I get those components into the right order, and how would I configure an additional item (say Routes) before MyApp? So; Can you advise on the common/correct/standard way of managing what middleware is included in a WSGI stack for a Python application? Edit Thanks to Michael Dillon for recommending A Do-It-Yourself Framework, which helps highlight my problem. The middleware section of that document states that one should wrap middleware A in middleware B, B in C, and so-on: app = ObjectPublisher(Root()) wrapped_app = AuthMiddleware(app) from paste.evalexception import EvalException exc_wrapped_app = EvalException(wrapped_app) Which shows how to do it in a very simple way. I understand how this works, however it seems too simple when working with a number of middleware packages. Is there a better way to manage how these middleware components are added to the stack? Maybe a common design pattern which reads from a config file?
[ "That is what a framework does. Some frameworks like Django are fairly rigid and others like Pylons make it easier to mix and match. \nSince you will likely be using some of the WSGI components from the Paste project sooner or later, you might as well read this article from the Paste folks about a Do-It-Yourself Framework. I'm not suggesting that you should go and build your own framework, but that the article gives a good explanation of how the WSGI stack works and how things go together.\n", "I'd have to say that Apache/mod_wsgi is probably the most \"manageable\" of the setups I've used.\nnginx/fcgi is the fastest, but its a bit of a headache.\n", "What middleware do you think you need? You may very well not need to include any WSGI ‘middleware’-like components at all. You can perfectly well put together a loose ‘pseudo-framework’ of standalone libraries without needing to ‘wrap’ the application in middleware at all.\n(Personally I use a separate form-reading library, data access layer and template engine, none of which know about each other or need to start fiddling with the WSGI environ.)\n", "My advice is to read the PEP on WSGI, specifically the part on middleware. If you have a question about anything with the words \"standard\" and \"WSGI\" in it, the answer is either there, or you're asking the wrong question.\n", "If you liked the Do-It-Yourself-Framework tutorial mentioned before, but you want to manage these things in a config file, Paste Deploy would be the obvious answer. (It is mentioned in the tutorial, but only very briefly in the very last paragraph).\nThis is what the Pylons framework uses, by the way (and Turbogears 2, which is built upon Pylons, also).\n", "\"it seems too simple when working with a number of middleware packages.\"\nHow big a number?\nYou won't be working with hundreds or thousands. \nIt will be (a) a small number (under a dozen) and (b) the \"right\" order isn't magical. \nEach piece of middleware will have a very, very specific job and very specific requirements for what must come before it. \nIt's much less confusing than you're assuming.\n" ]
[ 4, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "wsgi" ]
stackoverflow_0001633342_python_wsgi.txt
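To sketch the config-driven idea from the question (this is roughly what Paste Deploy formalises in its config files; the dotted names below are placeholders, except for EvalException, which the question itself imports):

MIDDLEWARE = [
    'myapp.auth:AuthMiddleware',          # placeholder for your own middleware
    'paste.evalexception:EvalException',  # listed last, so it wraps outermost
]

def load(dotted):
    module_name, attr = dotted.split(':')
    module = __import__(module_name, fromlist=[attr])
    return getattr(module, attr)

def build_stack(app, names):
    # Wrap the app in each middleware in turn; the last name listed
    # ends up outermost, i.e. it sees each request first.
    for name in names:
        app = load(name)(app)
    return app

# application = build_stack(my_wsgi_app, MIDDLEWARE)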
Q: ToscaWidgets CalendarDatePicker pylons How does one set the date on the CalendarDatePicker. i.e. it defaults to current date and I want to display it with another date which I will set from my controller. I am displaying the CalendarDatePicker widget in a TableForm from tw.form. I have looked at this for a few hours and can't work out how to do this so any pointers greatly appreciated. import tw.forms as twf form = twf.TableForm('dateSel', action='changeDate', children=[ twf.CalendarDatePicker('StartDate', date_format = "%d/%m/%Y"), twf.CalendarDatePicker('EndDate', date_format = "%d/%m/%Y" ) ]) A: I don't have a copy of twforms laying around, but based on their sample code, it looks like you might want to do something like: from datetime import datetime start = twf.CalendarDatePicker('StartDate', date_format = "%d/%m/%Y") start.default = datetime.now() # or any valid datetime object end = twf.CalendarDatePicker('EndDate', date_format = "%d/%m/%Y" ) end.default = datetime.now() # or any valid datetime object form = twf.TableForm('dateSel', action='changeDate', children=[start, end])
ToscaWidgets CalendarDatePicker pylons
How does one set the date on the CalendarDatePicker. i.e. it defaults to current date and I want to display it with another date which I will set from my controller. I am displaying the CalendarDatePicker widget in a TableForm from tw.form. I have looked at this for a few hours and can't work out how to do this so any pointers greatly appreciated. import tw.forms as twf form = twf.TableForm('dateSel', action='changeDate', children=[ twf.CalendarDatePicker('StartDate', date_format = "%d/%m/%Y"), twf.CalendarDatePicker('EndDate', date_format = "%d/%m/%Y" ) ])
[ "I don't have a copy of twforms laying around, but based on their sample code, it looks like you might want to do something like:\nfrom datetime import datetime\n\nstart = twf.CalendarDatePicker('StartDate', date_format = \"%d/%m/%Y\")\nstart.default = datetime.now() # or any valid datetime object\n\nend = twf.CalendarDatePicker('EndDate', date_format = \"%d/%m/%Y\" )\nstart.default = datetime.now() # or any valid datetime object\n\nform = twf.TableForm('dateSel', action='changeDate', children=[start, end])\n\n" ]
[ 0 ]
[]
[]
[ "pylons", "python", "toscawidgets" ]
stackoverflow_0001634793_pylons_python_toscawidgets.txt
Q: Designing a simple network packet I'm learning socket programming (in Python) and I was wondering what the best/typical way of encapsulating data is? My packets will be used to issue run, stop, configure, etc. commands on the receiving side. Is it helpful to use JSON or just straight text? A: I suggest you use a fixed, or mostly fixed format, as this makes things easier. By then using features such as the standard library's struct.Struct, with its pack() and unpack() methods, or possibly a slightly more featured package such as Construct, you should have much of the parsing work done for you ;-) A: I suggest plain text to begin with - it is easier to debug. The format that your text takes depends on what you're doing, how many commands, arguments, etc. Have you fleshed out how your commands will look? Once you figure out what that looks like it'll likely suggest a format all on its own. Are you using TCP or UDP? TCP is easy since it is a stream, but if you're using UDP keep in mind the maximum size of UDP packets and thus how big your message can be. A: If you're developing something as a learning exercise you might find it best to go with a structured text (i.e. human-readable and human-writable) format. An example would be to use a fixed number of fields per command, fixed width text fields and/or easily parsable field delimiters. Generally text is less efficient in terms of packet size, but it does have the benefits that you can read it easily if you do a packet capture (e.g. using Wireshark) or if you want to use telnet to mimic a client. And if this is only a learning exercise then ease of debugging is a significant issue. A: Take a look at how scapy (an awesome Python packet manipulation library) implements it. Looks like they have a handful of fields.
Designing a simple network packet
I'm learning socket programming (in python) and I was wondering what the best/typical way of encapsulating data is? My packets will be used to issue run, stop, configure, etc. commands on the receiving side. Is it helpful to use JSON or just straight text?
[ "I suggest you use a fixed, or mostly fixed format, as this make things easier.\nBy then using features such as the standard library's struct.Struct, with its pack() and umpack() methods, or possibly a slightly more featured pacakges such as Construct, you should have much of the parsing work done for you ;-)\n", "I suggest plain text to begin with - it is easier to debug. The format that your text takes depends on what you're doing, how many commands, arguments, etc. Have you fleshed out how your commands will look? Once you figure out what that looks like it'll likely suggest a format all on its own.\nAre you using TCP or UDP? TCP is easy since it is a stream, but if you're using UDP keep in mind the maximum size of UDP packets and thus how big your message can be.\n", "If you're developing something as a learning exercise you might find it best to go with a structured text (ie. human readable and human writable) format.\nAn example would be to use a fixed number of fields per command, fixed width text fields and/or easily parsable field delimiters.\nGenerally text is less efficient in terms of packet size, but it does have the benefits that you can read it easily if you do a packet capture (eg. using wireshark) or if you want to use telnet to mimic a client.\nAnd if this is only a learning exercise then ease of debugging is a significant issue.\n", "Take a look at how scapy (an awesome Python packet manipulation library) implements it. Looks like that have a handful of fields.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "network_programming", "python" ]
stackoverflow_0001633934_network_programming_python.txt
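A runnable sketch of the struct.Struct suggestion from the first answer. The packet layout here is an assumption for illustration, not any standard:

import struct

# Toy layout: 1 unsigned byte for the command code (1=run, 2=stop,
# 3=configure), then an unsigned 16-bit payload length in network byte
# order, then the payload bytes themselves.
HEADER = struct.Struct('!BH')

def encode(command, payload=b''):
    return HEADER.pack(command, len(payload)) + payload

def decode(data):
    command, length = HEADER.unpack_from(data)
    payload = data[HEADER.size:HEADER.size + length]
    return command, payload

packet = encode(3, b'speed=10')
print(decode(packet))   # (3, b'speed=10') -- 'configure' plus its argument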
Q: Regular expression implementation details A question that I answered got me wondering: How are regular expressions implemented in Python? What sort of efficiency guarantees are there? Is the implementation "standard", or is it subject to change? I thought that regular expressions would be implemented as DFAs, and therefore were very efficient (requiring at most one scan of the input string). Laurence Gonsalves raised an interesting point that not all Python regular expressions are regular. (His example is r"(a+)b\1", which matches some number of a's, a b, and then the same number of a's as before). This clearly cannot be implemented with a DFA. So, to reiterate: what are the implementation details and guarantees of Python regular expressions? It would also be nice if someone could give some sort of explanation (in light of the implementation) as to why the regular expressions "cat|catdog" and "catdog|cat" lead to different search results in the string "catdog", as mentioned in the question that I referenced before. A: Python's re module was based on PCRE, but has moved on to their own implementation. Here is the link to the C code. It appears as though the library is based on recursive backtracking when an incorrect path has been taken. (The article linked below includes a graph of match time against text size n for the pathological pattern a?^n a^n matching a^n; keep in mind that this graph is not representative of normal regex searches.) http://swtch.com/~rsc/regexp/regexp1.html A: There are no "efficiency guarantees" on Python REs any more than on any other part of the language (C++'s standard library is the only widespread language standard I know that tries to establish such standards -- but there are no standards, even in C++, specifying that, say, multiplying two ints must take constant time, or anything like that); nor is there any guarantee that big optimizations won't be applied at any time. Today, F. Lundh (originally responsible for implementing Python's current RE module, etc), presenting Unladen Swallow at Pycon Italia, mentioned that one of the avenues they'll be exploring is to compile regular expressions directly to LLVM intermediate code (rather than their own bytecode flavor to be interpreted by an ad-hoc runtime) -- since ordinary Python code is also getting compiled to LLVM (in a soon-forthcoming release of Unladen Swallow), a RE and its surrounding Python code could then be optimized together, even in quite aggressive ways sometimes. I doubt anything like that will be anywhere close to "production-ready" very soon, though;-). A: Matching regular expressions with backreferences is NP-hard, which is at least as hard as NP-Complete. That basically means that it's as hard as any problem you're likely to encounter, and most computer scientists think it could require exponential time in the worst case. If you could match such "regular" expressions (which really aren't, in the technical sense) in polynomial time, you could win a million bucks.
Regular expression implementation details
A question that I answered got me wondering: How are regular expressions implemented in Python? What sort of efficiency guarantees are there? Is the implementation "standard", or is it subject to change? I thought that regular expressions would be implemented as DFAs, and therefore were very efficient (requiring at most one scan of the input string). Laurence Gonsalves raised an interesting point that not all Python regular expressions are regular. (His example is r"(a+)b\1", which matches some number of a's, a b, and then the same number of a's as before). This clearly cannot be implemented with a DFA. So, to reiterate: what are the implementation details and guarantees of Python regular expressions? It would also be nice if someone could give some sort of explanation (in light of the implementation) as to why the regular expressions "cat|catdog" and "catdog|cat" lead to different search results in the string "catdog", as mentioned in the question that I referenced before.
[ "Python's re module was based on PCRE, but has moved on to their own implementation.\nHere is the link to the C code.\nIt appears as though the library is based on recursive backtracking when an incorrect path has been taken.\n\nRegular expression and text size n\na?nan matching an\nKeep in mind that this graph is not representative of normal regex searches.\nhttp://swtch.com/~rsc/regexp/regexp1.html\n", "There are no \"efficiency guarantees\" on Python REs any more than on any other part of the language (C++'s standard library is the only widespread language standard I know that tries to establish such standards -- but there are no standards, even in C++, specifying that, say, multiplying two ints must take constant time, or anything like that); nor is there any guarantee that big optimizations won't be applied at any time.\nToday, F. Lundh (originally responsible for implementing Python's current RE module, etc), presenting Unladen Swallow at Pycon Italia, mentioned that one of the avenues they'll be exploring is to compile regular expressions directly to LLVM intermediate code (rather than their own bytecode flavor to be interpreted by an ad-hoc runtime) -- since ordinary Python code is also getting compiled to LLVM (in a soon-forthcoming release of Unladen Swallow), a RE and its surrounding Python code could then be optimized together, even in quite aggressive ways sometimes. I doubt anything like that will be anywhere close to \"production-ready\" very soon, though;-).\n", "Matching regular expressions with backreferences is NP-hard, which is at least as hard as NP-Complete. That basically means that it's as hard as any problem you're likely to encounter, and most computer scientists think it could require exponential time in the worst case. If you could match such \"regular\" expressions (which really aren't, in the technical sense) in polynomial time, you could win a million bucks.\n" ]
[ 21, 8, 2 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0000844183_python_regex.txt
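The pathological pattern from the graph above is easy to reproduce with Python's backtracking engine; keep n small, since the running time grows roughly exponentially (absolute timings will vary by machine):

import re
import time

def worst_case(n):
    # n optional a's followed by n literal a's, matched against n a's:
    # a backtracking matcher explores on the order of 2**n paths.
    pattern = 'a?' * n + 'a' * n
    text = 'a' * n
    start = time.time()
    re.match(pattern, text)
    return time.time() - start

for n in (5, 10, 15, 20, 25):
    print(n, '->', round(worst_case(n), 4), 'seconds')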
Q: CGI & Python - return choice to python script I have a python script that, once executed from command line, performs the needed operations and exit. If, during the execution, the program is not able to perform a choice, he prompts the user and asks them to take a decision! Now I have to implement a web interface, and here comes the problems ... I created an htm file with a simple form that, once the user "submits" he passes the parameters to a cgi script that contains just one line and runs my python program ! And it seems to work. My question is: if it happens that the program needs to ask the user for a choice, how can I return this value to my python script? To prompt the user for a choice I need to create a webpage with the possible choices ... Does anybody know how can I open a webpage with python ? The second and most important question is: how can I return a value from a web page to my "original" python module? In python I would simply make a return choice but with a web page I have no idea how to do it. Recap: Starting from a web page, I run a cgi script ! Done This CGI script runs my python program... Done If the program is not able to take a decision, 3a create a web page with the possible choices I can do it 3b display the created web page ???????? 3c return the response to the original python module ???????? A: "Does anybody know how can I open a webpage with python ? The second and most important question is: how can I return a value from a web page to my "original" python module ??" This is all very simple. However, you need to read about what the web really is. You need to read up on web servers, browsers and the HTTP protocol. Here's the golden rule: A web server responds to HTTP requests with a web page. The second part of that rules is: A Request is a URL and a method (GET or POST). There's more to a request, but that's the important part. That's all that ever happens. So, you have to recast your use case into the above form. Person clicks a bookmark; browser makes an empty request (to a URL of "/") and gets a form. Person fills in the form, clicks the button; browser POST's the request (to the URL in the form) and gets one of two things. If your script worked, they get their page that says it all worked. If your script needed information, they get another form. Person fills in the form, clicks the button; browser POST's the request (to the URL in the form) and gets the final page that says it all worked. You can do all of this from a "CGI" script. Use mod_wsgi and plug your stuff into the Apache web server. Or, you can get a web framework. Django, TurboGears, web.py, etc. You'll be happier with a framework even though you think your operation is simple. A: I think you could modify the Python script to return an error if it needs a choice and accept choices as arguments. If you do that, you can check the return value from your cgi script and use that to call the python script appropriately and return the information to the user. Is there a reason why you can't call the python script directly? I suspect you'd end up with a neater implementation if you were to avoid the intermediate CGI. What webserver are you using? What cgi language? Perl maybe? A: Web pages don't return values, and they aren't programs - a web page is just a static collection of HTML or something similar which a browser can display. Your CGI script can't wait for the user to send a response - it must send the web page to the user and terminate. 
However, if the browser performs a second query to your CGI program (or a different CGI program) based on the data in that page, then you can collect the information that way and continue from that point. A: Probably easier if you write your cgi in python then call your python script from the cgi script. Update your script to separate the UI from the logic. Then it should be relatively easy to interface your script with the (python) cgi script. For python cgi reference: Five minutes to a Python CGI A: http://docs.python.org/library/cgihttpserver.html I think first off you need to separate your code from your interface. When you run a script, it spits out a page. You can pass arguments to it using url parameters. Ideally you want to do your logic, and then pass the results into a template that python prints to the cgi.
CGI & Python - return choice to python script
I have a Python script that, once executed from the command line, performs the needed operations and exits. If, during the execution, the program is not able to make a choice, it prompts the user and asks them to make a decision. Now I have to implement a web interface, and here come the problems... I created an HTML file with a simple form that, once the user submits it, passes the parameters to a CGI script that contains just one line and runs my Python program. And it seems to work. My question is: if it happens that the program needs to ask the user for a choice, how can I return this value to my Python script? To prompt the user for a choice I need to create a webpage with the possible choices... Does anybody know how I can open a webpage with Python? The second and most important question is: how can I return a value from a web page to my "original" Python module? In Python I would simply return the choice, but with a web page I have no idea how to do it. Recap: Starting from a web page, I run a CGI script! Done This CGI script runs my Python program... Done If the program is not able to make a decision, 3a create a web page with the possible choices I can do it 3b display the created web page ???????? 3c return the response to the original Python module ????????
[ "\"Does anybody know how can I open a webpage with python ? The second and most important question is: how can I return a value from a web page to my \"original\" python module ??\"\nThis is all very simple.\nHowever, you need to read about what the web really is. You need to read up on web servers, browsers and the HTTP protocol.\nHere's the golden rule: A web server responds to HTTP requests with a web page.\nThe second part of that rules is: A Request is a URL and a method (GET or POST). There's more to a request, but that's the important part.\nThat's all that ever happens. So, you have to recast your use case into the above form.\nPerson clicks a bookmark; browser makes an empty request (to a URL of \"/\") and gets a form. \nPerson fills in the form, clicks the button; browser POST's the request (to the URL in the form) and gets one of two things.\n\nIf your script worked, they get their page that says it all worked.\nIf your script needed information, they get another form.\n\nPerson fills in the form, clicks the button; browser POST's the request (to the URL in the form) and gets the final page that says it all worked.\nYou can do all of this from a \"CGI\" script. Use mod_wsgi and plug your stuff into the Apache web server.\nOr, you can get a web framework. Django, TurboGears, web.py, etc. You'll be happier with a framework even though you think your operation is simple.\n", "I think you could modify the Python script to return an error if it needs a choice and accept choices as arguments. If you do that, you can check the return value from your cgi script and use that to call the python script appropriately and return the information to the user.\nIs there a reason why you can't call the python script directly? I suspect you'd end up with a neater implementation if you were to avoid the intermediate CGI.\nWhat webserver are you using? What cgi language? Perl maybe?\n", "Web pages don't return values, and they aren't programs - a web page is just a static collection of HTML or something similar which a browser can display. Your CGI script can't wait for the user to send a response - it must send the web page to the user and terminate.\nHowever, if the browser performs a second query to your CGI program (or a different CGI program) based on the data in that page, then you can collect the information that way and continue from that point.\n", "Probably easier if you write your cgi in python then call your python script from the cgi script.\nUpdate your script to separate the UI from the logic.\nThen it should be relatively easy to interface your script with the (python) cgi script.\nFor python cgi reference:\nFive minutes to a Python CGI\n", "http://docs.python.org/library/cgihttpserver.html\nI think first off you need to separate your code from your interface. When you run a script, it spits out a page. You can pass arguments to it using url parameters. Ideally you want to do your logic, and then pass the results into a template that python prints to the cgi.\n" ]
[ 9, 1, 0, 0, 0 ]
[]
[]
[ "cgi", "python" ]
stackoverflow_0000796906_cgi_python.txt
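A minimal CGI sketch of the two-request flow described in the first answer -- the field name and wording are illustrative. Note that each request runs a fresh process, so anything the script needs to remember must travel in the form itself (e.g. hidden fields):

#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()
print('Content-Type: text/html')
print('')

choice = form.getfirst('choice')
if choice is None:
    # First request: no decision yet, so send a form that posts back here.
    print('<form method="post" action="">')
    print('Pick one: <input name="choice"> <input type="submit">')
    print('</form>')
else:
    # Second request: the browser sent the user's decision back to us.
    print('<p>You chose: %s</p>' % cgi.escape(choice))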
Q: Google app engine ReferenceProperty relationships I'm trying to get my models related using ReferenceProperty, but am not having a huge amount of luck. I have 3 levels: Group, Topic, then Pros, and Cons. As in a Group houses many topics, and within each topic could be many Pros and Cons. I am able to store new Groups nice and fine, but I don't have any idea how to store topics underneath these groups. I want to link from a page with a link "New topic" underneath each group, that takes them to a simple form (1 field for now). Obviously the URL will need to have some sort of reference to the id of the group or something. Here are my models: class Groups(db.Model): group_user = db.UserProperty() group_name = db.StringProperty(multiline=True) group_date = db.DateTimeProperty(auto_now_add=True) class Topics(db.Model): topic_user = db.UserProperty() topic_name = db.StringProperty(multiline=True) topic_date = db.DateTimeProperty(auto_now_add=True) topic_group = db.ReferenceProperty(Groups, collection_name='topics') class Pro(db.Model): pro_user = db.UserProperty() pro_content = db.StringProperty(multiline=True) pro_date = db.IntegerProperty(default=0) pro_topic = db.ReferenceProperty(Topics, collection_name='pros') class Con(db.Model): con_user = db.UserProperty() con_content = db.StringProperty(multiline=True) con_date = db.IntegerProperty(default=0) con_topic = db.ReferenceProperty(Topics, collection_name='cons') And one function for the actual page I want to show the list of Groups, and then underneath their topics: class Summary(webapp.RequestHandler): def get(self): groups_query = Groups.all() groups = groups_query.fetch(1000) template_values = { 'groups': groups, } path = os.path.join(os.path.dirname(__file__), 'summary.html') self.response.out.write(template.render(path, template_values)) And finally the html: <html> <body> <a href="/newgroup">New Group</a> <br> {% for group in groups %} <font size="24">{{ group.group_name|escape }}</font><br> by <b>{{ group.group_user }}</b> at <b>{{ group.group_date }}</b> {{ group.raw_id }} <br> <a href="/newtopic?id={{group.key.id}}" >New topic </a> <br> <blockquote> {{ topics.topics_name }} </blockquote> {% endfor %} </body> </html> A: Something that has side effects, such as altering the store (by creating a new object for example) should NOT be an HTTP GET -- GET should essentially only do "read" operations. This isn't pedantry, it's a key bit of HTTP semantics -- browsers, caches, proxies, etc, are allowed to act on GET as read-only operations (for example by caching results and not passing a request to the server if they can satisfy it from cache). For modifications, use HTTP verbs such as POST (most popular essentially because all browsers implement it correctly) or for specialized operations PUT (to create new objects) or DELETE (to remove objects). I assume you're going to use POST to support a variety of browsers. To get a POST from a browser, you need either JavaScript wizardry or a plain old form with method=post -- I'll assume the latter for simplicity. If you're using Django 1.0 (which app engine supports now), it has its own mechanisms to make, validate and accept forms based on models. Other frameworks have their own similarly advanced layers. If you want to avoid "rich" frameworks you'll have to implement by hand templates for your HTML forms, direct them (via some kind of URL dispatching, e.g. in app.yaml) to a handler of yours implemented with a def post(self):, get the data from the request, validate it, form the new object, put it, display some acknowledgment page. What part or parts of the procedure are unclear to you? Your question's title focuses specifically on reference properties but I'm not sure what problem they are giving you in particular -- from the text of your question you appear to be on the right tack about them. Edit: the OP has now clarified in a comment that his problem is how to make something like: "<a href="/newtopic?id={{group.key.id}}" >New topic </a>" work. There's more than one way to do that. If the newtopic URL is served by a static form, the handler for the post "action" of that form could get back to that id= via the Referer: header (a notorious but unfixable mis-spelling), but that's a bit clunky and fragile. Better is to have the newtopic URI served by a handler whose def get gets the id= from the request and inserts it in the resulting form template -- for example, in a hidden input field. Have that form's template contain (among the other fields): <INPUT TYPE=hidden NAME=thegroupid VALUE={{ theid }}> </INPUT> put theid in the context with which you render that template, and it will be in the request that the def post of the action receiving the form finally gets. A: Just to answer the question for others as you probably figured this out: class NewTopic(webapp.RequestHandler): def get(self): groupId = self.request.get('group') # either get the actual group object from the DB and initialize topic with topic_group=object as in 'Nick Johnson's answer, or do as follows topic = Topic() topic.name = self.request.get("topicname") topic.reference = groupId topic.put() A: Thank you for the reply. Yeah, I am aware of GET vs POST. The class I posted was just to actually print all the Groups(). The issue I have is I'm unsure how I use the models to keep data in a sort of hierarchical fashion, with Groups > Topics > Pros/Cons. Grabbing data is simple enough and I am using: class NewGroupSubmit(webapp.RequestHandler): def post(self): group = Groups() if users.get_current_user(): group.group_user = users.get_current_user() group.group_name = self.request.get('groupname') group.put() self.redirect('/summary') I need another function to add a new topic, that stores it within that group. So let's say a group is "Cars", for instance; the topics might be "Ferrari", "Porsche", "BMW", and then pros/cons for each topic. I realise I'm being a little vague, but it's because I'm very new to relational databasing and not quite used to the terminology. A: I'm not quite sure what problem you're having. Everything you list looks fine - the ReferenceProperties are set up according to what one would expect from your description. The only problem I can see is that in your template, you're referring to a variable "topics", which isn't defined anywhere, and you're not iterating through the topics for a group anywhere. You can do that like this: <html> <body> <a href="/newgroup">New Group</a> <br> {% for group in groups %} <font size="24">{{ group.group_name|escape }}</font><br> by <b>{{ group.group_user }}</b> at <b>{{ group.group_date }}</b> {{ group.raw_id }} <br> <a href="/newtopic?id={{group.key.id}}" >New topic </a> <br> Topics: <ul> {% for topic in group.topics %} <li>{{topic.topic_name}}</li> {% endfor %} </ul> {% endfor %} </body> </html> To create a new topic, just use the constructor, passing in the required arguments: mytopic = Topic(topic_name="foo", topic_group=somegroup) Here, somegroup should be either a Group object, or a key for a Group object.
Google app engine ReferenceProperty relationships
I'm trying to get my models related using ReferenceProperty, but am not having a huge amount of luck. I have 3 levels: Group, Topic, then Pros, and Cons. As in a Group houses many topics, and within each topic could be many Pros and Cons. I am able to store new Groups nice and fine, but I don't have any idea how to store topics underneath these groups. I want to link from a page with a link "New topic" underneath each group, that takes them to a simple form (1 field for now). Obviously the URL will need to have some sort of reference to the id of the group or something. Here are my models: class Groups(db.Model): group_user = db.UserProperty() group_name = db.StringProperty(multiline=True) group_date = db.DateTimeProperty(auto_now_add=True) class Topics(db.Model): topic_user = db.UserProperty() topic_name = db.StringProperty(multiline=True) topic_date = db.DateTimeProperty(auto_now_add=True) topic_group = db.ReferenceProperty(Groups, collection_name='topics') class Pro(db.Model): pro_user = db.UserProperty() pro_content = db.StringProperty(multiline=True) pro_date = db.IntegerProperty(default=0) pro_topic = db.ReferenceProperty(Topics, collection_name='pros') class Con(db.Model): con_user = db.UserProperty() con_content = db.StringProperty(multiline=True) con_date = db.IntegerProperty(default=0) con_topic = db.ReferenceProperty(Topics, collection_name='cons') And one function for the actual page I want to show the list of Groups, and then underneath their topics: class Summary(webapp.RequestHandler): def get(self): groups_query = Groups.all() groups = groups_query.fetch(1000) template_values = { 'groups': groups, } path = os.path.join(os.path.dirname(__file__), 'summary.html') self.response.out.write(template.render(path, template_values)) And finally the html: <html> <body> <a href="/newgroup">New Group</a> <br> {% for group in groups %} <font size="24">{{ group.group_name|escape }}</font><br> by <b>{{ group.group_user }}</b> at <b>{{ group.group_date }}</b> {{ group.raw_id }} <br> <a href="/newtopic?id={{group.key.id}}" >New topic </a> <br> <blockquote> {{ topics.topics_name }} </blockquote> {% endfor %} </body> </html>
[ "Something that has side effects, such as altering the store (by creating a new object for example) should NOT be an HTTP GET -- GET should essentially only do \"read\" operations. This isn't pedantry, it's a key bit of HTTP semantics -- browsers, caches, proxies, etc, are allowed to act on GET as read-only operations (for example by caching results and not passing a request to the server if they can satisfy it from cache).\nFor modifications, use HTTP verbs such as POST (most popular essentially because all browsers implement it correctly) or for specialized operations PUT (to create new objects) or DELETE (to remove objects). I assume you'll be going to use POST to support a variety of browsers.\nTo get a POST from a browser, you need either Javascript wizardy or a plain old form with method=post -- I'll assume the latter for simplicity.\nIf you're using Django 1.0 (which app engine supports now), it has its own mechanisms to make, validate and accept forms based on models. Other frameworks have their own similarly advanced layers.\nIf you want to avoid \"rich\" frameworks you'll have to implement by hand templates for your HTML forms, direct them (via some kind of URL dispatching, e.g. in app.yaml) to a handler of yours implementing with a def post(self):, get the data from the request, validate it, form the new object, put it, display some acknowledgment page.\nWhat part or parts of the procedure are unclear to you? Your question's title focuses specifically on reference properties but I'm not sure what problem they are giving you in particular -- from the text of your question you appear to be on the right tack about them.\nEdit: the OP has now clarified in a comment that his problem is how to make something like:\n\"<a href=\"/newtopic?id={{group.key.id}}\" >New topic </a>\" \n\nwork. There's more than one way to do that. If the newtopic URL is served by a static form, the handler for the post \"action\" of that form could get back to that id= via the Referer: header (a notorious but unfixable mis-spelling), but that's a bit clunky and fragile. Better is to have the newtopic URI served by a handler whose def get gets the id= from the request and inserts it in the resulting form template -- for example, in a hidden input field. Have that form's template contain (among the other fields):\n<INPUT TYPE=hidden NAME=thegroupid VALUE={{ theid }}> </INPUT>\n\nput theid in the context with which you render that template, and it will be in the request that the def post of the action receiving the form finally gets.\n", "Just to answer the question for others as you probably figured this out:\n\nclass NewTopic(webapp.RequestHandler):\n def get(self):\n groupId = self.request.get('group')\n # either get the actual group object from the DB and initialize topic with topic_group=object as in 'Nick Johnson's answer, or do as follows\n topic = Topic()\n topic.name = self.request.get(\"topicname\")\n topic.reference = groupId\n topic.put()\n\n", "Thankyou for the reply.\nYeah I am aware of the get vs post. 
The class I posted was just to actually print all the Groups().\nThe issue I have is I'm unsure how I use the models to keep data in a sort of hierarchical fashion, with Groups > Topics > Pros/Cons.\nGrabbing data is simple enough and I am using:\nclass NewGroupSubmit(webapp.RequestHandler):\n def post(self):\n\n group = Groups()\n if users.get_current_user():\n group.group_user = users.get_current_user() \n group.group_name = self.request.get('groupname')\n\n group.put()\n self.redirect('/summary')\n\nI need another function to add a new topic that stores it within that group. So let's say a group is \"Cars\" for instance; the topics might be \"Ferrari\", \"Porsche\", \"BMW\", and then pros/cons for each topic. I realise I'm being a little vague, but it's because I'm very new to relational databases and not quite used to the terminology.\n", "I'm not quite sure what problem you're having. Everything you list looks fine - the ReferenceProperties are set up according to what one would expect from your description. The only problem I can see is that in your template, you're referring to a variable \"topics\", which isn't defined anywhere, and you're not iterating through the topics for a group anywhere. You can do that like this:\n<html>\n <body>\n <a href=\"/newgroup\">New Group</a>\n <br>\n {% for group in groups %}\n\n <font size=\"24\">{{ group.group_name|escape }}</font><br> by <b>{{ group.group_user }}</b> at <b>{{ group.group_date }}</b> {{ group.raw_id }}\n <br>\n <a href=\"/newtopic?id={{group.key.id}}\" >New topic </a>\n <br>\n Topics:\n <ul>\n {% for topic in group.topics %}\n <li>{{topic.topic_name}}</li>\n {% endfor %}\n </ul>\n {% endfor %}\n </body>\n</html>\n\nTo create a new topic, just use the constructor, passing in the required arguments:\nmytopic = Topics(topic_name=\"foo\", topic_group=somegroup)\n\nHere, somegroup should be either a Groups object, or a key for a Groups object.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "django_models", "django_templates", "google_app_engine", "model", "python" ]
stackoverflow_0001210321_django_models_django_templates_google_app_engine_model_python.txt
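A compact illustration of the hidden-field pattern the first answer describes, adapted to the models above; the handler name, URL, and the id/groupid/topicname field names are assumptions for this sketch, not part of the original code:

class NewTopic(webapp.RequestHandler):
    def get(self):
        group = Groups.get_by_id(int(self.request.get('id')))
        self.response.out.write(
            '<form method="post" action="/newtopic">'
            '<input type="hidden" name="groupid" value="%d">'  # carries the group id
            '<input type="text" name="topicname">'
            '<input type="submit" value="Create"></form>' % group.key().id())

    def post(self):
        group = Groups.get_by_id(int(self.request.get('groupid')))
        topic = Topics(topic_name=self.request.get('topicname'),
                       topic_group=group)  # ReferenceProperty accepts the model instance
        topic.put()
        self.redirect('/summary')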
Q: What is wrong with my Django model field? I am trying to write a PhoneField that converts its value to a standardized form. In this case, I want to use this clean method. def clean(self): phone = self.cleaned_data.get('phone') # Is it already standardized ? if phone.startswith('+'): mo = re.search(r'^\+\d{2,3}\.\d{9,11}$', phone) if not mo: raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).')) else: return phone # If not, it must be a french number. phone = re.sub("\D", "", phone) # Suppression des caractères séparateurs mo = re.search(r'^\d{10}$', phone) # Numéro à 10 chiffres if not mo: raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).')) else: phone = mo.group()[-9:] return u'+33.%s' % phone If I use it in a form, it works well. But I would like to use it as a field. Here is what I tried: EMPTY_VALUES = (None, '') class PhoneInput (forms.TextInput): def render(self, name, value, attrs=None): if value not in EMPTY_VALUES: value = phone_render(value) else: value = None return super(PhoneInput, self).render(name, value, attrs) class PhoneField(models.CharField): widget = PhoneInput def __init__(self, *args, **kwargs): kwargs['max_length'] = 16 super(PhoneField, self).__init__(*args, **kwargs) def get_internal_type(self): return "CharField" def clean(self, value): phone = super(PhoneField, self).clean(value) # Le numéro contient-il un indicatif ? if phone.startswith('+'): mo = re.search(r'^\+\d{2,3}\.\d{9,11}$', phone) if not mo: raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).')) else: return phone # Pas d'indicatif : on est en France par défaut phone = re.sub("\D", "", phone) # Suppression des caractères séparateurs mo = re.search(r'^\d{10}$', phone) # Numéro à 10 chiffres if not mo: raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).')) else: phone = mo.group()[-9:] return u'+33.%s' % phone But the clean method is never called. Can you help me ? A: You are mixing up the model fields and form fields. Form fields need to be defined first, and then the corresponding model fields need to be told to use those form fields in a model form. See specifying-the-form-field-for-a-model-field documentation Basically you need to define a method called formfield on the model field def formfield(self, **kwargs): # This is a fairly standard way to set up some defaults # while letting the caller override them. defaults = {'form_class': MyFormField} defaults.update(kwargs) return super(HandField, self).formfield(**defaults) The field that you have created above has a clean method, which is a requirement for a form field, not a model field. So now you should define a form field (the one you have defined, except extend it from forms.CharField, not models.CharField) and then define a model field, as per model field creation definitions, and include the method like above. Alternatively and simplistically, you could just choose to define a form field, and in the model form, override the default form field of this particular model field. (Although in this case validation doesn't take place unless data is entered through that model form) A: Here is how I fixed it using becomingGuru's help :) class PhoneFormField(forms.CharField): widget = PhoneInput def clean(self, value): phone = super(PhoneFormField, self).clean(value) # Le numéro contient-il un indicatif ? 
if phone.startswith('+'): mo = re.search(r'^\+\d{2,3}\.\d{9,11}$', phone) if not mo: raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).')) else: return phone # Pas d'indicatif : on est en France par défaut phone = re.sub("\D", "", phone) # Suppression des caractères séparateurs mo = re.search(r'^\d{10}$', phone) # Numéro à 10 chiffres if not mo: raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).')) else: phone = mo.group()[-9:] return u'+33.%s' % phone class PhoneField(models.CharField): def __init__(self, *args, **kwargs): kwargs['max_length'] = 16 super(PhoneField, self).__init__(*args, **kwargs) def get_internal_type(self): return "CharField" def formfield(self, form_class=PhoneFormField, **kwargs): return super(PhoneField, self).formfield(form_class=form_class, **kwargs) Thank you for your help.
What is wrong with my Django model field?
I am trying to write a PhoneField that converts its value to a standardized form. In this case, I want to use this clean method. def clean(self): phone = self.cleaned_data.get('phone') # Is it already standardized ? if phone.startswith('+'): mo = re.search(r'^\+\d{2,3}\.\d{9,11}$', phone) if not mo: raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).')) else: return phone # If not, it must be a french number. phone = re.sub("\D", "", phone) # Suppression des caractères séparateurs mo = re.search(r'^\d{10}$', phone) # Numéro à 10 chiffres if not mo: raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).')) else: phone = mo.group()[-9:] return u'+33.%s' % phone If I use it in a form, it works well. But I would like to use it as a field. Here is what I tried: EMPTY_VALUES = (None, '') class PhoneInput (forms.TextInput): def render(self, name, value, attrs=None): if value not in EMPTY_VALUES: value = phone_render(value) else: value = None return super(PhoneInput, self).render(name, value, attrs) class PhoneField(models.CharField): widget = PhoneInput def __init__(self, *args, **kwargs): kwargs['max_length'] = 16 super(PhoneField, self).__init__(*args, **kwargs) def get_internal_type(self): return "CharField" def clean(self, value): phone = super(PhoneField, self).clean(value) # Le numéro contient-il un indicatif ? if phone.startswith('+'): mo = re.search(r'^\+\d{2,3}\.\d{9,11}$', phone) if not mo: raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).')) else: return phone # Pas d'indicatif : on est en France par défaut phone = re.sub("\D", "", phone) # Suppression des caractères séparateurs mo = re.search(r'^\d{10}$', phone) # Numéro à 10 chiffres if not mo: raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).')) else: phone = mo.group()[-9:] return u'+33.%s' % phone But the clean method is never called. Can you help me ?
[ "You are mixing up the model fields and form fields.\nForm Fields need to be first defined and then corresponding model Fields need to be asked to use those form fields for a model form.\nSee specifying-the-form-field-for-a-model-field documentation \nBasically you need to define a method called formfield on the model field\ndef formfield(self, **kwargs):\n # This is a fairly standard way to set up some defaults\n # while letting the caller override them.\n defaults = {'form_class': MyFormField}\n defaults.update(kwargs)\n return super(HandField, self).formfield(**defaults)\n\nThe Field that you have created above, has a clean method, that is a requirement for a form field, not a model field\nSo, now you should define a form field (the one you have defined, except extend it from forms.CharField, not models.CharField) and then define a model field, as per model field creation definitions, and include the method like above.\nAlternatively and simplistically, you could just choose to define a form field, and in the model form, override the default form field of this particular modelfield. (Altho' in this case validation doesnt take place unless data is entered from that modelform)\n", "Here is how I fixed it using becomingGuru help :)\n class PhoneFormField(forms.CharField):\n widget = PhoneInput\n\n def clean(self, value):\n phone = super(PhoneFormField, self).clean(value)\n\n # Le numéro contient-il un indicatif ?\n if phone.startswith('+'):\n mo = re.search(r'^\\+\\d{2,3}\\.\\d{9,11}$', phone)\n\n if not mo:\n raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).'))\n else:\n return phone\n\n # Pas d'indicatif : on est en France par défaut\n phone = re.sub(\"\\D\", \"\", phone) # Suppression des caractères séparateurs\n\n mo = re.search(r'^\\d{10}$', phone) # Numéro à 10 chiffres\n if not mo:\n raise forms.ValidationError(_(u'Vous devez entrer un numéro de téléphone. (+33.389520638 ou 0389520638).'))\n else:\n phone = mo.group()[-9:]\n\n return u'+33.%s' % phone\n\n class PhoneField(models.CharField):\n def __init__(self, *args, **kwargs):\n kwargs['max_length'] = 16\n super(PhoneField, self).__init__(*args, **kwargs)\n\n def get_internal_type(self):\n return \"CharField\"\n\n def formfield(self, form_class=PhoneFormField, **kwargs):\n return super(PhoneField, self).formfield(form_class=form_class, **kwargs)\n\nThank you for your help.\n" ]
[ 6, 1 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0001635392_django_django_forms_python.txt
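The formfield() hook from the accepted answer is the general recipe for pairing any custom model field with a custom form field. A minimal, self-contained sketch (the UpperField/UpperFormField names are invented, and the normalization is deliberately trivial so only the wiring shows):

from django import forms
from django.db import models

class UpperFormField(forms.CharField):
    def clean(self, value):
        value = super(UpperFormField, self).clean(value)
        return value.upper()  # normalize during form validation

class UpperField(models.CharField):
    def formfield(self, **kwargs):
        defaults = {'form_class': UpperFormField}  # let callers override
        defaults.update(kwargs)
        return super(UpperField, self).formfield(**defaults)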
Q: Django forms: making a disabled field persist between validations At some point I need to display a "disabled" (greyed out by disabled="disabled" attribute) input of type "select". As specified in the standard (xhtml and html4), inputs of type "select" can not have the "readonly" attribute. Note that this is for presentation purposes only, the actual value must end up in the POST. So here is what I do (quoting a part of the form declaration in django): from django import forms _choices = ['to be', 'not to be'] class SomeForm(forms.Form): field = forms.ChoiceField(choices=[(item, item) for item in _choices], widget=forms.HiddenInput()) # the real field mock_field = forms.ChoiceField(required=False, # doesn't get submitted choices=[(item, item) for item in _choices], label="The question", widget=forms.Select(attrs={'disabled':'disabled'})) Then it is initialized like this: initial_val = 'to be' form = SomeForm(initial={'field':initial_val, 'mock_field':initial_val}) And all is well. Well, until the form gets validated and one of the other fields fails the validation. When this happens, the form is reloaded and the values are preserved, but not that of the "mock_field" - it never got submitted (it is disabled). So it is not preserved. While this doesn't affect the data integrity, it is still not so good presentation-wise. Is there any way to preserve that field, with as little hackery as possible? The form is a part of a django.contrib.formtools.FormWizard and the initial values (and some fields) are generated dynamically. Basically, there is a lot of stuff going on already, it'd be great if it was possible not to overcomplicate things. A: Browsers don't POST disabled fields. You can try to copy the field's initial value to mock_field in your Form's __init__ def __init__(self, *args, **kwargs): super(SomeForm, self).__init__(*args, **kwargs) mock_initial = self.fields['field'].initial self.fields['mock_field'].initial = mock_initial Code is not tested. Normally you would be concerned about form.data as well, but in this case it won't be different from initial A: Well, this will be the first time I answer my own question, but I've found a solution and (while it certainly is a hack) it works. Instead of getting the initial value from the form instance (self.fields['whatever'].initial seems to be None inside the constructor), I am getting the value from the keyword argument "initial". And then I set it as the only choice for the "mock" field. Like this: from django import forms _choices = ['to be', 'not to be'] class SomeForm(forms.Form): field = forms.ChoiceField(choices=[(item, item) for item in _choices], widget=forms.HiddenInput()) # the real field mock_field = forms.ChoiceField(required=False, # doesn't get submitted choices=[(item, item) for item in _choices], label="The question", widget=forms.Select(attrs={'disabled':'disabled'})) def __init__(self, *args, **kwargs): super(SomeForm, self).__init__(*args, **kwargs) mock_initial = kwargs['initial']['field'] self.fields['mock_field'].choices = [(mock_initial, mock_initial),] This probably needs some error handling. Obviously, this will not work if the initial value is not provided for the actual field.
Django forms: making a disabled field persist between validations
At some point I need to display a "disabled" (greyed out by disabled="disabled" attribute) input of type "select". As specified in the standard (xhtml and html4), inputs of type "select" can not have the "readonly" attribute. Note that this is for presentation purposes only, the actual value must end up in the POST. So here is what I do (quoting a part of the form declaration in django): from django import forms _choices = ['to be', 'not to be'] class SomeForm(forms.Form): field = forms.ChoiceField(choices=[(item, item) for item in _choices], widget=forms.HiddenInput()) # the real field mock_field = forms.ChoiceField(required=False, # doesn't get submitted choices=[(item, item) for item in _choices], label="The question", widget=forms.Select(attrs={'disabled':'disabled'})) Then it is initialized like this: initial_val = 'to be' form = SomeForm(initial={'field':initial_val, 'mock_field':initial_val}) And all is well. Well, until the form gets validated and one of the other fields fails the validation. When this happens, the form is reloaded and the values are preserved, but not that of the "mock_field" - it never got submitted (it is disabled). So it is not preserved. While this doesn't affect the data integrity, it is still not so good presentation-wise. Is there any way to preserve that field, with as little hackery as possible? The form is a part of a django.contrib.formtools.FormWizard and the initial values (and some fields) are generated dynamically. Basically, there is a lot of stuff going on already, it'd be great if it was possible not to overcomplicate things.
[ "Browsers don't POST disabled fields.\nYou can try to copy fields initial value to mock_field in your Form's __init__\ndef __init__(self, *args, **kwargs):\n super(SomeForm, self).__init__(*args, **kwargs)\n mock_initial = self.fields['field'].initial\n self.fields['mock_field'].initial = mock_initial\n\nCode is not tested. Normally you would be concerned about form.data as well, but in this case it won't be different than initial\n", "Well, this will be the first time I answer my question, but I've found a solution and (while it cerainly is a hack) it works.\nInstead of getting the initial value from the form instance, - self.fields['whatever'].initial seems to be None inside the constructor, I am getting the value from keyword argument \"initial\". And then I set it as the only choice for the \"mock\" field. Like this:\nfrom django import forms\n\n_choices = ['to be', 'not to be']\nclass SomeForm(forms.Form):\n field = forms.ChoiceField(choices=[(item, item) for item in _choices],\n widget=forms.HiddenInput()) # the real field\n\n mock_field = forms.ChoiceField(required=False, # doesn't get submitted\n choices=[(item, item) for item in _choices],\n label=\"The question\",\n widget=forms.Select(attrs={'disabled':'disabled'}))\n\n def __init__(self, *args, **kwargs):\n super(SomeForm, self).__init__(*args, **kwargs)\n mock_initial = kwargs['initial']['field']\n self.fields['mock_field'].choices = [(mock_initial, mock_initial),]\n\nThis probably needs some error handling. Obviously, this will not work if the initial value is not provided for the actual field.\n" ]
[ 3, 1 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0001596054_django_django_forms_python.txt
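The two answers can be combined when, as in the question, the initial value only arrives through the initial keyword argument. A rough, untested sketch of a constructor that also tolerates a missing initial:

def __init__(self, *args, **kwargs):
    super(SomeForm, self).__init__(*args, **kwargs)
    initial = kwargs.get('initial') or {}
    if 'field' in initial:
        val = initial['field']
        self.fields['mock_field'].choices = [(val, val)]  # pin the mock to one choice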
Q: How to run django development server at startup? I added the following command to Sessions -> Startup program but it didn't work. I'm using Ubuntu. sudo -u www-data python manage.py 192.168.1.2:8001 192.168.1.2 is the ip address on ath0. Is it still not available for binding at the stage when this command is executed? What I currently do is add a cron job that restarts the development server every 5 minutes if it's not running. Is there a better way to run it at startup? A: Hopefully you're not trying to run the server in a production environment (according to the django docs). Take a look instead at apache with mod_wsgi. If you are just running for local development, there is no need to run as the www-data user. You might want to look into the @reboot directive for cron, and just run the server as your user. See this answer for details on runserver. A: Mmmm... shouldn't it be? sudo -u www-data python manage.py runserver 192.168.1.2:8001 A: If you want the dev server to always run, you're probably better off setting up a real server on your local machine. It doesn't have to be something fat or big. I use cherokee server. Another option is lighttpd
How to run django development server at startup?
I added the following command to Sessions -> Startup program but it didn't work. I'm using Ubuntu. sudo -u www-data python manage.py 192.168.1.2:8001 192.168.1.2 is the ip address on ath0. Is it still not available for binding at the stage when this command is executed? What I currently do is add a cron job that restarts the development server every 5 minutes if it's not running. Is there a better way to run it at startup?
[ "Hopefully you're not trying to run the server in a production environment (according to the django docs). Take a look instead at apache with mod_wsgi.\nIf you are just running for local development, there is no need to run as the www-data user. You might want to look into the @reboot directive for cron, and just run the server as your user.\nSee this answer for details on runserver.\n", "Mmmm... shouldn't it be?\n\nsudo -u www-data python manage.py runserver 192.168.1.2:8001\n\n", "If you want the dev server to always run, you're probably better off setting up a real server on your local machine. It doesn't have to be something fat or big. I use cherokee server. Another options is lighttpd\n" ]
[ 6, 3, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001634730_django_python.txt
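If you follow the first answer and move to Apache with mod_wsgi, the entry point is itself a small Python file. A minimal sketch for a Django 1.0-era project (the project path and settings module name are placeholders):

import os, sys

sys.path.append('/path/to/myproject')  # make the project importable
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()  # mod_wsgi looks for 'application'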
Q: Is there any Python module similar to Distributed Ruby I am new to Python. I just want to know whether there is any module in Python similar to Ruby's DRb, i.e. one where a client can use objects provided by the DRb server. A: This is generally called "object brokering" and a list of some Python packages in this area can be found by browsing the Object Brokering topic area of the Python Package Index here. The oldest and most widely used of these is Pyro. A: Pyro does what I think you're describing (although I've not used drb). From the website: Pyro is short for PYthon Remote Objects. It is an advanced and powerful Distributed Object Technology system written entirely in Python, that is designed to be very easy to use. Never worry about writing network communication code again, when using Pyro you just write your Python objects like you would normally. With only a few lines of extra code, Pyro takes care of the network communication between your objects once you split them over different machines on the network. All the gory socket programming details are taken care of, you just call a method on a remote object as if it were a local object! A: The standard multiprocessing module might do what you want. A: I have no idea what drb is, but from the little information you have given, it might be something like the Perspective Broker in Twisted Introduction Suppose you find yourself in control of both ends of the wire: you have two programs that need to talk to each other, and you get to use any protocol you want. If you can think of your problem in terms of objects that need to make method calls on each other, then chances are good that you can use twisted's Perspective Broker protocol rather than trying to shoehorn your needs into something like HTTP, or implementing yet another RPC mechanism. The Perspective Broker system (abbreviated PB, spawning numerous sandwich-related puns) is based upon a few central concepts: serialization: taking fairly arbitrary objects and types, turning them into a chunk of bytes, sending them over a wire, then reconstituting them on the other end. By keeping careful track of object ids, the serialized objects can contain references to other objects and the remote copy will still be useful. remote method calls: doing something to a local object and causing a method to get run on a distant one. The local object is called a RemoteReference, and you do something by running its .callRemote method. A: Have you looked at execnet? http://codespeak.net/execnet/ A: For parallel processing and distributed computing I use parallel python.
Is there any Python module similar to Distributed Ruby
I am new to Python. I just want to know whether there is any module in Python similar to Ruby's DRb, i.e. one where a client can use objects provided by the DRb server.
[ "This is generally called \"object brokering\" and a list of some Python packages in this area can be found by browsing the Object Brokering topic area of the Python Package Index here.\nThe oldest and most widely used of these is Pyro.\n", "Pyro does what I think you're discribing (although I've not used drb).\nFrom the website:\n\nPyro is short for PYthon Remote Objects. It is an advanced and powerful Distributed Object Technology system written entirely in Python, that is designed to be very easy to use. Never worry about writing network communication code again, when using Pyro you just write your Python objects like you would normally. With only a few lines of extra code, Pyro takes care of the network communication between your objects once you split them over different machines on the network. All the gory socket programming details are taken care of, you just call a method on a remote object as if it were a local object! \n\n", "The standard multiprocessing module might do what you want.\n", "I have no idea what drb is, but from the little information you have given,\nit might be something like the Perspective Broker in Twisted\n\nIntroduction\nSuppose you find yourself in control\n of both ends of the wire: you have two\n programs that need to talk to each\n other, and you get to use any protocol\n you want. If you can think of your\n problem in terms of objects that need\n to make method calls on each other,\n then chances are good that you can use\n twisted's Perspective Broker protocol\n rather than trying to shoehorn your\n needs into something like HTTP, or\n implementing yet another RPC\n mechanism.\nThe Perspective Broker system\n (abbreviated PB, spawning numerous\n sandwich-related puns) is based upon a\n few central concepts:\nserialization: taking fairly arbitrary\n objects and types, turning them into a\n chunk of bytes, sending them over a\n wire, then reconstituting them on the\n other end. By keeping careful track of\n object ids, the serialized objects can\n contain references to other objects\n and the remote copy will still be\n useful. \nremote method calls: doing\n something to a local object and\n causing a method to get run on a\n distant one. The local object is\n called a RemoteReference, and you do\n something by running its .callRemote\n method.\n\n", "Have you looked at execnet?\nhttp://codespeak.net/execnet/\n", "For parallel processing and distributed computing I use parallel python.\n" ]
[ 6, 2, 1, 0, 0, 0 ]
[]
[]
[ "drb", "python", "ruby" ]
stackoverflow_0001635558_drb_python_ruby.txt
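To make the multiprocessing suggestion concrete, here is a rough sketch of exposing an object over the network with its managers, which is the closest stdlib analogue to DRb; the Counter class, the address, and the authkey are invented for the example:

from multiprocessing.managers import BaseManager

class Counter(object):
    def __init__(self):
        self.value = 0
    def incr(self):
        self.value += 1
        return self.value

counter = Counter()

class MyManager(BaseManager):
    pass

# expose the shared object under the typeid 'get_counter'
MyManager.register('get_counter', callable=lambda: counter)

# server process
server = MyManager(address=('', 50000), authkey='secret').get_server()
server.serve_forever()

A client registers the same 'get_counter' typeid, calls connect() on a manager pointed at the same address and authkey, and then invokes get_counter().incr() as if the object were local.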
Q: Any hints on programming Dia with Python extensions? I'm searching for documentation on how to do it properly. Any hints? A: if you google "dia python", you'll find https://wiki.gnome.org/Apps/Dia/Python which is a good starting point
Any hints on programming Dia with Python extensions?
I'm searching for documentation on how to do it properly. Any hints?
[ "if you google \"dia python\", you'll find https://wiki.gnome.org/Apps/Dia/Python which is a good starting point\n" ]
[ 4 ]
[]
[]
[ "dia", "interface", "python", "uml", "visio" ]
stackoverflow_0001635943_dia_interface_python_uml_visio.txt
Q: SQLAlchemy declarative concrete autoloaded table inheritance I have an existing database and want to access it using SQLAlchemy. Because the database structure is managed by another piece of code (Django ORM, actually) and I don't want to repeat myself by describing every table structure, I'm using autoload introspection. I'm stuck with a simple concrete table inheritance. Payment FooPayment + id (PK) <----FK------+ payment_ptr_id (PK) + user_id + foo + amount + date Here is the code, with table SQL descriptions as docstrings: class Payment(Base): """ CREATE TABLE payments( id serial NOT NULL, user_id integer NOT NULL, amount numeric(11,2) NOT NULL, date timestamp with time zone NOT NULL, CONSTRAINT payment_pkey PRIMARY KEY (id), CONSTRAINT payment_user_id_fkey FOREIGN KEY (user_id) REFERENCES users (id) MATCH SIMPLE) """ __tablename__ = 'payments' __table_args__ = {'autoload': True} # user = relation(User) class FooPayment(Payment): """ CREATE TABLE payments_foo( payment_ptr_id integer NOT NULL, foo integer NOT NULL, CONSTRAINT payments_foo_pkey PRIMARY KEY (payment_ptr_id), CONSTRAINT payments_foo_payment_ptr_id_fkey FOREIGN KEY (payment_ptr_id) REFERENCES payments (id) MATCH SIMPLE) """ __tablename__ = 'payments_foo' __table_args__ = {'autoload': True} __mapper_args__ = {'concrete': True} The actual tables have additional columns, but this is completely irrelevant to the question, so in an attempt to minimize the code I've simplified everything just to the core. The problem is, when I run this: payment = session.query(FooPayment).filter(Payment.amount >= 200.0).first() print payment.date The resulting SQL is meaningless (note the lack of a join condition): SELECT payments_foo.payment_ptr_id AS payments_foo_payment_ptr_id, ... /* More `payments_foo' columns and NO columns from `payments' */ FROM payments_foo, payments WHERE payments.amount >= 200.0 LIMIT 1 OFFSET 0 And when I'm trying to access payment.date I get the following error: Concrete Mapper|FooPayment|payments_foo does not implement attribute u'date' at the instance level. I've tried adding an implicit foreign key reference id = Column('payment_ptr_id', Integer, ForeignKey('payments_payment.id'), primary_key=True) to FooPayment without any success. Trying print session.query(Payment).first().user works (I've omitted the User class and commented out the line) perfectly, so FK introspection works. How can I perform a simple query on FooPayment and access Payment's values from the resulting instance? I'm using SQLAlchemy 0.5.3, PostgreSQL 8.3, psycopg2 and Python 2.5.2. Thanks for any suggestions. A: Your table structures are similar to what is used in joined table inheritance, but they certainly don't correspond to concrete table inheritance, where all fields of the parent class are duplicated in the table of the subclass. Right now you have a subclass with fewer fields than the parent and a reference to an instance of the parent class. Switch to joined table inheritance (and use FooPayment.amount in your condition) or give up on inheritance in favor of simple aggregation (a reference). Filtering by a field in another model doesn't automatically add a join condition. Although it's obvious what condition should be used in the join for your example, it's not possible to determine such a condition in general. That's why you have to define a relation property referring to Payment and use its has() method in the filter to get a proper join condition.
SQLAlchemy declarative concrete autoloaded table inheritance
I have an existing database and want to access it using SQLAlchemy. Because the database structure is managed by another piece of code (Django ORM, actually) and I don't want to repeat myself by describing every table structure, I'm using autoload introspection. I'm stuck with a simple concrete table inheritance. Payment FooPayment + id (PK) <----FK------+ payment_ptr_id (PK) + user_id + foo + amount + date Here is the code, with table SQL descriptions as docstrings: class Payment(Base): """ CREATE TABLE payments( id serial NOT NULL, user_id integer NOT NULL, amount numeric(11,2) NOT NULL, date timestamp with time zone NOT NULL, CONSTRAINT payment_pkey PRIMARY KEY (id), CONSTRAINT payment_user_id_fkey FOREIGN KEY (user_id) REFERENCES users (id) MATCH SIMPLE) """ __tablename__ = 'payments' __table_args__ = {'autoload': True} # user = relation(User) class FooPayment(Payment): """ CREATE TABLE payments_foo( payment_ptr_id integer NOT NULL, foo integer NOT NULL, CONSTRAINT payments_foo_pkey PRIMARY KEY (payment_ptr_id), CONSTRAINT payments_foo_payment_ptr_id_fkey FOREIGN KEY (payment_ptr_id) REFERENCES payments (id) MATCH SIMPLE) """ __tablename__ = 'payments_foo' __table_args__ = {'autoload': True} __mapper_args__ = {'concrete': True} The actual tables have additional columns, but this is completely irrelevant to the question, so in an attempt to minimize the code I've simplified everything just to the core. The problem is, when I run this: payment = session.query(FooPayment).filter(Payment.amount >= 200.0).first() print payment.date The resulting SQL is meaningless (note the lack of a join condition): SELECT payments_foo.payment_ptr_id AS payments_foo_payment_ptr_id, ... /* More `payments_foo' columns and NO columns from `payments' */ FROM payments_foo, payments WHERE payments.amount >= 200.0 LIMIT 1 OFFSET 0 And when I'm trying to access payment.date I get the following error: Concrete Mapper|FooPayment|payments_foo does not implement attribute u'date' at the instance level. I've tried adding an implicit foreign key reference id = Column('payment_ptr_id', Integer, ForeignKey('payments_payment.id'), primary_key=True) to FooPayment without any success. Trying print session.query(Payment).first().user works (I've omitted the User class and commented out the line) perfectly, so FK introspection works. How can I perform a simple query on FooPayment and access Payment's values from the resulting instance? I'm using SQLAlchemy 0.5.3, PostgreSQL 8.3, psycopg2 and Python 2.5.2. Thanks for any suggestions.
[ "Your table structures are similar to what is used in joint table inheritance, but they certainly don't correspond to concrete table inheritance where all fields of parent class are duplicated in the table of subclass. Right now you have a subclass with less fields than parent and a reference to instance of parent class. Switch to joint table inheritance (and use FooPayment.amount in your condition or give up with inheritance in favor of simple aggregation (reference).\nFilter by a field in other model doesn't automatically add join condition. Although it's obvious what condition should be used in join for your example, it's not possible to determine such condition in general. That's why you have to define relation property referring to Payment and use its has() method in filter to get proper join condition.\n" ]
[ 4 ]
[]
[]
[ "autoload", "concrete", "inheritance", "python", "sqlalchemy" ]
stackoverflow_0001633447_autoload_concrete_inheritance_python_sqlalchemy.txt
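A sketch of the joined table inheritance the answer recommends, with the columns spelled out instead of autoloaded so the example stays self-contained (the column lists are trimmed to the essentials):

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Payment(Base):
    __tablename__ = 'payments'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer)

class FooPayment(Payment):
    __tablename__ = 'payments_foo'
    payment_ptr_id = Column(Integer, ForeignKey('payments.id'),
                            primary_key=True)
    foo = Column(Integer)

With this mapping, session.query(FooPayment) joins the two tables on the foreign key automatically, and FooPayment instances expose the parent columns (id, user_id) directly.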
Q: Chaining Deferred Tasks with Google App Engine I have a website I want to stay updated with, and I scrape some content from there every day. I know the site is updated manually at a certain time, and I've set cron schedules to reflect this, but since it is updated manually it could be 10 or even 20 minutes later. Right now I have a hack-ish cron update every 5 minutes, but I'd like to use the deferred library to do things in a more precise manner. I'm trying to chain deferred tasks so I can check if there was an update and defer that same update for a couple of minutes if there was none, and defer again if need be until there is finally an update. I have some code I thought would work, but it only ever defers once, when instead I need to continue deferring until there is an update: (I am using Python) class Ripper(object): def rip(self): if siteHasNotBeenUpdated: deferred.defer(self.rip, _countdown=120) else: updateMySite() This was just a simplified excerpt obviously. I thought this was simple enough to work, but maybe I've just got it all wrong? A: The example you give should work just fine. You need to add logging to determine if deferred.defer is being called when you think it is. More information would help, too: How is siteHasNotBeenUpdated set?
Chaining Deferred Tasks with Google App Engine
I have a website I want to stay updated with, and I scrape some content from there every day. I know the site is updated manually at a certain time, and I've set cron schedules to reflect this, but since it is updated manually it could be 10 or even 20 minutes later. Right now I have a hack-ish cron update every 5 minutes, but I'd like to use the deferred library to do things in a more precise manner. I'm trying to chain deferred tasks so I can check if there was an update and defer that same update for a couple of minutes if there was none, and defer again if need be until there is finally an update. I have some code I thought would work, but it only ever defers once, when instead I need to continue deferring until there is an update: (I am using Python) class Ripper(object): def rip(self): if siteHasNotBeenUpdated: deferred.defer(self.rip, _countdown=120) else: updateMySite() This was just a simplified excerpt obviously. I thought this was simple enough to work, but maybe I've just got it all wrong?
[ "The example you give should work just fine. You need to add logging to determine if deferred.defer is being called when you think it is. More information would help, too: How is siteHasNotBeenUpdated set?\n" ]
[ 2 ]
[]
[]
[ "deferred_execution", "google_app_engine", "python" ]
stackoverflow_0001630001_deferred_execution_google_app_engine_python.txt
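Following the answer's advice to add logging, the chain might be instrumented roughly like this; siteHasBeenUpdated and updateMySite stand in for the real check and update:

import logging
from google.appengine.ext import deferred

def rip():
    if not siteHasBeenUpdated():
        logging.info('no update yet; deferring another check for 120s')
        deferred.defer(rip, _countdown=120)
    else:
        logging.info('update found; scraping now')
        updateMySite()

Deferring a plain module-level function also sidesteps one possible failure mode: deferred pickles the callable it is given, so a bound method of an unpicklable instance can break the chain.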
Q: Is there a convenient way to alias only conflicting columns when joining tables in SQLAlchemy? Sometimes it is useful to map a class against a join instead of a single table when using SQLAlchemy's declarative extension. When column names collide, usually in a one-to-many because all primary keys are named id by default, you can use .alias() to prefix every column with its table name. That is inconvenient if you've already written code that assumes your mapped class has non-prefixed names. For example: from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Table, Column, Integer, ForeignKeyConstraint Base = declarative_base() t1 = Table('t1', Base.metadata, Column('id', Integer, primary_key=True)) t2 = Table('t2', Base.metadata, Column('id', Integer, primary_key=True), Column('fkey', Integer), ForeignKeyConstraint(['fkey'], [t1.c.id])) class ST(Base): __table__ = t1.join(t2) class ST2(Base): __table__ = t1.join(t2).alias() ST has id, fkey properties with each name mapping to the first table in the join that uses the overridden name, so the mapped class does not expose t2's primary key. ST2 has t1_id, t2_id and t2_fkey properties. Is there a convenient way to alias only some of the columns from each table in the join so the mapped class exposes the more convenient non-prefixed property names for most mapped columns? A: You can create an alias for each column separately with its label() method, so something similar to the following is possible (not tested): from sqlalchemy import select def alias_dups(join): dups = set(col.key for col in join.left.columns) & \ set(col.key for col in join.right.columns) columns = [] for col in join.columns: if col.key in dups: col = col.label('%s_%s' % (col.table.name, col.key)) columns.append(col) return select(columns, from_obj=[join]).alias() class ST2(Base): __table__ = alias_dups(t1.join(t2))
Is there a convenient way to alias only conflicting columns when joining tables in SQLAlchemy?
Sometimes it is useful to map a class against a join instead of a single table when using SQLAlchemy's declarative extension. When column names collide, usually in a one-to-many because all primary keys are named id by default, you can use .alias() to prefix every column with its table name. That is inconvenient if you've already written code that assumes your mapped class has non-prefixed names. For example: from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Table, Column, Integer, ForeignKeyConstraint Base = declarative_base() t1 = Table('t1', Base.metadata, Column('id', Integer, primary_key=True)) t2 = Table('t2', Base.metadata, Column('id', Integer, primary_key=True), Column('fkey', Integer), ForeignKeyConstraint(['fkey'], [t1.c.id])) class ST(Base): __table__ = t1.join(t2) class ST2(Base): __table__ = t1.join(t2).alias() ST has id, fkey properties with each name mapping to the first table in the join that uses the overridden name, so the mapped class does not expose t2's primary key. ST2 has t1_id, t2_id and t2_fkey properties. Is there a convenient way to alias only some of the columns from each table in the join so the mapped class exposes the more convenient non-prefixed property names for most mapped columns?
[ "You can create alias for each column separately with its label() method. So it's possible something similar to the following (not tested):\nfrom sqlalchemy import select\n\ndef alias_dups(join):\n dups = set(col.key for col in join.left.columns) & \\\n set(col.key for col in join.right.columns)\n columns = []\n for col in join.columns:\n if col.key in dups:\n col = col.label('%s_%s' % (col.table.name, col.key))\n columns.append(col)\n return select(columns, from_obj=[join]).alias()\n\nclass ST2(Base):\n __table__ = alias_dups(t1.join(t2))\n\n" ]
[ 5 ]
[]
[]
[ "alias", "join", "python", "sqlalchemy" ]
stackoverflow_0001627429_alias_join_python_sqlalchemy.txt
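Assuming alias_dups behaves as written, only the colliding id columns get prefixed, so the mapped class keeps plain names for everything else. A rough, untested usage sketch following the t1/t2 setup above:

class ST2(Base):
    __table__ = alias_dups(t1.join(t2))

row = session.query(ST2).filter(ST2.fkey == 1).first()
print row.t1_id, row.t2_id, row.fkey  # only the duplicated ids are prefixed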
Q: Python debugging in Netbeans I have a problem with debugging Python programs under the Netbeans IDE. When I start debugging, the debugger writes the following log and error. Thank you for your help. [LOG]PythonDebugger : overall Starting >>>[LOG]PythonDebugger.taskStarted : I am Starting a new Debugging Session ... [LOG]This window is an interactive debugging context aware Python Shell [LOG]where you can enter python console commands while debugging >>>c:\documents and settings\aster\.netbeans\6.7\config\nbpython\debug\nbpythondebug\jpydaemon.py args = ['C:\\Documents and Settings\\aster\\.netbeans\\6.7\\config\\nbPython\\debug\\nbpythondebug\\jpydaemon.py', 'localhost', '11111'] localDebuggee= None JPyDbg connecting localhost on in= 11111 /out= 11112 ERROR:JPyDbg connection failed errno(10061) : Connection refused Debug session normal end ERROR :: Server Socket listen for debuggee has timed out (more than 20 seconds wait) java.net.SocketTimeoutException: Accept timed out Thanks for any answers. A: I just installed Python for NetBeans yesterday and hadn't tried the debugger, so I just tried it, and I got the same error. So I thought maybe it was a firewall issue; I disabled my firewall, retried it, and then it worked. However, I restarted the firewall and it's still working, so I don't know. I saw the NetBeans options for Python have an input to specify the beginning listening port (mine was 29000, not 11111 like yours). A: For Python I like WingIDE from Wingware.
Python debugging in Netbeans
I have a problem with debugging Python programs under the Netbeans IDE. When I start debugging, the debugger writes the following log and error. Thank you for your help. [LOG]PythonDebugger : overall Starting >>>[LOG]PythonDebugger.taskStarted : I am Starting a new Debugging Session ... [LOG]This window is an interactive debugging context aware Python Shell [LOG]where you can enter python console commands while debugging >>>c:\documents and settings\aster\.netbeans\6.7\config\nbpython\debug\nbpythondebug\jpydaemon.py args = ['C:\\Documents and Settings\\aster\\.netbeans\\6.7\\config\\nbPython\\debug\\nbpythondebug\\jpydaemon.py', 'localhost', '11111'] localDebuggee= None JPyDbg connecting localhost on in= 11111 /out= 11112 ERROR:JPyDbg connection failed errno(10061) : Connection refused Debug session normal end ERROR :: Server Socket listen for debuggee has timed out (more than 20 seconds wait) java.net.SocketTimeoutException: Accept timed out Thanks for any answers.
[ "I just installed Python for NetBeans yesterday and hadn't tried the debugger, so just tried it, and I got the same error. So I thought maybe it's a Firewall issue, disabled my Firewall and retried it, and then it worked.\nHowever I restarted the Firewall and now it's still working, so I don't know. I saw the Netbeans options for Python have an input to specify the beginning listening port (which mine was 29000 not 11111 like yours).\n", "For Python I like WingIDE from Wingware.\n" ]
[ 1, 1 ]
[]
[]
[ "netbeans", "python" ]
stackoverflow_0001606746_netbeans_python.txt
Q: How am I able to assign a value to a literal? ('a' = 10) def foo(**args): for k, v in args.items(): print type(k), type(v) for k, v in args.items(): k = v print k print type(k) foo(a = 10) foo(**{'a':10}) Gives me <type 'str'> <type 'int'> 10 <type 'int'> <type 'str'> <type 'int'> 10 <type 'int'> So I am confused: how am I able to do this? Since k is a string, shouldn't I be unable to assign to it? I obviously can't do In [35]: 'a' = 10 ------------------------------------------------------------ File "<ipython console>", line 1 SyntaxError: can't assign to literal (<ipython console>, line 1) A: k is not a string, it is the name of a variable. You can easily do k = 'a' k = 10 without any problem, since an assignment statement in Python will assign the name to point to whichever value is on the right-hand side. Strings are immutable, as you mentioned, but this means that as an object, it has no method you can call that will cause it to modify its data. Every variable in Python can always be assigned to point to something else. For example, if you say x = y = 'hello' then both x and y refer to the same object, but assignment statements like x += 'world' or x = 'bacon' will change the binding of x to point to something else. A: k is not a string, it is just a loop variable. You've just assigned some new value to it and it lost its relation to the dictionary you are iterating over. A: Well, when you do for k, v in args.items(), in the first (and the only, in your case) iteration the identifier k starts pointing to a current key of the dictionary "args", which happens to be a string, right? When you do k = v, k starts pointing to whatever v points to, which just happens to be an integer. I don't see much of a problem here. If I understand it correctly, the values of variables in python are references to objects.
How am I able to assign a value to a literal? ('a' = 10)
def foo(**args): for k, v in args.items(): print type(k), type(v) for k, v in args.items(): k = v print k print type(k) foo(a = 10) foo(**{'a':10}) Gives me <type 'str'> <type 'int'> 10 <type 'int'> <type 'str'> <type 'int'> 10 <type 'int'> So I am confused: how am I able to do this? Since k is a string, shouldn't I be unable to assign to it? I obviously can't do In [35]: 'a' = 10 ------------------------------------------------------------ File "<ipython console>", line 1 SyntaxError: can't assign to literal (<ipython console>, line 1)
[ "k is not a string, it is the name of a variable. You can easily do\nk = 'a'\nk = 10\n\nwithout any problem, since an assignment statement in Python will assign the name to point to whichever value is on the right-hand side.\nStrings are immutable, as you mentioned, but this means that as an object, it has no method you can call that will cause it to modify its data. Every variable in Python can always be assigned to point to something else.\nFor example, if you say\nx = y = 'hello'\n\nthen both x and y refer to the same object, but assignment statements like\nx += 'world'\n\nor\nx = 'bacon'\n\nwill change the binding of x to point to something else.\n", "k is not a string, it is just a loop variable. You've just assigned some new value to it and it lost its relation to the dictionary you are iterating over.\n", "Well, when you do for k, v in args.items(), in the first (and the only, in your case) iteration the identifier k starts pointing to a current key of the dictionary \"args\", which happens to be a string, right?\nWhen you do k = v, k starts pointing to whatever v points to, which just happens to be an integer. I don't see much of a problem here.\nIf I understand it correctly, the values of variables in python are references to objects.\n" ]
[ 5, 3, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001636852_python.txt
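A tiny demonstration of the rebinding described in the answers: assigning to the loop variable never touches the dictionary, it merely points the local name at a new object.

d = {'a': 10}
for k, v in d.items():
    k = v            # rebinds the name k; the string 'a' is untouched
print d              # {'a': 10}
print k, type(k)     # 10 <type 'int'>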
Q: Removing Array Elements in Python while keeping track of their position I've got two numpy arrays. The first array contains some zeros (which are distributed randomly over the length of the array), which I would like to remove. My issue is that I would also like to remove the entries of the second array at the index positions where the first array elements are zero. I only came up with a very cumbersome for-loop. Does anyone have an "elegant" method for doing this? Thanks! A: Is this what you want? I am a NumPy newbie. In [1]: import numpy as np In [2]: a = np.array([1,2,0,3,0,4]) In [3]: b = np.array([1,2,3,4,5,6]) In [4]: b[np.where(a)] Out[4]: array([1, 2, 4, 6]) In [5]: np.where(a) Out[5]: (array([0, 1, 3, 5]),) In [6]: a[np.where(a)] Out[6]: array([1, 2, 3, 4]) A: You can use boolean indexing. x!=0 gives you a boolean array with True where x!=0 and False where x==0. If you index either x or y with this array (i.e. x_nozeros=x[x!=0]) then you will get only the elements where x!=0. e.g.: In [1]: import numpy as np In [2]: x = np.array([1,2,0,3,0,4]) In [3]: y = np.arange(1,7) In [4]: indx = x!=0 In [5]: x_nozeros = x[indx] In [6]: y_nozeros = y[indx] In [7]: x_nozeros Out[7]: array([1, 2, 3, 4]) In [8]: y_nozeros Out[8]: array([1, 2, 4, 6])
Removing Array Elements in Python while keeping track of their position
I've got two numpy arrays. The first array contains some zeros (which are distributed randomly over the length of the array), which I would like to remove. My issue is that I would also like to remove the entries of the second array at the index positions where the first array elements are zero. I only came up with a very cumbersome for-loop. Does anyone have an "elegant" method for doing this? Thanks!
[ "Is it what you want? I am a NumPy newbie. \nIn [1]: import numpy as np\n\nIn [2]: a = np.array([1,2,0,3,0,4])\n\nIn [3]: b = np.array([1,2,3,4,5,6])\n\nIn [4]: b[np.where(a)] \nOut[4]: array([1, 2, 4, 6])\n\nIn [5]: np.where(a) \nOut[5]: (array([0, 1, 3, 5]),)\n\nIn [6]: a[np.where(a)] \nOut[6]: array([1, 2, 3, 4])\n\n", "You can use boolean indexing. x!=0 gives you a boolean array with True where x!=0 false where x==0. If you index either x or y with this array (ie x_nozeros=x[x!=0]) then you will get only the elements where x!=0. eg:\nIn [1]: import numpy as np\nIn [2]: x = np.array([1,2,0,3,0,4])\nIn [3]: y = np.arange(1,7)\nIn [4]: indx = x!=0\nIn [5]: x_nozeros = x[indx]\nIn [6]: y_nozeros = y[indx]\nIn [7]: x_nozeros\nOut[7]: array([1, 2, 3, 4])\nIn [8]: y_nozeros\nOut[8]: array([1, 2, 4, 6])\n\n" ]
[ 4, 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0001624395_arrays_numpy_python.txt
Q: IDLE and unicode chars (2.5.4) Why does IDLE handle one symbol correctly but not another? >>> e = '€' >>> print unichr(ord(e)) # looks like a very thin rectangle on my system. >>> p = '£' >>> print unichr(ord(p)) £ >>> ord(e) 128 >>> ord(p) 163 I tried adding various # coding lines, but that didn't help. EDIT: browser should be UTF-8, else this will look rather strange EDIT 2: On my system, the euro char is displayed correctly on line 1, but not in the print line. The pound char is displayed correctly both places. A: The answer depends on what encoding the IDLE REPL is using. You should be more explicit about what's actually unicode text, and what's a byte sequence. Meditate on this example: # -*- coding: utf-8 -*- c = u'€' print type(c) for b in c.encode('utf-8'): print ord(b) c = '€' print type(c) for b in c: print ord(b) EDIT: As for IDLE, it's kind of broken, and needs to be patched to work correctly. IDLE 1.2.2 >>> c = u'€' >>> ord(c) 128 >>> c.encode('utf-8') '\xc2\x80' >>> c u'\x80' >>> print c [the box thingy] >>> c = u'\u20ac' >>> ord(c) 8364 >>> c.encode('utf-8') '\xe2\x82\xac' >>> c u'\u20ac' >>> print c € In the first session, by the time the € is interpreted, it has already been mis-encoded, and is unrecoverable. A: The problem is probably that your font doesn't have the proper glyphs. In addition to getting the encoding right, you have to have the proper font when presenting the text in the IDLE UI. Try using a different font to see if it helps (Arial Unicode has a very large glyph complement, for example). The euro symbol is much newer than the pounds sterling symbol, so your font may not have a euro glyph.
IDLE and unicode chars (2.5.4)
Why does IDLE handle one symbol correctly but not another? >>> e = '€' >>> print unichr(ord(e)) # looks like a very thin rectangle on my system. >>> p = '£' >>> print unichr(ord(p)) £ >>> ord(e) 128 >>> ord(p) 163 I tried adding various # coding lines, but that didn't help. EDIT: browser should be UTF-8, else this will look rather strange EDIT 2: On my system, the euro char is displayed correctly on line 1, but not in the print line. The pound char is displayed correctly both places.
[ "The answer depends what encoding the IDLE REPL is using. You should be more explicit about what's actually unicode text, and what's a byte sequence. Meditate on this example:\n# -*- coding: utf-8 -*-\nc = u'€'\nprint type(c)\nfor b in c.encode('utf-8'):\n print ord(b)\n\nc = '€'\nprint type(c)\nfor b in c:\n print ord(b)\n\nEDIT:\nAs for IDLE, it's kind of borken, and needs to be patched to work correctly. \nIDLE 1.2.2 \n>>> c = u'€'\n>>> ord(c)\n128\n>>> c.encode('utf-8')\n'\\xc2\\x80'\n>>> c\nu'\\x80'\n>>> print c\n[the box thingy]\n\n\n>>> c = u'\\u20ac'\n>>> ord(c)\n8364\n>>> c.encode('utf-8')\n'\\xe2\\x82\\xac'\n>>> c\nu'\\u20ac'\n>>> print c\n€\n\nIn the first session, by the time the € is interpreted, it has already been mis-encoded, and is unrecoverable.\n", "The problem is probably that your font doesn't have the proper glyphs. In addition to getting the encoding right, you have to have the proper font when presenting the text in the IDLE ui. Try using a different font to see if it helps (Arial Unicode has a very large glyph complement, for example).\nThe euro symbol is much newer than the pounds sterling symbol, so your font may not have a euro glyph.\n" ]
[ 3, 0 ]
[]
[]
[ "python", "unicode" ]
stackoverflow_0001637479_python_unicode.txt
Q: Unable to query from entities loaded onto the app engine datastore I am a newbie to Python. I am not able to query the entities (UserDetails and PhoneBook) that I loaded into the App Engine datastore. I have written the UI below based on the YouTube video by Brett on "Developing and Deploying applications on GAE" -- the shoutout application. Well, I just tried to do some reverse engineering to query the datastore but failed at every step. #!/usr/bin/env python import wsgiref.handlers from google.appengine.ext import db from google.appengine.ext import webapp from google.appengine.ext.webapp import template import models class showPhoneBook(db.Model): """ property to store user_name from UI to persist for the session """ user_name = db.StringProperty(required=True) class MyHandler(webapp.RequestHandler): def get(self): ## Query to get the user_id using user_name retrieved from UI ## p = UserDetails.all().filter('user_name = ', user_name) result1 = p.get() for itr1 in result1: userId = itr.user_id ## Query to get the phone book contacts using user_id retrieved ## q = PhoneBook.all().filter('user_id = ', userId) values = { 'phoneBookValues': q } self.request.out.write( template.render('phonebook.html', values)) def post(self): phoneBookuser = showPhoneBook( user_name = self.request.get('username')) phoneBookuser.put() self.redirect('/') def main(): app = webapp.WSGIApplication([ (r'.*',MyHandler)], debug=True) wsgiref.handlers.CGIHandler().run(app) if __name__ == "__main__": main() This is my models.py file where I've defined my UserDetails and PhoneBook classes, #!/usr/bin/env python from google.appengine.ext import db #Table structure of User Details table class UserDetails(db.Model): user_id = db.IntegerProperty(required = True) user_name = db.StringProperty(required = True) mobile_number = db.PhoneNumberProperty(required = True) #Table structure of Phone Book table class PhoneBook(db.Model): contact_id = db.IntegerProperty(required=True) user_id = db.IntegerProperty(required=True) contact_name = db.StringProperty(required=True) contact_number = db.PhoneNumberProperty(required=True) Here are the problems I am facing: 1) I am not able to call user_name (retrieved from the UI-- phoneBookuser = showPhoneBook(user_name = self.request.get('username'))) in the get(self) method for querying UserDetails to get the corresponding user_id. 2) The code is not able to recognize the UserDetails and PhoneBook classes when importing them from the models.py file. 3) I tried to define the UserDetails and PhoneBook classes in the main.py file itself, then I get the error at result1 = p.get() saying BadValueError: Unsupported type for property : <class 'google.appengine.ext.db.PropertiedClass'> I have been struggling for 2 weeks to get through the mess I am in, but in vain. Please help me out in straightening out my code (because I feel what I've written is error-prone code all the way through). A: I recommend that you read the Python documentation of GAE found here. Some comments: To use your models found in models.py, you either need to use the prefix models. (e.g. models.UserDetails) or import them using from models import * in MyHandler.get() you don't look up the username GET parameter To fetch values corresponding to a query, you do p.fetch(1) not p.get() You should also read Reference properties in GAE as well. 
I recommend having your models as: class UserDetails(db.Model): user_name = db.StringProperty(required = True) mobile_number = db.PhoneNumberProperty(required = True) #Table structure of Phone Book table class PhoneBook(db.Model): user = db.ReferenceProperty(UserDetails) contact_name = db.StringProperty(required=True) contact_number = db.PhoneNumberProperty(required=True) Then your MyHandler.get() code will look like: def get(self): ## Query to get the user_id using user_name retrieved from UI ## user_name = self.request.get('username') p = UserDetails.all().filter('user_name = ', user_name) user = p.fetch(1)[0] values = { 'phoneBookValues': user.phonebook_set } self.response.out.write(template.render('phonebook.html', values)) (Needless to say, you need to handle the case where the username is not found in the database) I don't quite understand the point of the showPhoneBook model. A: Your "session variable" being stored to the datastore isn't going to follow your redirect; you'd have to fetch it from the datastore in your get() handler, although without setting a session ID in a cookie or something this isn't going to implement sessions at all, but rather allow anyone getting / to use whatever value was sent with a POST request, whether it was sent by them or someone else. Why use the redirect at all? Responding to a POST request should be done in the post() method, not through a redirect to a GET method.
Unable to query from entities loaded onto the app engine datastore
I am a newbie to python. I am not able to query from the entities- UserDetails and PhoneBook I loaded to the app engine datastore. I have written this UI below based on the youtube video by Brett on "Developing and Deploying applications on GAE" -- shoutout application. Well I just tried to do some reverse engineering to query from the datastore but failed in every step. #!/usr/bin/env python import wsgiref.handlers from google.appengine.ext import db from google.appengine.ext import webapp from google.appengine.ext.webapp import template import models class showPhoneBook(db.Model): """ property to store user_name from UI to persist for the session """ user_name = db.StringProperty(required=True) class MyHandler(webapp.RequestHandler): def get(self): ## Query to get the user_id using user_name retrieved from UI ## p = UserDetails.all().filter('user_name = ', user_name) result1 = p.get() for itr1 in result1: userId = itr.user_id ## Query to get the phone book contacts using user_id retrieved ## q = PhoneBook.all().filter('user_id = ', userId) values = { 'phoneBookValues': q } self.request.out.write( template.render('phonebook.html', values)) def post(self): phoneBookuser = showPhoneBook( user_name = self.request.get('username')) phoneBookuser.put() self.redirect('/') def main(): app = webapp.WSGIApplication([ (r'.*',MyHandler)], debug=True) wsgiref.handlers.CGIHandler().run(app) if __name__ == "__main__": main() This is my models.py file where I've defined my UserDetails and PhoneBook classes, #!/usr/bin/env python from google.appengine.ext import db #Table structure of User Details table class UserDetails(db.Model): user_id = db.IntegerProperty(required = True) user_name = db.StringProperty(required = True) mobile_number = db.PhoneNumberProperty(required = True) #Table structure of Phone Book table class PhoneBook(db.Model): contact_id = db.IntegerProperty(required=True) user_id = db.IntegerProperty(required=True) contact_name = db.StringProperty(required=True) contact_number = db.PhoneNumberProperty(required=True) Here are the problems I am facing, 1) I am not able to call user_name (retrieved from UI-- phoneBookuser = showPhoneBook(user_name = self.request.get('username'))) in get(self) method for querying UserDetails to get the corresponding user_name. 2) The code is not able to recognize UserDetails and PhoneBook classes when importing from models.py file. 3) I tried to define UserDetails and PhoneBook classes in the main.py file itself, then I get the error at result1 = p.get() saying BadValueError: Unsupported type for property : <class 'google.appengine.ext.db.PropertiedClass'> I have been struggling for 2 weeks to get through the mess I am in but in vain. Please help me out in straightening out my code ('coz I feel what I've written is error-prone code all the way).
[ "I recommend that you read the Python documentation of GAE found here.\nSome comments:\n\nTo use your models found in models.py, you either need to use the prefix models. (e.g. models.UserDetails) or import them using\nfrom models import *\nin MyHandler.get() you don't lookup the username get parameter\nTo fetch values corresponding to a query, you do p.fetch(1) not p.get()\nYou should also read Reference properties in GAE as well. I recommend you having your models as:\nclass UserDetails(db.Model):\n user_name = db.StringProperty(required = True)\n mobile_number = db.PhoneNumberProperty(required = True)\n\n#Table structure of Phone Book table\nclass PhoneBook(db.Model):\n user = db.ReferenceProperty(UserDetails)\n contact_name = db.StringProperty(required=True)\n contact_number = db.PhoneNumberProperty(required=True)\n\nThen your MyHandler.get() code will look like:\ndef get(self):\n ## Query to get the user_id using user_name retrieved from UI ##\n user_name = self.request.get('username')\n p = UserDetails.all().filter('user_name = ', user_name)\n user = p.fetch(1)[0]\n values = {\n 'phoneBookValues': user.phonebook_set\n }\n self.response.out.write(template.render('phonebook.html', values))\n\n(Needless to say, you need to handle the case where the username is not found in the database)\nI don't quite understand the point of showPhoneBook model.\n\n", "Your \"session variable\" being stored to the datastore isn't going to follow your redirect; you'd have to fetch it from the datastore in your get() handler, although without setting a session ID in a cookie or something this isn't going to implement sessions at all, but rather allow anyone getting / to use whatever value was send with a POST request whether it was sent by them or someone else. Why use the redirect at all; responding to a POST request should be done in the post() method, not through a redirect to a GET method.\n" ]
[ 3, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001636940_google_app_engine_python.txt
Q: How to check which XP theme is enabled I have a wxPython app which works perfectly with the Windows XP theme, but on switching to the 'classic theme' the rich text control comes up without a border. I can enable the border for the classic theme, but for that: Q1. I need to know if the classic theme is enabled. Q2. I am also not sure how many different themes there could be that may break my app's appearance, so what could be the best way to work around it? Q3. Can I enforce a theme for a given application? E.g. from Python I can load any Windows DLL and call functions, but is there any such way? Edit: in my case ctypes.windll.UxTheme.IsThemeActive() worked A: Classic theming is more of a non-theme. You check for classic theming by calling IsAppThemed() in UxTheme.dll. There should therefore be little reason to worry about different themes. Lastly, the only choice applications get is whether to try and support theming or not - by including a manifest specifying that the new common controls are to be used. Apps that don't include the manifest will never be themed. Apps that do will be themed as per the user's preferences.
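A short ctypes sketch of the check the asker's edit mentions; this is an illustration, assuming only that IsThemeActive() returns nonzero when visual styles (a non-classic theme) are active:

    import ctypes

    def classic_theme_enabled():
        # IsThemeActive() lives in UxTheme.dll; 0 means visual styles are off,
        # i.e. the classic theme is in use.
        return ctypes.windll.UxTheme.IsThemeActive() == 0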
How to check which XP theme is enabled
I have a wxPython app which works perfectly with the Windows XP theme, but on switching to the 'classic theme' the rich text control comes up without a border. I can enable the border for the classic theme, but for that: Q1. I need to know if the classic theme is enabled. Q2. I am also not sure how many different themes there could be that may break my app's appearance, so what could be the best way to work around it? Q3. Can I enforce a theme for a given application? E.g. from Python I can load any Windows DLL and call functions, but is there any such way? Edit: in my case ctypes.windll.UxTheme.IsThemeActive() worked
[ "Classic theming is more of a non theme.\nYou check for classic theming by calling IsAppThemed() in UxTheme.dll\nThere should therefore be little reason to worry about different themes.\nLastly, the only choice applications get is whether to try and support theming or not - by including a manifest specifying that the new common controls are to be used. Apps that don't include the manifest will never be themed. Apps that do, will be themed as per the users preferences.\n" ]
[ 1 ]
[]
[]
[ "c", "python", "winapi", "windows", "windows_themes" ]
stackoverflow_0001637946_c_python_winapi_windows_windows_themes.txt
Q: Problem Using Python's subprocess.communicate() on Windows I have an application that I am trying to control via Python and the subprocess module. Essentially what I do is start the application using Popen (which opens a command prompt within which the program executes) and then at some point in time later on in the execution I need to send a string (a command) to the STDIN of that program. That works fine except for the fact that the command doesn't get processed until I manually type a button into the command window of the application that Python has started. Here is part of my code: cmd = 'quit\n' app.communicate(cmd.encode('utf-8')) Any ideas? EDIT #1 Yes typing a button does mean pressing a key on the keyboard, sorry for the confusion. I've attached more of my code below app = Popen(['runProg.exe', '-m', '20'], stdin=PIPE, universal_newlines=True) while not os.path.exists('C:/temp/quit-app.tmp'): time.sleep(1) app.communicate('quit') os.remove('C:/temp/quit-app.tmp') So what should happen is the program should run until the quit-app.tmp file is created; once it's created "quit" should be sent to the application, which is a command for it to shut down cleanly. If a human was running this program, they'd do this just by typing "quit" in the command window. Thanks! A: try: cmd = 'quit\n\r' EDIT: Only thing that is working for me is: app = subprocess.Popen(["cmd.exe","testparam"],stdout=subprocess.PIPE,stdin=subprocess.PIPE) app.stdin.write('exit\r\n') Because as documentation says: Popen.communicate(input=None) Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child.
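A hedged sketch of the direct-write approach from the answer: communicate() can only be called once and waits for the process to exit, so writing to stdin yourself avoids both problems. Program name and file path are the asker's; whether '\n' or '\r\n' is needed depends on the program:

    import os
    import time
    from subprocess import Popen, PIPE

    app = Popen(['runProg.exe', '-m', '20'], stdin=PIPE, universal_newlines=True)
    while not os.path.exists('C:/temp/quit-app.tmp'):
        time.sleep(1)
    app.stdin.write('quit\n')  # try 'quit\r\n' if the program expects CRLF
    app.stdin.flush()          # make sure the command actually reaches the process
    app.wait()                 # block until the clean shutdown finishes
    os.remove('C:/temp/quit-app.tmp')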
Problem Using Python's subprocess.communicate() on Windows
I have an application that I am trying to control via Python and the subprocess module. Essentially what I do is start the application using Popen (which opens a command prompt within which the program executes) and then at some point in time later on in the execution I need to send a string (a command) to the STDIN of that program. That works fine except for the fact that the command doesn't get processed until I manually type a button into the command window of the application that Python has started. Here is part of my code: cmd = 'quit\n' app.communicate(cmd.encode('utf-8')) Any ideas? EDIT #1 Yes typing a button does mean pressing a key on the keyboard, sorry for the confusion. I've attached more of my code below app = Popen(['runProg.exe', '-m', '20'], stdin=PIPE, universal_newlines=True) while not os.path.exists('C:/temp/quit-app.tmp'): time.sleep(1) app.communicate('quit') os.remove('C:/temp/quit-app.tmp') So what should happen is the program should run until the quit-app.tmp file is created; once it's created "quit" should be sent to the application, which is a command for it to shut down cleanly. If a human was running this program, they'd do this just by typing "quit" in the command window. Thanks!
[ "try:\ncmd = 'quit\\n\\r'\n\nEDIT:\nOnly thing that is working for me is:\napp = subprocess.Popen([\"cmd.exe\",\"testparam\"],stdout=subprocess.PIPE,stdin=subprocess.PIPE)\napp.stdin.write('exit\\r\\n')\n\nBecause as documentation says:\n\nPopen.communicate(input=None)\nInteract with process: Send data to\n stdin. Read data from stdout and\n stderr, until end-of-file is reached.\n Wait for process to terminate. The\n optional input argument should be a\n string to be sent to the child\n process, or None, if no data should be\n sent to the child.\n\n" ]
[ 0 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0001638405_python_subprocess.txt
Q: how can I figure out if a vim buffer is listed or unlisted from vim's python api? For a tool I need to find all vim buffers that are still listed (there are listed and unlisted buffers). Unfortunately vim.buffers contains all buffers and there doesn't seem to be an attribute to tell if a buffer is listed or unlisted. The vim command for what I want to do is :buffers. Unfortunately all that's possible with the vim python api is emulating :buffers!, but without the metadata about listed/unlisted that we need. A: Here is how you can manage this using just the Vim script language. function s:buffers_list() let result = [] for buffer_number in range(1, bufnr('$')) if !buflisted(buffer_number) continue endif call add(result, buffer_number) endfor return result endfunction A: Using Vim's python api: listedBufs = [] for b in vim.buffers: listed = vim.eval('buflisted(bufnr("%s"))' % b.name) if int(listed) > 0: listedBufs.append(b) or if you don't mind sacrificing some readability: listedBufs = [b for b in vim.buffers if int(vim.eval('buflisted(bufnr("%s"))' % b.name)) > 0]
how can I figure out if a vim buffer is listed or unlisted from vim's python api?
For a tool I need to find all vim buffers that are still listed (there are listed and unlisted buffers). Unfortunately vim.buffers contains all buffers and there doesn't seem to be an attribute to tell if a buffer is listed or unlisted. The vim command for what I want to do is :buffers. Unfortunately all that's possible with the vim python api is emulating :buffers!, but without the metadata about listed/unlisted that we need.
[ "Here is how you can manage this using just Vim language.\nfunction s:buffers_list()\n let result = []\n\n for buffer_number in range(1, bufnr('$'))\n if !buflisted(buffer_number)\n continue\n endif\n\n call add(result, buffer_number)\n endfor\n\n return result\nendfunction\n\n", "Using Vim's python api:\nlistedBufs = []\nfor b in vim.buffers:\n listed = vim.eval('buflisted(bufnr(\"%s\"))' % b.name)\n if int(listed) > 0:\n listedBufs.append(b)\n\nor if you don't mind sacrificing some readability:\nlistedBufs = [b for b in vim.buffers\n if int(vim.eval('buflisted(bufnr(\"%s\"))' % b.name)) > 0]\n\n" ]
[ 6, 2 ]
[]
[]
[ "python", "vim" ]
stackoverflow_0000648638_python_vim.txt
Q: is there a way to check if a param contains a class or a class instance? I want the wrapper my_function to be able to receive either a class or class instance, instead of writing two different functions: >>> from module import MyClass >>> my_function(MyClass) True >>> cls_inst = MyClass() >>> my_function(cls_inst) True the problem is that I don't know in advance which type of classes or class instances I am going to receive. So I can't, for example, use functions like isinstance... How can I type check if a param contains a class or a class instance, in a generic way? Any idea? A: >>> class A: pass >>> isinstance(A, type) True >>> isinstance(A(), type) False A: import types def myfun(maybe_class): if type(maybe_class) == types.ClassType: print "It's a class." else: print "It's an instance." A: Use the type() buitlin function. E.g.: import avahi print type(avahi) <type 'module'>
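One more option, shown here as a hedged sketch: inspect.isclass() also recognizes Python 2 old-style classes, which isinstance(x, type) alone does not:

    import inspect

    class Old: pass           # old-style class in Python 2
    class New(object): pass   # new-style class

    print inspect.isclass(Old), inspect.isclass(New)        # True True
    print inspect.isclass(Old()), inspect.isclass(New())    # False False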
is there a way to check if a param contains a class or a class instance?
I want the wrapper my_function to be able to receive either a class or class instance, instead of writing two different functions: >>> from module import MyClass >>> my_function(MyClass) True >>> cls_inst = MyClass() >>> my_function(cls_inst) True the problem is that I don't know in advance which type of classes or class instances I am going to receive. So I can't, for example, use functions like isinstance... How can I type check if a param contains a class or a class instance, in a generic way? Any idea?
[ ">>> class A: pass\n\n>>> isinstance(A, type)\nTrue\n>>> isinstance(A(), type)\nFalse\n\n", "import types\n\ndef myfun(maybe_class):\n if type(maybe_class) == types.ClassType:\n print \"It's a class.\"\n else:\n print \"It's an instance.\"\n\n", "Use the type() buitlin function.\nE.g.:\nimport avahi\nprint type(avahi)\n\n<type 'module'>\n\n" ]
[ 7, 1, 0 ]
[]
[]
[ "python", "typechecking" ]
stackoverflow_0001638657_python_typechecking.txt
Q: Python bitwise operations confusion I came up with this "magic string" to meet the ID3 tagging specification: The ID3v2 tag size is encoded with four bytes where the most significant bit (bit 7) is set to zero in every byte, making a total of 28 bits. The zeroed bits are ignored, so a 257 bytes long tag is represented as $00 00 02 01. >>> hex_val = 0xFFFFFFFF >>> str.format('0b{0:07b}{1:07b}{2:07b}{3:07b}', ((hex_val >> 24) & 0xEF), ((hex_val >> 16) & 0xEF), ((hex_val >> 8) & 0xEF), ((hex_val >> 0) & 0xEF)) '0b11101111111011111110111111101111' Why does it not equal: '0b11111111111111111111111111111111' ? If anyone cares, this seems to work: >>> int(str.format('0b{0:07b}{1:07b}{2:07b}{3:07b}', ((hex_val >> 24) & 0xFE), ((hex_val >> 16) & 0xFE), ((hex_val >> 8) & 0xFE), ((hex_val >> 0) & 0xFE)), 2) A: I think you are confusing the and and the or operations. bitwise and: return a number with only bits that are in both operands set. bitwise or: return a number with bits that are in either of the operands set. A: Sorry getting my 7s and Es confused Corrected code: >>> str.format('0b{0:07b}{1:07b}{2:07b}{3:07b}', ((hex_val >> 24) & 0x7F), ((hex_val >> 16) & 0x7F), ((hex_val >> 8) & 0x7F), ((hex_val >> 0) & 0x7F)) A: It does not equal all ones because you're masking out the 4th bit using the & operator!
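To round this out, a hedged sketch of the same 0x7F masking packaged as helpers (an illustration, not part of the original answers): each byte keeps only seven bits, as the ID3v2 spec quoted above requires.

    def to_synchsafe(n):
        # Split a 28-bit size into four bytes whose top bits are zero.
        return [(n >> shift) & 0x7F for shift in (21, 14, 7, 0)]

    def from_synchsafe(b0, b1, b2, b3):
        return (b0 << 21) | (b1 << 14) | (b2 << 7) | b3

    print to_synchsafe(257)           # [0, 0, 2, 1], matching $00 00 02 01
    print from_synchsafe(0, 0, 2, 1)  # 257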
Python bitwise operations confusion
I came up with this "magic string" to meet the ID3 tagging specification: The ID3v2 tag size is encoded with four bytes where the most significant bit (bit 7) is set to zero in every byte, making a total of 28 bits. The zeroed bits are ignored, so a 257 bytes long tag is represented as $00 00 02 01. >>> hex_val = 0xFFFFFFFF >>> str.format('0b{0:07b}{1:07b}{2:07b}{3:07b}', ((hex_val >> 24) & 0xEF), ((hex_val >> 16) & 0xEF), ((hex_val >> 8) & 0xEF), ((hex_val >> 0) & 0xEF)) '0b11101111111011111110111111101111' Why does it not equal: '0b11111111111111111111111111111111' ? If anyone cares, this seems to work: >>> int(str.format('0b{0:07b}{1:07b}{2:07b}{3:07b}', ((hex_val >> 24) & 0xFE), ((hex_val >> 16) & 0xFE), ((hex_val >> 8) & 0xFE), ((hex_val >> 0) & 0xFE)), 2)
[ "I think you are confusing the and and the or operations.\n\nbitwise and: return a number with only bits that are in both operands set.\nbitwise or: return a number with bits that are in either of the operands set.\n\n", "Sorry getting my 7s and Es confused\nCorrected code:\n>>> str.format('0b{0:07b}{1:07b}{2:07b}{3:07b}', ((hex_val >> 24) & 0x7F),\n ((hex_val >> 16) & 0x7F),\n ((hex_val >> 8) & 0x7F),\n ((hex_val >> 0) & 0x7F))\n\n", "It does not equal all ones because you're masking out the 4th bit using the & operator!\n" ]
[ 2, 1, 1 ]
[]
[]
[ "bit_manipulation", "id3", "python" ]
stackoverflow_0001638604_bit_manipulation_id3_python.txt
Q: Google App Engine and google authentication with redirect and HTTP POST I have a form and I need to send the content to the server. I use google authentication because only authorized people can send to the server. The form is something like this: <form action="/blog/submit" method="post"> ... </form> The authentication is needed only during the submit, not when entering the form page. So in the submit controller I used something like this: class SubmitPage(webapp.RequestHandler): def post(self): if users.get_current_user() is None: self.redirect(users.create_login_url(self.request.uri)) ... The problem is that the return URL in the redirect is executed only as an HTTP GET, not HTTP POST as I wanted. I'd like to authenticate and then redirect to the submit page (POST), but it tries to execute a GET on the same URL. Is it possible to implement what I want? A: No, it's not possible to have POST data follow an HTTP redirect. You should almost certainly be checking for a login before you display the form that's posting this data in the first place, but once the user gets to the form your best bet is probably to save the content to the datastore linked to a session ID you generate, and either set it as a cookie or add it to the redirect URI so you can retrieve it again when the user returns from the login page.
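A hedged sketch of the workaround the answer describes; the PendingPost model and the 'pending' query parameter are invented for illustration only:

    import uuid
    from google.appengine.ext import db, webapp
    from google.appengine.api import users

    class PendingPost(db.Model):   # hypothetical holding area for the form data
        body = db.TextProperty()

    class SubmitPage(webapp.RequestHandler):
        def post(self):
            if users.get_current_user() is None:
                key = uuid.uuid4().hex
                PendingPost(key_name=key, body=self.request.get('content')).put()
                # After login the user comes back via GET; the handler for that
                # request looks up the pending entity by key and finishes the save.
                self.redirect(users.create_login_url('/blog/submit?pending=' + key))
                return
            # ... normal authenticated save path ...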
Google App Engine and google authentication with redirect and HTTP POST
I have a form and I need to send the content to the server. I use google authentication because only authorized people can send to the server. The form is something like this: <form action="/blog/submit" method="post"> ... </form> The authentication is needed only during the submit, not when entering the form page. So in the submit controller I used something like this: class SubmitPage(webapp.RequestHandler): def post(self): if users.get_current_user() is None: self.redirect(users.create_login_url(self.request.uri)) ... The problem is that the return URL in the redirect is executed only as an HTTP GET, not HTTP POST as I wanted. I'd like to authenticate and then redirect to the submit page (POST), but it tries to execute a GET on the same URL. Is it possible to implement what I want?
[ "No, it's not possible to have POST data follow an HTTP redirect. \nYou should almost certainly be checking for a login before you display the form that's posting this data in the first place, but once the user gets to the form your best bet is probably to save the content to the datastore linked to a session ID you generate, and either set it as a cookie or add it to the redirect URI so you can retrieve it again when the user returns from the login page.\n" ]
[ 3 ]
[]
[]
[ "authentication", "google_app_engine", "post", "python" ]
stackoverflow_0001638493_authentication_google_app_engine_post_python.txt
Q: gobject io monitoring + nonblocking reads I've got a problem with using the io_add_watch monitor in python (via gobject). I want to do a nonblocking read of the whole buffer after every notification. Here's the code (shortened a bit): class SomeApp(object): def __init__(self): # some other init that does a lot of stderr debug writes fl = fcntl.fcntl(0, fcntl.F_GETFL, 0) fcntl.fcntl(0, fcntl.F_SETFL, fl | os.O_NONBLOCK) print "hooked", gobject.io_add_watch(0, gobject.IO_IN | gobject.IO_PRI, self.got_message, [""]) self.app = gobject.MainLoop() def run(self): print "ready" self.app.run() def got_message(self, fd, condition, data): print "reading now" data[0] += os.read(0, 1024) print "got something", fd, condition, data return True gobject.threads_init() SomeApp().run() Here's the trick - when I run the program without debug output activated, I don't get the got_message calls. When I write a lot of stuff to the stderr first, the problem disappears. If I don't write anything apart from the prints visible in this code, I don't get the stdin messsage signals. Another interesting thing is that when I try to run the same app with stderr debug enabled but via strace (to check if there are any fcntl / ioctl calls I missed), the problem appears again. So in short: if I write a lot to stderr first without strace, io_watch works. If I write a lot with strace, or don't write at all io_watch doesn't work. The "some other init" part takes some time, so if I type some text before I see "hooked 2" output and then press "ctrl+c" after "ready", the get_message callback is called, but the read call throws EAGAIN, so the buffer seems to be empty. Strace log related to the stdin: ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 fcntl(0, F_GETFL) = 0xa002 (flags O_RDWR|O_ASYNC|O_LARGEFILE) fcntl(0, F_SETFL, O_RDWR|O_NONBLOCK|O_ASYNC|O_LARGEFILE) = 0 fcntl(0, F_GETFL) = 0xa802 (flags O_RDWR|O_NONBLOCK|O_ASYNC|O_LARGEFILE) Does anyone have some ideas on what's going on here? EDIT: Another clue. I tried to refactor the app to do the reading in a different thread and pass it back via a pipe. It "kind of" works: ... rpipe, wpipe = os.pipe() stopped = threading.Event() self.stdreader = threading.Thread(name = "reader", target = self.std_read_loop, args = (wpipe, stopped)) self.stdreader.start() new_data = "" print "hooked", gobject.io_add_watch(rpipe, gobject.IO_IN | gobject.IO_PRI, self.got_message, [new_data]) def std_read_loop(self, wpipe, stop_event): while True: try: new_data = os.read(0, 1024) while len(new_data) > 0: l = os.write(wpipe, new_data) new_data = new_data[l:] except OSError, e: if stop_event.isSet(): break time.sleep(0.1) ... It's surprising that if I just put the same text in a new pipe, everything starts to work. The problem is that: the first line is not "noticed" at all - I get only the second and following lines it's fugly Maybe that will give someone else a clue on why that's happening? A: This sounds like a race condition in which there is some delay to setting your callback, or else there is a change in the environment which affects whether or not you can set the callback. I would look carefully at what happens before you call io_add_watch(). For instance the Python fcntl docs say: All functions in this module take a file descriptor fd as their first argument. 
This can be an integer file descriptor, such as returned by sys.stdin.fileno(), or a file object, such as sys.stdin itself, which provides a fileno() which returns a genuine file descriptor. Clearly that is not what you are doing when you assume that STDIN will have FD == 0. I would change that first and try again. The other thing is that if the FD is already blocked, then your process could be waiting while other non-blocked processes are running, therefore there is a timing difference depending on what you do first. What happens if you refactor the fcntl stuff so that it is done soon after the program starts, even before importing the GTK modules? I'm not sure that I understand why a program using the GTK GUI would want to read from the standard input in the first place. If you are actually trying to capture the output of another process, you should use the subprocess module to set up a pipe, then io_add_watch() on the pipe like so: proc = subprocess.Popen(command, stdout = subprocess.PIPE) gobject.io_add_watch(proc.stdout, glib.IO_IN, self.write_to_buffer ) Again, in this example we make sure that we have a valid opened FD before calling io_add_watch(). Normally, when gobject.io_add_watch() is used, it is called just before gobject.MainLoop(). For example, here is some working code using io_add_watch to catch IO_IN. A: The documentation says you should return TRUE from the callback or it will be removed from the list of event sources. A: What happens if you hook the callback first, prior to any stderr output? Does it still get called when you have debug output enabled? Also, I suppose you should probably be repeatedly calling os.read() in your handler until it gives no data, in case >1024 bytes become ready between calls. Have you tried using the select module in a background thread to emulate gio functionality? Does that work? What platform is this and what kind of FD are you dealing with? (file? socket? pipe?)
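A stripped-down, hedged sketch of the fix the answer suggests — pass a real file object such as sys.stdin instead of the bare integer 0 — with the non-blocking read kept:

    import os
    import sys
    import fcntl
    import gobject

    fl = fcntl.fcntl(sys.stdin, fcntl.F_GETFL)
    fcntl.fcntl(sys.stdin, fcntl.F_SETFL, fl | os.O_NONBLOCK)

    def got_message(source, condition):
        # io_add_watch only fires when data is ready, so the read returns it.
        print "got", repr(os.read(sys.stdin.fileno(), 1024))
        return True  # returning True keeps the watch installed

    gobject.io_add_watch(sys.stdin, gobject.IO_IN, got_message)
    gobject.MainLoop().run()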
gobject io monitoring + nonblocking reads
I've got a problem with using the io_add_watch monitor in python (via gobject). I want to do a nonblocking read of the whole buffer after every notification. Here's the code (shortened a bit): class SomeApp(object): def __init__(self): # some other init that does a lot of stderr debug writes fl = fcntl.fcntl(0, fcntl.F_GETFL, 0) fcntl.fcntl(0, fcntl.F_SETFL, fl | os.O_NONBLOCK) print "hooked", gobject.io_add_watch(0, gobject.IO_IN | gobject.IO_PRI, self.got_message, [""]) self.app = gobject.MainLoop() def run(self): print "ready" self.app.run() def got_message(self, fd, condition, data): print "reading now" data[0] += os.read(0, 1024) print "got something", fd, condition, data return True gobject.threads_init() SomeApp().run() Here's the trick - when I run the program without debug output activated, I don't get the got_message calls. When I write a lot of stuff to the stderr first, the problem disappears. If I don't write anything apart from the prints visible in this code, I don't get the stdin messsage signals. Another interesting thing is that when I try to run the same app with stderr debug enabled but via strace (to check if there are any fcntl / ioctl calls I missed), the problem appears again. So in short: if I write a lot to stderr first without strace, io_watch works. If I write a lot with strace, or don't write at all io_watch doesn't work. The "some other init" part takes some time, so if I type some text before I see "hooked 2" output and then press "ctrl+c" after "ready", the get_message callback is called, but the read call throws EAGAIN, so the buffer seems to be empty. Strace log related to the stdin: ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 fcntl(0, F_GETFL) = 0xa002 (flags O_RDWR|O_ASYNC|O_LARGEFILE) fcntl(0, F_SETFL, O_RDWR|O_NONBLOCK|O_ASYNC|O_LARGEFILE) = 0 fcntl(0, F_GETFL) = 0xa802 (flags O_RDWR|O_NONBLOCK|O_ASYNC|O_LARGEFILE) Does anyone have some ideas on what's going on here? EDIT: Another clue. I tried to refactor the app to do the reading in a different thread and pass it back via a pipe. It "kind of" works: ... rpipe, wpipe = os.pipe() stopped = threading.Event() self.stdreader = threading.Thread(name = "reader", target = self.std_read_loop, args = (wpipe, stopped)) self.stdreader.start() new_data = "" print "hooked", gobject.io_add_watch(rpipe, gobject.IO_IN | gobject.IO_PRI, self.got_message, [new_data]) def std_read_loop(self, wpipe, stop_event): while True: try: new_data = os.read(0, 1024) while len(new_data) > 0: l = os.write(wpipe, new_data) new_data = new_data[l:] except OSError, e: if stop_event.isSet(): break time.sleep(0.1) ... It's surprising that if I just put the same text in a new pipe, everything starts to work. The problem is that: the first line is not "noticed" at all - I get only the second and following lines it's fugly Maybe that will give someone else a clue on why that's happening?
[ "This sounds like a race condition in which there is some delay to setting your callback, or else there is a change in the environment which affects whether or not you can set the callback.\nI would look carefully at what happens before you call io_add_watch(). For instance the Python fcntl docs say:\n\nAll functions in this module take a\n file descriptor fd as their first\n argument. This can be an integer file\n descriptor, such as returned by\n sys.stdin.fileno(), or a file object,\n such as sys.stdin itself, which\n provides a fileno() which returns a\n genuine file descriptor.\n\nClearly that is not what you are doing when you assume that STDIN will have FD == 0. I would change that first and try again.\nThe other thing is that if the FD is already blocked, then your process could be waiting while other non-blocked processes are running, therefore there is a timing difference depending on what you do first. What happens if you refactor the fcntl stuff so that it is done soon after the program starts, even before importing the GTK modules?\nI'm not sure that I understand why a program using the GTK GUI would want to read from the standard input in the first place. If you are actually trying to capture the output of another process, you should use the subprocess module to set up a pipe, then io_add_watch() on the pipe like so:\nproc = subprocess.Popen(command, stdout = subprocess.PIPE)\ngobject.io_add_watch(proc.stdout, glib.IO_IN, self.write_to_buffer )\n\nAgain, in this example we make sure that we have a valid opened FD before calling io_add_watch().\nNormally, when gobject.io_add_watch() is used, it is called just before gobject.MainLoop(). For example, here is some working code using io_add_watch to catch IO_IN.\n", "The documentation says you should return TRUE from the callback or it will be removed from the list of event sources.\n", "What happens if you hook the callback first, prior to any stderr output? Does it still get called when you have debug output enabled?\nAlso, I suppose you should probably be repeatedly calling os.read() in your handler until it gives no data, in case >1024 bytes become ready between calls.\nHave you tried using the select module in a background thread to emulate gio functionality? Does that work? What platform is this and what kind of FD are you dealing with? (file? socket? pipe?)\n" ]
[ 2, 0, 0 ]
[]
[]
[ "glib", "gobject", "input", "python" ]
stackoverflow_0001586342_glib_gobject_input_python.txt
Q: SQLAlchemy: relation in mappers compile result of function rather than calling the function when the relation is queried I have a number of mappers that look like this: mapper(Photo,photo_table, properties = { "locale": relation(PhotoContent, uselist=False, primaryjoin=and_(photo_content_table.c.photoId == photo_table.c.id, photo_content_table.c.locale == get_lang()), foreign_keys=[photo_content_table.c.photoId, photo_content_table.c.locale]) I have deployed in Pylons, so the get_lang() function should return either "en" or "es" based on the current session. from pylons.i18n import get_lang The problem is that SA compiles that "locale" relation with the result returned by get_lang() at compile time. So if I do something like this: meta.Session.query(Photo).options(eagerload('locale')).get(id) The relation does not call get_lang(). It just uses whatever the value of get_lang() was at compile time. Anyone have any ideas how to implement SQLAlchemy eagerloaders that are dynamic? This would be a lifesaver for me! A: The relation statement gets executed when the class is loaded, which means every function call gets evaluated. Try passing the function instead: and_(photo_content_table.c.photoId == photo_table.c.id, photo_content_table.c.locale == get_lang) Note the missing parentheses. It should now get evaluated whenever the relation gets queried.
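If passing the bare function does not work in your SQLAlchemy version, a hedged alternative is an explicit bindparam whose value is resolved at execution time; the callable_ argument is supported in later SQLAlchemy releases, so check yours before relying on this sketch:

    from sqlalchemy import and_, bindparam
    from sqlalchemy.orm import mapper, relation

    mapper(Photo, photo_table, properties={
        "locale": relation(PhotoContent, uselist=False,
            primaryjoin=and_(
                photo_content_table.c.photoId == photo_table.c.id,
                # callable_ defers evaluation of get_lang until query time
                photo_content_table.c.locale == bindparam('lang', callable_=get_lang)),
            foreign_keys=[photo_content_table.c.photoId,
                          photo_content_table.c.locale])})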
SQLAlchemy: relation in mappers compile result of function rather than calling the function when the relation is queried
I have a number of mappers that look like this: mapper(Photo,photo_table, properties = { "locale": relation(PhotoContent, uselist=False, primaryjoin=and_(photo_content_table.c.photoId == photo_table.c.id, photo_content_table.c.locale == get_lang()), foreign_keys=[photo_content_table.c.photoId, photo_content_table.c.locale]) I have deployed in Pylons, so the get_lang() function should return either "en" or "es" based on the current session. from pylons.i18n import get_lang The problem is that SA compiles that "locale" relation with the result returned by get_lang() at compile time. So if I do something like this: meta.Session.query(Photo).options(eagerload('locale')).get(id) The relation does not call get_lang(). It just uses whatever the value of get_lang() was at compile time. Anyone have any ideas how to implement SQLAlchemy eagerloaders that are dynamic? This would be a lifesaver for me!
[ "The relation statements gets executed when the class is loaded, which means every function call gets evaluated.\nTry passing the function instead:\nand_(photo_content_table.c.photoId == photo_table.c.id, photo_content_table.c.locale == get_lang)\n\nNote the missing parenthesis. It now should get evaluated whenever the relation gets queried.\n" ]
[ 2 ]
[]
[]
[ "pylons", "python", "sqlalchemy" ]
stackoverflow_0001638751_pylons_python_sqlalchemy.txt
Q: __init__.py descends dirtree for python, but not from c++; causes "import matplotlib" error Why or how does the file __init__.py cause the python interpreter to search subdirectories for a module -- and why does the interpreter not honor this convention when invoked from C++? Here's what I know: Using strace on my program, I can see that the same python2.5 interpreter is being executed for both the interactive case and the C++ invocation. In both cases, the PYTHONPATH directs the search for the imported module in question (matplotlib). This appears as a series of open() calls, starting from the current working directory and extending to the PYTHONPATH (here, /opt/epd/lib/python2.5/site-packages), and lastly into the subdirectories, in the working case. The full disclosure is that I am using the "Enthought" distribution and had to place the __init__.py file in the site-packages directory and put the site-packages directory in the PYTHONPATH to create the working case. The code is below. It seems that I may need to make a call to configure the python interpreter to look for __init__ and/or recurse, in order to find the requested packages. IF SO, HOW?? PyObject* main_module, * global_dict, * expression, *args, *spec; Py_Initialize (); char* script = "abc.py"; PySys_SetArgv(1, &script); //Open the file containing the python modules we want to run FILE* file_1 = fopen("abc.py", "r"); if (file_1 == 0) fprintf(stdout, "ERROR: File not opened"); //Loads the python file into the interpreter PyRun_SimpleFile(file_1, "abc.py"); //Creates a python object that contains references to the functions and classes in abc.py main_module = PyImport_AddModule("__main__"); global_dict = PyModule_GetDict(main_module); expression = PyDict_GetItemString(global_dict, "view_gui"); spec = PyObject_CallObject(expression, NULL); PyObject_CallMethod(spec, "shutdown", NULL); Py_Finalize(); return NULL; When the python script is invoked from C++, the search seems to stop when the file /opt/epd/lib/python2.5/site-packages/matplotlib (or it's variant, matplotlib.so, etc) are not found. Note that I can augment the PYTHONPATH to include the exact location of matplotlib (and other needed packages) to get farther; however, I cannot seem to include a path to import matplotlib.cbook. A: Looking at (a different version # of) python, I see that import.c has the find_init_module(), which is a part of the find_module(). Not evident why find_init_module() is not executed or fails.
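A hedged workaround sketch: before any imports run, have the embedded interpreter extend sys.path itself — the C++ side would pass this snippet to PyRun_SimpleString ahead of PyRun_SimpleFile. The site-packages path shown is the asker's Enthought location:

    # Executed via PyRun_SimpleString(...) before loading abc.py
    import sys
    sys.path.insert(0, '/opt/epd/lib/python2.5/site-packages')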
__init__.py descends dirtree for python, but not from c++; causes "import matplotlib" error
Why or how does the file __init__.py cause the python interpreter to search subdirectories for a module -- and why does the interpreter not honor this convention when invoked from C++? Here's what I know: Using strace on my program, I can see that the same python2.5 interpreter is being executed for both the interactive case and the C++ invocation. In both cases, the PYTHONPATH directs the search for the imported module in question (matplotlib). This appears as a series of open() calls, starting from the current working directory and extending to the PYTHONPATH (here, /opt/epd/lib/python2.5/site-packages), and lastly into the subdirectories, in the working case. The full disclosure is that I am using the "Enthought" distribution and had to place the __init__.py file in the site-packages directory and put the site-packages directory in the PYTHONPATH to create the working case. The code is below. It seems that I may need to make a call to configure the python interpreter to look for __init__ and/or recurse, in order to find the requested packages. IF SO, HOW?? PyObject* main_module, * global_dict, * expression, *args, *spec; Py_Initialize (); char* script = "abc.py"; PySys_SetArgv(1, &script); //Open the file containing the python modules we want to run FILE* file_1 = fopen("abc.py", "r"); if (file_1 == 0) fprintf(stdout, "ERROR: File not opened"); //Loads the python file into the interpreter PyRun_SimpleFile(file_1, "abc.py"); //Creates a python object that contains references to the functions and classes in abc.py main_module = PyImport_AddModule("__main__"); global_dict = PyModule_GetDict(main_module); expression = PyDict_GetItemString(global_dict, "view_gui"); spec = PyObject_CallObject(expression, NULL); PyObject_CallMethod(spec, "shutdown", NULL); Py_Finalize(); return NULL; When the python script is invoked from C++, the search seems to stop when the file /opt/epd/lib/python2.5/site-packages/matplotlib (or it's variant, matplotlib.so, etc) are not found. Note that I can augment the PYTHONPATH to include the exact location of matplotlib (and other needed packages) to get farther; however, I cannot seem to include a path to import matplotlib.cbook.
[ "Looking at (a different version # of) python, I see that import.c has the find_init_module(), which is a part of the find_module(). Not evident why find_init_module() is not executed or fails. \n" ]
[ 0 ]
[]
[]
[ "c++", "import", "init", "path", "python" ]
stackoverflow_0001634147_c++_import_init_path_python.txt
Q: django generic templates So, Generic views are pretty cool, but what I'm interested in is something that's a generic template. So, for example, I can give it an object and it'll just tostring it for me. Or if I give it a list, it'll just iterate over the objects and tostring them as a ul (or tr, or whatever else it deems necessary). For most uses you wouldn't need this. I just threw something together quickly for a friend (a bar stock app, if you must know), and I don't feel like writing templates. A: If there's a django model for it, you can just stick to django.contrib.admin or django.contrib.databrowse. If not, then you might manage by skipping the django template altogether. Example: from django.http import HttpResponse import datetime def current_datetime(request): now = datetime.datetime.now() html = "<html><body>It is now %s.</body></html>" % now return HttpResponse(html) But of course you wanted to avoid even writing that much, so instead of doing html, we can use plain text and the pprint module: from django.http import HttpResponse import datetime from pprint import pformat def current_datetime(request): now = datetime.datetime.now() return HttpResponse(pformat(now), mimetype="text/plain") Edit: Hmm... this seems like something a view decorator should handle: from django.http import HttpResponse import datetime import pprint def prettyprint(fun): return lambda request:HttpResponse( pprint.pformat(fun(request)), mimetype="text/plain") @prettyprint def current_datetime(request): return datetime.datetime.now() A: I don't see you getting away from writing templates, especially if you would want to format it, even slightly. However you can re-use basic templates, e.g., create a generic object_list.html and object_detail.html that will basically contain the information to loop over the object list and present it, and show the object detail. You could use these "Generic" templates across the entire app if need be.
django generic templates
So, Generic views are pretty cool, but what I'm interested in is something that's a generic template. So, for example, I can give it an object and it'll just tostring it for me. Or if I give it a list, it'll just iterate over the objects and tostring them as a ul (or tr, or whatever else it deems necessary). For most uses you wouldn't need this. I just threw something together quickly for a friend (a bar stock app, if you must know), and I don't feel like writing templates.
[ "If there's a django model for it, you can just stick to django.contrib.admin or django.contrib.databrowse. If not, then you might manage by skipping the django template altogether. example:\nfrom django.http import HttpResponse\nimport datetime\n\ndef current_datetime(request):\n now = datetime.datetime.now()\n html = \"<html><body>It is now %s.</body></html>\" % now\n return HttpResponse(html)\n\nBut of course you wanted to avoid even writing that much, so instead of doing html, we can use plain text and the pprint module:\nfrom django.http import HttpResponse\nimport datetime\nfrom pprint import pformat\n\ndef current_datetime(request):\n now = datetime.datetime.now()\n return HttpResponse(pformat(now), mimetype=\"text/plain\")\n\nedit: Hmm... this seems like something a view decorator should handle: \nfrom django.http import HttpResponse\nimport datetime\nimport pprint\n\ndef prettyprint(fun):\n return lambda request:HttpResponse(\n pprint.pformat(fun(request)), mimetype=\"text/plain\")\n\n@prettyprint\ndef current_datetime(request):\n return datetime.datetime.now()\n\n", "I don't see you getting away from writing templates, especially if you would want to format it, even slightly.\nHowever you can re-use basic templates, for e.g, create a generic object_list.html and object_detail.html\nthat will basically contain the information to loop over the object list and present it, and show the object detail. You could use these \"Generic\" templates across the entire app if need be.\n" ]
[ 5, 1 ]
[]
[]
[ "django", "frameworks", "python", "templating" ]
stackoverflow_0001638870_django_frameworks_python_templating.txt
Q: python: how to generate a bitmap? What's the easiest way to generate a bitmap using Python? Text support would be nice but not required. (On Mac, I was trying to use Quartz through Python, but Snow Leopard seems to have broken its functionality. Therefore I've decided to look for a solid, simple, cross-platform solution that won't break each time the OS is updated.) A: Use the Python Imaging Library: "The Python Imaging Library (PIL) adds image processing capabilities to your Python interpreter. This library supports many file formats, and provides powerful image processing and graphics capabilities." I'm not a Mac person so I can't help with Mac specifics, but I do know it works on the Mac.
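A minimal PIL sketch illustrating the answer's suggestion (the file name, size, and text are arbitrary; newer installs import these as "from PIL import Image, ImageDraw"):

    import Image
    import ImageDraw

    img = Image.new('RGB', (200, 100), 'white')
    draw = ImageDraw.Draw(img)
    draw.text((10, 40), 'hello, bitmap', fill='black')  # default built-in font
    img.save('out.bmp')  # format inferred from the .bmp extension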
python: how to generate a bitmap?
What's the easiest way to generate a bitmap using Python? Text support would be nice but not required. (On Mac, I was trying to use Quartz through Python, but Snow Leopard seems to have broken its functionality. Therefore I've decided to look for a solid, simple, cross-platform solution that won't break each time the OS is updated.)
[ "Use the Python Imaging Library:\n\"The Python Imaging Library (PIL) adds image processing capabilities to your Python interpreter. This library supports many file formats, and provides powerful image processing and graphics capabilities.\"\nI'm not a Mac person so I can't help with Mac specifics, but I do know it works on the Mac.\n" ]
[ 7 ]
[]
[]
[ "bitmap", "python" ]
stackoverflow_0001639470_bitmap_python.txt
Q: Facebook API registerUsers - Error 100: Invalid email hash Using PyFacebook I am trying to register a test user of my site with my facebook application. I can connect to the API fine and return a list of friends etc. However when trying to register an address using: hashed_emails = facebook.hash_email('[email protected]') accounts = [hashed_emails] facebook.connect.registerUsers(accounts) I get: FacebookError: Error 100: Invalid email hash specified when trying to use connect.registerUsers(accounts) Yet I know the hash is correct as the test hash in the documentation returns the same result: [email protected] = 4228600737_c96da02bba97aedfd26136e980ae3761 I also know the email address used is definitely a Facebook user. Moreover connect.getUnconnectedFriendsCount() works fine and returns the expected result (0!) - suggesting the link to the App is OK. What's going on? Is connect.registerUsers() something that would only work once I've been given 'permission' to use Friend Linking? Or is the error message I'm receiving a catch all for a number of different results? Or have I just misunderstood the use of connect.registerUsers()? A: Is the hash being stored as the right type? Also, it may be good to store the hash as a separate variable in case there's some strange race condition popping up.. A: My request array to the API was incorrectly formatted. It should have been: hashed_emails = facebook.hash_email('[email protected]') # Wrong: accounts = [hashed_emails] accounts = [{"email_hash": hashed_emails}] facebook.connect.registerUsers(accounts) Which returned the expected response (list of registered hashes) and was further proven by connect.getUnconnectedFriendsCount() which now returns 1.
Facebook API registerUsers - Error 100: Invalid email hash
Using PyFacebook I am trying to register a test user of my site with my facebook application. I can connect to the API fine and return a list of friends etc. However when trying to register an address using: hashed_emails = facebook.hash_email('[email protected]') accounts = [hashed_emails] facebook.connect.registerUsers(accounts) I get: FacebookError: Error 100: Invalid email hash specified when trying to use connect.registerUsers(accounts) Yet I know the hash is correct as the test hash in the documentation returns the same result: [email protected] = 4228600737_c96da02bba97aedfd26136e980ae3761 I also know the email address used is definitely a Facebook user. Moreover connect.getUnconnectedFriendsCount() works fine and returns the expected result (0!) - suggesting the link to the App is OK. What's going on? Is connect.registerUsers() something that would only work once I've been given 'permission' to use Friend Linking? Or is the error message I'm receiving a catch all for a number of different results? Or have I just misunderstood the use of connect.registerUsers()?
[ "Is the hash being stored as the right type?\nAlso, it may be good to store the hash as a separate variable in case there's some strange race condition popping up..\n", "My request array to the API was incorrectly formatted. It should have been:\nhashed_emails = facebook.hash_email('[email protected]')\n\n# Wrong: accounts = [hashed_emails]\naccounts = [{\"email_hash\": hashed_emails}] \n\nfacebook.connect.registerUsers(accounts)\n\nWhich returned the expected response (list of registered hashes) and was further proven by connect.getUnconnectedFriendsCount() which now returns 1. \n" ]
[ 0, 0 ]
[]
[]
[ "facebook", "python" ]
stackoverflow_0001625742_facebook_python.txt
Q: Use javascript to generate a templatetag based on events after document ready? I am working with the new version of django-threadedcomments and making some progress; it integrates nicely with django's commenting system; however, I'm stuck and not sure how to proceed. For threaded comments to work, the user needs to select a comment to "reply to" and then the correct submit form is brought up (with the appropriate hidden fields) via javascript. Not using javascript, I would simply use: {% render_comment_form for [object] with [parent_id] %} However, I am not sure how I can use this template tag within a javascript function because it will compile/create itself only once; how can I pass it the [parent_id] variable dynamically via javascript/ajax? A: You could do this with Ajax, passing the id of the comment to a dedicated view which just renders the form, but I don't think there's any need. I haven't looked at threaded-comments, but I guess that each comment is of the same object type. Therefore, the only thing that differs in the rendered form would be the id of the comment you're replying to. So, use the normal template tag to render a default form using the first comment on the page, inside a hidden div. Then all your javascript function needs to do is to change the value of the object_pk hidden field within that form, depending on the comment you're replying to. If that id isn't already easily accessible, make it available in the template via the class or id of each comment. Then you can parse it out of there, stuff it into your ready-made form, display it, and you should be good to go.
Use javascript to generate a templatetag based on events after document ready?
I am working with the new version of django-threadedcomments and making some progress; it integrates nicely with django's commenting system; however, I'm stuck and not sure how to proceed. For threaded comments to work, the user needs to select a comment to "reply to" and then the correct submit form is brought up (with the appropriate hidden fields) via javascript. Not using javascript, I would simply use: {% render_comment_form for [object] with [parent_id] %} However, I am not sure how I can use this template tag within a javascript function because it will compile/create itself only once; how can I pass it the [parent_id] variable dynamically via javascript/ajax?
[ "You could do this with Ajax, passing the id of the comment to a dedicated view which just renders the form, but I don't think there's any need. I haven't looked at threaded-comments, but I guess that each comment is of the same object type. Therefore, the only thing that differs in the rendered form would be the id of the comment you're replying to.\nSo, use the normal template tag render a default form using the first comment on the page, inside a hidden div. Then all your javascript function needs to do is to change the value of the object_pk hidden field within that form, depending on the comment you're replying to. If that id isn't already easily accessible, make it available in the template via the class or id of each comment. Then you can parse it out of there, stuff it into your ready-made form, display it, and you should be good to go.\n" ]
[ 1 ]
[]
[]
[ "ajax", "django", "jquery", "python" ]
stackoverflow_0001639511_ajax_django_jquery_python.txt
Q: python line.split semantic error from text file This section of code is supposed to make a list containing the values of the second column in the text file, but it takes the second letter. Anyone know what my problem is? TEXTFILE opi 60 kid 60 pou 60 ret 60 kai 60 bob 100 for line in lst: line.split(' ') fire.append(int(line[1])) print(line[1]) A: You probably want to do something like this: mylist = [] for line in file: firstcolval, secondcolval = line.split() mylist.append(int(secondcolval)) A: This should also work import csv for line in csv.reader(open("datafile"), delimiter=" "): fire.append(int(line[1])) print(line[1]) Alternatively import csv f=csv.reader(open("datafile"),delimiter=' ') fire+=[int(x[1]) for x in f] A: line.split(' ') Your mistake is there; split doesn't change line in place but returns the split result as a new list. So instead what you need to do is columns = line.split(' ') and use columns instead A: You can do it with a simple one liner. fire = [int(line.split(" ")[1]) for line in lst] A: Okay, so since split creates a new object (a list containing at least one element), you should do: for line in lst: lc = line.split(' ') fire.append(int(lc[1])) print lc[1] But as someone has already suggested, you can do that in one line, with a list comprehension: newlist = [int(line.split(" ")[1]) for line in lst] Which reads like: for every line in lst we split the line on spaces, and add the element with index one from the split line, converted to int, to our new list.
python line.split semantic error from text file
This section of code is supposed to make a list containing the values of the second column in the text file, but it takes the second letter. Anyone know what my problem is? TEXTFILE opi 60 kid 60 pou 60 ret 60 kai 60 bob 100 for line in lst: line.split(' ') fire.append(int(line[1])) print(line[1])
[ "You probably want to do something like this:\nmylist = []\n\nfor line in file:\n firstcolval, secondcolval = line.split()\n mylist.append(int(firstcolval))\n\n", "This should also work\nimport csv\nfor line in csv.reader(open(\"datafile\"), delimiter=\" \"): \n fire.append(int(line[1]))\n print(line[1])\n\nAlternatively\nfrom operator import itemgetter\nimport csv\nf=csv.reader(open(\"datafile\"),delimiter=' ')\nfire+=[int(x[1]) for x in f]\n\n", "line.split(' ') \n\nYour mistake is there, the split doesn't split line but returns a line split. So instead what you need to do is\ncolumns = line.split(' ')\n\nand use columns instead\n", "You can do it with a simple one liner.\nfire = [int(b) for a, b in line.split(\" \") for line in lst]\n\n", "Okay, so since split creates a new object (list containing at least one element), you should do:\n\nfor line in lst:\n lc = line.split(' ')\n fire.append(int(lc[1]))\n print lc[1]\n\nBut as someone has already suggested, you can do that in one line, with list comprehension:\n\nnewlist = [int(current[1]) for current in line.split(\" \") for line in lst]\n\nWhich reads like: for ever line in lst we split the line with whitespace, and add element with index one from splitted line (current[1]) converted to int, to our new list.\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001634117_python_string.txt
Q: Running multiple commands simultaneously from python I want to run three commands at the same time from python. The command format is query.pl -args Currently I am doing os.system("query.pl -results '10000' -serverName 'server1' >> log1.txt") os.system("query.pl -results '10000' -serverName 'server2' >> log2.txt") os.system("query.pl -results '10000' -serverName 'server3' >> log3.txt") I want to query all three servers at the same time but in this case, each command executes only after the last one has finished. How can I make them simultaneous? I was thinking of using '&' at the end but I want the next part of the code to be run only when all three commands finish A: You could use the subprocess module and have all three running independently: use subprocess.Popen. Take care in setting the "shell" parameter correctly. Use the wait() or poll() method to determine when the subprocesses are finished. A: os.system("query.pl -results '10000' -serverName 'server1' &") os.system("query.pl -results '10000' -serverName 'server2' &") os.system("query.pl -results '10000' -serverName 'server3' &") in this case the processes will be started in the background A: You can use Queue import os from Queue import Queue from threading import Thread tasks = ("query.pl -results '10000' -serverName 'server1'",\ "query.pl -results '10000' -serverName 'server2'",\ "query.pl -results '10000' -serverName 'server3'") def worker(): while True: item = q.get() os.system(item) q.task_done() q = Queue() for i in tasks: t = Thread(target=worker) t.setDaemon(True) t.start() for item in tasks: q.put(item) q.join()
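A hedged sketch of the subprocess approach from the first answer: start all three queries at once, then block until every one has exited before the rest of the code runs.

    import subprocess

    procs = [subprocess.Popen(
                 "query.pl -results '10000' -serverName 'server%d' >> log%d.txt"
                 % (i, i), shell=True)   # shell=True so the >> redirection works
             for i in (1, 2, 3)]
    for p in procs:
        p.wait()  # the next part of the code runs only after all three finish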
Running multiple commands simultaneously from python
I want to run three commands at the same time from python. The command format is query.pl -args Currently I am doing os.system("query.pl -results '10000' -serverName 'server1' >> log1.txt") os.system("query.pl -results '10000' -serverName 'server2' >> log2.txt") os.system("query.pl -results '10000' -serverName 'server3' >> log3.txt") I want to query all three servers at the same time but in this case, each command executes only after the last one has finished. How can I make them simultaneous? I was thinking of using '&' at the end but I want the next part of the code to be run only when all three commands finish
[ "You could use the subprocess module and have all three running independently: use subprocess.Popen. Take care in setting the \"shell\" parameter correctly.\nUse the wait() or poll() method to determine when the subprocesses are finished.\n", "os.system(\"query.pl -results '10000' -serverName 'server1' &\") \nos.system(\"query.pl -results '10000' -serverName 'server2' &\") \nos.system(\"query.pl -results '10000' -serverName 'server3' &\")\n\nin this case - process will be started in background\n", "You can use Queue\ntasks = (\"query.pl -results '10000' -serverName 'server1'\",\\\n\"query.pl -results '10000' -serverName 'server2'\",\\\n\"query.pl -results '10000' -serverName 'server1'\")\n\ndef worker():\n while True:\n item = q.get()\n os.system(item)\n\nq = Queue()\nfor i in tasks:\n t = Thread(target=worker)\n t.setDaemon(True)\n t.start()\n\nfor item in tasks:\n q.put(item)\n\nq.join() \n\n" ]
[ 10, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001639912_python.txt
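A minimal sketch of the subprocess.Popen approach from the first answer, reusing the question's command strings; shell=True is needed because of the >> redirection:

import subprocess
cmd = "query.pl -results '10000' -serverName 'server%d' >> log%d.txt"
procs = [subprocess.Popen(cmd % (i, i), shell=True) for i in (1, 2, 3)]
for p in procs:
    p.wait()   # block until every command has finished
# code here runs only after all three commands are done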
Q: urlretrieve returns an empty file I'm trying to use urlretrieve to download files from urls that take the form: http://example.com/download.php?id=6456&name=foo yet for some reason I just get an empty response. I've tried the method suggested in this question, but it didn't seem to help because remotefile.info() doesn't contain the key 'content-disposition', only ['content-length', 'x-powered-by', 'refresh', 'server', 'connection', 'date', 'content-type'] Any suggestions? A: Information from the manual: info() — return the meta-information of the page, such as headers, in the form of an httplib.HTTPMessage instance (see Quick Reference to HTTP Headers) What keys do you have in remotefile.info()? Can you get the full header information?
urlretrieve returns an empty file
I'm trying to use urlretrieve to download files from urls that take the form: http://example.com/download.php?id=6456&name=foo yet for some reason I just get an empty response. I've tried the method suggested in this question, but it didn't seem to help because remotefile.info() doesn't contain the key 'content-disposition', only ['content-length', 'x-powered-by', 'refresh', 'server', 'connection', 'date', 'content-type'] Any suggestions?
[ "Information from manual:\n\ninfo() — return the meta-information\n of the page, such as headers, in the\n form of an httplib.HTTPMessage\n instance (see Quick Reference to HTTP\n Headers)\n\nWhat keys do you have in dict remotefile.info()?\nCan you get a full header information?\n" ]
[ 1 ]
[]
[]
[ "download", "python", "urllib", "urllib2" ]
stackoverflow_0001639989_download_python_urllib_urllib2.txt
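A sketch of how one might inspect the headers and body before deciding how to save, using the asker's example URL; this only illustrates the diagnosis suggested in the answer, not a guaranteed fix:

import urllib
remotefile = urllib.urlopen('http://example.com/download.php?id=6456&name=foo')
headers = remotefile.info()        # httplib.HTTPMessage, behaves like a mapping
print headers.keys()               # every header the server actually sent
print headers.get('content-type')  # None if the header is missing
data = remotefile.read()
print len(data)                    # an empty body shows up as length 0 here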
Q: Understanding a factorial function in python I'm trying to understand if the following Python function: def factorial(i): if not hasattr(factorial, 'lstFactorial'): factorial.lstFactorial = [None] * 1000 if factorial.lstFactorial[i] is None: iProduct = 1 for iFactor in xrange(1, i+1): iProduct *= iFactor factorial.lstFactorial[i] = iProduct return factorial.lstFactorial[i] would produce the same results as the equivalent in C#: long factorial(long n) { return n <= 1 ? 1 : n * factorial(n-1); } for a value of 12 or less. I know nothing about Python but have just converted some Python code to C#. This was the only function that I didn't fully understand. A: here is the main algorithm iProduct = 1 for iFactor in xrange(1, i+1): iProduct *= iFactor the other code is for caching results. A: Even without knowing Python, it must be clear to you that the two functions are far from identical. The C# version is calculating the factorial via recursion, whereas the Python one is doing it via iteration (although in a slightly weird way, with some odd memoization/caching going on - I guess in case you want to calculate multiple factorials in the lifetime of a program). Anyway, since calculating a factorial is a very simple algorithm, it works out the same in both cases. A: IANAPG (Python Guru), but it looks to me like the function is creating a static array of 1000 entries, then filling them on an as-needed basis to prevent recalculation. In C++, it'd be something like: long factorial(int i){ //Cache array static long factorials[1000]; if (!factorials[i]){ //If not cached, calculate & store int product = 1; for (int idx = 1; idx <= i; ++idx){ product *= idx; } factorials[i] = product; } return factorials[i]; //Return cached value } A: it will return the same results, but the Python version will probably have better performance, because it memoizes the results A: It just attaches an attribute called lstFactorial to factorial. This attribute is a list of 1000 values used to cache the results of previous calls. A: def factorial(i): if not hasattr(factorial, 'lstFactorial'): #program checks whether caching list exists factorial.lstFactorial = [None] * 1000 #if not, it creates a list of a thousand None elements (more or less equivalent to C/C++'s NULL) if factorial.lstFactorial[i] is None: #prog checks if that factorial has already been calculated iProduct = 1 #set result to 1 for iFactor in xrange(1, i+1): # C's for(iFactor = 1; iFactor <= i; iFactor++) iProduct *= iFactor #we multiply the result by the current loop counter factorial.lstFactorial[i] = iProduct #and put the result in the caching list return factorial.lstFactorial[i] #after all, we return the result, calculated just now or obtained from the cache To be honest, it is not the best algorithm, since it uses the cache only partially. The simple, user-friendly factorial function (no caching) would be: def factorial(i): if i == 0 or i == 1: return 1 return i*factorial(i-1) Or for lazy python programmers, most similar to that C# example: f = lambda i: i and i*f(i-1) or 1
Understanding a factorial function in python
I'm trying to understand if the following Python function: def factorial(i): if not hasattr(factorial, 'lstFactorial'): factorial.lstFactorial = [None] * 1000 if factorial.lstFactorial[i] is None: iProduct = 1 for iFactor in xrange(1, i+1): iProduct *= iFactor factorial.lstFactorial[i] = iProduct return factorial.lstFactorial[i] would produce the same results as the equivalent in C#: long factorial(long n) { return n <= 1 ? 1 : n * factorial(n-1); } for a value of 12 or less. I know nothing about Python but have just converted some Python code to C#. This was the only function that I didn't fully understand.
[ "here is main algorithm\niProduct = 1\nfor iFactor in xrange(1, i+1):\n iProduct *= iFactor\n\nother code is for caching results.\n", "Even without knowing Python, it must be clear to you that the two functions are far from identical. The C# version is calculating the factorial via recursion, whereas the Python one is doing it via iteration (although in a slightly weird way, with some odd memoization/caching going on - I guess in case you want to calculate multiple factorials in the lifetime of a program).\nAnyway, since calculating a factorial is a very simple algorithm, it works out the same in both cases. \n", "IANAPG (Python Guru), but it looks to me like the function is creating a static array of 1000 entries, then filling them on an as-needed basis to prevent recalculation. In C++, it'd be something like:\nlong factorial(int i){\n //Cache array\n static long factorials[1000];\n if (!factorials[i]){ //If not cached, calculate & store\n int product = 1;\n for (int idx = 1; idx <= i + 1; ++idx){\n product *= idx;\n }\n factorials[i] = product;\n }\n return factorials[i]; //Return cached value\n}\n\n", "it will return the same results, but the Python version will probably have better performance, because it memoizes the results\n", "It just attaches an attribute called lstFactorial to factorial. This attribute is a list of 1000 values used to cache the results of previous calls.\n", "def factorial(i):\n if not hasattr(factorial, 'lstFactorial'): #program checks whether caching list exists\n factorial.lstFactorial = [None] * 1000 #if so, it creates a list of thousand None elements (it is more or less equivalent to C/C++'s NULL\n if factorial.lstFactorial[i] is None: #prog checks if that factorial has been already calculated\n iProduct = 1 #set result to 1\n for iFactor in xrange(1, i+1): # C's for(iFactor = 1; iFactor &lt;= i+1; iFactor++)\n iProduct *= iFactor #we multiply result times current loop counter\n factorial.lstFactorial[i] = iProduct #and put result in caching list\n return factorial.lstFactorial[i] #after all, we return the result, calculated jest now or obtained from cache\n\nTo be honest, it is not the best algorithm, since it uses cache only partially.\nThe simple, user-friendly factorial function (no caching) would be:\ndef factorial(i):\n if i == 0 or i == 1:\n return 1\n return i*factorial(i-1)\n\nOf for lazy python programmers, most similiar to that C# example:\nf = lambda i: i and i*f(i-1) or 1\n\n" ]
[ 4, 2, 2, 1, 1, 1 ]
[]
[]
[ "c#", "python" ]
stackoverflow_0001639976_c#_python.txt
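For comparison, a small sketch that combines the caching idea and the C#-style recursion in one function, using a dict instead of a fixed-size list; this is an alternative illustration, not the original poster's code:

def factorial(n, _cache={}):     # the default dict persists across calls
    if n in _cache:
        return _cache[n]
    if n <= 1:
        result = 1
    else:
        result = n * factorial(n - 1)
    _cache[n] = result
    return result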
Q: Calculating average in interchangeable range? I know it's because of n, but n is supposed to be any variable, and left as n, this is what I have: def average(n): if n >= 0: avg = sum((range(1:int(n)))/float(len(range(1:int(n))))) print avg how do I fix it? A: The summation of x from 1 to n is simply (n + 1) * (n / 2). The number of elements being summed is n. Do a little simplification and your new function is def average(n): return (n + 1) / 2.0 You'll have to adjust this if you actually wanted Python's behavior of an exclusive upper-bound for range() (i.e., having average(10) return the average of the sum of values 1 - 9 instead of 1 - 10). A: I may be wrong but range(1:int(n)) doesn't look syntactically correct and the parentheses don't match. You may want to calculate the average of numbers in the range of 0 to n. In that case, I would replace your code like this: def average(n): if n >= 0: avg = sum(range(int(n)))/float(n) print avg A: If your range is always 1:n, why don't you just use this: avg = sum(range(1, int(n)))/float(n) Or maybe I am not understanding your question...
Calculating average in interchangeable range?
I know it's because of n, but n is supposed to be any variable, and left as n, this is what I have: def average(n): if n >= 0: avg = sum((range(1:int(n)))/float(len(range(1:int(n))))) print avg how do I fix it?
[ "The summation of x from 1 to n is simply (n + 1) * (n / 2). The number of elements being summed is n . Do a little simplification and your new function is\ndef average(n):\n return (n + 1) / 2.0\n\nYou'll have to adjust this if you actually wanted Python's behavior of an exclusive upper-bound for range() (i.e., having average(10) return the average of the sum of values 1 - 9 instead of 1 - 10).\n", "I may be wrong but range(1:int(n)) doesn't look like syntactically correct and parenthesis don't match. You may want to calculate the average of numbers in the range of 0 to n. In that case, I would replace your code like this:\ndef average(n):\nif n >= 0:\n avg = sum((range(int(n))))/float(n)\nprint avg\n\n", "If your range is always 1:n, why don't you just use this:\navg = sum((range(1:int(n)))/float(n))\n\nOr maybe I am not understanding your question...\n" ]
[ 2, 1, 0 ]
[]
[]
[ "average", "python", "variables" ]
stackoverflow_0001640145_average_python_variables.txt
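A runnable version of the asker's function, assuming the intent is the average of 1 through n inclusive:

def average(n):
    if n >= 1:
        values = range(1, int(n) + 1)        # 1 through n inclusive
        print sum(values) / float(len(values))

average(10)   # prints 5.5, matching the closed form (n + 1) / 2.0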
Q: Web service for an Excel automation script on Windows I am tasked to develop a very simple web layer for a very complex algorithm that is implemented as an Excel worksheet. This script would be called from a Ruby on Rails app that would present the user with the forms, check validations and whatnot, and should return just a number. After perusing this site, my best shot is to automate Excel using Python or Ruby under Windows and run the algorithm there --we are a Ruby shop, but I've found more info for Python. I think I can write a Python script to run the calculation in a day, but now the final question remains: how do we put a web layer on top of the core script? We are familiar with Apache, so installing Python as an Apache module is my first thought, but we could also install Twisted and try to run the web server itself in Python. What would be your choices? A: My first choice would be to move the calculations that are in the Excel workbook into my Ruby application code. While it will likely take some additional work, my guess is that it will take less time to port the Excel app to Ruby than introducing layers of complexity on top of Excel. Additionally, calling into Excel, passing user submitted data, opens up additional possible security holes. My second choice would be to do it in the language you know best. It looks like Ruby can interact with Excel using win32ole (another example).
Web service for an Excel automation script on Windows
I am tasked to develop a very simple web layer for a very complex algorithm that is implemented as an Excel worksheet. This script would be called from a Ruby on Rails app that would present the user with the forms, check validations and whatnot, and should return just a number. After perusing this site, my best shot is to automate Excel using Python or Ruby under Windows and run the algorithm there --we are a Ruby shop, but I've found more info for Python. I think I can write a Python script to run the calculation in a day, but now the final question remains: how do we put a web layer on top of the core script? We are familiar with Apache, so installing Python as an Apache module is my first thought, but we could also install Twisted and try to run the web server itself in Python. What would be your choices?
[ "My first choice would be to move the calculations that are in the Excel workbook into my Ruby application code. While it will likely take some additional work, my guess is that it will take less time to port the Excel app to Ruby than introducing layers of complexity on top of Excel. Additionally, calling into Excel, passing user submitted data, opens up additional possible security holes.\nMy second choice would be to do it the language you know best. It looks like Ruby can interact with Excel using win32ole (another example).\n" ]
[ 1 ]
[]
[]
[ "automation", "excel", "python", "winapi" ]
stackoverflow_0001640068_automation_excel_python_winapi.txt
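If the Excel route is taken anyway, a rough Python sketch of driving Excel via COM might look like the following; the file path and the input/output cell positions are pure assumptions for illustration, and pywin32 plus a local Excel install are required:

import win32com.client

def run_excel_model(a, b):
    excel = win32com.client.Dispatch('Excel.Application')
    try:
        wb = excel.Workbooks.Open(r'C:\models\algorithm.xls')  # hypothetical path
        sheet = wb.Worksheets(1)
        sheet.Cells(1, 1).Value = a    # assumed input cells
        sheet.Cells(1, 2).Value = b
        excel.Calculate()
        result = sheet.Cells(1, 3).Value  # assumed output cell
        wb.Close(False)                   # discard changes
        return result
    finally:
        excel.Quit()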
Q: Django Forms - Can the initial value of one field be dependant on another? Example, for this form: class CommentForm(forms.Form): name = forms.CharField(initial='class') action = forms.ChoiceField(...) Can I have the choices in the action field be different depending on what is in the name field? A: How about wrapping initial in a function or a lambda, so that the value of initial is deferred until form creation. Something like: class CommentForm(forms.Form): name = forms.CharField(initial=lambda : self.action) # or more complex logic action = forms.ChoiceField(...)
Django Forms - Can the initial value of one field be dependant on another?
Example, for this form: class CommentForm(forms.Form): name = forms.CharField(initial='class') action = forms.ChoiceField(...) Can I have the choices in the action field be different depending on what is in the name field?
[ "How about wrapping initial in a function or a lambda, so that the value of initial is deferred until form creation. Something like:\nclass CommentForm(forms.Form):\n name = forms.CharField(initial=lambda : self.action) # or more complex logic\n action = forms.ChoiceField(...)\n\n" ]
[ 3 ]
[]
[]
[ "django", "django_forms", "forms", "python" ]
stackoverflow_0001640204_django_django_forms_forms_python.txt
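Another common pattern, shown here as a hedged alternative to the lambda answer above, is to set per-instance choices in the form's __init__; the concrete choice tuples are hypothetical:

class CommentForm(forms.Form):
    name = forms.CharField(initial='class')
    action = forms.ChoiceField(choices=())

    def __init__(self, *args, **kwargs):
        super(CommentForm, self).__init__(*args, **kwargs)
        name = self.initial.get('name', self.fields['name'].initial)
        if name == 'class':
            self.fields['action'].choices = [('add', 'Add'), ('edit', 'Edit')]
        else:
            self.fields['action'].choices = [('delete', 'Delete')]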
Q: python string replacement with % character/**kwargs weirdness Following code: def __init__(self, url, **kwargs): for key in kwargs.keys(): url = url.replace('%%s%' % key, str(kwargs[key])) Throws the following exception: File "/home/wells/py-mlb/lib/fetcher.py", line 25, in __init__ url = url.replace('%%s%' % key, str(kwargs[key])) ValueError: incomplete format The string has a format like: http://www.blah.com?id=%PLAYER_ID% What am I doing wrong? A: You probably want the format string %%%s%% instead of %%s%. Two consecutive % signs are interpreted as a literal %, so in your version, you have a literal %, a literal s, and then a lone %, which is expecting a format specifier after it. You need to double up each literal % to not be interpreted as a format string, so you want %%%s%%: literal %, %s for string, literal %. A: you need to double the percentage sign to escape it: >>> '%%%s%%' % 'PLAYER_ID' '%PLAYER_ID%' also when iterating over the dictionary you could unpack values in the for statement like this: def __init__(self, url, **kwargs): for key, value in kwargs.items(): url = url.replace('%%%s%%' % key, str(value)) A: Adam almost had it right. Change your code to: def __init__(self, url, **kwargs): for key in kwargs.keys(): url = url.replace('%%%s%%' % key, str(kwargs[key])) When key is FOO, then '%%%s%%' % key results in '%FOO%', and your url.replace will do what you want. In a format string, two percents results in a percent in the output.
python string replacement with % character/**kwargs weirdness
Following code: def __init__(self, url, **kwargs): for key in kwargs.keys(): url = url.replace('%%s%' % key, str(kwargs[key])) Throws the following exception: File "/home/wells/py-mlb/lib/fetcher.py", line 25, in __init__ url = url.replace('%%s%' % key, str(kwargs[key])) ValueError: incomplete format The string has a format like: http://www.blah.com?id=%PLAYER_ID% What am I doing wrong?
[ "You probably want the format string %%%s%% instead of %%s%.\nTwo consecutive % signs are interpreted as a literal %, so in your version, you have a literal %, a literal s, and then a lone %, which is expecting a format specifier after it. You need to double up each literal % to not be interpreted as a format string, so you want %%%s%%: literal %, %s for string, literal %.\n", "you need to double the percentage sign to escape it:\n>>> '%%%s%%' % 'PLAYER_ID'\n'%PLAYER_ID%'\n\nalso when iterating over the dictionary you could unpack values in the for statement like this:\ndef __init__(self, url, **kwargs):\n for key, value in kwargs.items():\n url = url.replace('%%%s%%' % key, str(value))\n\n", "Adam almost had it right. Change your code to:\ndef __init__(self, url, **kwargs):\n for key in kwargs.keys():\n url = url.replace('%%%s%%' % key, str(kwargs[key]))\n\nWhen key is FOO, then '%%%s%%' % key results in '%FOO%', and your url.replace will do what you want. In a format string, two percents results in a percent in the output.\n" ]
[ 15, 3, 1 ]
[]
[]
[ "python", "string_formatting" ]
stackoverflow_0001640487_python_string_formatting.txt
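An alternative worth noting: if the URL template can use Python's named %(key)s placeholders instead of %KEY% markers, the replace loop disappears entirely. A small sketch (the second key is a hypothetical example):

url_template = 'http://www.blah.com?id=%(PLAYER_ID)s&name=%(PLAYER_NAME)s'

def build_url(template, **kwargs):
    return template % kwargs

print build_url(url_template, PLAYER_ID=6456, PLAYER_NAME='foo')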
Q: How to handle a tokenize error with unterminated multiline comments (python 2.6) The following sample code: import token, tokenize, StringIO def generate_tokens(src): rawstr = StringIO.StringIO(unicode(src)) tokens = tokenize.generate_tokens(rawstr.readline) for i, item in enumerate(tokens): toktype, toktext, (srow,scol), (erow,ecol), line = item print i, token.tok_name[toktype], toktext s = \ """ def test(x): \"\"\" test with an unterminated docstring """ generate_tokens(s) causes the following to fire: ... (stripped a little) File "/usr/lib/python2.6/tokenize.py", line 296, in generate_tokens raise TokenError, ("EOF in multi-line string", strstart) tokenize.TokenError: ('EOF in multi-line string', (3, 5)) Some questions about this behaviour: Should I catch and 'selectively' ignore tokenize.TokenError here? Or should I stop trying to generate tokens from non-compliant/non-complete code? If so, how would I check for that? Can this error (or similar errors) be caused by anything other than an unterminated docstring? A: How you handle tokenize errors depends entirely on why you are tokenizing. Your code gives you all the valid tokens up until the beginning of the bad string literal. If that token stream is useful to you, then use it. You have a few options about what to do with the error: You could ignore it and have an incomplete token stream. You could buffer all the tokens and only use the token stream if no error occurred. You could process the tokens, but abort the higher-level processing if an error occurred. As to whether that error can happen with anything other than an incomplete docstring, yes. Remember that docstrings are just string literals. Any unterminated multi-line string literal will give you the same error. Similar errors could happen for other lexical errors in the code. For example, here are other values of s that produce errors (at least with Python 2.5): s = ")" # EOF in multi-line statement s = "(" # EOF in multi-line statement s = "]" # EOF in multi-line statement s = "[" # EOF in multi-line statement s = "}" # EOF in multi-line statement s = "{" # EOF in multi-line statement Oddly, other nonsensical inputs produce ERRORTOKEN values instead: s = "$" s = "'"
How to handle a tokenize error with unterminated multiline comments (python 2.6)
The following sample code: import token, tokenize, StringIO def generate_tokens(src): rawstr = StringIO.StringIO(unicode(src)) tokens = tokenize.generate_tokens(rawstr.readline) for i, item in enumerate(tokens): toktype, toktext, (srow,scol), (erow,ecol), line = item print i, token.tok_name[toktype], toktext s = \ """ def test(x): \"\"\" test with an unterminated docstring """ generate_tokens(s) causes the following to fire: ... (stripped a little) File "/usr/lib/python2.6/tokenize.py", line 296, in generate_tokens raise TokenError, ("EOF in multi-line string", strstart) tokenize.TokenError: ('EOF in multi-line string', (3, 5)) Some questions about this behaviour: Should I catch and 'selectively' ignore tokenize.TokenError here? Or should I stop trying to generate tokens from non-compliant/non-complete code? If so, how would I check for that? Can this error (or similar errors) be caused by anything other than an unterminated docstring?
[ "How you handle tokenize errors depends entirely on why you are tokenizing. You code gives you all the valid tokens up until the beginning of the bad string literal. If that token stream is useful to you, then use it. \nYou have a few options about what to do with the error:\n\nYou could ignore it and have an incomplete token stream.\nYou could buffer all the tokens and only use the token stream if no error occurred.\nYou could process the tokens, but abort the higher-level processing if an error occurred.\n\nAs to whether that error can happen with anything other than an incomplete docstring, yes. Remember that docstrings are just string literals. Any unterminated multi-line string literal will give you the same error. Similar errors could happen for other lexical errors in the code.\nFor example, here are other values of s that produce errors (at least with Python 2.5):\ns = \")\" # EOF in multi-line statement\ns = \"(\" # EOF in multi-line statement\ns = \"]\" # EOF in multi-line statement\ns = \"[\" # EOF in multi-line statement\ns = \"}\" # EOF in multi-line statement\ns = \"{\" # EOF in multi-line statement\n\nOddly, other nonsensical inputs produce ERRORTOKEN values instead:\ns = \"$\"\ns = \"'\"\n\n" ]
[ 2 ]
[]
[]
[ "parsing", "python", "python_2.6", "tokenize" ]
stackoverflow_0001640097_parsing_python_python_2.6_tokenize.txt
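A sketch of the "buffer all tokens and only use them if no error occurred" option from the answer:

import tokenize, StringIO

def tokens_or_none(src):
    buffered = []
    try:
        for item in tokenize.generate_tokens(StringIO.StringIO(src).readline):
            buffered.append(item)
    except tokenize.TokenError, err:
        print 'tokenize failed: %s' % (err,)
        return None        # the caller knows the source was incomplete
    return buffered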
Q: How would one implement Lazy Evaluation in C? Take for example, The follow python code: def multiples_of_2(): i = 0 while True: i = i + 2 yield i How do we translate this into C code? Edit: I am looking to translate this python code into a similar generator in C, with next() function. What I am not looking for is how to create a function in C to output multiples of 2. Multiples of 2 is merely an example to illustrate the problem of lazy eval generators in C. A: You could try to encapsulate this in a struct: typedef struct s_generator { int current; int (*func)(int); } generator; int next(generator* gen) { int result = gen->current; gen->current = (gen->func)(gen->current); return result; } Then you define you multiples with: int next_multiple(int current) { return 2 + current; } generator multiples_of_2 = {0, next_multiple}; You get the next multiple by calling next(&multiples_of_2); A: I found a good article recently on coroutines in C, which describes one method of doing this. It's certainly not for the faint of heart. A: As Will mentioned, languages like python do the job of storing the state of the stack between successive calls of the generator. Since C does not have this mechanism, you'll have to do it yourself. The "generic" way of doing this is not for the faint-hearted, as Greg pointed out. The traditional C way of doing this would be for you to define and maintain the state yourself and pass it in and out of your method. So: struct multiples_of_two_state { int i; /* all the state you need should go in here */ }; void multiples_of_two_init(struct multiples_of_two_state *s) { s->i = 0; } int multiples_of_two_next(struct multiples_of_two_state *s) { s->i += 2; return s->i; } /* Usage */ struct multiples_of_two_state s; int result; multiples_of_two_init(&s); for (int i=0; i<INFINITY; i++) { result = multiples_of_two_next(&s); printf("next is %d", result); } A: The basic approach is to not do it. In Python (and C#) the 'yield' method stores local state between calls, whereas in C/C++ and most other languages the local state stored on the stack is not preserved between calls and this is a fundemental implementation difference. So in C you'd have to store the state between calls in some variable explicitly - either a global variable or a function parameter to your sequence generator. So either: int multiples_of_2() { static int i = 0; i += 2; return i; } or int multiples_of_2(int i) { i += 2; return i; } depending upon if there's one global sequence or many. I've quickly considered longjmp and GCC computed gotos and other non-standard things, and I can't say I'd recommend any of them for this! In C, do it the C way. A: Check out setjmp/longjmp setjmp.h is a header defined in the C standard library to provide "non-local jumps," or control flow besides the usual subroutine call and return sequence. The paired functions setjmp and longjmp provide this functionality. First setjmp saves the environment of the calling function into a data structure, and then longjmp can use this structure to "jump" back to the point it was created, at the setjmp call. (Lua coroutines were implemented that way) A: You can pass the argument as a pointer to allow the function to modify it without using the return value: void multiples_of_2(int *i) { *i += 2; } And call it: int i = 0; multiples_of_2(&i); A: The key is keeping the state of the function between calls. You have a number of options: Static (or global) state. Means the sequence of calls to the function is not reentrant, i.e. 
you can't have the function call itself recursively, nor can you have more than one caller running different sequences of calls. Initialising (and possibly allocating) the state on or before the first call, and passing that to the function on each subsequent call. Doing clever stuff with setjmp/longjmp, the stack, or modifiable code (there's an article somewhere about currying functions in C that creates an object with the necessary code to call the curried function; a similar technique could create an object with the function's state and the necessary code to save and restore it for each call). (Edit Found it -- http://asg.unige.ch/site/papers/Dami91a.pdf) Greg cites an interesting article, above, that presents a way of using static state with syntax similar to the yield statement. I liked it academically but probably wouldn't use it because of the reentrancy issue, and because I'm still surprised that the infamous Duffy's Device even compiles ;-). In practice, large C programs do want to compute things lazily, e.g. a database server may want to satisfy a SELECT ... LIMIT 10 query by wrapping the plain SELECT query inside something that will yield each row until 10 have been returned, rather than computing the whole result and then discarding most of them. The most C-like technique for this is explicitly create an object for the state, and to call a function with it for each call. For your example, you might see something like: /* Definitions in a library somewhere. */ typedef int M2_STATE; M2_STATE m2_new() { return 0; } int m2_empty(M2_STATE s) { return s < INT_MAX; } int m2_next(M2_STATE s) { int orig_s = s; s = s + 2; return orig_s; } /* Caller. */ M2_STATE s; s = m2_new(); while (!m2_empty(s)) { int num = m2_next(s); printf("%d\n", num); } This seems cumbersome for the multiples of two, but it becomes a useful pattern for more complicated generators. You can make the state more complicated without having to burden all the code that uses your generator with the details. Even better practice is to return an opaque pointer in the new function, and (unless GC is available) provide a function for cleaning up the generator. The big advantage of allocating the state for each new sequence of calls is things like recursive generators. For example, a generator that returns all files under a directory, by calling itself on each subdirectory. char *walk_next(WALK_STATE *s) { if (s->subgenerator) { if (walk_is_empty(s->subgenerator)) { walk_finish(s->subgenerator); s->subgenerator = NULL; } else return walk_next(s->subgenerator); } char *name = readdir(s->dir); if (is_file(name)) return name; else if (is_dir(name)) { char subpath[MAX_PATH]; strcpy(subpath, s->path); strcat(subpath, name); s->subgenerator = walk_new(subpath); return walk_next(s->subgenerator); } closedir(s->dir); s->empty = 1; return NULL; } (You'll have to excuse my misuse of readdir, et al. and my pretense that C has idiot-proof string support.) A: int multiples_of_2() { static int i = 0; i += 2; return i; } The static int i behaves like a global variable but is visible only within the contect of multiples_of_2(). A: I have implemented my own lazy eval, with respects to solving the hamming's problem. Heres my code for anyone whos interested: #include <stdio.h> #include <stdlib.h> // Hamming problem in C. 
typedef struct gen { int tracker; int (*next)(struct gen *g); } generator; int twos_gen(struct gen *g) { g->tracker = g->tracker + 2; return g->tracker; } generator* twos_stream() { generator *g = malloc(sizeof(generator)); g->next = twos_gen; g->tracker = 0; return g; } int threes_gen(struct gen *g) { g->tracker = g->tracker + 3; return g->tracker; } generator* threes_stream() { generator *g = malloc(sizeof(generator)); g->next = threes_gen; g->tracker = 0; return g; } int fives_gen(struct gen *g) { g->tracker = g->tracker + 5; return g->tracker; } generator* fives_stream() { generator *g = malloc(sizeof(generator)); g->next = fives_gen; g->tracker = 0; return g; } int smallest(int a, int b, int c) { if (a < b) { if (c < a) return c; return a; } else { if (c < b) return c; return b; } } int hamming_gen(struct gen *g) { generator* twos = twos_stream(); generator* threes = threes_stream(); generator* fives = fives_stream(); int c2 = twos->next(twos); int c3 = threes->next(threes); int c5 = fives->next(fives); while (c2 <= g->tracker) c2 = twos->next(twos); while (c3 <= g->tracker) c3 = threes->next(threes); while (c5 <= g->tracker) c5 = fives->next(fives); g->tracker = smallest(c2,c3,c5); return g->tracker; } generator* hammings_stream() { generator *g = malloc(sizeof(generator)); g->next = hamming_gen; g->tracker = 0; return g; } int main() { generator* hammings = hammings_stream(); int i = 0; while (i<10) { printf("Hamming No: %d\n",hammings->next(hammings)); i++; } }
How would one implement Lazy Evaluation in C?
Take for example, The follow python code: def multiples_of_2(): i = 0 while True: i = i + 2 yield i How do we translate this into C code? Edit: I am looking to translate this python code into a similar generator in C, with next() function. What I am not looking for is how to create a function in C to output multiples of 2. Multiples of 2 is merely an example to illustrate the problem of lazy eval generators in C.
[ "You could try to encapsulate this in a struct:\ntypedef struct s_generator {\n int current;\n int (*func)(int);\n} generator;\n\nint next(generator* gen) {\n int result = gen->current;\n gen->current = (gen->func)(gen->current);\n return result;\n}\n\nThen you define you multiples with:\nint next_multiple(int current) { return 2 + current; }\ngenerator multiples_of_2 = {0, next_multiple};\n\nYou get the next multiple by calling\nnext(&multiples_of_2);\n\n", "I found a good article recently on coroutines in C, which describes one method of doing this. It's certainly not for the faint of heart.\n", "As Will mentioned, languages like python do the job of storing the state of the stack between successive calls of the generator. Since C does not have this mechanism, you'll have to do it yourself. The \"generic\" way of doing this is not for the faint-hearted, as Greg pointed out. The traditional C way of doing this would be for you to define and maintain the state yourself and pass it in and out of your method. So:\nstruct multiples_of_two_state {\n int i;\n /* all the state you need should go in here */\n};\n\nvoid multiples_of_two_init(struct multiples_of_two_state *s) {\n s->i = 0;\n}\n\nint multiples_of_two_next(struct multiples_of_two_state *s) {\n s->i += 2;\n return s->i;\n}\n\n/* Usage */\nstruct multiples_of_two_state s;\nint result;\nmultiples_of_two_init(&s);\nfor (int i=0; i<INFINITY; i++) {\n result = multiples_of_two_next(&s);\n printf(\"next is %d\", result);\n}\n\n", "The basic approach is to not do it. In Python (and C#) the 'yield' method stores local state between calls, whereas in C/C++ and most other languages the local state stored on the stack is not preserved between calls and this is a fundemental implementation difference. So in C you'd have to store the state between calls in some variable explicitly - either a global variable or a function parameter to your sequence generator. So either:\nint multiples_of_2() {\n static int i = 0;\n i += 2;\n return i;\n}\n\nor\nint multiples_of_2(int i) {\n i += 2;\n return i;\n}\n\ndepending upon if there's one global sequence or many.\nI've quickly considered longjmp and GCC computed gotos and other non-standard things, and I can't say I'd recommend any of them for this! In C, do it the C way.\n", "Check out setjmp/longjmp\n\nsetjmp.h is a header defined in the C\n standard library to provide \"non-local\n jumps,\" or control flow besides the\n usual subroutine call and return\n sequence. The paired functions setjmp\n and longjmp provide this\n functionality. First setjmp saves the\n environment of the calling function\n into a data structure, and then\n longjmp can use this structure to\n \"jump\" back to the point it was\n created, at the setjmp call.\n\n(Lua coroutines were implemented that way)\n", "You can pass the argument as a pointer to allow the function to modify it without using the return value:\nvoid multiples_of_2(int *i)\n{\n *i += 2;\n}\n\nAnd call it:\nint i = 0;\nmultiples_of_2(&i);\n\n", "The key is keeping the state of the function between calls. You have a number of options:\n\nStatic (or global) state. Means the sequence of calls to the function is not reentrant, i.e. 
you can't have the function call itself recursively, nor can you have more than one caller running different sequences of calls.\nInitialising (and possibly allocating) the state on or before the first call, and passing that to the function on each subsequent call.\nDoing clever stuff with setjmp/longjmp, the stack, or modifiable code (there's an article somewhere about currying functions in C that creates an object with the necessary code to call the curried function; a similar technique could create an object with the function's state and the necessary code to save and restore it for each call). (Edit Found it -- http://asg.unige.ch/site/papers/Dami91a.pdf)\n\nGreg cites an interesting article, above, that presents a way of using static state with syntax similar to the yield statement. I liked it academically but probably wouldn't use it because of the reentrancy issue, and because I'm still surprised that the infamous Duffy's Device even compiles ;-). \nIn practice, large C programs do want to compute things lazily, e.g. a database server may want to satisfy a SELECT ... LIMIT 10 query by wrapping the plain SELECT query inside something that will yield each row until 10 have been returned, rather than computing the whole result and then discarding most of them. The most C-like technique for this is explicitly create an object for the state, and to call a function with it for each call. For your example, you might see something like:\n/* Definitions in a library somewhere. */\ntypedef int M2_STATE;\nM2_STATE m2_new() { return 0; }\nint m2_empty(M2_STATE s) { return s < INT_MAX; }\nint m2_next(M2_STATE s) { int orig_s = s; s = s + 2; return orig_s; }\n\n/* Caller. */\nM2_STATE s;\ns = m2_new();\nwhile (!m2_empty(s))\n{\n int num = m2_next(s);\n printf(\"%d\\n\", num);\n}\n\nThis seems cumbersome for the multiples of two, but it becomes a useful pattern for more complicated generators. You can make the state more complicated without having to burden all the code that uses your generator with the details. Even better practice is to return an opaque pointer in the new function, and (unless GC is available) provide a function for cleaning up the generator.\nThe big advantage of allocating the state for each new sequence of calls is things like recursive generators. For example, a generator that returns all files under a directory, by calling itself on each subdirectory.\nchar *walk_next(WALK_STATE *s)\n{\n if (s->subgenerator)\n {\n if (walk_is_empty(s->subgenerator))\n {\n walk_finish(s->subgenerator);\n s->subgenerator = NULL;\n }\n else\n return walk_next(s->subgenerator);\n }\n\n char *name = readdir(s->dir);\n if (is_file(name))\n return name;\n else if (is_dir(name))\n {\n char subpath[MAX_PATH];\n strcpy(subpath, s->path);\n strcat(subpath, name);\n s->subgenerator = walk_new(subpath);\n return walk_next(s->subgenerator);\n }\n closedir(s->dir);\n s->empty = 1;\n return NULL;\n}\n\n(You'll have to excuse my misuse of readdir, et al. 
and my pretense that C has idiot-proof string support.)\n", "int multiples_of_2() {\n static int i = 0;\n i += 2;\n return i;\n}\n\nThe static int i behaves like a global variable but is visible only within the contect of multiples_of_2().\n", "I have implemented my own lazy eval, with respects to solving the hamming's problem.\nHeres my code for anyone whos interested:\n#include <stdio.h>\n#include <stdlib.h>\n\n// Hamming problem in C.\n\ntypedef struct gen {\n int tracker;\n int (*next)(struct gen *g);\n} generator;\n\nint twos_gen(struct gen *g) {\n g->tracker = g->tracker + 2;\n return g->tracker;\n}\n\ngenerator* twos_stream() {\n generator *g = malloc(sizeof(generator));\n g->next = twos_gen;\n g->tracker = 0;\n return g;\n}\n\nint threes_gen(struct gen *g) {\n g->tracker = g->tracker + 3;\n return g->tracker;\n}\n\ngenerator* threes_stream() {\n generator *g = malloc(sizeof(generator));\n g->next = threes_gen;\n g->tracker = 0;\n return g;\n}\n\nint fives_gen(struct gen *g) {\n g->tracker = g->tracker + 5;\n return g->tracker;\n}\n\ngenerator* fives_stream() {\n generator *g = malloc(sizeof(generator));\n g->next = fives_gen;\n g->tracker = 0;\n return g;\n}\n\nint smallest(int a, int b, int c) {\n if (a < b) {\n if (c < a) return c;\n return a;\n }\n else {\n if (c < b) return c;\n return b;\n }\n}\n\nint hamming_gen(struct gen *g) {\n generator* twos = twos_stream();\n generator* threes = threes_stream();\n generator* fives = fives_stream();\n\n int c2 = twos->next(twos);\n int c3 = threes->next(threes);\n int c5 = fives->next(fives);\n\n while (c2 <= g->tracker) c2 = twos->next(twos);\n while (c3 <= g->tracker) c3 = threes->next(threes);\n while (c5 <= g->tracker) c5 = fives->next(fives);\n\n g->tracker = smallest(c2,c3,c5);\n return g->tracker;\n}\n\ngenerator* hammings_stream() {\n generator *g = malloc(sizeof(generator));\n g->next = hamming_gen;\n g->tracker = 0;\n return g;\n}\n\nint main() {\n generator* hammings = hammings_stream();\n int i = 0;\n while (i<10) {\n printf(\"Hamming No: %d\\n\",hammings->next(hammings));\n i++;\n }\n}\n\n" ]
[ 20, 6, 2, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "c", "python" ]
stackoverflow_0001635827_c_python.txt
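As a side note for comparison, the C struct-plus-function-pointer pattern used in the answers above maps fairly directly onto a Python closure; a small sketch, not part of the original thread's code:

def make_counter(step):
    state = {'tracker': 0}           # plays the role of the C struct
    def next_value():
        state['tracker'] += step     # the function pointer's body
        return state['tracker']
    return next_value

twos = make_counter(2)
print twos(), twos(), twos()   # 2 4 6, like calling gen->next(gen) in the C version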
Q: Python pysqlite2 dbapi2 problem I'm having an issue with the line: from pysqlite2 import dbapi2 as sqlite The error i'm getting is: ImportError: /usr/lib/python2.4/site-packages/pysqlite2/_sqlite.so: undefined symbol: sqlite3_enable_shared_cache What can I do to solve this problem? Thanks! A: Sounds like _sqlite.so was compiled against a newer version of sqlite than you have installed. That function wasn't added to SQLite's API until version 3.5.0. A: The easiest way around this problem is to get the AS package Python 2.6 or later from Activestate and install that. It comes with SQLITE in the standard library. The AS package is a tarball and you install it in a user directory by running a shell script after unpacking the archive. This does not touch any of the Python bits installed with your system, and gives you a fully controlled Python environment that is easy to replicate on other systems regardless of the distro. Python's packaging system doesn't interoperate well with Linux distro package systems, especially because the Linux distros can be considerably out of date.
Python pysqlite2 dbapi2 problem
I'm having an issue with the line: from pysqlite2 import dbapi2 as sqlite The error i'm getting is: ImportError: /usr/lib/python2.4/site-packages/pysqlite2/_sqlite.so: undefined symbol: sqlite3_enable_shared_cache What can I do to solve this problem? Thanks!
[ "Sounds like _sqlite.so was compiled against a newer version of sqlite than you have installed. That function wasn't added to SQLite's API until version 3.5.0.\n", "The easiest way around this problem is to get the AS package Python 2.6 or later from Activestate and install that. It comes with SQLITE in the standard library.\nThe AS package is a tarball and you install it in a user directory by running a shell script after unpacking the archive. This does not touch any of the Python bits installed with your system, and gives you a fully controlled Python environment that is easy to replicate on other systems regardless of the distro.\nPython's packaging system doesn't interoperate well with Linux distro package systems, especially because the Linux distros can be considerably out of date.\n" ]
[ 2, 0 ]
[]
[]
[ "linux", "python", "sqlite" ]
stackoverflow_0001640537_linux_python_sqlite.txt
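One way to check the version mismatch suggested in the first answer is to compare versions on a machine where the import works; a sketch:

from pysqlite2 import dbapi2 as sqlite
print sqlite.version          # version of the pysqlite wrapper itself
print sqlite.sqlite_version   # version of the C sqlite3 library it is linked against
# per the answer above, sqlite3_enable_shared_cache needs sqlite3 >= 3.5.0,
# so an older version reported here would explain the ImportError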
Q: PyQt: Trouble with asterisk on modification in QPlainTextEdit I'm having a problem with a QPlainTextEdit. I want the "contents have been modified" asterisk to appear in the title bar whenever the contents have been modified. In the example below, type a few letters. The asterisk appears as it should. Hit Ctrl+S, the asterisk disappears as it should. But then if you type a few more letters... why doesn't the asterisk appear again? import os, sys from PyQt4 import QtGui, QtCore class MyTextEdit(QtGui.QPlainTextEdit): def __init__(self): QtGui.QPlainTextEdit.__init__(self) save_seq = QtGui.QKeySequence.Save self.save_shortcut = QtGui.QShortcut(save_seq, self, self.save) QtCore.QObject.connect(self, QtCore.SIGNAL("modificationChanged(bool)"), self.on_change) def on_change(self, is_modified): print "on_change" window.setWindowModified(is_modified) def save(self): window.setWindowModified(False) # app = QtGui.QApplication(sys.argv) window = QtGui.QMainWindow() edit = MyTextEdit() window.setCentralWidget(edit) window.setWindowTitle("None [*]") window.show() app.exec_() A: Never mind, figured it out. The problem was that in the save method I should've been calling self.document().setModified(False) instead of window.setWindowModified(False)
PyQt: Trouble with asterisk on modification in QPlainTextEdit
I'm having a problem with a QPlainTextEdit. I want the "contents have been modified" asterisk to appear in the title bar whenever the contents have been modified. In the example below, type a few letters. The asterisk appears as it should. Hit Ctrl+S, the asterisk disappears as it should. But then if you type a few more letters... why doesn't the asterisk appear again? import os, sys from PyQt4 import QtGui, QtCore class MyTextEdit(QtGui.QPlainTextEdit): def __init__(self): QtGui.QPlainTextEdit.__init__(self) save_seq = QtGui.QKeySequence.Save self.save_shortcut = QtGui.QShortcut(save_seq, self, self.save) QtCore.QObject.connect(self, QtCore.SIGNAL("modificationChanged(bool)"), self.on_change) def on_change(self, is_modified): print "on_change" window.setWindowModified(is_modified) def save(self): window.setWindowModified(False) # app = QtGui.QApplication(sys.argv) window = QtGui.QMainWindow() edit = MyTextEdit() window.setCentralWidget(edit) window.setWindowTitle("None [*]") window.show() app.exec_()
[ "Never mind, figured it out. The problem was that in the save method I should've been calling self.document().setModified(False) instead of window.setWindowModified(False)\n" ]
[ 1 ]
[]
[]
[ "pyqt", "pyqt4", "python", "qt", "qt4" ]
stackoverflow_0001640878_pyqt_pyqt4_python_qt_qt4.txt
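For reference, the corrected save method implied by the answer would look like this (QPlainTextEdit.document() returns the QTextDocument that tracks the modified flag):

    def save(self):
        # mark the underlying document clean; the document will then emit
        # modificationChanged again the next time the user types
        self.document().setModified(False)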
Q: post_save in django to update instance immediately I'm trying to immediately update a record after it's saved. This example may seem pointless but imagine we need to use an API after the data is saved to get some extra info and update the record: def my_handler(sender, instance=False, **kwargs): t = Test.objects.filter(id=instance.id) t.blah = 'hello' t.save() class Test(models.Model): title = models.CharField('title', max_length=200) blah = models.CharField('blah', max_length=200) post_save.connect(my_handler, sender=Test) So the 'blah' field is supposed to be set to 'hello' after each save. Correct? But it's not working. Any ideas? A: When you find yourself using a post_save signal to update an object of the sender class, chances are you should be overriding the save method instead. In your case, the model definition would look like: class Test(models.Model): title = models.CharField('title', max_length=200) blah = models.CharField('blah', max_length=200) def save(self, force_insert=False, force_update=False): if not self.blah: self.blah = 'hello' super(Test, self).save(force_insert, force_update) A: Doesn't the post_save handler take the instance? Why are you filtering using it? Why not just do: def my_handler(sender, instance, created, **kwargs): if created: instance.blah = 'hello' instance.save() Your existing code doesn't work because it loops, and Test.objects.filter(id=instance.id) returns a query set, not an object. To get a single object directly, use QuerySet.get(). But you don't need to do that here. The created argument keeps it from looping, as it only sets it the first time. In general, unless you absolutely need to be using post_save signals, you should be overriding your object's save() method anyway.
post_save in django to update instance immediately
I'm trying to immediately update a record after it's saved. This example may seem pointless but imagine we need to use an API after the data is saved to get some extra info and update the record: def my_handler(sender, instance=False, **kwargs): t = Test.objects.filter(id=instance.id) t.blah = 'hello' t.save() class Test(models.Model): title = models.CharField('title', max_length=200) blah = models.CharField('blah', max_length=200) post_save.connect(my_handler, sender=Test) So the 'blah' field is supposed to be set to 'hello' after each save. Correct? But it's not working. Any ideas?
[ "When you find yourself using a post_save signal to update an object of the sender class, chances are you should be overriding the save method instead. In your case, the model definition would look like:\nclass Test(models.Model):\n title = models.CharField('title', max_length=200)\n blah = models.CharField('blah', max_length=200)\n\n def save(self, force_insert=False, force_update=False):\n if not self.blah:\n self.blah = 'hello'\n super(Test, self).save(force_insert, force_update)\n\n", "Doesn't the post_save handler take the instance? Why are you filtering using it? Why not just do:\ndef my_handler(sender, instance=False, created, **kwargs):\n if created:\n instance.blah = 'hello'\n instance.save()\n\nYour existing code doesn't work because it loops, and Test.objects.filter(id=instance.id) returns a query set, not an object. To get a single object directly, use Queryset.get(). But you don't need to do that here. The created argument keeps it from looping, as it only sets it the first time.\nIn general, unless you absolutely need to be using post_save signals, you should be overriding your object's save() method anyway.\n" ]
[ 21, 6 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001640744_django_django_models_python.txt
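For the "call an external API after save" motivation in the question, a variant of the handler might use QuerySet.update(), which issues plain SQL and does not send post_save again; fetch_extra_info is a hypothetical API call:

def my_handler(sender, instance, created, **kwargs):
    if created:
        extra = fetch_extra_info(instance.id)   # hypothetical API call
        # .update() bypasses save(), so the signal does not fire again
        Test.objects.filter(pk=instance.pk).update(blah=extra)

post_save.connect(my_handler, sender=Test)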
Q: Python Service Custom Command Arguments I am currently working on a python program which runs as a windows service using win32service and win32serviceutil. The service runs as it should and even after using py2exe, everything is fine (the service monitors target folder(s) and automatically FTP's newly created files to specified FTP location). I would like, however, to add some command line arguments (in addition to install, remove, start, stop, etc...) for specifying the local and FTP directories. The only documentation on this is what I found at: http://www.py2exe.org/old/ "Optionally, you can specify a 'cmdline-style' attribute to py2exe, with valid values being 'py2exe' (the default), 'pywin32' or 'custom'. 'py2exe' specifies the traditional command-line always supported by py2exe. 'pywin32' supports the exact same command-line arguments as pywin32 supports (ie, the same arguments supported when running the service from the .py file.) 'custom' means that your module is expected to provide a 'HandleCommandLine' function which is responsible for all command-line handling." Any help would be appreciated in getting pointed in the right direction. Please let me know if any code is needed for clarity. Thanks, Zach A: here is a nice example of how to make a service with a custom HandleCommandLine classmethod -- it's part of pyro but has no dependencies on pyro, rather it's a utility "abstract base class" that you can subclass and get a service going with minimum fuss by just setting a few things in your subclass. For your specific needs, you can use it as a template to copy and edit to get the command line handling that you want!
Python Service Custom Command Arguments
I am currently working on a python program which runs as a windows service using win32service and win32serviceutil. The service runs as it should and even after using py2exe, everything is fine (the service monitors target folder(s) and automatically FTP's newly created files to specified FTP location). I would like, however, to add some command line arguments (in addition to install, remove, start, stop, etc...) for specifying the local and FTP directories. The only documentation on this is what I found at: http://www.py2exe.org/old/ "Optionally, you can specify a 'cmdline-style' attribute to py2exe, with valid values being 'py2exe' (the default), 'pywin32' or 'custom'. 'py2exe' specifies the traditional command-line always supported by py2exe. 'pywin32' supports the exact same command-line arguments as pywin32 supports (ie, the same arguments supported when running the service from the .py file.) 'custom' means that your module is expected to provide a 'HandleCommandLine' function which is responsible for all command-line handling." Any help would be appreciated in getting pointed in the right direction. Please let me know if any code is needed for clarity. Thanks, Zach
[ "here is a nice example of how to make a service with a custom HandleCommandLine classmethod -- it's part of pyro but has no dependencies on pyro, rather it's a utility \"abstract base class\" that you can subclass and get a service going with minimum fuss by just setting a few things in your subclass. For your specific needs, you can use it as a template to copy and edit to get the command line handling that you want!\n" ]
[ 3 ]
[]
[]
[ "py2exe", "python", "windows_services" ]
stackoverflow_0001640255_py2exe_python_windows_services.txt
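A rough skeleton of custom install options, if I recall the pywin32 API correctly (customInstallOptions/customOptionHandler); MyFtpService stands in for the existing service class and the option letters are assumptions:

import win32serviceutil

def custom_options(opts):
    # receives the extra (option, value) pairs parsed at install time;
    # a real service would persist these, e.g. in the registry
    for opt, val in opts:
        if opt == '-d':
            print 'local directory:', val
        elif opt == '-f':
            print 'FTP target:', val

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(
        MyFtpService,                   # your existing service class
        customInstallOptions='d:f:',    # getopt-style: -d <dir> -f <ftp>
        customOptionHandler=custom_options)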
Q: Android: Java v. Python Is there any reason to favor Python or Java over the other for developing on Android phones, other than the usual Python v. Java issues? A: Java is "more native" on the Android platform; Python is coming after and striving to get parity but not quite there yet AFAIK. Roughly the reverse situation wrt App Engine, where Python's been around for a year longer than Java and so is still more mature and complete (even though Java's catching up). So, in any situation where you'd be at all undecided between Java and Python if the deployment was due to happen on some general purpose platform such as Linux, I think the maturity and completeness arguments could sway you towards Python for deployment on App Engine, and towards Java for deployment on Android. A: On the mobile platform performance and memory usage are much more critical than desktop or server. The JVM that runs on Android is highly optimized for the mobile platform. Based on the links I have seen about Python on Android none of them seem to have an optimized VM for mobile platform. A: With Java you have access to the full OS API. Python on Android, last time I checked, was kind of a hack. You couldn't create a GUI app, for example. There seems to some progress on the Python front on the last few months.
Android: Java v. Python
Is there any reason to favor Python or Java over the other for developing on Android phones, other than the usual Python v. Java issues?
[ "Java is \"more native\" on the Android platform; Python is coming after and striving to get parity but not quite there yet AFAIK. Roughly the reverse situation wrt App Engine, where Python's been around for a year longer than Java and so is still more mature and complete (even though Java's catching up).\nSo, in any situation where you'd be at all undecided between Java and Python if the deployment was due to happen on some general purpose platform such as Linux, I think the maturity and completeness arguments could sway you towards Python for deployment on App Engine, and towards Java for deployment on Android.\n", "On the mobile platform performance and memory usage are much more critical than desktop or server. The JVM that runs on Android is highly optimized for the mobile platform. Based on the links I have seen about Python on Android none of them seem to have an optimized VM for mobile platform. \n", "With Java you have access to the full OS API.\nPython on Android, last time I checked, was kind of a hack. You couldn't create a GUI app, for example. \nThere seems to some progress on the Python front on the last few months.\n" ]
[ 11, 2, 1 ]
[]
[]
[ "android", "java", "python" ]
stackoverflow_0001640806_android_java_python.txt
Q: Why can't Python find my path? (django) import sys sys.path.append('/home/myuser/svn-repos/myproject') from myproject.settings import * But, it says module not found when I run the script? By the way, settings.py has been set up and manage.py syncdb works. A: You want sys.path.append('/home/myuser/svn-repos') instead. Then when you import myproject, it looks in svn-repos for the myproject folder, and looks in that for settings. Alternatively, leave it as is and just import settings. This is less good because it's less specific and you may end up importing something other than what you intend. You may also want to consider sys.path.insert(0, 'yourpath') because python searches that list from the beginning, so whatever you put at the front takes precedence, solving the aforementioned settings problem. A: Try: import sys sys.path.append('/home/myuser/svn-repos/myproject') from settings import * Note that from settings import * makes it difficult to track down where imported variables come from. It is not good practice, so avoid it if you can.
Why can't Python find my path? (django)
import sys sys.path.append('/home/myuser/svn-repos/myproject') from myproject.settings import * But, it says module not found when I run the script? By the way, settings.py has been set up and manage.py syncdb works.
[ "You want sys.path.append('/home/myuser/svn-repos') instead. Then when you import myproject, it looks in svn-repos for the myproject folder, and looks in that for settings.\nAlternatively, leave it as is and just import settings. This is less good because it's less specific and you may end up importing something other than what you intend.\nYou may also want to consider sys.path.insert(0, 'yourpath') because python starts at the beginning of that dict and works backwards, so whatever you put at the front takes precedence, solving the aforementioned settings problem.\n", "Try:\nimport sys\nsys.path.append('/home/myuser/svn-repos/myproject')\nfrom settings import *\n\nNote that \nfrom settings import *\n\nmakes it difficult to track down where imported variables come from. It is not a good practive if you can avoid it.\n" ]
[ 3, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001641393_django_python.txt
Q: How to permanently append a path to Python for Linux? I know there are multiple solutions online, but some are for Windows, some use environment variables, etc. What is the best way? A: Find your site-packages directory and create a new file called myproj.pth Inside that file, put each directory you want to add, one per line like so: /home/myuser/svn-repos/myproject /home/myuser/svn-repos/SomeOtherProject Python loads *.pth files every time it runs, and adds the paths listed in each of those files to sys.path. In Ubuntu, that directory can often be found at /usr/local/lib/python2.6/dist-packages/ A: I personally just define PYTHONPATH in ~/.bashrc. As for what's the "best" approach? I think that's hard to answer (or rather, there's no correct answer). It depends.
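A quick sketch of both suggestions; the .pth path shown is the Ubuntu location from the first answer, and the shell line is one illustrative way to set PYTHONPATH in ~/.bashrc:

# /usr/local/lib/python2.6/dist-packages/myproj.pth -- one directory per line
/home/myuser/svn-repos/myproject
/home/myuser/svn-repos/SomeOtherProject

# alternatively, in ~/.bashrc:
export PYTHONPATH="$HOME/svn-repos/myproject:$PYTHONPATH"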
How to permanently append a path to Python for Linux?
I know there are multiple solutions online, but some are for Windows, some use environment variables, etc. What is the best way?
[ "Find your site-packages directory and create a new file called myproj.pth\nInside that file, put each directory you want to add, one per line like so:\n\n/home/myuser/svn-repos/myproject\n/home/myuser/svn-repos/SomeOtherProject\n\nPython loads *.pth every time it runs, and imports all paths in each of those files. \nIn Ubuntu, that directory can often be found at\n/usr/local/lib/python2.6/dist-packages/\n", "I personally just define PYTHONPATH in ~/.bashrc, as for what's the \"best\" approach? I think that's hard to answer (or rather, there's no correct answer). It depends.\n" ]
[ 11, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001641418_python.txt
Q: Checking arguments in numerical Python code I find myself writing the same argument checking code all the time for number-crunching: def myfun(a, b): if a < 0: raise ValueError('a cannot be < 0 (was a=%s)' % a) # more if.. raise exception stuff here ... return a + b Is there a better way? I was told not to use 'assert' for these things (though I don't see the problem, apart from not knowing the value of the variable that caused the error). edit: To clarify, the arguments are usually just numbers and the error checking conditions can be complex, non-trivial and will not necessarily lead to an exception later, but simply to a wrong result. (unstable algorithms, meaningless solutions etc) A: assert gets optimized away if you run with python -O (modest optimizations, but sometimes nice to have). One preferable alternative if you have patterns that often repeat may be to use decorators -- great way to factor out repetition. E.g., say you have a zillion functions that must be called with arguments by-position (not by-keyword) and must have their first arguments positive; then...: def firstargpos(f): def wrapper(first, *args): if first < 0: raise ValueError(whateveryouwish) return f(first, *args) return wrapper then you say something like: @firstargpos def myfun(a, b): ... and the checks are performed in the decorators (or rather the wrapper closure it returns) once and for all. So, the only tricky part is figuring out exactly what checks your functions need and how best to call the decorator(s) to express those (hard to say, without seeing the set of functions you're defining and the set of checks each needs!-). Remember, DRY ("Don't Repeat Yourself") is close to the top spot among guiding principles in software development, and Python has reasonable support to allow you to implement DRY and avoid boilerplatey, repetitious code!-) A: You don't want to use assert because your code can be run (and is by default on some systems) in such a way that assert lines are not checked and do not raise errors (-O command line flag). If you're using a lot of variables that are all supposed to have those same properties, why not subclass whatever type you're using and add that check to the class itself? Then when you use your new class, you know you never have an invalid value, and don't have to go checking for it all over the place.
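A slightly generalized variant of the decorator idea from the first answer; the predicate and error message become parameters so one factory covers many checks (names here are illustrative, not from the thread):

def validate_first(predicate, message):
    def decorator(f):
        def wrapper(first, *args, **kwargs):
            if not predicate(first):
                raise ValueError(message % (first,))
            return f(first, *args, **kwargs)
        return wrapper
    return decorator

@validate_first(lambda a: a >= 0, 'a cannot be < 0 (was a=%s)')
def myfun(a, b):
    return a + b

Unlike assert, the raise here survives python -O.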
Checking arguments in numerical Python code
I find myself writing the same argument checking code all the time for number-crunching: def myfun(a, b): if a < 0: raise ValueError('a cannot be < 0 (was a=%s)' % a) # more if.. raise exception stuff here ... return a + b Is there a better way? I was told not to use 'assert' for these things (though I don't see the problem, apart from not knowing the value of the variable that caused the error). edit: To clarify, the arguments are usually just numbers and the error checking conditions can be complex, non-trivial and will not necessarily lead to an exception later, but simply to a wrong result. (unstable algorithms, meaningless solutions etc)
[ "assert gets optimized away if you run with python -O (modest optimizations, but sometimes nice to have). One preferable alternative if you have patterns that often repeat may be to use decorators -- great way to factor out repetition. E.g., say you have a zillion functions that must be called with arguments by-position (not by-keyword) and must have their first arguments positive; then...:\ndef firstargpos(f):\n def wrapper(first, *args):\n if first < 0:\n raise ValueError(whateveryouwish)\n return f(first, *args)\n return wrapper\n\nthen you say something like:\n@firstargpos\n def myfun(a, b):\n ...\nand the checks are performed in the decorators (or rather the wrapper closure it returns) once and for all. So, the only tricky part is figuring out exactly what checks your functions need and how best to call the decorator(s) to express those (hard to say, without seeing the set of functions you're defining and the set of checks each needs!-). Remember, DRY (\"Don't Repeat Yourself\") is close to the top spot among guiding principles in software development, and Python has reasonable support to allow you to implement DRY and avoid boilerplatey, repetitious code!-)\n", "You don't want to use assert because your code can be run (and is by default on some systems) in such a way that assert lines are not checked and do not raise errors (-O command line flag).\nIf you're using a lot of variables that are all supposed to have those same properties, why not subclass whatever type you're using and add that check to the class itself? Then when you use your new class, you know you never have an invalid value, and don't have to go checking for it all over the place.\n" ]
[ 4, 0 ]
[ "I'm not sure if this will answer your question, but it strikes me that checking a lot of arguments at the start of a function isn't very pythonic.\nWhat I mean by this is that it is the assumption of most pythonistas that we are all consenting adults, and we trust each other not to do something stupid. Here's how I'd write your example:\ndef myfun(a, b):\n '''a cannot be < 0'''\n return a + b\n\nThis has three distinct advantages. First off, it's concise, there's really no extra code doing anything unrelated to what you're actually trying to get done. Second, it puts the information exactly where it belongs, in help(myfun), where pythonistas are expected to look for usage notes. Finally, is a non-positive value for a really an error? Although you might think so, unless something definitely will break if a is zero (here it probably wont), then maybe letting it slip through and cause an error up the call stream is wiser. after all, if a + b is in error, it raises an exception which gets passed up the call stack and behavior is still pretty much the same.\n" ]
[ -1 ]
[ "arguments", "assert", "exception", "python" ]
stackoverflow_0001641591_arguments_assert_exception_python.txt
Q: Manual garbage collection in Python Is there any way to manually remove an object which the garbage collection refuses to get rid of even when I call gc.collect()? Working in Python 3.0 A: Per the docs, gc.get_referrers(thatobject) will tell you why the object is still alive (do it right after a gc.collect() to make sure the undesired "liveness" is gonna be persistent). After that, it's somewhat of a black art;-). You'll often find that some of the referrers are lists (so WHY is that list referring to thatobject? you can .remove it in an emergency mode, but making the normal code sound is better...), and, even more often, dicts (many of which may be __dict__s of some class instance or other -- often not trivial to find out which one... again, brute-force removal is sometimes an expedient emergency solution, but never a sustainable long-range one!-). A: If the GC is refusing to destroy it, it's because you have a reference to it somewhere. Get rid of the reference and it will (eventually) go. For example: myRef = None Keep in mind that GC may not necessarily destroy your object unless it needs to. If your object is holding resources not under the management of Python (e.g., some trickery with C code called from Python), the object should provide a resource release call so you can do it when you want rather than when Python decides. A: del or None are your only friends >>> a = "Hello" >>> a = None Or >>> del a A: It depends on what your Python is running on. Here's a good article that explains the details. Quoting: In current releases of CPython, each new assignment to x inside the loop will release the previously allocated resource. Using GC, this is not guaranteed. If you want to write code that will work with any Python implementation, you should explicitly close the resource; this will work regardless of GC: for name in big_list: x = Resource() do something with x x.close()
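A short sketch of the referrer hunt described in the first answer; Leaky is a stand-in for whatever class refuses to die:

import gc

class Leaky(object):
    pass

obj = Leaky()
gc.collect()                          # collect first, so remaining referrers are the persistent ones
for referrer in gc.get_referrers(obj):
    # each referrer is a list, dict, frame, etc. that keeps obj alive
    print(type(referrer), repr(referrer)[:80])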
Manual garbage collection in Python
Is there any way to manually remove an object which the garbage collection refuses to get rid of even when I call gc.collect()? Working in Python 3.0
[ "Per the docs, gc.get_referrers(thatobject) will tell you why the object is still alive (do it right after a gc.collect() to make sure the undesired \"liveness\" is gonna be persistent). After that, it's somehow of a black art;-). You'll often find that some of the referrers are lists (so WHY is that list referring to thatobject? you can .remove it in an emergency mode, but making the normal code sound is better...), and, even more often, dicts (many of whose may be __dict__s of some class instance or other -- often not trivial to find out which one... again, brute-force removal is sometimes an expedient emergency solution, but never a sustainable long-range one!-).\n", "If the GC is refusing to destroy it, it's because you have a reference to it somewhere. Get rid of the reference and it will (eventually) go. For example:\nmyRef = None\n\nKeep in mind that GC may not necessarily destroy your object unless it needs to.\nIf your object is holding resources not under the management of Python (e.g., some trickery with C code called from Python), the object should provide a resource release call so you can do it when you want rather than when Python decides.\n", "del Or None are your only friends\n>>> a = \"Hello\"\n>>> a = None\nOr\n>>> del a\n\n", "It depends on what your Python is running on. Here's good article that explains the details\nQuoting:\nIn current releases of CPython, each new assignment to x inside the loop will release the previously allocated resource. Using GC, this is not guaranteed. If you want to write code that will work with any Python implementation, you should explicitly close the resource; this will work regardless of GC:\nfor name in big_list:\n x = Resource()\n do something with x\n x.close()\n\n" ]
[ 28, 4, 4, 1 ]
[]
[]
[ "garbage_collection", "python" ]
stackoverflow_0001641717_garbage_collection_python.txt
Q: extract grammar features from sentence on Google App Engine For my GAE app I need to do some natural language processing to extract the subject and object from an input sentence. Apparently NLTK can't be installed (easily) on GAE so I am looking for another solution. I noticed GAE comes with Antlr3 but from browsing their documentation it solves a different kind of grammar problem. Any ideas? A: With regards to the NLTK problem specifically, my solution would probably be to fix the weird imports that NLTK is doing, and use that as originally planned. When you're done, submit a patch of course. That said, if this ultimately involves touching the data store, the answer is that it probably can't be done in a performant way, unless your data set is small or for some reason your NLP stuff doesn't need to hit some kind of full-text index. The GAE guys are working on it, but they have indicated that no one should be expecting a quick resolution to this particular issue. A: You can easily build an NLTK RPC server on some machine and access it. Another option is to find another web-based service that already does that (such as OpenCalais).
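A minimal sketch of the RPC-server idea, assuming NLTK (plus its tokenizer/tagger data) is installed on the remote machine; the module names are Python 2 stdlib, and the port is arbitrary:

# server.py -- runs on the machine that has NLTK installed
from SimpleXMLRPCServer import SimpleXMLRPCServer
import nltk

def tag(sentence):
    # returns [(word, part_of_speech), ...] for the GAE app to inspect
    return nltk.pos_tag(nltk.word_tokenize(sentence))

server = SimpleXMLRPCServer(('0.0.0.0', 8000))
server.register_function(tag)
server.serve_forever()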
extract grammar features from sentence on Google App Engine
For my GAE app I need to do some natural language processing to extract the subject and object from an input sentence. Apparently NLTK can't be installed (easily) on GAE so I am looking for another solution. I noticed GAE comes with Antlr3 but from browsing their documentation it solves a different kind of grammar problem. Any ideas?
[ "With regards to the NLTK problem specifically, my solution would probably be to fix the weird imports that NLTK is doing, and use that as originally planned. When you're done, submit a patch of course.\nThat said, if this ultimately involves touching the data store, the answer is that it probably can't be done in a performant way, unless your data set is small or for some reason your NLP stuff doesn't need to hit some kind of full-text index. The GAE guys are working on it, but they have indicated that no one should be expecting a quick resolution to this particular issue.\n", "You can easily build and NTLK RPC server on some machine and access it.\nAnother option is to find another web based service that already does that (such as opencalais).\n" ]
[ 1, 1 ]
[]
[]
[ "antlr3", "google_app_engine", "nlp", "python" ]
stackoverflow_0001641635_antlr3_google_app_engine_nlp_python.txt
Q: I'm a python beginner, dictionary is new Given dictionaries, d1 and d2, create a new dictionary with the following property: for each entry (a, b) in d1, if there is an entry (b, c) in d2, then the entry (a, c) should be added to the new dictionary. How to think of the solution? A: def transitive_dict_join(d1, d2): result = dict() for a, b in d1.iteritems(): if b in d2: result[a] = d2[b] return result You can express this more concisely, of course, but I think that, for a beginner, spelling things out is clearer and more instructive. A: I agree with Alex on the need to spell things out as a novice, and to move to more concise/abstract/dangerous constructs later on. For the record I'm placing here a list comprehension version as Paul's doesn't seem to work. >>> d1 = {'a':'alpha', 'b':'bravo', 'c':'charlie', 'd':'delta'} >>> d2 = {'alpha':'male', 'delta':'faucet', 'echo':'in the valley'} >>> d3 = dict([(x, d2[d1[x]]) for x in d1.keys() if d2.has_key(d1[x])]) #.keys() is optional, cf notes >>> d3 {'a': 'male', 'd': 'faucet'} In a nutshell, the line with "d3 =" says the following: d3 is a new dict object made from all the pairs made of x, the key of d1 and d2[d1[x]] (above are respectively the "a"s and the "c"s in the problem) where x is taken from all the keys of d1 (the "a"s in the problem) if d2 has indeed a key equal to d1[x] (above condition avoids the key errors when getting d2[d1[x]]) A: #!/usr/local/bin/python3.1 b = { 'aaa' : '[email protected]', 'bbb' : '[email protected]', 'ccc' : '[email protected]' } a = {'a':'aaa', 'b':'bbb', 'c':'ccc'} c = {} for x in a.keys(): if a[x] in b: c[x] = b[a[x]] print(c) Output: {'a': '[email protected]', 'c': '[email protected]', 'b': '[email protected]'}
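On Python 2.7 and later (including 3.x), the same join can be written as a dict comprehension; this is an equivalent sketch of the first answer, not code from the original thread:

d3 = {a: d2[b] for a, b in d1.items() if b in d2}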
I'm a python beginner, dictionary is new
Given dictionaries, d1 and d2, create a new dictionary with the following property: for each entry (a, b) in d1, if there is an entry (b, c) in d2, then the entry (a, c) should be added to the new dictionary. How to think of the solution?
[ "def transitive_dict_join(d1, d2):\n result = dict()\n for a, b in d1.iteritems():\n if b in d2:\n result[a] = d2[b]\n return result\n\nYou can express this more concisely, of course, but I think that, for a beginner, spelling things out is clearer and more instructive.\n", "I agree with Alex, on the need of spelling things out as a novice, and to move to more concise/abstract/dangerous constructs later on.\nFor the record I'm placing here a list comprehension version as Paul's doesn't seem to work.\n>>> d1 = {'a':'alpha', 'b':'bravo', 'c':'charlie', 'd':'delta'}\n>>> d2 = {'alpha':'male', 'delta':'faucet', 'echo':'in the valley'}\n>>> d3 = dict([(x, d2[d1[x]]) for x in d1**.keys() **if d2.has_key(d1[x])]) #.keys() is optional, cf notes\n>>> d3\n{'a': 'male', 'd': 'faucet'}\n\nIn a nutshell, the line with \"d3 =\" says the following:\n\n d3 is a new dict object made from\n all the pairs\n made of x, the key of d1 and d2[d1[x]] \n (above are respectively the \"a\"s and the \"c\"s in the problem)\n where x is taken from all the keys of d1 (the \"a\"s in the problem)\n if d2 has indeed a key equal to d1[x]\n (above condition avoids the key errors when getting d2[d1[x]])\n\n", "#!/usr/local/bin/python3.1\nb = { 'aaa' : '[email protected]',\n 'bbb' : '[email protected]',\n 'ccc' : '[email protected]'\n }\na = {'a':'aaa', 'b':'bbb', 'c':'ccc'}\nc = {} \n\nfor x in a.keys():\n if a[x] in b:\n c[x] = b[a[x]]\n\nprint(c)\n\nOutPut:\n{'a': '[email protected]', 'c': '[email protected]', 'b': '[email protected]'}\n" ]
[ 6, 4, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001641612_python.txt
Q: Regular expression to match alphanumeric string If string "x" contains any letter or number, print that string. How to do that using regular expressions? The code below is wrong if re.search('^[A-Z]?[a-z]?[0-9]?', i): print i A: re — Regular expression operations This question is actually rather tricky. Unfortunately \w includes _ and [a-z] solutions assume a 26-letter alphabet. With the below solution please read the pydoc where it talks about LOCALE and UNICODE. "[^_\\W]" Note that since you are only testing for existence, no quantifiers need to be used -- and in fact, using quantifiers that may match 0 times will return false positives. A: You want if re.search('[A-Za-z0-9]+', i): print i A: I suggest that you check out RegexBuddy. It can explain regexes well. A: [A-Z]?[a-z]?[0-9]? matches an optional upper case letter, followed by an optional lower case letter, followed by an optional digit. So, it also matches an empty string. What you're looking for is this: [a-zA-Z0-9] which will match a single digit, lower- or upper case letter. And if you need to check for letters (and digits) outside of the ascii range, use this if your regex flavour supports it: [\p{L}\p{N}]. Where \p{L} matches any letter and \p{N} any number. A: don't need regex. >>> a="abc123" >>> if True in map(str.isdigit,list(a)): ... print a ... abc123 >>> if True in map(str.isalpha,list(a)): ... print a ... abc123 >>> a="##@%$#%#^!" >>> if True in map(str.isdigit,list(a)): ... print a ... >>> if True in map(str.isalpha,list(a)): ... print a ...
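A small demonstration of why the original pattern is wrong and the plain character class works (sample strings are illustrative):

import re

print(bool(re.search('^[A-Z]?[a-z]?[0-9]?', '!!!')))  # True: every '?' can match zero times
print(bool(re.search('[a-zA-Z0-9]', '!!!')))          # False: no letter or digit present
print(bool(re.search('[a-zA-Z0-9]', 'ab#1')))         # True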
Regular expression to match alphanumeric string
If string "x" contains any letter or number, print that string. How to do that using regular expressions? The code below is wrong if re.search('^[A-Z]?[a-z]?[0-9]?', i): print i
[ "re — Regular expression operations\nThis question is actually rather tricky. Unfortunately \\w includes _ and [a-z] solutions assume a 26-letter alphabet. With the below solution please read the pydoc where it talks about LOCALE and UNICODE.\n\"[^_\\\\W]\"\n\nNote that since you are only testing for existence, no quantifiers need to be used -- and in fact, using quantifiers that may match 0 times will returns false positives.\n", "You want\nif re.search('[A-Za-z0-9]+', i):\n print i\n\n", "I suggest that you check out RegexBuddy. It can explain regexes well.\n\n\n\n\n", "[A-Z]?[a-z]?[0-9]? matches an optional upper case letter, followed by an optional lower case letter, followed by an optional digit. So, it also matches an empty string. What you're looking for is this: [a-zA-Z0-9] which will match a single digit, lower- or upper case letter. \nAnd if you need to check for letter (and digits) outside of the ascii range, use this if your regex flavour supports it: [\\p{L}\\p{N}]. Where \\p{L} matches any letter and \\p{N} any number.\n", "don't need regex. \n>>> a=\"abc123\"\n>>> if True in map(str.isdigit,list(a)):\n... print a\n...\nabc123\n>>> if True in map(str.isalpha,list(a)):\n... print a\n...\nabc123\n>>> a=\"##@%$#%#^!\"\n>>> if True in map(str.isdigit,list(a)):\n... print a\n...\n>>> if True in map(str.isalpha,list(a)):\n... print a\n...\n\n" ]
[ 4, 2, 2, 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001642018_python_regex.txt
Q: Why do I have so many DeadlineExceededErrors with google-app-engine-django? I'm using google-app-engine-django to run Django 1.1 on Google App Engine and I'm getting lots and lots of DeadlineExceededErrors, sometimes with . My entire app is quite simple, and it's happening throughout my app, so I suspect that there is a problem with my basic settings. Any advice would be greatly appreciated! Sample error: <class 'google.appengine.runtime.DeadlineExceededError'>: Traceback (most recent call last): File "/base/data/home/apps/coffeeshopprapp/1.337356339816540588/main.py", line 38, in <module> import django.core.handlers.wsgi File "/base/python_lib/versions/third_party/django-1.1/django/core/handlers/wsgi.py", line 11, in <module> from django.core.urlresolvers import set_script_prefix File "/base/python_lib/versions/third_party/django-1.1/django/core/urlresolvers.py", line 8, in <module> """ app.yaml application: coffeeshopprapp version: 1 runtime: python api_version: 1 handlers: - url: /media static_dir: media - url: /admin.* script: main.py login: admin - url: /.* script: main.py main.py import os import sys import logging from appengine_django import InstallAppengineHelperForDjango InstallAppengineHelperForDjango() from appengine_django import have_django_zip from appengine_django import django_zip_path # Google App Engine imports. from google.appengine.ext.webapp import util # Import the part of Django that we use here. import django.core.handlers.wsgi def main(): # Ensure the Django zipfile is in the path if required. if have_django_zip and django_zip_path not in sys.path: sys.path.insert(1, django_zip_path) # Create a Django application for WSGI. application = django.core.handlers.wsgi.WSGIHandler() # Run the WSGI CGI handler with that application. util.run_wsgi_app(application) if __name__ == '__main__': main() settings.py import os DEBUG = True TEMPLATE_DEBUG = DEBUG ADMINS = ( # ('Your Name', '[email protected]'), ) MANAGERS = ADMINS DATABASE_ENGINE = 'appengine' DATABASE_NAME = '' DATABASE_USER = '' DATABASE_PASSWORD = '' DATABASE_HOST = '' DATABASE_PORT = '' TIME_ZONE = 'UTC' LANGUAGE_CODE = 'en-us' SITE_ID = 1 USE_I18N = True MEDIA_ROOT = '' MEDIA_URL = '' ADMIN_MEDIA_PREFIX = '/media/' SECRET_KEY = 'some_secret_key' EMAIL_HOST = '' TEMPLATE_LOADERS = ( 'django.template.loaders.filesystem.load_template_source', 'django.template.loaders.app_directories.load_template_source', ) MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware' ) TEMPLATE_CONTEXT_PROCESSORS = ( 'django.core.context_processors.debug', 'django.core.context_processors.i18n', ) ROOT_URLCONF = 'urls' ROOT_PATH = os.path.dirname(__file__) TEMPLATE_DIRS = ( os.path.join(ROOT_PATH, 'templates') ) INSTALLED_APPS = ( 'appengine_django', 'coffeeshoppr' # my app ) A: This is a known bug that occurs intermittently for some apps. We're working on fixing it ASAP.
Why do I have so many DeadlineExceededErrors with google-app-engine-django?
I'm using google-app-engine-django to run Django 1.1 on Google App Engine and I'm getting lots and lots of DeadlineExceededErrors, sometimes with . My entire app is quite simple, and it's happening throughout my app, so I suspect that there is a problem with my basic settings. Any advice would be greatly appreciated! Sample error: <class 'google.appengine.runtime.DeadlineExceededError'>: Traceback (most recent call last): File "/base/data/home/apps/coffeeshopprapp/1.337356339816540588/main.py", line 38, in <module> import django.core.handlers.wsgi File "/base/python_lib/versions/third_party/django-1.1/django/core/handlers/wsgi.py", line 11, in <module> from django.core.urlresolvers import set_script_prefix File "/base/python_lib/versions/third_party/django-1.1/django/core/urlresolvers.py", line 8, in <module> """ app.yaml application: coffeeshopprapp version: 1 runtime: python api_version: 1 handlers: - url: /media static_dir: media - url: /admin.* script: main.py login: admin - url: /.* script: main.py main.py import os import sys import logging from appengine_django import InstallAppengineHelperForDjango InstallAppengineHelperForDjango() from appengine_django import have_django_zip from appengine_django import django_zip_path # Google App Engine imports. from google.appengine.ext.webapp import util # Import the part of Django that we use here. import django.core.handlers.wsgi def main(): # Ensure the Django zipfile is in the path if required. if have_django_zip and django_zip_path not in sys.path: sys.path.insert(1, django_zip_path) # Create a Django application for WSGI. application = django.core.handlers.wsgi.WSGIHandler() # Run the WSGI CGI handler with that application. util.run_wsgi_app(application) if __name__ == '__main__': main() settings.py import os DEBUG = True TEMPLATE_DEBUG = DEBUG ADMINS = ( # ('Your Name', '[email protected]'), ) MANAGERS = ADMINS DATABASE_ENGINE = 'appengine' DATABASE_NAME = '' DATABASE_USER = '' DATABASE_PASSWORD = '' DATABASE_HOST = '' DATABASE_PORT = '' TIME_ZONE = 'UTC' LANGUAGE_CODE = 'en-us' SITE_ID = 1 USE_I18N = True MEDIA_ROOT = '' MEDIA_URL = '' ADMIN_MEDIA_PREFIX = '/media/' SECRET_KEY = 'some_secret_key' EMAIL_HOST = '' TEMPLATE_LOADERS = ( 'django.template.loaders.filesystem.load_template_source', 'django.template.loaders.app_directories.load_template_source', ) MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware' ) TEMPLATE_CONTEXT_PROCESSORS = ( 'django.core.context_processors.debug', 'django.core.context_processors.i18n', ) ROOT_URLCONF = 'urls' ROOT_PATH = os.path.dirname(__file__) TEMPLATE_DIRS = ( os.path.join(ROOT_PATH, 'templates') ) INSTALLED_APPS = ( 'appengine_django', 'coffeeshoppr' # my app )
[ "This is a known bug that occurs intermittently for some apps. We're working on fixing it ASAP.\n" ]
[ 3 ]
[]
[]
[ "django", "google_app_engine", "python" ]
stackoverflow_0001639561_django_google_app_engine_python.txt
Q: Why do so many apps/frameworks keep their configuration files in an un-executed format? Many frameworks keep their configuration files in a language different from the rest of the program. E.g., App Engine keeps the configuration in yaml format. To compare, Django settings.py is a python module. There are many disadvantages I can see with this. If it's in the same language as the rest of the program, I can Do interesting things in the configuration file. MEDIA_DIR = os.path.join(os.path.dir(__file__), 'media') #Or whatever the correct calls are, you get the idea. Don't have to learn a new (admittedly lightweight) format My tools work as expected with it. I can just do import conf etc. I can see the advantages if it were a heavyweight language like C/C++ etc, but for Python, why does it make sense? It just seems like taking away power without adding any benefits. A: Some framework designers feel that the configuration files are inappropriate places for heavy logic. Just as the MVC framework prevents you from putting logic where it does not belong, the configuration file prevents you from putting programming where it does not belong. It's a matter of taste and philosophy. That said, I prefer Django's method. A: Python may not always be the only language that App Engine runs on. So the same yaml configuration file could drive an appengine app written in, for example, java or perl A: Sometimes you need to use an automatic/GUI tool to parse and/or generate and/or modify a configuration file. This is not easy if your config file is a python script. A: There is a very good reason: if your program is distributed in an unsafe environment, like the user's computer, executing a text file which is so easy to modify leaves the door open to many viruses. This is less the case with a django application where the application is hosted on the server - a safe environment. But with an application distributed on windows using py2exe, you should refrain from making your program execute random stuff. Another reason for using a syntax like YAML is that you can manipulate the file using other tools, even other languages; the format is portable and documented enough. That said, when I need to have a configuration file with a python program, I use a Python dictionary with a few security measures: remove the enclosing { } so that it does not eval directly to a python expression use of safe_eval to discard any executable item.
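A sketch of the "Python dictionary with safe evaluation" idea from the last answer, using the stdlib's ast.literal_eval (available since Python 2.6) as the safe evaluator; the file name is hypothetical:

import ast

# settings.conf holds only literals, e.g. {'MEDIA_DIR': '/var/www/media', 'DEBUG': False}
with open('settings.conf') as f:
    config = ast.literal_eval(f.read())   # raises ValueError on anything executable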
Why do so many apps/frameworks keep their configuration files in an un-executed format?
Many frameworks keep their configuration files in a language different from the rest of the program. E.g., App Engine keeps the configuration in yaml format. To compare, Django settings.py is a python module. There are many disadvantages I can see with this. If it's in the same language as the rest of the program, I can Do interesting things in the configuration file. MEDIA_DIR = os.path.join(os.path.dir(__file__), 'media') #Or whatever the correct calls are, you get the idea. Don't have to learn a new (admittedly lightweight) format My tools work as expected with it. I can just do import conf etc. I can see the advantages if it were a heavyweight language like C/C++ etc, but for Python, why does it make sense? It just seems like taking away power without adding any benefits.
[ "Some framework designers feel that the configuration files are inappropriate places for heavy logic. Just as the MVC framework prevents you from putting logic where it does not belong, the configuration file prevents you from putting programming where it does not belong.\nIt's a matter of taste and philosophy. \nThat said, I prefer Django's method.\n", "Python may not always be the only language that appengine runs on.\nSo the same yaml configuration file could drive an appengine app written in, for example, java or perl\n", "Sometimes you need to use an automatic/GUI tool to parse and/or generate and/or modify a configuration file. This is not easy if your conffile is a python script.\n", "There is a very good reason: if your program is distributed in an unsafe environment, like the user computer, executing a text file which is so easy to modify is the door open to many viruses. This is less the case with a django application where the application is hosted on the server - a safe environment. But with an application distributed on windows using py2exe, you should refrain to make your program execute random stuff.\nAnother reason for using a syntax like YAML is that you can manipulate the file using other tools, even other languages, the format is portable and documented enough.\nThat said, when I need to have a configuration file with a python program, I use a python dictionnary with a few security measures:\n\nremove the enclosing { } so that it does not eval directly to a python expression\nuse of safe_eval to discard any executable item.\n\n" ]
[ 12, 5, 5, 2 ]
[ "It probably just didn't occur to them that they could do it. Many programmers are from the old days where scripting languages were slow and not really more simple than the programming languages (just look at things like Unix shells). When nifty dynamic languages came along, they just stuck to \"text only config files\" because that's what they always did.\n" ]
[ -1 ]
[ "configuration", "python", "settings", "yaml" ]
stackoverflow_0001642413_configuration_python_settings_yaml.txt
Q: Django Form Validation Framework on AppEngine: How to strip out HTML etc.? I'm using the Django Form Validation Framework on AppEngine (http://code.google.com/appengine/articles/djangoforms.html), like this: data = MyForm(data=self.request.POST) if data.is_valid(): entity = data.save(commit=False) entity.put() I wonder if there's a way to preprocess the POST data (strip out malicious code, HTML etc.) before storing it. It seems that any form validation library should offer something like that, no? Thanks Hannes A: Short answer: form.is_valid() auto-populates a dictionary form.cleaned_data by calling a method called clean(). If you want to do any custom validation, define your own 'clean_field_name' that returns the cleaned field value or raises forms.ValidationError(). On error, the corresponding error on the field is auto-populated. Long answer: Refer to the documentation A: In addition to the answers above, a different perspective: Don't. Store the user input with as little processing as is practical, and sanitize the data on output. Django templates provide filters for this - 'escape' is one that escapes all HTML tags. The advantage of this approach over sanitizing the data at input time is twofold: You can change how you sanitize data at any time without having to 'grandfather in' all your old data, and when a user wants to edit something, you can show them the original data they entered, rather than the 'cleaned up' version. Cleaning up data at the wrong time is also a major cause of things like double-escaping. A: Yes, of course. Have you tried reading the forms documentation? http://docs.djangoproject.com/en/dev/ref/forms/validation/
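A minimal sketch of the clean_<fieldname> hook, shown with a plain Django form for illustration (the question uses an App Engine ModelForm, but the hook works the same way):

from django import forms
from django.utils.html import strip_tags

class MyForm(forms.Form):
    comment = forms.CharField()

    def clean_comment(self):
        # drop HTML tags before the value lands in cleaned_data
        return strip_tags(self.cleaned_data['comment'])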
Django Form Validation Framework on AppEngine: How to strip out HTML etc.?
I'm using the Django Form Validation Framework on AppEngine (http://code.google.com/appengine/articles/djangoforms.html), like this: data = MyForm(data=self.request.POST) if data.is_valid(): entity = data.save(commit=False) entity.put() I wonder if there's a way to preprocess the POST data (strip out malicious code, HTML etc.) before storing it. It seems that any form validation library should offer something like that, no? Thanks Hannes
[ "Short answer:\nforms.is_valid() auto populates a dictionary forms.cleaned_data by calling a method called clean(). If you want to do any custom validation define your own 'clean_filed_name' that returns the cleaned field value or raises forms.ValidationError(). On error, the corresponding error on the field is auto populated.\nLong answer:\nRefer Documentation\n", "In addition to the answers above, a different perspective: Don't. Store the user input with as little processing as is practical, and sanitize the data on output. Django templates provide filters for this - 'escape' is one that escapes all HTML tags.\nThe advantage of this approach over sanitizing the data at input time is twofold: You can change how you sanitize data at any time without having to 'grandfather in' all your old data, and when a user wants to edit something, you can show them the original data they entered, rather than the 'cleaned up' version. Cleaning up data at the wrong time is also a major cause of things like double-escaping.\n", "Yes, of course. Have you tried reading the forms documentation?\nhttp://docs.djangoproject.com/en/dev/ref/forms/validation/\n" ]
[ 2, 1, 0 ]
[]
[]
[ "django", "google_app_engine", "python" ]
stackoverflow_0001422674_django_google_app_engine_python.txt
Q: appengine remote api unable to login When I go to appengine.google.com/a/mydomain.com I am able to log in and see all my apps and administer them. However, when I try to use the remote_api the same username/password does not work. I'm using the interactive console code from http://code.google.com/appengine/articles/remote_api.html A: This is a known issue with Google Accounts authentication. If you created an app and set it to use Google Accounts for authentication, and you yourself use a Google Apps account, you will not be able to authenticate against your app as an administrator using that account, even if you've created a Google Account for that email address. The workaround for now is to create a gmail account and add it as an administrator of your app, using that whenever you need to authenticate against your app as an administrator. You can still use your existing account for deployments and to log in to the admin console. It's a pain, I know - I run into it every day myself. A: I've come across the same issue while developing for GAE on google apps. To make it work, I've created a simple wrapper around a snippet provided by Nick (blog.notdot.net - I can't seem to find the proper reference right now) that has been working for me. You can find it here.
appengine remote api unable to login
When I go to appengine.google.com/a/mydomain.com I am able to log in and see all my apps and administer them. However, when I try to use the remote_api the same username/password does not work. I'm using the interactive console code from http://code.google.com/appengine/articles/remote_api.html
[ "This is a known issue with Google Accounts authentication. If you created an app and set it to use Google Accounts for authentication, and you yourself use a Google Apps account, you will not be able to authenticate against your app as an administrator using that account, even if you've created a Google Account for that email address. The workaround for now is to create a gmail account and add it as an administrator of your app, using that whenever you need to authenticate against your app as an administrator. You can still use your existing account for deployments and to log in to the admin console.\nIt's a pain, I know - I run into it every day myself.\n", "I've come across the same issue while developing for GAE on google apps. To make it work, I've created a simple wrapper around a snippet provided by Nick (blog.notdot.net - I can't seem to find the proper reference right now) that has been working for me. You can find it here.\n" ]
[ 6, 0 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001555469_google_app_engine_python.txt
Q: General programming question. When to use OOP? My program needs to do 2 things. Extract stuff from a webpage. Do stuff with a webpage. However, there are many webpages, such as Twitter and Facebook. Should I do this? def facebookExtract(): code here def twitterExtract(): code here def myspaceExtract(): code here def facebookProcess(): code here def twitterProcess(): code here def myspaceProcess(): code here Or, should I have some sort of class? When is it recommended to use classes, and when is it recommended to just use functions? A: "My program needs to do 2 things." When you start out like that, the objects cannot be seen. Your perspective isn't right. Change your thinking. "My program works with stuff" That's OO thinking. What "stuff" does your program work with? Define the stuff. Those are your basic classes. There's a class for each kind of stuff. "My program gets the stuff from various sources" There's a class for each source. "My program displays the stuff" This is usually a combination of accessor methods of the stuff plus some "reporting" classes that gather parts of the stuff to display it. When you start out defining the "stuff" not the "do", you're doing OO programming. OO applies to everything, since every single program involves "doing" and "stuff". You can choose the "doing" POV (which can be procedural or functional), or you can choose the "stuff" POV (which is object-oriented). A: My favorite rule of thumb: if you're in doubt (unspoken assumption: "and you're a reasonable person rather than a fanatic";-), make and use some classes. I've often found myself refactoring code originally written as simple functions into classes -- for example, any time the simple functions' best way of communicating with each other is with globals, that's a code smell, a strong hint that the system's factoring is not really good -- and often refactoring the OOP way is a reasonable fix for that. Python is multi-paradigm, but its central paradigm is OOP (much like, say, C++). When a procedural or functional approach (maybe through generators &c) is optimal for some part of the system, that generally stands out -- for example, static functions are also a code smell, and if your classes have any substantial amount of those THAT is a hint to refactor things to avoid that requirement. So, assuming you have a rich grasp of all the paradigms Python affords -- if you're STILL in doubt, that suggests you probably want to go OOP for that part of your system! Just because Python supports OOP even more wholly than it supports functional programming and the like. From your very skeletal code, it seems to me that each extract/process pair belongs together and probably needs to communicate state, so a small set of classes with extraction and processing methods seems a natural fit. A: Put as much of the common stuff together in a single function. Once you've factored as much out as possible, build a mechanism for branching to the appropriate function for each website. One possible way to do this is with Python's if/else clauses, but if you have many such functions, you may want something more elegant such as F = __import__('yourproject.facebookmodule') This lets you put the code that's specific for facebook in its own area. Since you pass __import__() a string, you can modify that at runtime based on which site you're accessing, and then just call function F in your generic worker code. More on that here: http://effbot.org/zone/import-confusion.htm A: It's up to you. I personally try to stay away from Java-style classes when programming in Python. Instead, I use dicts and/or simple objects. For instance, after defining these functions (the ones you defined in the question), I'd create a simple dict, maybe like this: { 'facebook' : { 'process' : facebookProcess, 'extract': facebookExtract }, ..... } or, better yet, use introspection to get the process/extract function automatically: def processor(sitename): return getattr(module, sitename + 'Process') def extractor(sitename): return getattr(module, sitename + 'Extractor') Where module is the current module (or the module that has these functions). To get this module as an object: import sys module = sys.modules[__name__] Assuming, of course, that the generic main function does something like this: figure out sitename based on input. get the extractor function for the site get processor function for the site call the extractor then the processor A: You use OOP when it makes sense, when it makes developing the solution quicker and when it makes the end result easier to read, understand and maintain. In this case it might make sense to create a generic Extractor interface/class and then have subclasses for Twitter, MySpace, Facebook, etc., but this really depends on how site-specific the extraction is. The idea of this kind of abstraction is to hide such details. If you can do it, it makes sense. If you can't you probably need a different approach. It may also be that similar benefits can be obtained from good decomposition of a procedural solution. Remember at the end of the day that all these things are just tools. Pick the best one for that particular job rather than picking the hammer and then trying to turn everything into a nail. A: I regularly define classes for solving problems for a few reasons, I'll model an example of my thinking below. I have no compunctions about mixing OO models and procedural styles, that's often more a reflection of your work society than personal religion. It often works to have a procedural facade for a class hierarchy if that's what other maintainers expect. (Please excuse the PHP syntax.) I'm developing strategies and I can follow a generic model. So a possible modeling of your task might involve getting something to chew on URLs passed. This works if you want to simplify the outer logic a lot, and remove conditionals. This shows that I can use DNRY for the gather() method. // batch process method function MunchPages( $list_of_urls ) { foreach( $list_of_urls as $url ) { $muncher = PageMuncher::MuncherForUrl( $url ); $muncher->gather(); $muncher->process(); } } // factory method encaps strategy selection function MuncherForUrl( $url ) { if( strpos( $url, 'facebook.com' )) return new FacebookPageMuncher( $url ); if( ... ) return new .... ; } // common tasks defined in base PageMuncher class PageMuncher { function gather() { /* use some curl or what */ } function process() {} } class FacebookPageMuncher extends PageMuncher { function process() { /* I do it 'this' way for FB */ } } I'm creating a set of routines that are ideally hidden, and better yet, shared. An example of this might be having a class that defines toolbox methods common to a task. More specific tasks could extend the toolbox to develop their own behavior. class PageMuncherUtils { static function begin( $html, $context ) { // process assertions about html and context } static function report_fail( $context ) {} static function exit_retry( $context ) {} } // elsewhere I compose the methods in cases I don't wish to inherit them class TwitterPageMuncher { function validateAnchor( $html, $context ) { if( ! PageMuncherUtils::begin( $html, $context )) return PageMuncherUtils::report_fail( $context ); } } I want to organize my code to convey broader meaning to the maintainer. Consider that if I have even one remote service I'm interfacing with, I might be diving into different APIs inside their interface, and I want to group those routines along similar topics. Below, I show an example of how I like to define a class defining common constants, a class defining basic service methods, and a more specific class for weather alerts because the alert should know how to refresh itself, and it's more specific than the weather service itself, but also leverages the WeatherAPI constants as well. class WeatherAPI { const URL = 'http://weather.net'; const URI_TOMORROW = '/nextday/'; const URI_YESTERDAY= '/yesterday/'; const API_KEY = '123'; } class WeatherService { function get( $uri ) { } function forecast( $dateurl ) { } function alerts( $dateurl ) { return new WeatherAlert( $this->get( WeatherAPI::URL.$date ."?api=".WeatherAPI::API_KEY )); } } class WeatherAlert { function refresh() {} } // exercise: $alert = WeatherService::alerts( WeatherAPI::URI_TOMORROW ); $alert->refresh();
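Since the last answer apologizes for its PHP, here is a rough Python sketch of the same factory-plus-strategy shape; class and function names are illustrative, not from the thread:

class PageMuncher(object):
    def __init__(self, url):
        self.url = url
    def gather(self):
        pass  # common fetching logic, e.g. via urllib
    def process(self):
        raise NotImplementedError

class FacebookPageMuncher(PageMuncher):
    def process(self):
        pass  # Facebook-specific handling

def muncher_for_url(url):
    # factory method encapsulating strategy selection
    if 'facebook.com' in url:
        return FacebookPageMuncher(url)
    raise ValueError('no muncher for %s' % url)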
General programming question. When to use OOP?
My program needs to do 2 things. Extract stuff from a webpage. Do stuff with a webpage. However, there are many webpages, such as Twitter and Facebook. Should I do this? def facebookExtract(): code here def twitterExtract(): code here def myspaceExtract(): code here def facebookProcess(): code here def twitterProcess(): code here def myspaceProcess(): code here Or, should I have some sort of class? When is it recommended to use classes, and when is it recommended to just use functions?
[ "\"My program needs to do 2 things.\"\nWhen you start out like that, the objects cannot be seen. You're perspective isn't right.\nChange your thinking.\n\"My program works with stuff\"\nThat's OO thinking. What \"stuff\" does your program work with? Define the stuff. Those are your basic classes. There's a class for each kind of stuff.\n\"My program gets the stuff from various sources\"\nThere's a class for each source.\n\"My program displays the stuff\"\nThis is usually a combination of accessor methods of the stuff plus some \"reporting\" classes that gather parts of the stuff to display it.\nWhen you start out defining the \"stuff\" not the \"do\", you're doing OO programming. OO applies to everything, since every single program involves \"doing\" and \"stuff\". You can chose the \"doing\" POV (which is can be procedural or functional), or you can chose the \"stuff\" POV (which is object-oriented.)\n", "My favorite rule of thumb: if you're in doubt (unspoken assumption: \"and you're a reasonable person rather than a fanatic\";-), make and use some classes. I've often found myself refactoring code originally written as simple functions into classes -- for example, any time the simple functions' best way to communicating with each others is with globals, that's a code smell, a strong hint that the system's factoring is not really good -- and often refactoring the OOP way is a reasonable fix for that.\nPython is multi-paradigm, but its central paradigm is OOP (much like, say, C++). When a procedural or functional approach (maybe through generators &c) is optimal for some part of the system, that generally stands out -- for example, static functions are also a code smell, and if your classes have any substantial amount of those THAT is a hint to refactor things to avoid that requirement.\nSo, assuming you have a rich grasp of all the paradigms Python affords -- if you're STILL in doubt, that suggests you probably want to go OOP for that part of your system! Just because Python supports OOP even more wholly than it supports functional programming and the like.\nFrom your very skeletal code, it seems to me that each extract/process pair belongs together and probably needs to communicate state, so a small set of classes with extraction and processing methods seems a natural fit.\n", "Put as much of the common stuff together in a single function. Once you've factored as much out as possible, build a mechanism for branching to the appropriate function for each website.\nOne possible way to do this is with python's if/else clauses, but if you have many such functions, you may want something more elegant such as \nF = __import__('yourproject.facebookmodule')\nThis lets you put the code that's specific for facebook in it's own area. Since you pass __import__() a string, you can modify that at runtime based on which site you're accessing, and then just call function F in your generic worker code.\nMore on that here:\nhttp://effbot.org/zone/import-confusion.htm\n", "It's up to you. I personally try to stay away from Java-style classes when programming in python. Instead, I use dicts and/or simple objects.\nFor instance, after defining these functions (the ones you defined in the question), I'd create a simple dict, maybe like this:\n{ 'facebook' : { 'process' : facebookProcess, 'extract': facebookExtract }, \n ..... 
\n}\n\nor, better yet, use introspection to get the process/extract function automatically:\ndef processor(sitename):\n return getattr(module, sitename + 'Process')\n\ndef extractor(sitename):\n return getattr(module, sitename + 'Extractor')\n\nWhere module is the current module (or the module that has these functions).\nTo get this module as an object:\nimport sys\nmodule = sys.modules[__name__]\n\nAssuming of course, that the generic main function does something like this:\n\n figure out sitename based on input.\n get the extractor function for the site\n get processor function for the site\n call the extractor then the processor\n\n", "You use OOP when it makes sense, when it makes developing the solution quicker and when it makes the end result easier to read, understand and maintain.\nIn this case it might make sense to create a generic Extractor interface/class and then have subclasses for Twitter, MySpace, Facebook, etc but this really depends on how site-specific the extraction is. The idea of this kind of abstraction is to hide such details. If you can do it, it makes sense. If you can't you probably need a different approach.\nIt may also be that similar benefits can be obtained from good decomposition of a procedural solution.\nRemember at the end of the day that all these things are just tools. Pick the best one for that particular job rather than picking the hammer and then trying to turn everything into a nail.\n", "I regularly define classes for solving problems for a few reasons, I'll model an example of my thinking below. I have no compunctions about mixing OO models and procedural styles, that's often more a reflection of your work society than personal religion. It often works to have a procedural facade for a class hierarchy if that's what other maintainers expect.\n(Please excuse the PHP syntax.)\n\n I'm developing strategies and I can follow a generic model. So a possible modeling of your task might involve getting something to chew on URLs passed. This works if you want to simplify the outer logic a lot, and remove conditionals. This shows that I can use DNRY for the gather() method. \n// batch process method\nfunction MunchPages( $list_of_urls )\n{\n foreach( $list_of_urls as $url )\n {\n $muncher = PageMuncher::MuncherForUrl( $url );\n $muncher->gather();\n $muncher->process();\n }\n}\n// factory method encaps strategy selection\nfunction MuncherForUrl( $url )\n{\n if( strpos( $url, 'facebook.com' ))\n return new FacebookPageMuncher( $url );\n if( ... ) \n return new .... ;\n}\n// common tasks defined in base PageMuncher\nclass PageMuncher \n{\n function gather() { /* use some curl or what */ }\n function process() {}\n}\nclass FacebookPageMuncher extends PageMuncher\n{\n function process() { /* I do it 'this' way for FB */ }\n}\n\n I'm creating a set of routines that are ideally hidden, and better yet, shared. An example of this might be having a class that defines toolbox methods common to a task. More specific tasks could extend the toolbox to develop their own behavior. \nclass PageMuncherUtils\n{\n static function begin( $html, $context )\n {\n // process assertions about html and context\n }\n static function report_fail( $context ) {}\n static function exit_retry( $context ) {}\n}\n// elsewhere I compose the methods in cases I don't wish to inherit them\nclass TwitterPageMuncher\n{\n function validateAnchor( $html, $context )\n {\n if( ! 
PageMuncherUtils::begin( $html, $context )) \n return PageMuncherUtils::report_fail( $context );\n }\n}\n\n I want to organize my code to convey broader meaning to the maintainer. Consider that if I have even one remote service I'm interfacing with, I might be diving into different APIs inside their interface, and I want to group those routines along similar topics. Below, I show an example how I like to define a class defining common constants, a class defining basic service methods, and a more specific class for weather alerts because the alert should know how to refresh itself, and it's a more specific than the weather service itself, but also leverages the WeatherAPI constants as well. \nclass WeatherAPI\n{\n const URL = 'http://weather.net';\n const URI_TOMORROW = '/nextday/';\n const URI_YESTERDAY= '/yesterday/';\n const API_KEY = '123';\n}\nclass WeatherService\n{\n function get( $uri ) { }\n function forecast( $dateurl ) { }\n function alerts( $dateurl )\n {\n return new WeatherAlert( \n $this->get( WeatherAPI::URL.$date\n .\"?api=\".WeatherAPI::API_KEY ));\n }\n}\nclass WeatherAlert\n{\n function refresh() {}\n}\n// exercise:\n$alert = WeatherService::alerts( WeatherAPI::URI_TOMORROW );\n$alert->refresh();\n\n" ]
[ 21, 15, 3, 3, 1, 1 ]
[]
[]
[ "function", "oop", "python" ]
stackoverflow_0001641470_function_oop_python.txt
Q: Logging events in Python; How to log events inside classes? I built (just for fun) 3 classes to help me log some events in my work. Here they are: class logMessage: def __init__(self,objectName,message,messageType): self.objectName = objectName self.message = message self.messageType = messageType self.dateTime = datetime.datetime.now() def __str__(self): return str(self.dateTime) + "\nObject of value " + str(self.objectName) + " generated a message of type: " + self.messageType + "\n" + self.message + "\n" class logHandler(): def __init__(self): self.messages = [] def __getitem__(self,index): return self.messages[index] def __len__(self): return len(self.messages) def __str__(self): colecaoString = "" for x in self.messages: colecaoString += str(x) + "\n" return colecaoString def dumpItem(self,index): temp = self.messages[index] del self.messages[index] return str(temp) def append(self,log): if isinstance(log,logMessage.logMessage): self.messages.append(log) else: self.newLogMessage(log, "Wrong object type. Not a log message. Impossible to log.","Error") def newLogMessage(self,objectInstance,message,messageType): newMessage = logMessage.logMessage(objectInstance,message,messageType) self.append(newMessage) Here is my question: Imagine I have other classes, such as Employee, and I want to log an event that happened INSIDE that class. How can I do that without always passing a logHandler instance to every other class I want to log? My idea would be to pass a logHandler to every init function, and then use it inside it. How can that be done, without doing what I specified? How would it work with a global logHandler? Is there a way to discover at runtime if there is a logHandler instance in the program, and use it to create the messages? Thanks A: Just create an instance of your classes in the module you posted. Then just import your logging module in every file you want to log from and do something like this: yourloggingmodule.handler.newLogMessage(...) Where handler is the name of the instance you created. A: You could use the Borg pattern, meaning you can create local instances of your logger object and yet have them access the same state. Some would say that is elegant, others may say it's confusing. You decide. :)
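A tiny sketch of the Borg pattern the second answer mentions: every instance shares one __dict__, so any logHandler() created anywhere sees the same message list (a hypothetical adaptation, not code from the thread):

class logHandler(object):
    _shared_state = {}
    def __init__(self):
        self.__dict__ = self._shared_state   # all instances share this state
        if not hasattr(self, 'messages'):
            self.messages = []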
Logging events in Python; How to log events inside classes?
I built (just for fun) 3 classes to help me log some events in my work. Here they are: class logMessage: def __init__(self,objectName,message,messageType): self.objectName = objectName self.message = message self.messageType = messageType self.dateTime = datetime.datetime.now() def __str__(self): return str(self.dateTime) + "\nObject of value " + str(self.objectName) + " generated a message of type: " + self.messageType + "\n" + self.message + "\n" class logHandler(): def __init__(self): self.messages = [] def __getitem__(self,index): return self.messages[index] def __len__(self): return len(self.messages) def __str__(self): colecaoString = "" for x in self.messages: colecaoString += str(x) + "\n" return colecaoString def dumpItem(self,index): temp = self.messages[index] del self.messages[index] return str(temp) def append(self,log): if isinstance(log,logMessage.logMessage): self.messages.append(log) else: self.newLogMessage(log, "Wrong object type. Not a log message. Impossible to log.","Error") def newLogMessage(self,objectInstance,message,messageType): newMessage = logMessage.logMessage(objectInstance,message,messageType) self.append(newMessage) Here is my question: Imagine I have other classes, such as Employee, and I want to log an event that happened INSIDE that class. How can I do that without always passing a logHandler instance to every other class I want to log from? My idea would be to pass a logHandler to every init function, and then use it there. How can that be done, without doing what I specified? How would it work with a global logHandler? Is there a way to discover at runtime whether there is a logHandler instance in the program, and use it to create the messages? Thanks
[ "Just create an instance of your classes in the module you posted. Then just import your logging module in every file you want to log from and do something like this:\nyourloggingmodule.handler.newLogMessage(...)\n\nWhere handler is the name of the instance you created.\n", "You could use the Borg pattern, meaning you can create local instances of your logger object and yet have them access the same state. Some would say that is elegant, others may say it's confusing. You decide. :)\n" ]
[ 1, 1 ]
[]
[]
[ "global_variables", "logging", "oop", "python" ]
stackoverflow_0001639468_global_variables_logging_oop_python.txt
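A self-contained sketch of the Borg-pattern suggestion from the second answer, trimmed to a single log method. The class and method names here are illustrative, not the poster's API:
class LogHandler:
    _shared_state = {}  # Borg pattern: every instance shares this dict as its __dict__

    def __init__(self):
        self.__dict__ = self._shared_state
        self.messages = getattr(self, 'messages', [])  # created once, then reused

    def log(self, message):
        self.messages.append(message)

class Employee:
    def promote(self):
        # any class can create a throwaway local handler; the state is still shared
        LogHandler().log("Employee promoted")

Employee().promote()
print(LogHandler().messages)  # ['Employee promoted']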
Q: Is a Python Queue needed for simple byte stream between threads? I have a simple thread that grabs bytes from a Bluetooth RFCOMM (serial-port-like) socket and dumps them into a Queue.Queue (FIFO), which seems like the typical method to exchange data between threads. Works fine. Is this overkill though? Could I just use a bytearray and then have my reader thread .append(somebyte) and the processing function just .pop(0)? I'm not sure if the protections in Queue are meant for more complex "multi-producer, multi-consumer queues" and are a waste for a point-to-point byte stream. Doing things like flushing the queue or grabbing multiple bytes seems more awkward with the Queue vs. a simpler data type. I guess the answer might have to do with whether .pop() is atomic, but would it even matter then?... A: With Queue, you're guaranteed to be threadsafe in any implementation and version of Python. Relying on this or that method of some other object being "atomic" (in a given implementation and version) typically leaves you at the mercy of this "atomicity" not being a strong guarantee (just an implementation artifact for the specific point release &c you're using) and therefore subtle, VERY hard-to-debug race conditions being introduced with any upgrade or port to other Python implementations. If your profiling tells you that Queue's strong and general guarantees are being a bottleneck for your specific producer-consumer use case, make your own simpler guaranteed-to-be-threadsafe FIFO queue/stream. For example, if you've found out that (net of race conditions) append and pop would be perfect for your use, just make a class that protects each with a lock acquire/release (use a with statement) -- Queue adds minuscule overhead to support multiple producers and consumers and you can shave those few nanoseconds off!-) A: Yes, pop() is atomic, but I'd stick with Queue if performance is not super important. A: If the rate of input is fast enough, you can always buffer bytes up into a string before pushing that onto the Queue. That will probably increase throughput by reducing the amount of locking done, at the expense of a little extra latency on the receiving end.
Is a Python Queue needed for simple byte stream between threads?
I have a simple thread that grabs bytes from a Bluetooth RFCOMM (serial-port-like) socket and dumps them into a Queue.Queue (FIFO), which seems like the typical method to exchange data between threads. Works fine. Is this overkill though? Could I just use a bytearray and then have my reader thread .append(somebyte) and the processing function just .pop(0)? I'm not sure if the protections in Queue are meant for more complex "multi-producer, multi-consumer queues" and are a waste for a point-to-point byte stream. Doing things like flushing the queue or grabbing multiple bytes seems more awkward with the Queue vs. a simpler data type. I guess the answer might have to do with whether .pop() is atomic, but would it even matter then?...
[ "With Queue, you're guaranteed to be threadsafe in any implementation and version of Python. Relying on this or that method of some other object being \"atomic\" (in a given implementation and version) typically leaves you at the mercy of this \"atomicity\" not being a strong guarantee (just an implementation artifact for the specific point release &c you're using) and therefore subtle, VERY hard-to-debug race conditions being introduced with any upgrade or port to other Python implementations.\nIf your profiling tells you that Queue's strong and general guarantees are being a bottleneck for your specific producer-consumer use case, make your own simpler guaranteed-to-be-threadsafe FIFO queue/stream. For example, if you've found out that (net of race conditions) append and pop would be perfect for your use, just make a class that protects each with a lock acquire/release (use a with statement) -- Queue adds miniscule overhead to support multiple producers and consumers and you can shave those few nanoseconds off!-)\n", "Yes, pop() is atomic, but I'd stick with Queue if performance is not super important. \n", "If the rate of input is fast enough, you can always buffer bytes up into a string before pushing that onto the Queue. That will probably increase throughput by reducing the amount of locking done, at the expense of a little extra latency on the receiving end.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "multithreading", "python", "queue" ]
stackoverflow_0001640112_multithreading_python_queue.txt
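A minimal sketch of the hand-rolled alternative the first answer describes: one byte FIFO guarded by a single lock, using a with statement as suggested. The class and method names are illustrative:
import threading

class ByteStream:
    def __init__(self):
        self._lock = threading.Lock()
        self._buf = bytearray()

    def append(self, data):
        # accepts a byte string or an iterable of byte values
        with self._lock:
            self._buf.extend(data)

    def pop(self, n=1):
        # remove and return up to n bytes from the front in one locked step,
        # which also makes "grab multiple bytes" and "flush" easy to express
        with self._lock:
            chunk = self._buf[:n]
            del self._buf[:n]
            return chunk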
Q: I want to create this environment variable for everyone, but it does not load during startup? (linux) I put this at the top, using "sudo vi /etc/profile": PYTHONPATH=/home/myuser:/home/myotheruser When I use PuTTY and log in under my username, the Python path does not work! I type "set", and it is there. But importing things from that directory still does not work. When I manually do this, it works: export PYTHONPATH=/home/myuser:/home/myotheruser However, I don't want to do that every time. I simply want this path to be set for everyone at startup. A: You need to export PYTHONPATH in /etc/profile as well. Make sure you have both of these lines in /etc/profile. PYTHONPATH=... export PYTHONPATH After that, log in again.
I want to create this environment variable for everyone, but it does not load during startup? (linux)
I put this at the top, using "sudo vi /etc/profile": PYTHONPATH=/home/myuser:/home/myotheruser When I use PuTTY and log in under my username, the Python path does not work! I type "set", and it is there. But importing things from that directory still does not work. When I manually do this, it works: export PYTHONPATH=/home/myuser:/home/myotheruser However, I don't want to do that every time. I simply want this path to be set for everyone at startup.
[ "You need to export the PYTHONPATH even in /etc/profile.\nMake sure you have these lines both in /etc/profile.\nPYTHONPATH=...\nexport PYTHONPATH\n\nAfter that login again.\n" ]
[ 4 ]
[]
[]
[ "environment_variables", "linux", "python", "unix" ]
stackoverflow_0001642926_environment_variables_linux_python_unix.txt
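A quick way to confirm from Python itself that the export reached the interpreter after a fresh login (purely diagnostic; the question's paths are used as placeholders):
import os, sys

print(os.environ.get('PYTHONPATH'))    # should show /home/myuser:/home/myotheruser
print('/home/myuser' in sys.path)      # True once the export has taken effect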
Q: Django 1.1 forms, models and hiding fields Consider the following Django models: class Host(models.Model): # This is the hostname only name = models.CharField(max_length=255) class Url(models.Model): # The complete url url = models.CharField(max_length=255, db_index=True, unique=True) # A foreign key identifying the host of this url # (e.g. for http://www.example.com/index.html it will # point to a record in Host containing 'www.example.com') host = models.ForeignKey(Host, db_index=True) I also have this form: class UrlForm(forms.ModelForm): class Meta: model = Url The problem is the following: I want to compute the value of the host field automatically, so I don't want it to appear on the HTML form displayed in the web page. If I use 'exclude' to omit this field from the form, how can I then use the form to save information in the database (which requires the host field to be present)? A: Use commit=False: result = form.save(commit=False) result.host = calculate_the_host_from(result) result.save() A: You can use exclude and then in the form's "clean" method set whatever you want. So in your form: class myform(models.ModelForm): class Meta: model = Url exclude = ("field_name",) def clean(self): self.cleaned_data["field_name"] = "whatever" return self.cleaned_data
Django 1.1 forms, models and hiding fields
Consider the following Django models: class Host(models.Model): # This is the hostname only name = models.CharField(max_length=255) class Url(models.Model): # The complete url url = models.CharField(max_length=255, db_index=True, unique=True) # A foreign key identifying the host of this url # (e.g. for http://www.example.com/index.html it will # point to a record in Host containing 'www.example.com') host = models.ForeignKey(Host, db_index=True) I also have this form: class UrlForm(forms.ModelForm): class Meta: model = Url The problem is the following: I want to compute the value of the host field automatically, so I don't want it to appear on the HTML form displayed in the web page. If I use 'exclude' to omit this field from the form, how can I then use the form to save information in the database (which requires the host field to be present)?
[ "Use commit=False:\nresult = form.save(commit=False)\nresult.host = calculate_the_host_from(result)\nresult.save()\n\n", "You can use exclude and then in the forms \"clean\" method set whatever you want.\nSo in your form:\nclass myform(models.ModelForm):\n class Meta:\n model=Urls\n exclude= (\"field_name\")\n def clean(self):\n self.cleaned_data[\"field_name\"] = \"whatever\"\n return self.cleaned_data\n\n" ]
[ 3, 1 ]
[]
[]
[ "django", "forms", "models", "python" ]
stackoverflow_0001643171_django_forms_models_python.txt
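Putting the accepted approach together, a hedged sketch of the form plus a saving helper. It assumes the Url and Host models and the forms import from the question are in scope; deriving the hostname with urlparse is just one option, and it relies on the url including a scheme such as http://:
from urlparse import urlparse   # Python 2.x module path

class UrlForm(forms.ModelForm):
    class Meta:
        model = Url
        exclude = ('host',)

def save_url(form):
    url = form.save(commit=False)            # build the instance without hitting the DB
    hostname = urlparse(url.url).netloc      # 'www.example.com' for the example URL
    url.host, _ = Host.objects.get_or_create(name=hostname)
    url.save()
    return url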
Q: Introspecting a DLL with python I'm currently trying to do some introspection on a DLL with Python. I want to automatically create a graphical test interface based on a DLL. I can load my DLL in Python quite easily and I can call some functions. The main problem is: if I call "dir" on the object without calling any method, I get as a result >>> dir(myLib) ['_FuncPtr', '__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_func_flags_', '_func_restype_', '_handle', '_name'] and when I manually call a function (like "Read_Version"), the dir function gives >>> dir(myLib) ['Read_Version', '_FuncPtr', '__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_func_flags_', '_func_restype_', '_handle', '_name'] It seems that introspection works only on functions I have already called, and this is not really "useful" ;). Have you got another idea to fetch the functions that are in a DLL? (in Python, of course) I'm using Python 2.6 under Windows. A: As far as I know, there is no easy way to do this. You have to use some external tool (e.g. link /dump /exports) or use a PE/DLL parser (e.g. pefile).
Introspecting a DLL with python
I'm currently trying to do some introspection on a DLL with Python. I want to automatically create a graphical test interface based on a DLL. I can load my DLL in Python quite easily and I can call some functions. The main problem is: if I call "dir" on the object without calling any method, I get as a result >>> dir(myLib) ['_FuncPtr', '__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_func_flags_', '_func_restype_', '_handle', '_name'] and when I manually call a function (like "Read_Version"), the dir function gives >>> dir(myLib) ['Read_Version', '_FuncPtr', '__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_func_flags_', '_func_restype_', '_handle', '_name'] It seems that introspection works only on functions I have already called, and this is not really "useful" ;). Have you got another idea to fetch the functions that are in a DLL? (in Python, of course) I'm using Python 2.6 under Windows.
[ "As far as I know, there is no easy way to do this. You have to use some external tool (e.g. link /dump /exports) or use a PE/DLL parser (e.g. pefile).\n" ]
[ 2 ]
[]
[]
[ "ctypes", "python" ]
stackoverflow_0001642938_ctypes_python.txt
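A short sketch of the pefile route from the answer. It assumes the third-party pefile package is installed and that 'mylib.dll' is a placeholder for your library's path:
import pefile

pe = pefile.PE('mylib.dll')
# DIRECTORY_ENTRY_EXPORT exists only if the DLL actually exports symbols
for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
    print(exp.name)   # exported names you can then look up via ctypes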
Q: visualizing id, x, y, t data I have the following vehicle data vehicle_id, position_x, position_y, time The data represents the position of a vehicle at time 't'. The data is also available as a linear reference. I was wondering what's a simple way to visualize the vehicle movement as an animation? I would prefer a solution that I can integrate with Python. EDIT The animation I plan on doing should be similar to the 2D one found in this video A: What kind of animation did you have in mind? You can try PyGame for a desktop app. They have a nice tutorial about this. A: I'd imagine it's best done on a map; consider integrating (Google) maps with a custom path representing the vehicle. A: Use pygame for it.
visualizing id, x, y, t data
I have the following vehicle data vehicle_id, position_x, position_y, time The data represents the position of a vehicle at time 't'. The data is also available as a linear reference. I was wondering what's a simple way to visualize the vehicle movement as an animation? I would prefer a solution that I can integrate with Python. EDIT The animation I plan on doing should be similar to the 2D one found in this video
[ "What kind of animation did you have in mind? You can try PyGame for desktop app. They have a nice tutorial about this.\n", "I'd imagine its best done on a map; consider integrating (Google) maps with a custom path representing the vehicle.\n", "Use pygame for it.\n" ]
[ 3, 2, 2 ]
[]
[]
[ "animation", "python", "visualization" ]
stackoverflow_0001643265_animation_python_visualization.txt
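A bare-bones pygame sketch of the suggested approach: it replays (x, y) positions for one vehicle in time order. The window size, frame rate, and sample track are arbitrary stand-ins for the real data:
import pygame

track = [(50, 50, 0), (120, 80, 1), (200, 150, 2)]   # (x, y, t), sorted by t
pygame.init()
screen = pygame.display.set_mode((400, 300))
clock = pygame.time.Clock()
for x, y, t in track:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            raise SystemExit
    screen.fill((0, 0, 0))                            # clear the previous frame
    pygame.draw.circle(screen, (255, 255, 255), (x, y), 5)
    pygame.display.flip()
    clock.tick(2)   # 2 frames per second; scale to the real timestamps as needed
pygame.quit()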
Q: Folder and file organization for Python development What is the best way to organize code that belongs to the same project in a Python development environment? What are the dos and don'ts of Python project organization? Do you separate each class into its own file? Project A Classes "subsystem1" class1 class2 subsystem1Module "subsystem2" "utils" "etc" Tests Whatever etc? Any suggestions? Oh, and please describe the (possible) problems of each type of organization. What are considered best practices for organizing Python code? A: Some suggestions are at http://jcalderone.livejournal.com/39794.html and http://infinitemonkeycorps.net/docs/pph/ A: There are not that many issues that are going to be applicable only to Python. This website: Software Configuration Management Patterns and the associated book describe some Source Code Management patterns. The issues are described in the familiar patterns language so you should be able to find the information you need for your requirements. As with all patterns there is also discussion of the trade-offs.
Folder and file organization for Python development
What is the best way to organize code that belongs to the same project in a Python development environment? What are the dos and don'ts of Python project organization? Do you separate each class into its own file? Project A Classes "subsystem1" class1 class2 subsystem1Module "subsystem2" "utils" "etc" Tests Whatever etc? Any suggestions? Oh, and please describe the (possible) problems of each type of organization. What are considered best practices for organizing Python code?
[ "Some suggestions are at http://jcalderone.livejournal.com/39794.html and http://infinitemonkeycorps.net/docs/pph/\n", "There are not that many issues that are going to be applicable only to Python. This website: Software Configuration Management Patterns and the associate book describes some Source Code Management patterns.\nThe issues are described in the familiar patterns language so you should be able to find the information you need for your requirements. As with all patterns there is also discussion on the trade-offs.\n" ]
[ 9, 1 ]
[]
[]
[ "code_organization", "organization", "python" ]
stackoverflow_0001642975_code_organization_organization_python.txt
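For concreteness, one common layout the linked articles converge on (the names are placeholders). Note that grouping several related classes per module, rather than one class per file, is the usual Python convention:
projecta/
    setup.py           # packaging metadata
    projecta/          # the importable package
        __init__.py
        subsystem1.py  # a module holding class1, class2, and friends
        subsystem2.py
        utils.py
    tests/
        test_subsystem1.py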
Q: Grouping data by year mydata = [{'date': datetime.datetime(2009, 1, 31, 0, 0), 'value': 14, 'year': u'2009'}, {'date': datetime.datetime(2009, 2, 28, 0, 0), 'value': 84, 'year': u'2009'}, {'date': datetime.datetime(2009, 3, 31, 0, 0), 'value': 77, 'year': u'2009'}, {'date': datetime.datetime(2009, 4, 30, 0, 0), 'value': 80, 'year': u'2009'}, {'date': datetime.datetime(2009, 5, 31, 0, 0), 'value': 6, 'year': u'2009'}, {'date': datetime.datetime(2009, 6, 30, 0, 0), 'value': 16, 'year': u'2009'}, {'date': datetime.datetime(2009, 7, 31, 0, 0), 'value': 16, 'year': u'2009'}, {'date': datetime.datetime(2009, 8, 31, 0, 0), 'value': 1, 'year': u'2009'}, {'date': datetime.datetime(2009, 9, 30, 0, 0), 'value': 9, 'year': u'2009'}, {'date': datetime.datetime(2008, 1, 31, 0, 0), 'value': 77, 'year': u'2008'}, {'date': datetime.datetime(2008, 2, 29, 0, 0), 'value': 60, 'year': u'2008'}, {'date': datetime.datetime(2008, 3, 31, 0, 0), 'value': 28, 'year': u'2008'}, {'date': datetime.datetime(2008, 4, 30, 0, 0), 'value': 9, 'year': u'2008'}, {'date': datetime.datetime(2008, 5, 31, 0, 0), 'value': 74, 'year': u'2008'}, {'date': datetime.datetime(2008, 6, 30, 0, 0), 'value': 70, 'year': u'2008'}, {'date': datetime.datetime(2008, 7, 31, 0, 0), 'value': 75, 'year': u'2008'}, {'date': datetime.datetime(2008, 8, 31, 0, 0), 'value': 7, 'year': u'2008'}, {'date': datetime.datetime(2008, 9, 30, 0, 0), 'value': 10, 'year': u'2008'}, {'date': datetime.datetime(2008, 10, 31, 0, 0), 'value': 54, 'year': u'2008'}, {'date': datetime.datetime(2008, 11, 30, 0, 0), 'value': 55, 'year': u'2008'}, {'date': datetime.datetime(2008, 12, 31, 0, 0), 'value': 40, 'year': u'2008'}, {'date': datetime.datetime(2007, 12, 31, 0, 0), 'value': 93, 'year': u'2007'},] In 'mydata', I get a list of sequential monthly data. I wrote some code to group them by year. partial_req_data = dict([(k,[f for f in v]) for k,v in itertools.groupby(mydata, key=lambda x : x.get('year'))]) Now I further need some efficient code to fill the missing months with {}, i.e. an empty dict. There are bad ways to do that, but I am looking for good ones.
required_data = {"2009": [{'date': datetime.datetime(2009, 1, 31, 0, 0), 'value': 14, 'year': u'2009' }, {'date': datetime.datetime(2009, 2, 28, 0, 0), 'value': 84, 'year': u'2009'}, {'date': datetime.datetime(2009, 3, 31, 0, 0), 'value': 77, 'year': u'2009'}, {'date': datetime.datetime(2009, 4, 30, 0, 0), 'value': 80, 'year': u'2009'}, {'date': datetime.datetime(2009, 5, 31, 0, 0), 'value': 6, 'year': u'2009'}, {'date': datetime.datetime(2009, 6, 30, 0, 0), 'value': 16, 'year': u'2009'}, {'date': datetime.datetime(2009, 7, 31, 0, 0), 'value': 16, 'year': u'2009'}, {'date': datetime.datetime(2009, 8, 31, 0, 0), 'value': 1, 'year': u'2009'}, {'date': datetime.datetime(2009, 9, 30, 0, 0), 'value': 9, 'year': u'2009'}, {}, {}, {}], "2008": [{'date': datetime.datetime(2008, 1, 31, 0, 0), 'value': 77, 'year': u'2008'}, {'date': datetime.datetime(2008, 2, 29, 0, 0), 'value': 60, 'year': u'2008'}, {'date': datetime.datetime(2008, 3, 31, 0, 0), 'value': 28, 'year': u'2008'}, {'date': datetime.datetime(2008, 4, 30, 0, 0), 'value': 9, 'year': u'2008'}, {'date': datetime.datetime(2008, 5, 31, 0, 0), 'value': 74, 'year': u'2008'}, {'date': datetime.datetime(2008, 6, 30, 0, 0), 'value': 70, 'year': u'2008'}, {'date': datetime.datetime(2008, 7, 31, 0, 0), 'value': 75, 'year': u'2008'}, {'date': datetime.datetime(2008, 8, 31, 0, 0), 'value': 7, 'year': u'2008'}, {'date': datetime.datetime(2008, 9, 30, 0, 0), 'value': 10, 'year': u'2008'}, {'date': datetime.datetime(2008, 10, 31, 0, 0), 'value': 54, 'year': u'2008'}, {'date': datetime.datetime(2008, 11, 30, 0, 0), 'value': 55, 'year': u'2008'}, {'date': datetime.datetime(2008, 12, 31, 0, 0), 'value': 40, 'year': u'2008'},] "2007": [{}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {'date': datetime.datetime(2007, 12, 31, 0, 0), 'value': 93, 'year': u'2007'}] } A: import datetime from itertools import groupby from pprint import pprint required_data={} for k,g in groupby(mydata,key=lambda x: x.get('year')): partial={} for datum in g: partial[datum.get('date').month]=datum required_data[k]=[partial.get(m,{}) for m in range(1,13)] pprint(required_data) For each year k, partial is a dict whose keys are months. The trick is to use partial.get(m,{}) since this will return the datum when it exists, or {} when it does not.
Grouping data by year
mydata = [{'date': datetime.datetime(2009, 1, 31, 0, 0), 'value': 14, 'year': u'2009'}, {'date': datetime.datetime(2009, 2, 28, 0, 0), 'value': 84, 'year': u'2009'}, {'date': datetime.datetime(2009, 3, 31, 0, 0), 'value': 77, 'year': u'2009'}, {'date': datetime.datetime(2009, 4, 30, 0, 0), 'value': 80, 'year': u'2009'}, {'date': datetime.datetime(2009, 5, 31, 0, 0), 'value': 6, 'year': u'2009'}, {'date': datetime.datetime(2009, 6, 30, 0, 0), 'value': 16, 'year': u'2009'}, {'date': datetime.datetime(2009, 7, 31, 0, 0), 'value': 16, 'year': u'2009'}, {'date': datetime.datetime(2009, 8, 31, 0, 0), 'value': 1, 'year': u'2009'}, {'date': datetime.datetime(2009, 9, 30, 0, 0), 'value': 9, 'year': u'2009'}, {'date': datetime.datetime(2008, 1, 31, 0, 0), 'value': 77, 'year': u'2008'}, {'date': datetime.datetime(2008, 2, 29, 0, 0), 'value': 60, 'year': u'2008'}, {'date': datetime.datetime(2008, 3, 31, 0, 0), 'value': 28, 'year': u'2008'}, {'date': datetime.datetime(2008, 4, 30, 0, 0), 'value': 9, 'year': u'2008'}, {'date': datetime.datetime(2008, 5, 31, 0, 0), 'value': 74, 'year': u'2008'}, {'date': datetime.datetime(2008, 6, 30, 0, 0), 'value': 70, 'year': u'2008'}, {'date': datetime.datetime(2008, 7, 31, 0, 0), 'value': 75, 'year': u'2008'}, {'date': datetime.datetime(2008, 8, 31, 0, 0), 'value': 7, 'year': u'2008'}, {'date': datetime.datetime(2008, 9, 30, 0, 0), 'value': 10, 'year': u'2008'}, {'date': datetime.datetime(2008, 10, 31, 0, 0), 'value': 54, 'year': u'2008'}, {'date': datetime.datetime(2008, 11, 30, 0, 0), 'value': 55, 'year': u'2008'}, {'date': datetime.datetime(2008, 12, 31, 0, 0), 'value': 40, 'year': u'2008'}, {'date': datetime.datetime(2007, 12, 31, 0, 0), 'value': 93, 'year': u'2007'},] In 'mydata', I get a list of sequential monthly data. I wrote some code to group them by year. partial_req_data = dict([(k,[f for f in v]) for k,v in itertools.groupby(mydata, key=lambda x : x.get('year'))]) Now I further need some efficient code to fill the missing months with {}, i.e. an empty dict. There are bad ways to do that, but I am looking for good ones.
required_data = {"2009": [{'date': datetime.datetime(2009, 1, 31, 0, 0), 'value': 14, 'year': u'2009' }, {'date': datetime.datetime(2009, 2, 28, 0, 0), 'value': 84, 'year': u'2009'}, {'date': datetime.datetime(2009, 3, 31, 0, 0), 'value': 77, 'year': u'2009'}, {'date': datetime.datetime(2009, 4, 30, 0, 0), 'value': 80, 'year': u'2009'}, {'date': datetime.datetime(2009, 5, 31, 0, 0), 'value': 6, 'year': u'2009'}, {'date': datetime.datetime(2009, 6, 30, 0, 0), 'value': 16, 'year': u'2009'}, {'date': datetime.datetime(2009, 7, 31, 0, 0), 'value': 16, 'year': u'2009'}, {'date': datetime.datetime(2009, 8, 31, 0, 0), 'value': 1, 'year': u'2009'}, {'date': datetime.datetime(2009, 9, 30, 0, 0), 'value': 9, 'year': u'2009'}, {}, {}, {}], "2008": [{'date': datetime.datetime(2008, 1, 31, 0, 0), 'value': 77, 'year': u'2008'}, {'date': datetime.datetime(2008, 2, 29, 0, 0), 'value': 60, 'year': u'2008'}, {'date': datetime.datetime(2008, 3, 31, 0, 0), 'value': 28, 'year': u'2008'}, {'date': datetime.datetime(2008, 4, 30, 0, 0), 'value': 9, 'year': u'2008'}, {'date': datetime.datetime(2008, 5, 31, 0, 0), 'value': 74, 'year': u'2008'}, {'date': datetime.datetime(2008, 6, 30, 0, 0), 'value': 70, 'year': u'2008'}, {'date': datetime.datetime(2008, 7, 31, 0, 0), 'value': 75, 'year': u'2008'}, {'date': datetime.datetime(2008, 8, 31, 0, 0), 'value': 7, 'year': u'2008'}, {'date': datetime.datetime(2008, 9, 30, 0, 0), 'value': 10, 'year': u'2008'}, {'date': datetime.datetime(2008, 10, 31, 0, 0), 'value': 54, 'year': u'2008'}, {'date': datetime.datetime(2008, 11, 30, 0, 0), 'value': 55, 'year': u'2008'}, {'date': datetime.datetime(2008, 12, 31, 0, 0), 'value': 40, 'year': u'2008'},] "2007": [{}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {'date': datetime.datetime(2007, 12, 31, 0, 0), 'value': 93, 'year': u'2007'}] }
[ "import datetime\nfrom itertools import groupby\nfrom pprint import pprint\n\nrequired_data={}\nfor k,g in groupby(mydata,key=lambda x: x.get('year')):\n partial={}\n for datum in g:\n partial[datum.get('date').month]=datum \n required_data[k]=[partial.get(m,{}) for m in range(1,13)]\npprint(required_data)\n\nFor each year k, partial is a dict whose keys are months. \nThe trick is to use partial.get(m,{}) since this will return the datum when it exists, or {} when it does not.\n" ]
[ 7 ]
[]
[]
[ "json", "python" ]
stackoverflow_0001643377_json_python.txt
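One caveat that applies to both snippets above: itertools.groupby only groups consecutive items, so if the rows can arrive out of year order, sort first. A one-line addition:
mydata.sort(key=lambda x: x.get('year'))   # groupby needs runs of equal keys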