content: string (85 to 101k chars)
title: string (0 to 150 chars)
question: string (15 to 48k chars)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (35 to 137 chars)
Python xml.dom.minidom Unicode
I'm trying to create an XML document in Python, but some of the strings I'm working with are Unicode. Is there a way to create a text node with xml.dom.minidom using Unicode strings? Is there another module I can use?
[ "In theory, per the docs:\n\nthe DOMString defined in the\n recommendation is mapped to a Python\n string or Unicode string. Applications\n should be able to handle Unicode\n whenever a string is returned from the\n DOM.\n\nso you should be fine with either a Unicode string, or a Python string (utf-8 is the default encoding in XML).\nIn practice, in Python 2, I've sometimes had problems with Unicode strings in xml.dom (I've switched almost entirely away from it and to ElementTree a while ago, so I'm not positive that the problems are still there in recent Python 2 releases).\nIf you do meet problems using Unicode strings directly, I think you'll want to try encoded strings instead, e.g., thedoc.createTextNode(u'pié'.encode('utf-8')).\nIn Python 3, of course, strs are Unicode, so everything's rather different in this regard;-).\n", "The dom objects seem to have an encoding argument, see 20.7.1 of the Python docs. Read the footnote as well; take care to use the proper encoding string.\n", "\nIs there a way to create a text node using xml.dom.minidom using unicode strings?\n\nYes, createTextNode always takes Unicode strings. The text model of the XML information set is Unicode, as you can see:\n>>> doc= minidom.parseString('<a>b</a>')\n>>> doc.documentElement.firstChild.data\nu'b'\n\nSo:\n>>> doc.createTextNode(u'Hell\\xF6') # OK\n<DOM Text node \"u'Hell\\xf6'\">\n\nMinidom does allow you to put non-Unicode strings in the DOM, but if you do and they contain non-ASCII characters you'll come a cropper later on:\n>>> doc.documentElement.appendChild(doc.createTextNode('Hell\\xF6')) # Wrong, not Unicode string\n<DOM Text node \"'Hell\\xF6'\">\n\n>>> doc.toxml()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/usr/lib/python2.6/xml/dom/minidom.py\", line 45, in toxml\n return self.toprettyxml(\"\", \"\", encoding)\n File \"/usr/lib/python2.6/xml/dom/minidom.py\", line 60, in toprettyxml\n return writer.getvalue()\n File \"/usr/lib/python2.6/StringIO.py\", line 270, in getvalue\n self.buf += ''.join(self.buflist)\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)\n\nThis is assuming that by “encoded in unicode” you mean you are using Unicode strings. If you mean something else, like you've got byte strings in a UTF-8 encoding, you need to convert those byte strings to Unicode strings before you put them in the DOM:\n>>> b= 'Hell\\xc3\\xb6' # Hellö encoded in UTF-8 bytes\n>>> u= b.decode('utf-8') # Proper Unicode string Hellö\n>>> doc.documentElement.appendChild(doc.createTextNode(u))\n>>> doc.toxml()\nu'<?xml version=\"1.0\" ?><a>bHell\\xf6</a>' # correct!\n\n" ]
[ 3, 1, 1 ]
[]
[]
[ "python", "unicode" ]
stackoverflow_0001610948_python_unicode.txt
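
A minimal Python 2 sketch pulling the accepted advice together: build the document, always hand createTextNode a Unicode object (decoding byte strings first), and serialize with an explicit encoding. The element name and text are placeholders.

import xml.dom.minidom as minidom

doc = minidom.Document()
root = doc.createElement('greeting')
doc.appendChild(root)

raw = 'Hell\xc3\xb6'                                       # byte string, UTF-8 encoded
root.appendChild(doc.createTextNode(raw.decode('utf-8')))  # decode before inserting

print doc.toxml('utf-8')  # <?xml version="1.0" encoding="utf-8"?><greeting>...</greeting>
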
appengine, urlfetch, and the content-length header
I have a Google App Engine app requesting pages from another server using urllib2 POSTs. I recently enabled gzip compression on the other server, which runs Apache2, and the App Engine page requests started failing with a KeyError indicating 'content-length' is not in the headers. I am not explicitly declaring gzip as an accepted encoding in my requests from App Engine, but it is possible App Engine is adding that header. Googling has not turned up any clear indication that App Engine's urlfetch implicitly adds a header to accept gzip encoding. Apache2, if I recall correctly, omits the content-length header when the response is compressed, but that should not affect non-compressed responses from the same server. Does anybody have any insight into what is happening, and why the content-length header is being omitted?
[ "According to this thread:\nhttp://groups.google.com/group/google-appengine-java/browse_thread/thread/5c5f2a7e2d2beadc?pli=1) \non an Appengine Java newsgroup, Google does generally set the 'Accept-Encoding: gzip' header on urlfetch requests, and then decompresses (ungzips) the input before handing the data to the script.\nSo apparently, Appengine adds an accept-encoding: gzip header implicitly on the requests way out to the internet, and decompresses the response, but does not insert a content-length into the headers for the decompressed data size. So if the outside server will provide gzipped responses, the net result to the Appengine script (after all the pre- and post- processing behavior by Appengine described above) is the loss of the content-length header.\n" ]
[ 2 ]
[]
[]
[ "apache2", "google_app_engine", "post", "python" ]
stackoverflow_0001590844_apache2_google_app_engine_post_python.txt
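
A sketch of the practical workaround on the App Engine side: stop trusting the header and measure the body you actually received. Request construction is elided; 'request' stands in for whatever urllib2.Request you were already sending.

import urllib2

response = urllib2.urlopen(request)   # 'request' is your existing urllib2.Request
body = response.read()                # the already-decompressed body

length = response.info().getheader('content-length')
if length is None:                    # header stripped by the gzip round-trip
    length = len(body)
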
In Python, how can I do a non-blocking system call?
In Python, is it possible to do a non-blocking system call without forking off a thread? I.e., can I avoid:

import thread
thread.start_new_thread(os.system, ('cmd',))
[ "Use the subprocess module (Popen) and have the result written to a file. You can either \"wait\" for the subprocess to terminate or proceed with other business and poll for the result in the file etc.\n" ]
[ 10 ]
[]
[]
[ "python" ]
stackoverflow_0001613713_python.txt
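
A minimal sketch of the accepted answer's pattern; 'long_running_cmd' and 'result.txt' are placeholders for your own command and output file.

import subprocess

out = open('result.txt', 'w')
proc = subprocess.Popen('long_running_cmd', shell=True, stdout=out)  # returns immediately

# ... carry on with other work, polling occasionally ...
if proc.poll() is None:
    print 'still running'
else:
    print 'finished with exit code', proc.returncode

proc.wait()   # or block here once there is nothing left to do
out.close()
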
Daemon dies unexpectedly
I have a Python script which I daemonise using this code:

def daemonise():
    from os import fork, setsid, umask, dup2
    from sys import stdin, stdout, stderr
    if fork(): exit(0)
    umask(0)
    setsid()
    if fork(): exit(0)
    stdout.flush()
    stderr.flush()
    si = file('/dev/null', 'r')
    so = file('daemon-%s.out' % os.getpid(), 'a+')
    se = file('daemon-%s.err' % os.getpid(), 'a+')
    dup2(si.fileno(), stdin.fileno())
    dup2(so.fileno(), stdout.fileno())
    dup2(se.fileno(), stderr.fileno())
    print 'this file has the output from daemon%s' % os.getpid()
    print >> stderr, 'this file has the errors from daemon%s' % os.getpid()

The script runs in a while True: try: funny_code(); sleep(10); except: pass loop. It runs fine for a few hours and then dies unexpectedly. How do I go about debugging such demons, err daemons?

[Edit] Without starting a process like monit, is there a way to write a watchdog in Python which can watch my other daemons and restart them when they go down? (Who watches the watchdog?)
[ "You really should use python-daemon for this which is a library that implements PEP 3141 for a standard daemon process library. This way you will ensure that your application does all the right things for whichever type of UNIX it is running under. No need to reinvent the wheel.\n", "Why are you silently swallowing all exceptions? Try to see what exceptions are being caught by this:\nwhile True:\n try:\n funny_code()\n sleep(10)\n except BaseException, e:\n print e.__class__, e.message\n pass\n\nSomething unexpected might be happening which is causing it to fail, but you'll never know if you blindly ignore all the exceptions.\nI recommend using supervisord (written in Python, very easy to use) for daemonizing and monitoring processes. Running under supervisord you would not have to use your daemonise function.\n", "What I've used in my clients is daemontools. It is a proven, well tested tool to run anything daemonized.\nYou just write your application without any daemonization, to run on foreground; Then create a daemontools service folder for it, and it will discover and automatically restart your application from now on, and every time the system restarts.\nIt can also handle log rotation and stuff. Saves a lot of tedious, repeated work.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "daemon", "python" ]
stackoverflow_0001599798_daemon_python.txt
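
A sketch of the first answer's suggestion combined with the second answer's logging advice; it assumes the python-daemon package is installed, and funny_code is the function from the question.

import logging
import time
import daemon   # the python-daemon package (PEP 3143)

logging.basicConfig(filename='daemon.log', level=logging.INFO)

def main():
    while True:
        try:
            funny_code()
        except Exception:
            logging.exception('funny_code blew up')   # record why it died
        time.sleep(10)

with daemon.DaemonContext():
    main()
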
Is there an easy way in Python to extrapolate data points into the future?
I have a simple numpy array; for every date there is a data point. Something like this:

>>> import numpy as np
>>> from datetime import date
>>> x = np.array([(date(2008,3,5), 4800),
...               (date(2008,3,15), 4000),
...               (date(2008,3,20), 3500),
...               (date(2008,4,5), 3000)])

Is there an easy way to extrapolate the data points into the future: date(2008,5,1), date(2008,5,20), etc.? I understand it can be done with mathematical algorithms, but here I am looking for low-hanging fruit. Actually I like what numpy.linalg.solve does, but it does not look applicable to extrapolation. Maybe I am absolutely wrong.

To be more specific, I am building a burn-down chart (an XP term): x = date and y = volume of work to be done. I have the sprints that are already done, and I want to visualise how the future sprints will go if the current situation persists, and finally predict the release date. The nature of 'volume of work to be done' is that it always goes down on a burn-down chart. I also want the extrapolated release date: the date when the volume becomes zero.

This is all for showing the dev team how things are going. Precision is not so important here :) the motivation of the dev team is the main factor, so I am absolutely fine with a very approximate extrapolation technique.
[ "It's all too easy for extrapolation to generate garbage; try this. \nMany different extrapolations are of course possible;\nsome produce obvious garbage, some non-obvious garbage, many are ill-defined.\n\n\n\"\"\" extrapolate y,m,d data with scipy UnivariateSpline \"\"\"\nimport numpy as np\nfrom scipy.interpolate import UnivariateSpline\n # pydoc scipy.interpolate.UnivariateSpline -- fitpack, unclear\nfrom datetime import date\nfrom pylab import * # ipython -pylab\n\n__version__ = \"denis 23oct\"\n\n\ndef daynumber( y,m,d ):\n \"\"\" 2005,1,1 -> 0 2006,1,1 -> 365 ... \"\"\"\n return date( y,m,d ).toordinal() - date( 2005,1,1 ).toordinal()\n\ndays, values = np.array([\n (daynumber(2005,1,1), 1.2 ),\n (daynumber(2005,4,1), 1.8 ),\n (daynumber(2005,9,1), 5.3 ),\n (daynumber(2005,10,1), 5.3 )\n ]).T\ndayswanted = np.array([ daynumber( year, month, 1 )\n for year in range( 2005, 2006+1 )\n for month in range( 1, 12+1 )])\n\nnp.set_printoptions( 1 ) # .1f\nprint \"days:\", days\nprint \"values:\", values\nprint \"dayswanted:\", dayswanted\n\ntitle( \"extrapolation with scipy.interpolate.UnivariateSpline\" )\nplot( days, values, \"o\" )\nfor k in (1,2,3): # line parabola cubicspline\n extrapolator = UnivariateSpline( days, values, k=k )\n y = extrapolator( dayswanted )\n label = \"k=%d\" % k\n print label, y\n plot( dayswanted, y, label=label ) # pylab\n\nlegend( loc=\"lower left\" )\ngrid(True)\nsavefig( \"extrapolate-UnivariateSpline.png\", dpi=50 )\nshow()\n\nAdded: a Scipy ticket says,\n\"The behavior of the FITPACK classes in\nscipy.interpolate is much more complex than the docs would lead one to believe\" --\nimho true of other software doc too.\n", "A simple way of doing extrapolations is to use interpolating polynomials or splines: there are many routines for this in scipy.interpolate, and there are quite easy to use (just give the (x, y) points, and you get a function [a callable, precisely]).\nNow, as as been pointed in this thread, you cannot expect the extrapolation to be always meaningful (especially when you are far from your data points) if you don't have a model for your data. However, I encourage you to play with the polynomial or spline interpolations from scipy.interpolate to see whether the results you obtain suit you.\n", "The mathematical models are the way to go in this case. For instance, if you have only three data points, you can have absolutely no indication on how the trend will unfold (could be any of two parabola.)\nGet some statistics courses and try to implement the algorithms. Try Wikibooks.\n", "You have to swpecify over which function you need extrapolation.\nThan you can use regression http://en.wikipedia.org/wiki/Regression_analysis to find paratmeters of function. And extrapolate this in future.\nFor instance:\ntranslate dates into x values and use first day as x=0 for your problem the values shoul be aproximatly\n(0,1.2), (400,1.8),(900,5.3)\nNow you decide that his points lies on function of type\na+bx+cx^2\nUse the method of least squers to find a,b and c\nhttp://en.wikipedia.org/wiki/Linear_least_squares\n(i will provide full source, but later, beacuase I do not have time for this)\n" ]
[ 17, 4, 3, 1 ]
[]
[]
[ "burndowncharts", "interpolation", "numpy", "python", "spline" ]
stackoverflow_0001599754_burndowncharts_interpolation_numpy_python_spline.txt
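
For the burn-down use case specifically, a first-degree least-squares fit is probably the lowest-hanging fruit, and it yields the projected release date (the zero crossing) directly. A sketch using the numbers from the question:

import numpy as np
from datetime import date, timedelta

start = date(2008, 3, 5)
days = np.array([0., 10., 15., 31.])            # days since the first data point
work = np.array([4800., 4000., 3500., 3000.])   # volume of work remaining

slope, intercept = np.polyfit(days, work, 1)    # straight-line least-squares fit
release = start + timedelta(days=float(-intercept / slope))  # where the line hits zero
print release
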
How does `setup.py sdist` work?
I'm trying to make a source distribution of my project with setup.py sdist. I already have a functioning setup.py that I can install with. But when I do the sdist, all I get is another my_project folder inside my my_project folder, a MANIFEST file I have no interest in, and a zip file which contains two text files, and not my project.

What am I doing wrong? Where is the documentation on sdist?

Update: Here's my setup.py:

#!/usr/bin/env python

import os
from distutils.core import setup
import distutils
from general_misc import package_finder

try:
    distutils.dir_util.remove_tree('build', verbose=True)
except:
    pass

my_long_description = \
'''\
GarlicSim is a platform for writing, running and analyzing simulations. It can handle any kind of simulation: Physics, game theory, epidemic spread, electronics, etc.
'''

my_packages = package_finder.get_packages('', include_self=True, recursive=True)

setup(
    name='GarlicSim',
    version='0.1',
    description='A Pythonic framework for working with simulations',
    author='Ram Rachum',
    author_email='[email protected]',
    url='http://garlicsim.org',
    packages=my_packages,
    package_dir={'': '..'},
    license="LGPL 2.1 License",
    long_description=my_long_description,
)

try:
    distutils.dir_util.remove_tree('build', verbose=True)
except:
    pass
[ "Tarek Ziade explained this, and related software packaging tools, in this article (broken original link) called Writing a Package in Python.\nBasically, it creates a simple package by\ncreating a release tree where everything needed to run the package is copied. This tree is then archived in one or many archived files (often, it just creates one tar ball). The archive is basically a copy of the source tree.\n", "the \"sdist\" command is for creating a \"source\" distribution of a package. Usually, one would combine this command with the \"upload\" command to distribute the package through Pypi (for example).\n" ]
[ 10, 8 ]
[]
[]
[ "distutils", "python" ]
stackoverflow_0001614086_distutils_python.txt
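
The nested my_project folder is a plausible symptom of package_dir pointing outside the directory sdist is run from. A sketch of the conventional layout, assuming the package is named garlicsim and setup.py sits next to (not inside) the package directory:

# setup.py, in the project root, with the garlicsim/ package beside it
from distutils.core import setup

setup(
    name='GarlicSim',
    version='0.1',
    packages=['garlicsim'],   # plain package names, no package_dir tricks
)

Running python setup.py sdist from that root should then produce dist/GarlicSim-0.1.tar.gz (or a .zip on Windows) containing the package, plus the auto-generated MANIFEST.
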
How do I add text describing the code into a Python source file?
When writing code in Python, how can you write something next to it that explains what the code is doing, but which doesn't affect the code?
[ "I think you're talking about comments?\nThere are plain comments, which start with #:\nreturn sys.stdin.readline() # This is a comment\n\nAnd also Docstrings, which document modules, classes, methods and functions:\ndef getline():\n \"\"\"This is a docstring\"\"\"\n return sys.stdin.readline()\n\nUnlike many other languages, Python does not have a multiline comment syntax (though docstrings can be multiline).\n", "Write a comment? Python comments start with #.\n", "You mean comments? Use the # character before your comment.\nhttp://en.wikibooks.org/wiki/Python_Programming/Source_Documentation_and_Comments\n\n# This is a comment\nprint(\"Hello comment!\")\n\n" ]
[ 17, 2, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001614190_python.txt
restore registry from file
I am trying to migrate Microsoft Office settings from one system to another by backing up the Office registry keys and restoring them on the destination machine using Python. I am able to do the saving part, but when I try to restore on the destination machine, overwriting the existing Office settings, I get an error. This is the code for restoring:

import os, sys
import _winreg
import win32api
import win32con
import win32security

priv_flags = win32security.TOKEN_ADJUST_PRIVILEGES | win32security.TOKEN_QUERY
hToken = win32security.OpenProcessToken(win32api.GetCurrentProcess(), priv_flags)
backup_privilege_id = win32security.LookupPrivilegeValue(None, "SeBackupPrivilege")
restore_privilege_id = win32security.LookupPrivilegeValue(None, "SeRestorePrivilege")
win32security.AdjustTokenPrivileges(
    hToken, 0,
    [
        (backup_privilege_id, win32security.SE_PRIVILEGE_ENABLED),
        (restore_privilege_id, win32security.SE_PRIVILEGE_ENABLED)
    ])

result = _winreg.LoadKey(_winreg.HKEY_CURRENT_USER,
                         r"Software\Microsoft\Office", ur"Office.registry")
print "Restored Office Settings"

Here "Office.registry" is the backed-up hive HKEY_CURRENT_USER\Software\Microsoft\Office. I am getting WindowsError: [Errno 5] Access is denied. Please help me identify my mistake.
[ "The registry system has a built-in method for updating registry keys by creating and importing a .reg text file. I suggest that you try to write your changes to a .reg file and import that.\nAlso, you don't mention what Windows version you are using. In the newer versions, the permission system is rather more complex than it used to be.\n" ]
[ 1 ]
[]
[]
[ "python", "registry" ]
stackoverflow_0001074467_python_registry.txt
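
A sketch of the .reg round-trip the answer suggests, driven from Python with the stock reg.exe tool. The file name is a placeholder, and the import side may need an elevated prompt on newer Windows versions.

import subprocess

# On the source machine: dump the hive to a text .reg file.
subprocess.check_call(['reg', 'export',
                       r'HKCU\Software\Microsoft\Office',
                       'office.reg', '/y'])        # /y: overwrite without asking

# On the destination machine: merge it back in.
subprocess.check_call(['reg', 'import', 'office.reg'])
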
Pythoncard item setsize
Below is the base class of my PythonCard application:

class MyBackground(model.Background):

    def on_initialize(self, event):
        # if you have any initialization
        # including sizer setup, do it here
        self.setLayout()

    def setLayout(self):
        sizer1 = wx.BoxSizer(wx.VERTICAL)   # main sizer
        for item in self.components.itervalues():
            item.SetSize(item.GetBestSize())
            print item.GetBestSize(), item.GetSize()   # here
            sizer1.Add(item, 0, wx.ALL, 10)
        sizer1.Fit(self)
        self.panel.SetSizer(sizer1)
        self.panel.Layout()
        self.visible = 1

It uses the resource file with the content below:

{'application': {'type': 'Application',
    'name': 'Template',
    'backgrounds': [
        {'type': 'Background',
         'name': 'bgTemplate',
         'title': u'Standard Template with no menus',
         'size': (800, 600),
         'statusBar': 1,
         'style': ['wx.MINIMIZE_BOX', 'wx.CLOSE_BOX', 'wx.MAXIMIZE_BOX',
                   'wx.FRAME_SHAPED', 'wx.CAPTION', 'wx.DEFAULT_FRAME_STYLE',
                   'wx.FULL_REPAINT_ON_RESIZE', 'wx.HW_SCROLLBAR_AUTO'],
         'components': [
             {'backgroundColor': '&H00FFFFFF&',
              'name': 'MinMax0',
              'position': (1080, 9900),
              'size': (732, 220),
              'text': '10000',
              'type': 'TextField'}]}]}

The line that I have marked with the comment saying 'here' prints (80, 21) (732, 220), which I expected to be (80, 21) (80, 21). How can I set the size of the components in a PythonCard application?
[ "Why 80,21?\nYou told it to make the item 732,220 and that's what it did.\nOr is there something else that you didn't tell us?\n" ]
[ 0 ]
[]
[]
[ "python", "pythoncard", "wxpython" ]
stackoverflow_0000441815_python_pythoncard_wxpython.txt
Dice Roller using Tkinter
Thank you to everybody who helped answer my last question. My friend took the code and attempted to use Tkinter to make a box that we could use to make things nicer-looking, but he has been unable to integrate the dice roller from the last question with the Tkinter code. Any help or ideas on getting the dice roller into the code below would be wonderful!

from Tkinter import *

def callme():
    label3 = Label(root, text = 'Haha! I lied!')
    label3.pack(padx = 10, pady = 10)

root = Tk()
label = Label(root, text = 'How many dice do you want to roll?')
label.pack(padx = 10, pady = 10)
entry = Entry(root, bg = 'white').pack(padx = 10, pady = 10)
label2 = Label(root, text = 'How many dice do you want to roll?')
label2.pack(padx = 10, pady = 10)
entry = Entry(root, bg = 'white').pack(padx = 10, pady = 10)
frame = Frame(root, bg = 'yellow')
button = Button(frame, command = callme, text = 'Roll!', width = 5, height = 2)
frame.pack()
button.pack(padx = 10, pady = 10)
root.mainloop()
[ "Because half the fun is trying to work it out yourself I'll give you a few hints instead of a complete program.\nYou should store the variables for your input entries, so you can use them later to get the values out again, but don't do this:\nentry = Entry(root,bg = 'white').pack(padx = 10, pady = 10)\n\nThat doesn't quite do what you expect, because you're also calling pack() it won't return the entry widget, you'll end up with NoneType instead beacuse pack() returns nothing. Use the following:\nentry = Entry(root,bg = 'white')\nentry.pack(padx = 10, pady = 10)\n\nAlso, use two separate entry variables, because you want one for your number of rolls, and one for the number sides, for example:\nentry_sides = Entry(root,bg = 'white')\nentry_sides.pack(padx = 10, pady = 10)\n\nIn your callme() function, you can then get those values out again, to use them to roll the dice:\nnumber_of_sides = int(entry_sides.get())\n\nAnd finally, instead of creating label3 inside callme, create it as part of the frame, as you did with the other labels, and then update its value after you calculate the value of the dice roll:\ndef callme():\n # get dice total here using variables from entry\n label3.config(text = str(dice_total))\n\n" ]
[ 2 ]
[]
[]
[ "python", "random", "tkinter" ]
stackoverflow_0001614244_python_random_tkinter.txt
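
Putting the answer's hints together, the callback might end up looking something like this; entry_dice, entry_sides and label3 are the widgets the answer names, and this is a sketch rather than the complete program.

import random

def callme():
    rolls = int(entry_dice.get())
    sides = int(entry_sides.get())
    total = sum(random.randint(1, sides) for _ in range(rolls))
    label3.config(text = str(total))
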
How does indexing a list with a tuple work?
I am learning Python and came across this example:

W = ((0,1,2),(3,4,5),(0,4,8),(2,4,6))
b = ['a','b','c','d','e','f','g','h','i']
for row in W:
    print b[row[0]], b[row[1]], b[row[2]]

which prints:

a b c
d e f
a e i
c e g

I am trying to figure out why! I get that, for example, the first time through, the expanded version is:

print b[(0,1,2)[0]], b[(0,1,2)[1]], b[(0,1,2)[2]]

But I don't understand how the (0,1,2) is interacting. Can anyone offer an explanation?

(This is an abbreviated version of some code for a tic-tac-toe game, and it works well, I just don't get this part.)
[ "In shot, the (0,1,2) does nothing. Its a tuple and can be indexed just like a list, so b[(0,1,2)[0]] becomes b[0] since (0,1,2)[0] == 0.\nIn the first step Python does b[row[0]] → b[(0,1,2)[0]] → b[0] → 'a' \nBtw, to get multiple items from a sequence at once you can use a operator:\nfrom operator import itemgetter\nfor row in W:\n print itemgetter(*row)(b)\n\n", "it iterates over a tuple of tuples, each row is a three-element tuple, when printing it accesses three elements of the b list by index, which is what row tuple contains.\nprobably, a slightly less cluttered way to do this is:\nfor f, s, t in W:\n print b[f], b[s], b[t]\n\n", "Indexing a tuple just extracts the nth element, just as when indexing an array. That is, the expanded version\nprint b[(0,1,2)[0]], b[(0,1,2)[1]], b[(0,1,2)[2]]\n\nis equal to\nprint b[0], b[1], b[2]\n\nIE, the 0th element of the (0, 1, 2) tuple ((0, 1, 2)[0]) is 0.\n", "for row in W:\nfirst tuple placed into row is (0,1,2)\nin other words, W[0] == (0,1,2)\nTherefore, since `row` == (0,1,2), then row[0] == 0\n\nSo the [0]th element of b == 'a'\nb[0] == 'a'\n\nand so on... \nb[1] == 'b'\nb[2] == 'c'\n\n", "Try to write down the values of all variables in each step: the result you get is right.\ninteraction 1:\n\nrow is (0,1,2)\nb[row[0]], b[row[1]], b[row[2]] is b[(0,1,2)[0], (0,1,2)[1], (0,1,2)[2]], == b[0], b[1], b[2]\n\ninteraction 2:\n\nrow is (3,4,5)\nb[row[0]], b[row[1]], b[row[2]] is b[3], b[4], b[5]\n\n", "A Python interactive shell will help you see what is going on:\nIn [78]: W = ((0,1,2),(3,4,5),(0,4,8),(2,4,6))\n\nIn [79]: b = ['a','b','c','d','e','f','g','h','i']\n\nIn [81]: row=W[0] # The first time throught the for-loop, row equals W[0]\n\nIn [82]: row\nOut[82]: (0, 1, 2)\n\nIn [83]: row[0]\nOut[83]: 0\n\nIn [84]: b[row[0]]\nOut[84]: 'a'\n\nIn [85]: b[row[1]]\nOut[85]: 'b'\n\nIn [86]: b[row[2]]\nOut[86]: 'c'\n\n" ]
[ 4, 4, 0, 0, 0, 0 ]
[]
[]
[ "indexing", "list", "python", "tuples" ]
stackoverflow_0001614613_indexing_list_python_tuples.txt
Copying values from a dictionary into an object in Python
I have a dictionary that I would like to iterate through and copy the key-value pairs into an object in Python. The dictionary is POST, and the object is a Model (in Django; perhaps Django has a better way to do this). In PHP, I'd be able to use variable assignments:

foreach($post as $key => $value) {
    $my_model->$key = $value;
}

And in JavaScript I could treat the object with array assignments:

for(var key in post) {
    my_model[key] = post[key];
}

However, I can't seem to do so in Python. The only way I've seen is by using the object's __dict__ property, and it feels ever so slightly dirty. Plus it can raise KeyErrors.
[ "for key, value in post.iteritems():\n setattr(my_model, key, value)\n\n", "You can use setattr, but it's likely the wrong way to do it in your context. You should look at Django's ModelForms documentation first.\n" ]
[ 5, 4 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001614804_django_python.txt
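
A sketch of the ModelForm route the second answer points to, which validates the POST data instead of blindly setattr-ing it; MyModel and my_model are placeholders for your model class and instance.

from django.forms import ModelForm

class MyModelForm(ModelForm):
    class Meta:
        model = MyModel          # your model class

def my_view(request):
    form = MyModelForm(request.POST, instance=my_model)  # bind POST to the instance
    if form.is_valid():          # runs field validation first
        form.save()
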
How do I do 'from foo import *' using Python's __import__ function
I want to replace settings.py in my Django system with a module implemented as a directory settings containing __init__.py. This will try to import a module named after the server, thus allowing for per-server settings. If I don't know the name of a module before I import it then I can't use the import keyword but must instead use the __import__ function. But this does not add the contents of the module to the settings module. I need the equivalent of from MACHINE_NAME import *. Or I need a way to iterate over vars(m) (where m is the loaded module) and add them to the current namespace. But I can't work out how to refer to the current namespace in order to make the assignment. In other words, I can't use setattr(x, ..) or modify x.__dict__, because I don't know what to use for x. I can't think of much else to try now apart from using exec. This seems a little feeble to me. Am I missing some aspect of Pythonic introspection that would allow me to manipulate the current scope while still in it?
[ "For similar situation where based on lang setting I import different messages in messages.py module it is something like\n# set values in current namespace\nfor name in vars(messages):\n v = getattr(messages, name)\n globals()[name] = v\n\nBtw why do you want to create a package for settings.py? whatever you want to do can be done in settings.py directly?\n" ]
[ 1 ]
[]
[]
[ "import", "python" ]
stackoverflow_0001614884_import_python.txt
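
A sketch of the whole settings/__init__.py, combining __import__ with the globals() trick from the answer. Deriving the module name from the host name (and the non-empty fromlist, which makes __import__ return the submodule rather than the package) are assumptions about your setup.

# settings/__init__.py
import platform

machine = platform.node().split('.')[0].replace('-', '_')
module = __import__('settings.%s' % machine, fromlist=['anything'])

for name in dir(module):
    if not name.startswith('_'):          # skip the module's own dunders
        globals()[name] = getattr(module, name)
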
In Python, can I print 3 lists in order by index number?
So I have three lists:

['this', 'is', 'the', 'first', 'list']
[1, 2, 3, 4, 5]
[0.01, 0.2, 0.3, 0.04, 0.05]

Is there a way that would allow me to print the values in these lists in order by index? E.g.:

this, 1, 0.01 (all items at list[0])
is, 2, 0.2 (all items at list[1])
the, 3, 0.3 (all items at list[2])
first, 4, 0.04 (all items at list[3])
list, 5, 0.05 (all items at list[4])

The number of items in each list varies each time the script is run, but the lists always end up with the same number of values as each other. So, one time the script could create three arrays with 30 items each; another time it could create only 15 values in each; etc.
[ "What you are probably looking for is called zip:\n>>> x = ['this', 'is', 'the', 'first', 'list']\n>>> y = [1, 2, 3, 4, 5]\n>>> z = [0.01, 0.2, 0.3, 0.04, 0.05]\n>>> zip(x,y,z)\n[('this', 1, 0.01), ('is', 2, 0.20000000000000001), ('the', 3, 0.29999999999999999), ('first', 4, 0.040000000000000001), ('list', 5, 0.050000000000000003)]\n>>> for (a,b,c) in zip(x,y,z):\n... print a, b, c\n... \nthis 1 0.01\nis 2 0.2\nthe 3 0.3\nfirst 4 0.04\nlist 5 0.05\n\n", "Use zip\nfor items in zip(L1, L2, L3):\n print items\n\nitems will be a tuple with a value from each list, in order.\n", "lists = ( ['this', 'is', 'the', 'first', 'list'], \n [1, 2, 3, 4, 5], \n [0.01, 0.2, 0.3, 0.04, 0.05])\nprint zip(*lists)\n\nzips the lists together and stops when the shortest list runs out of items.\n", "This is the most immediately obvious way (to a python newb), and there is likely a better way, but here goes:\n#Your list of lists.\nuberlist = ( list1, list2, list3 )\n\n#No sense in duplicating this definition multiple times.\n#Define it up front.\nuberRange = range( len( uberList ) );\n\n#Since each will be the same length, you can use one range.\nfor i in range( len( list1 ) ):\n # Iterate through the sub lists.\n for j in uberRange:\n #Output\n print uberlist[ j ][ i ];\n\n" ]
[ 9, 3, 1, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001615289_list_python.txt
How to script a Google search without a Google license key?
I'm looking at the 'pygoogle' Python library for Google search, to call from my Python script. But Google doesn't give out license keys anymore, and it looks like pygoogle needs a license key to work. Does anyone have suggestions for libraries to use for scripting web searches? Language doesn't matter: it can be Python, Perl, Lisp, Forth, or whatever. Of course, it needs to get around the license key issue. Or would Yahoo, Excite, or any other sites provide APIs that allow scripting searches for free? Any comments are welcome. I'm new to web searches. Thanks. Jay
[ "They don't give out keys for the SOAP API anymore, because that's deprecated. But you can use their AJAX API, which is now the preferred interface. You can get a developer key here.\n", "Your question made me look. \n\nYahoo! now offers a new search service called BOSS which looks very interesting.\nMicrosoft offers access via the Bing API\n\n", "in case you are still looking - this is how (with ajax api in python): http://dcortesi.com/2008/05/28/google-ajax-search-api-example-python-code/\n" ]
[ 10, 0, 0 ]
[]
[]
[ "api", "perl", "python", "search" ]
stackoverflow_0001608442_api_perl_python_search.txt
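For reference, a minimal sketch of the approach the last answer links to, using only Python 2.6's standard library. The endpoint URL and the result field names ('responseData', 'results', 'url') are taken from Google's AJAX Search API documentation of the time, not from the answers, so verify them against the current docs before relying on this:

import json, urllib, urllib2

def google_search(query):
    # key-less JSON endpoint of the AJAX Search API (assumed, see above)
    url = ('http://ajax.googleapis.com/ajax/services/search/web?v=1.0&'
           + urllib.urlencode({'q': query}))
    data = json.load(urllib2.urlopen(url))
    # each result dict also carries 'titleNoFormatting', 'content', etc.
    return [r['url'] for r in data['responseData']['results']]

print google_search('stack overflow')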
Q: Python dictionary creation error I am trying to create a Python dictionary from a stored list. This first method works
>>> myList = []
>>> myList.append('Prop1')
>>> myList.append('Prop2')
>>> myDict = dict([myList])

However, the following method does not work
>>> myList2 = ['Prop1','Prop2','Prop3','Prop4']
>>> myDict2 = dict([myList2])
ValueError: dictionary update sequence element #0 has length 4; 2 is required

So I am wondering why the first method using append works but the second method doesn't work? Is there a difference between myList and myList2?
Edit: Checked again; myList2 actually has more than two elements. Updated the second example to reflect this.
A: You're doing it wrong.
The dict() constructor doesn't take a list of items (much less a list containing a single list of items), it takes an iterable of 2-element iterables. So if you changed your code to be:
myList = []
myList.append(["mykey1", "myvalue1"])
myList.append(["mykey2", "myvalue2"])
myDict = dict(myList)

Then you would get what you expect:
>>> myDict
{'mykey2': 'myvalue2', 'mykey1': 'myvalue1'}

The reason that this works:
myDict = dict([['prop1', 'prop2']])
{'prop1': 'prop2'}

Is because it's interpreting it as a list which contains one element which is a list which contains two elements.
Essentially, the dict constructor takes its first argument and executes code similar to this:
for key, value in myList:
    print key, "=", value
Python dictionary creation error
I am trying to create a Python dictionary from a stored list. This first method works >>> myList = [] >>> myList.append('Prop1') >>> myList.append('Prop2') >>> myDict = dict([myList]) However, the following method does not work >>> myList2 = ['Prop1','Prop2','Prop3','Prop4'] >>> myDict2 = dict([myList2]) ValueError: dictionary update sequence element #0 has length 4; 2 is required So I am wondering why the first method using append works but the second method doesn't work? Is there a difference between myList and myList2? Edit: Checked again; myList2 actually has more than two elements. Updated the second example to reflect this.
[ "You're doing it wrong.\nThe dict() constructor doesn't take a list of items (much less a list containing a single list of items), it takes an iterable of 2-element iterables. So if you changed your code to be:\nmyList = []\nmyList.append([\"mykey1\", \"myvalue1\"])\nmyList.append([\"mykey2\", \"myvalue2\"])\nmyDict = dict(myList)\n\nThen you would get what you expect:\n>>> myDict\n{'mykey2': 'myvalue2', 'mykey1': 'myvalue1'}\n\nThe reason that this works:\nmyDict = dict([['prop1', 'prop2']])\n{'prop1': 'prop2'}\n\nIs because it's interpreting it as a list which contains one element which is a list which contains two elements.\nEssentially, the dict constructor takes its first argument and executes code similar to this:\nfor key, value in myList:\n print key, \"=\", value\n\n" ]
[ 13 ]
[]
[]
[ "python", "python_2.5" ]
stackoverflow_0001615501_python_python_2.5.txt
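If the flat four-element list in the question was meant to become two key/value pairs, one common idiom (a sketch, not something from the answers above) is to slice the list into keys and values and zip them back together:

myList2 = ['Prop1', 'Prop2', 'Prop3', 'Prop4']

# pair every even-indexed element with the following odd-indexed one
myDict2 = dict(zip(myList2[::2], myList2[1::2]))
print myDict2   # {'Prop1': 'Prop2', 'Prop3': 'Prop4'}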
Q: Engines built on pygame I am currently writing an RTS for pygame and have written a number of modules on top of pygame for common things, such as efficient collision detection, a state system and more featured sprites. Now while writing games, except for Rect and Surface, I barely write any calls to Pygame itself.
When Googling around on this, I have not found anything that seems to actually take over from pygame for most of the game design. So, I was wondering if anyone else has used any engine built on pygame and found it good to work with.
I was also planning on releasing the game and engine as open source, so I want to know how much others have done, and what I should do or integrate into my engine.
A: Well Pygame itself is essentially a Python wrapper to SDL calls. I think in essence you would just be wrapping a wrapper.
You could always build your own adapter API, but what in particular about Pygame's API do you dislike so much that you feel you need to separate it from your code?
I think your generic methods, like custom collision detection, could be separated out into its own engine module, essentially separating it from the rest of your game code, but essentially you are just layering on top of Pygame with this approach, not wrapping it. 
EDIT:
Just as a follow up now that the question has changed. Short answer, no I'm not familiar with any. You may want to check out The Independent Gaming Source forums, those people seem fairly knowledgeable. Just make sure you post any answers you find back here.
Long answer, it could be possible that the "engine" space between Pygame, which handles calls to SDL and I think some additional logic (like collision detection), and the game code itself is too small a space for anyone to write a generic library for it. Essentially different types of games have different engine requirements, and the generic parts of the engine that are shared across all game types seems to be covered by Pygame itself.
If you have written an RTS game in Pygame then you certainly could separate the RTS engine from your game logic, it would probably help your overall design by separating concerns. Also, it may be worth releasing that engine piece so that other people wanting to write a RTS in Pygame could benefit from it.
Engines built on pygame
I am currently writing an RTS for pygame and have written a number of modules on top of pygame for common things, such as efficient collision detection, a state system and more featured sprites. Now while writing games, except for Rect and Surface, I barely write any calls to Pygame itself. When Googling around on this, I have not found anything that seems to actually take over from pygame for most of the game design. So, I was wondering if anyone else has used any engine built on pygame and found it good to work with. I was also planning on releasing the game and engine as open source, so I want to know how much others have done, and what I should do or integrate into my engine.
[ "Well Pygame itself is essentially a Python wrapper to SDL calls. I think in essence you would just be wrapping a wrapper.\nYou could always build your own adapter API, but what in particular about Pygame's API do you dislike so much that you feel you need to separate it from your code?\nI think your generic methods, like custom collision detection, could be separated out into its own engine module, essentially separating it from the rest of your game code, but essentially you are just layering on top of Pygame with this approach, not wrapping it. \nEDIT:\nJust as a follow up now that the question has changed. Short answer, no I'm not familiar with any. You may want to check out The Independent Gaming Source forums, those people seem fairly knowledge. Just make sure you post any answers you find back here.\nLong answer, it could be possible that the \"engine\" space between the Pygame, which handles calls to SDL and I think some additional logic (like collision detection), and the game code itself is too small a space for anyone to write a generic library for it. Essentially different types of games have different engine requirements, and the generic parts of the engine that are shared across all game types seems to be covered by Pygame itself.\nIf you have written an RTS game in Pygame then you certainly could separate the RTS engine from your game logic, it would probably help your overall design by separating concerns. Also, it may be worth releasing that engine piece so that other people wanting to write a RTS in Pygame could benefit from it.\n" ]
[ 6 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0001615693_pygame_python.txt
Q: Setting an object in the Django cache API fails due to pickle error I'm trying to manually set an object in the Django cache API but it fails (I think due to pickling?)
The object is given to me by a third party, my code is:
def index(request, template_name="mytemplate.htm"):
    user_list = cache.get("user_list_ds")
    if user_list is None:
        # this is the expensive bit I'm trying to cache
        # given to me by a third party
        user_list = graffiti.user_list("top", 100).responseObj().blocks()
        cache.set("user_list_ds", user_list, 10*60) # 10 minutes
    return render_to_response(template_name, { 'user_list' : user_list,}, context_instance = RequestContext(request))

When I run this I get an error;
Can't pickle <type 'etree._Element'>: import of module etree failed
in - cache.set("user_list_ds", user_list, 10*60) # 10 minutes
I'm very new to python, and I'm wondering how best to resolve this; do I need to pickle something first?
A: It appears that you need to install ElementTree, because the pickle operation tries and fails to import the etree module.
UPDATE: Looking at it further, are you actually trying to cache document nodes? If you're trying to cache the data from the node, you probably need to do some processing of the value you're currently storing in user_list.
Setting an object in the Django cache API fails due to pickle error
I'm trying to manually set an object in the Django cache API but it fails (I think due to pickling?) The object is given to me by a third party, my code is: def index(request, template_name="mytemplate.htm"): user_list = cache.get("user_list_ds") if user_list is None: # this is the expensive bit I'm trying to cache # given to me by a third party user_list = graffiti.user_list("top", 100).responseObj().blocks() cache.set("user_list_ds", user_list, 10*60) # 10 minutes return render_to_response(template_name, { 'user_list' : user_list,}, context_instance = RequestContext(request)) When I run this I get an error; Can't pickle <type 'etree._Element'>: import of module etree failed in - cache.set("user_list_ds", user_list, 10*60) # 10 minutes I'm very new to python, and I'm wondering how best to resolve this; do I need to pickle something first?
[ "It appears that you need to install ElementTree, because the pickle operation tries and fails to import the etree module.\nUPDATE: Looking at it further, are you actually trying to cache document nodes? If you're trying to cache the data from the node, you probably need to do some processing of the value you're currently storing in user_list.\n" ]
[ 2 ]
[]
[]
[ "django", "memcached", "pickle", "python" ]
stackoverflow_0001615721_django_memcached_pickle_python.txt
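Following the answer's suggestion to cache the data rather than the document nodes, a hedged sketch of that idea (the graffiti call comes from the question; the node layout is an assumption, not a known API) would flatten each etree element into plain picklable types before calling cache.set:

def flatten(node):
    # keep only built-in types, which pickle (and hence the cache) can handle
    return {'tag': node.tag,
            'text': node.text,
            'attrib': dict(node.attrib),
            'children': [flatten(child) for child in node]}

user_list = [flatten(node) for node in
             graffiti.user_list("top", 100).responseObj().blocks()]
cache.set("user_list_ds", user_list, 10*60)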
Q: How to programmatically change urls of images in word documents I have a set of word documents that contain a lot of non-embedded images in them. The urls that the images point to no longer exist. I would like to programmatically change the domain name of the url to something else. How can I go about doing this in Java or Python?
A: This is the sort of thing that VBA is for:
Sub HlinkChanger()
Dim oRange As Word.Range
Dim oFld As Field
Dim link As Variant
With ActiveDocument
.Range.AutoFormat
For Each oRange In .StoryRanges
    For Each oFld In oRange.Fields
        If oFld.Type = wdFieldHyperlink Then
            For Each link In oFld.Result.Hyperlinks
                ' the hyperlink is stored in link.Address
                ' strip the first x characters of the URL
                ' and replace them with your new URL
            Next link
        End If
    Next oFld
    Set oRange = oRange.NextStoryRange
Next oRange
End With
End Sub

A: Perhaps the Microsoft Office Word binary file format specification could help you here, though someone who already did stuff like this might come up with a better answer.
A: You want to do this in Java or Python. Try OpenOffice.
In OpenOffice, you can insert Java or Python code as a "Makro".
I'm sure there will be a possibility to change the image URLs.
A: The VBA answer is the closest because this is best done using the Microsoft Word COM API. However, you can use this just as well from Python. I've used it myself to import data into a database from hundreds of forms that were Word Documents.
This article explains the basics. Note that even though it creates a class wrapper for the WordDocument COM object, you don't need to do this if you don't want to. You can just access the COM API directly.
For documentation of the WordDocument COM API, open a word document, press Alt-F11 to open the VBA editor, and then F2 to view the object browser. This allows you to browse through all of the objects and the methods that they provide. An introduction to Python and the COM object model is found here. 
How to programmatically change urls of images in word documents
I have a set of word documents that contain a lot of non-embedded images in them. The urls that the images point to no longer exist. I would like to programmatically change the domain name of the url to something else. How can I go about doing this in Java or Python?
[ "This is the sort of thing that VBA is for:\nSub HlinkChanger()\nDim oRange As Word.Range\nDim oField As Field\nDim link As Variant\nWith ActiveDocument\n.Range.AutoFormat\nFor Each oRange In .StoryRanges\n For Each oFld In oRange.Fields\n If oFld.Type = wdFieldHyperlink Then\n For Each link In oFld.Result.Hyperlinks\n // the hyperlink is stored in link.Address\n // strip the first x characters of the URL\n // and replace them with your new URL\n Next link\n End If\n Next oFld\n Set oRange = oRange.NextStoryRange\nNext oRange\n\n", "Perhaps the Microsoft Office Word binary file format specification could help you here, though someone who already did stuff like this might come up with a better answer.\n", "You want to do this in Java or Python. Try OpenOffice.\nIn OpenOffice, you can insert Java or Python code as a \"Makro\".\nI'm sure there will be a possibility to change the image URLs.\n", "The VBA answer is the closest because this is best done using the Microsoft Word COM API. However, you can use this just as well from Python. I've used it myself to import data into a database from hundreds of forms that were Word Documents.\nThis article explains the basics. Note that even though it creates a class wrapper for the WordDocument COM object, you don't need to do this if you don't want to. You can just access the COM API directly.\nFor documentation of the WordDocument COM API, open a word document, press Alt-F11 to open the VBA editor, and then F2 to view the object browser. This allows you to browse through all of the objects and the methods that they provide. An introduction to Python and the COM object model is found here. \n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "image", "java", "ms_word", "python" ]
stackoverflow_0000428308_image_java_ms_word_python.txt
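On the Python side, the COM route the last answer describes could look roughly like the sketch below. It assumes pywin32 is installed, that the pictures are linked InlineShapes, and uses made-up domains and a made-up path; documents with other link types (fields, floating shapes) would need extra cases:

import win32com.client

OLD, NEW = 'http://old.example.com', 'http://new.example.com'  # hypothetical

word = win32com.client.Dispatch('Word.Application')
doc = word.Documents.Open(r'C:\docs\report.doc')               # hypothetical
for shape in doc.InlineShapes:
    try:
        link = shape.LinkFormat    # raises for shapes that are not linked
    except Exception:
        continue
    if OLD in link.SourceFullName:
        link.SourceFullName = link.SourceFullName.replace(OLD, NEW)
doc.Save()
word.Quit()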
Q: Resources (resx) with Python Do you know of any Python module for resources (resx files) manipulation?
P.S.: I know I could write a custom wrapper on top of base XML processor available, I'm just checking out before going to hack my own code...
A: This question Resources (resx) maintenance in big projects has an answer pointing to some .NET source code for a tool to manage RESX resources. Since IronPython can interface with any existing .NET objects written in C#, you should be able to adapt that RESX tool source code into an object that you can then use in IronPython.
Resources (resx) with Python
Do you know of any Python module for resources (resx files) manipulation? P.S.: I know I could write a custom wrapper on top of base XML processor available, I'm just checking out before going to hack my own code...
[ "This question Resources (resx) maintenance in big projects has an answer pointing to some .NET source code for a tool to manage RESX resources. Since IronPython can interface with any existing .NET objects written in C#, you should be able to adapt that RESX tool source code into an object that you can then use in IronPython.\n" ]
[ 1 ]
[]
[]
[ "module", "python", "resources", "resx" ]
stackoverflow_0000812644_module_python_resources_resx.txt
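For the custom-wrapper route the question mentions, a resx file's string entries live in <data name="..."><value>...</value></data> elements, so a minimal reader with the standard library could be sketched as follows (reading only; writing back would need more care with the resx headers):

from xml.etree import ElementTree

def read_resx(path):
    # map each data element's name attribute to its <value> text
    resources = {}
    for data in ElementTree.parse(path).findall('data'):
        value = data.find('value')
        resources[data.get('name')] = value.text if value is not None else None
    return resources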
Q: copy worksheet from one spreadsheet to another Is it possible to copy spreadsheets with the gdata API, or worksheets from one spreadsheet to another?
For now I copy all cells from one worksheet to another. One cell per request. It is too slow. I read about "cell batch processing" and wrote this code:
src_key = 'rFQqEnFWuR6qoU2HEfdVuTw'; dst_key = 'rPCVJ80MHt7K2EVlXNqytLQ'
sheetcl = gdata.spreadsheet.service.SpreadsheetsService('[email protected]','p')
dcs = gdata.docs.service.DocsService('[email protected]', 'p')

src_worksheets = sheetcl.GetWorksheetsFeed(src_key)
dst_worksheets = sheetcl.GetWorksheetsFeed(dst_key)

for src_worksheet, dst_worksheet in zip(src_worksheets.entry, dst_worksheets.entry):
    sheet_id = src_worksheet.id.text.split('/')[-1]
    dst_sheet_id = dst_worksheet.id.text.split('/')[-1]
    cells_feed = sheetcl.GetCellsFeed(src_key, sheet_id)
    dst_cells_feed = sheetcl.GetCellsFeed(dst_key, dst_sheet_id)
    for cell in cells_feed.entry:
        dst_cells_feed.AddInsert(cell)
    sheetcl.ExecuteBatch(dst_cells_feed, dst_cells_feed.GetBatchLink().href)

But it doesn't work. As I suppose, the reason is that each cell in the inner loop has an id which consists of the spreadsheet_id:
>>> cell.id.text
'http://spreadsheets.google.com/feeds/cells/rFQqEnFWuR6qoU2HEfdVuTw/default/private/full/R1C1'
>>> 

A: You probably should be using a cell range query to get a whole row at a time, then inserting rows in the target spreadsheet.
copy worksheet from one spreadsheet to another
Is it possible to copy spreadsheets with the gdata API, or worksheets from one spreadsheet to another? For now I copy all cells from one worksheet to another. One cell per request. It is too slow. I read about "cell batch processing" and wrote this code: src_key = 'rFQqEnFWuR6qoU2HEfdVuTw'; dst_key = 'rPCVJ80MHt7K2EVlXNqytLQ' sheetcl = gdata.spreadsheet.service.SpreadsheetsService('[email protected]','p') dcs = gdata.docs.service.DocsService('[email protected]', 'p') src_worksheets = sheetcl.GetWorksheetsFeed(src_key) dst_worksheets = sheetcl.GetWorksheetsFeed(dst_key) for src_worksheet, dst_worksheet in zip(src_worksheets.entry, dst_worksheets.entry): sheet_id = src_worksheet.id.text.split('/')[-1] dst_sheet_id = dst_worksheet.id.text.split('/')[-1] cells_feed = sheetcl.GetCellsFeed(src_key, sheet_id) dst_cells_feed = sheetcl.GetCellsFeed(dst_key, dst_sheet_id) for cell in cells_feed.entry: dst_cells_feed.AddInsert(cell) sheetcl.ExecuteBatch(dst_cells_feed, dst_cells_feed.GetBatchLink().href) But it doesn't work. As I suppose, the reason is that each cell in the inner loop has an id which consists of the spreadsheet_id: >>> cell.id.text 'http://spreadsheets.google.com/feeds/cells/rFQqEnFWuR6qoU2HEfdVuTw/default/private/full/R1C1' >>>
[ "You probably should be using a cell range query to get a whole row at a time, then inserting rows in the target spreadsheet.\n" ]
[ 1 ]
[]
[]
[ "gdata", "gdata_api", "google_docs", "python" ]
stackoverflow_0000944419_gdata_gdata_api_google_docs_python.txt
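A sketch of the row-at-a-time idea from the answer, using the list feed instead of the cells feed. The GetListFeed/InsertRow names are from the gdata 1.x spreadsheet client as I remember them, so double-check them against your gdata version; note also that the list feed treats the first worksheet row as column headers:

rows = sheetcl.GetListFeed(src_key, sheet_id).entry
for row in rows:
    # row.custom maps column headers to cell objects carrying a .text value
    row_data = dict((name, cell.text) for name, cell in row.custom.items())
    sheetcl.InsertRow(row_data, dst_key, dst_sheet_id)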
Q: Running commands from a file in PDB I would like to run a set of python commands from a file in the PDB debugger. Related to this, can I set up a file that is automatically run when PDB starts?
A: make a subclass of pdb.Pdb and put a call to your extra stuff in the __init__
alternatively
pdb.Pdb() looks for a .pdbrc file, so you may be able to put your stuff in there
# Read $HOME/.pdbrc and ./.pdbrc
self.rcLines = []
if 'HOME' in os.environ:
    envHome = os.environ['HOME']
    try:
        rcFile = open(os.path.join(envHome, ".pdbrc"))
    except IOError:
        pass
    else:
        for line in rcFile.readlines():
            self.rcLines.append(line)
        rcFile.close()
try:
    rcFile = open(".pdbrc")
except IOError:
    pass
else:
    for line in rcFile.readlines():
        self.rcLines.append(line)
    rcFile.close()
Running commands from a file in PDB
I would like to run a set of python commands from a file in the PDB debugger. Related to this, can I set up a file that is automatically run when PDB starts?
[ "make a subclass of pdb.Pdb and put a call to your extra stuff in the __init__\nalternatively\npdb.Pdb() looks for a .pdbrc file, so you may be able to put your stuff in there\n # Read $HOME/.pdbrc and ./.pdbrc\n self.rcLines = []\n if 'HOME' in os.environ:\n envHome = os.environ['HOME']\n try:\n rcFile = open(os.path.join(envHome, \".pdbrc\"))\n except IOError:\n pass\n else:\n for line in rcFile.readlines():\n self.rcLines.append(line)\n rcFile.close()\n try:\n rcFile = open(\".pdbrc\")\n except IOError:\n pass\n else:\n for line in rcFile.readlines():\n self.rcLines.append(line)\n rcFile.close()\n\n" ]
[ 1 ]
[]
[]
[ "debugging", "pdb", "python" ]
stackoverflow_0001614983_debugging_pdb_python.txt
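Building on the subclass suggestion, a minimal sketch; rcLines is the same attribute the quoted source populates, so commands appended to it run when the debugger starts (the commands.pdb filename is just an example):

import pdb

class ScriptedPdb(pdb.Pdb):
    def __init__(self, script, *args, **kwargs):
        pdb.Pdb.__init__(self, *args, **kwargs)
        # queue the file's commands to be executed at debugger startup
        self.rcLines.extend(open(script).readlines())

ScriptedPdb('commands.pdb').set_trace()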
Q: Navigating in the python terminal I want a 64-bit python interpreter on my Mac so I had to rebuild from source. However, with my own custom build interpreter I run into issues when I try to navigate when I run the interpreter from inside a shell. Typing python into the bash shell results in the familiar:
Python 2.6.3 (r263:75183, Oct 23 2009, 14:23:25) 
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 

However, when I try to navigate with left-arrow and right-arrow, I get weird characters:
Python 2.6.3 (r263:75183, Oct 23 2009, 14:23:25) 
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> ^[[A^[[A^[[A^[[D^[[C^[[C^[[A^[[B^[[D^[[C

This does not occur in Apple's default interpreter. What is causing this? How can I fix it?
A: Sounds like your custom build didn't include readline. Should be a simple config change and rebuild, check here for more info.
A: Installing the GNU readline library from here, and rebuilding python fixes the problem.
Navigating in the python terminal
I want a 64-bit python interpreter on my Mac so I had to rebuild from source. However, with my own custom build interpreter I run into issues when I try to navigate when I run the interpreter from inside a shell. Typing python into the bash shell results in the familiar: Python 2.6.3 (r263:75183, Oct 23 2009, 14:23:25) [GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> However, when I try to navigate with left-arrow and right-arrow, I get weird characters: Python 2.6.3 (r263:75183, Oct 23 2009, 14:23:25) [GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> ^[[A^[[A^[[A^[[D^[[C^[[C^[[A^[[B^[[D^[[C This does not occur in Apple's default interpreter. What is causing this? How can I fix it?
[ "Sounds like your custom build didn't include readline. Should be a simple config change and rebuild, check here for more info.\n", "Installing the GNU readline library from here, and rebuilding python fixes the problem.\n" ]
[ 6, 0 ]
[]
[]
[ "navigation", "python", "shell" ]
stackoverflow_0001616321_navigation_python_shell.txt
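A one-liner to confirm the diagnosis before rebuilding; on a Python compiled without GNU readline the import fails:

try:
    import readline
    print 'readline support is compiled in'
except ImportError:
    print 'no readline module - rebuild Python against GNU readline'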
Q: Trying to position subplots next to each other I am trying to position two subplots next to each other (as opposed to under each other). I am expecting to see [sp1] [sp2]
Instead, only the second plot [sp2] is getting displayed.
from matplotlib import pyplot

x = [0, 1, 2]

pyplot.figure()

# sp1
pyplot.subplot(211)
pyplot.bar(x, x)

# sp2
pyplot.subplot(221)
pyplot.plot(x, x)

pyplot.show()

A: The 3 numbers are rows, columns, and plot #. What you're doing is respecifying the number of columns in your second call to subplot, which in turn changes the configuration and causes pyplot to start over.
What you mean is:
subplot(121) # 1 row, 2 columns, Plot 1
...
subplot(122) # 1 row, 2 columns, Plot 2

A: from matplotlib import pyplot

x = [0, 1, 2]

pyplot.figure()

# sp1
pyplot.subplot(121)
pyplot.bar(x, x)

# sp2
pyplot.subplot(122)
pyplot.plot(x, x)

pyplot.show()
Trying to position subplots next to each other
I am trying to position two subplots next to each other (as opposed to under each other). I am expecting to see [sp1] [sp2] Instead, only the second plot [sp2] is getting displayed. from matplotlib import pyplot x = [0, 1, 2] pyplot.figure() # sp1 pyplot.subplot(211) pyplot.bar(x, x) # sp2 pyplot.subplot(221) pyplot.plot(x, x) pyplot.show()
[ "The 3 numbers are rows, columns, and plot #. What you're doing is respecifying the number of columns in your second call to subplot, which in turn changes the configuration and causes pyplot to start over.\nWhat you mean is:\nsubplot(121) # 1 row, 2 columns, Plot 1\n...\nsubplot(122) # 1 row, 2 columns, Plot 2\n\n", "from matplotlib import pyplot\n\nx = [0, 1, 2]\n\npyplot.figure()\n\n# sp1\npyplot.subplot(121)\npyplot.bar(x, x)\n\n# sp2\npyplot.subplot(122)\npyplot.plot(x, x)\n\npyplot.show()\n\n" ]
[ 12, 5 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0001616427_matplotlib_python.txt
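The same convention scales to bigger grids, and subplot also accepts the three numbers as separate arguments, which reads more clearly once the grid grows:

from matplotlib import pyplot

x = [0, 1, 2]
for i in range(1, 5):
    pyplot.subplot(2, 2, i)   # 2 rows, 2 columns, plot i
    pyplot.plot(x, x)
pyplot.show()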
Q: writing a font viewer - getting font properties, loading ttf dynamically I'm trying to write a font viewer for TrueType / OpenType fonts with VB6 / VB5 code (under Windows). It is surprisingly difficult:
1) in VB / winAPI, I did not find how to extract the font's name, or font properties in general.
2) I can install the font (using the AddFontResource API function), but then have to uninstall it. However, while AddFontResource expects a pathname, removing the font requires the font's name, which is unknown to me.
Is there a way to use a non-installed font (ttf)? Is there a way to extract a font's properties using vb6?
(I can write the program in wxPython but I know even less about fonts in python than with VB)
A: You could use the FreeType library.
A: It indeed is. I have faced the same problem myself (see my question). I ended up writing my own parser though because I needed to detect if the font was corrupt or not. There is a AddFontMemResourceEx function which:

When the function succeeds, the caller of this function can free the memory pointed to by pbFont because the system has made its own copy of the memory. To remove the fonts that were installed, call RemoveFontMemResourceEx. However, when the process goes away, the system will unload the fonts even if the process did not call RemoveFontMemResource.

Also, you can use the Font and Text Functions to get the font metrics.
writing a font viewer - getting font properties, loading ttf dynamically
I'm trying to write a font viewer for TrueType / OpenType fonts with VB6 / VB5 code (under Windows). It is surprisingly difficult: 1) in VB / winAPI, I did not find how to extract the font's name, or font properties in general. 2) I can install the font (using the AddFontResource API function), but then have to uninstall it. However, while AddFontResource expects a pathname, removing the font requires the font's name, which is unknown to me. Is there a way to use a non-installed font (ttf)? Is there a way to extract a font's properties using vb6? (I can write the program in wxPython but I know even less about fonts in python than with VB)
[ "You could use the FreeType library.\n", "It indeed is. I have faced the same problem myself (see my question). I ended up writing my own parser though because I needed to detect if the font was corrupt or not. There is a AddFontMemResourceEx function which:\n\nWhen the function succeeds, the caller of this function can free the memory pointed to by pbFont because the system has made its own copy of the memory. To remove the fonts that were installed, call RemoveFontMemResourceEx. However, when the process goes away, the system will unload the fonts even if the process did not call RemoveFontMemResource.\n\nAlso, you can use the Font and Text Functions to get the font metrics.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "truetype", "vb6" ]
stackoverflow_0001616505_python_truetype_vb6.txt
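For the 'use a font without installing it' part, here is a hedged ctypes sketch of AddFontResourceEx with the FR_PRIVATE flag, which loads the file for the calling process only, so no system-wide uninstall bookkeeping is needed. The constant and the function come from the GDI docs as I recall them; verify before relying on this:

import ctypes

FR_PRIVATE = 0x10  # assumed value, check wingdi.h
gdi32 = ctypes.windll.gdi32

# returns the number of fonts added; 0 means failure
added = gdi32.AddFontResourceExA('C:\\fonts\\sample.ttf', FR_PRIVATE, 0)
print 'fonts loaded for this process:', added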
Q: Python POST XML not executed The Headers post fine but the associated XML seems to be taken as string data only, XML is not processed. XML string is of the form:
params = '''<?xml version="1.0" encoding"="UTF-8 "?>
<MainRequest>
<requestEnvelope><errorLanguage>en_US</errorLanguage>
</requestEnvelope></MainRequest>'''

The POST is of the form:
enc_params = urllib.quote(params)
request = urllib2.Request("https://myURL/",enc_params, headers)

The send of the XML is of the form:
%3C%3Fxml%20version%3D%221.0%22%20encoding%22%3D%22UTF-8%20%22%3F%3E%0A%3CMainRequest%3E%0A%3CrequestEnvelope%3E%3CerrorLanguage%3Een_US%3C/errorLanguage%3E%0A%3C/requestEnvelope%3E

The error message then indicates XML content is missing. Any ideas would be helpful.
A: Are you adding a content-type header? To tell the server your request is XML, add the following before sending the request:
request.add_header('Content-Type', 'text/xml')

A: Take out the "urllib.quote()" call. That's what created the string which starts "%3C%3Fxml". If you want to POST XML then just send that XML string as the data, along with the Content-Type that ataylor mentioned. (But in most cases that doesn't make a difference.)
Python POST XML not executed
The Headers post fine but the associated XML seems to be taken as string data only, XML is not processed. XML string is of the form: params = '''<?xml version="1.0" encoding"="UTF-8 "?> <MainRequest> <requestEnvelope><errorLanguage>en_US</errorLanguage> </requestEnvelope></MainRequest>''' The POST is of the form: enc_params = urllib.quote(params) request = urllib2.Request("https://myURL/",enc_params, headers) The send of the XML is of the form: %3C%3Fxml%20version%3D%221.0%22%20encoding%22%3D%22UTF-8%20%22%3F%3E%0A%3CMainRequest%3E%0A%3CrequestEnvelope%3E%3CerrorLanguage%3Een_US%3C/errorLanguage%3E%0A%3C/requestEnvelope%3E The error message then indicates XML content is missing. Any ideas would be helpful.
[ "Are you adding a content-type header? To tell the server your request is XML, add the following before sending the request:\nrequest.add_header('Content-Type', 'text/xml')\n\n", "Take out the \"urllib.quote()\" call. That's what created the string which starts \"%3C%3Fxml\". If you want to POST XML then just send that XML string as the data, along with the Content-Type that ataylor mentioned. (But in most cases that doesn't make a difference.)\n" ]
[ 2, 0 ]
[]
[]
[ "python", "xmlhttprequest" ]
stackoverflow_0001615115_python_xmlhttprequest.txt
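Putting both answers together - raw XML body, no urllib.quote, plus the Content-Type header (params and headers as defined in the question):

import urllib2

request = urllib2.Request("https://myURL/", params, headers)
request.add_header('Content-Type', 'text/xml')
response = urllib2.urlopen(request)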
Q: server side marker clustering in django I am creating a mashup in django and google maps and I am wondering if there is a way of clustering markers on the server side using django/python.
A: I have implemented server side clustering in Django on my real estate/rentals site; I explain it here.
A: I came up with the code below to figure out if one marker is close enough to another for clustering - close if two cluster icons start overlapping. Works for the whole world map for all zoom levels.
The problem is that map projection is non-linear and you can't just set some delta_lng delta_lat tolerance - both will depend on the lattitude. For local maps it is not a problem though.
If you want to do all on the server side you will have to calculate clustered markers for each zoomlevel either per AJAX call or print them all at once.
function isCloseTo($other,$z){//$z is zoomlevel
    $delta_lat = abs($this->lattitude - $other->lattitude);
    $delta_lng = abs($this->longitude - $other->longitude);

    $l = abs($this->lattitude);
    $l2 = $l*$l;
    $l3 = $l2*$l;
    $l4 = $l3*$l;

    $factor = 1
        +0.0000312*$l
        +0.0003604*$l2
        -0.000009858*$l3
        +0.0000001506*$l4;

    $tol_lat = (45.42*exp(-0.6894339*$z)/3)/$factor;
    $tol_lng = 21.845*exp(-0.67686*$z)/2;
    if ($delta_lat < $tol_lat and $delta_lng < $tol_lng){
        return true;
    }
    else{
        return false;
    }
}
server side marker clustering in django
I am creating a mashup in django and google maps and I am wondering if there is a way of clustering markers on the server side using django/python.
[ "I have implemented server side clustering in Django on my real estate/rentals site; I explain it here.\n", "I came up with the code below to figure out if one marker is close enough to another for clustering - close if two cluster icons start overlapping. Works for the whole world map for all zoom levels.\nThe problem is that map projection is non-linear and you can't just set some delta_lang delta_lat tolerance - both will depend on the lattitude. For local maps it is not a problem though.\nIf you want to do all on the server side you will have to calculate clustered markers for each zoomlelvel either per AJAX call or print them all at once.\nfunction isCloseTo($other,$z){//$z is zoomlevel\n $delta_lat = abs($this->lattitude - $other->lattitude);\n $delta_lng = abs($this->longitude - $other->longitude);\n\n $l = abs($this->lattitude);\n $l2 = $l*$l;\n $l3 = $l2*$l;\n $l4 = $l3*$l;\n\n $factor = 1\n +0.0000312*$l\n +0.0003604*$l2\n -0.000009858*$l3\n +0.0000001506*$l4;\n\n $tol_lat = (45.42*exp(-0.6894339*$z)/3)/$factor;\n $tol_lng = 21.845*exp(-0.67686*$z)/2;\n if ($delta_lat < $tol_lat and $delta_lng < $tol_lng){\n return true;\n }\n else{\n return false;\n }\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "django", "google_maps", "python" ]
stackoverflow_0001131400_django_google_maps_python.txt
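Since the question asks for django/python and the posted snippet is PHP, here is a direct port of the same tolerance test (the magic constants, and the misspelled lattitude attribute name, are kept exactly as in the answer):

import math

def is_close_to(a, b, z):
    """True if markers a and b would overlap at zoom level z."""
    delta_lat = abs(a.lattitude - b.lattitude)
    delta_lng = abs(a.longitude - b.longitude)

    l = abs(a.lattitude)
    factor = (1 + 0.0000312*l + 0.0003604*l**2
              - 0.000009858*l**3 + 0.0000001506*l**4)

    tol_lat = (45.42 * math.exp(-0.6894339*z) / 3) / factor
    tol_lng = 21.845 * math.exp(-0.67686*z) / 2
    return delta_lat < tol_lat and delta_lng < tol_lng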
Q: Python strategy for extracting text from malformed html pages I'm trying to extract text from arbitrary html pages. Some of the pages (which I have no control over) have malformed html or scripts which make this difficult. Also I'm on a shared hosting environment, so I can install any python lib, but I can't just install anything I want on the server.
pyparsing and html2text.py also did not seem to work for malformed html pages.
Example URL is http://apnews.myway.com/article/20091015/D9BB7CGG1.html
My current implementation is approximately the following:
# Try using BeautifulSoup 3.0.7a
soup = BeautifulSoup.BeautifulSoup(s)
comments = soup.findAll(text=lambda text:isinstance(text,Comment))
[comment.extract() for comment in comments]
c = soup.findAll('script')
for i in c:
    i.extract()
body = bsoup.body(text=True)
text = ''.join(body)

# if BeautifulSoup can't handle it,
# alter html by trying to find 1st instance of "<body" and replace everything prior to that, with "<html><head></head>"
# try beautifulsoup again with new html

If beautifulsoup still does not work, then I resort to using a heuristic of looking at the 1st char, last char (to see if they look like a code line: # < ;) and taking a sample of the line and then checking if the tokens are english words, or numbers. If too few of the tokens are words or numbers, then I guess that the line is code.
I could use machine learning to inspect each line, but that seems a little expensive and I would probably have to train it (since I don't know that much about unsupervised learning machines), and of course write it as well.
Any advice, tools, strategies would be most welcome. Also I realize that the latter part of that is rather messy, since if I get a line that is determined to contain code, I currently throw away the entire line, even if there is some small amount of actual English text in the line.
A: Try not to laugh, but:
class TextFormatter:
    def __init__(self, lynx='/usr/bin/lynx'):
        self.lynx = lynx

    def html2text(self, unicode_html_source):
        "Expects unicode; returns unicode"
        return Popen([self.lynx,
                      '-assume-charset=UTF-8',
                      '-display-charset=UTF-8',
                      '-dump',
                      '-stdin'],
                     stdin=PIPE,
                     stdout=PIPE).communicate(input=unicode_html_source.encode('utf-8'))[0].decode('utf-8')

I hope you've got lynx!
A: Well, it depends how good the solution has to be. I had a similar problem, importing hundreds of old html pages into a new website. I basically did
# remove all that crap around the body and let BS fix the tags
newhtml = "<html><body>%s</body></html>" % (
    u''.join( unicode( tag ) for tag in BeautifulSoup( oldhtml ).body.contents ))
# use html2text to turn it into text
text = html2text( newhtml )

and it worked out, but of course the documents could be so bad that even BS can't salvage much.
A: BeautifulSoup will do bad with malformed HTML. What about some regex-fu?
>>> import re
>>> 
>>> html = """<p>This is paragraph with a bunch of lines
... from a news story.</p>"""
>>> 
>>> pattern = re.compile('(?<=p>).+(?=</p)', re.DOTALL)
>>> pattern.search(html).group()
'This is paragraph with a bunch of lines\nfrom a news story.'

You can then assemble a list of valid tags from which you want to extract information.
Python strategy for extracting text from malformed html pages
I'm trying to extract text from arbitrary html pages. Some of the pages (which I have no control over) have malformed html or scripts which make this difficult. Also I'm on a shared hosting environment, so I can install any python lib, but I can't just install anything I want on the server. pyparsing and html2text.py also did not seem to work for malformed html pages. Example URL is http://apnews.myway.com/article/20091015/D9BB7CGG1.html My current implementation is approximately the following: # Try using BeautifulSoup 3.0.7a soup = BeautifulSoup.BeautifulSoup(s) comments = soup.findAll(text=lambda text:isinstance(text,Comment)) [comment.extract() for comment in comments] c=soup.findAll('script') for i in c: i.extract() body = bsoup.body(text=True) text = ''.join(body) # if BeautifulSoup can't handle it, # alter html by trying to find 1st instance of "<body" and replace everything prior to that, with "<html><head></head>" # try beautifulsoup again with new html If beautifulsoup still does not work, then I resort to using a heuristic of looking at the 1st char, last char (to see if they look like a code line: # < ;) and taking a sample of the line and then checking if the tokens are english words, or numbers. If too few of the tokens are words or numbers, then I guess that the line is code. I could use machine learning to inspect each line, but that seems a little expensive and I would probably have to train it (since I don't know that much about unsupervised learning machines), and of course write it as well. Any advice, tools, strategies would be most welcome. Also I realize that the latter part of that is rather messy, since if I get a line that is determined to contain code, I currently throw away the entire line, even if there is some small amount of actual English text in the line.
[ "Try not to laugh, but:\nclass TextFormatter:\n def __init__(self,lynx='/usr/bin/lynx'):\n self.lynx = lynx\n\n def html2text(self, unicode_html_source):\n \"Expects unicode; returns unicode\"\n return Popen([self.lynx, \n '-assume-charset=UTF-8', \n '-display-charset=UTF-8', \n '-dump', \n '-stdin'], \n stdin=PIPE, \n stdout=PIPE).communicate(input=unicode_html_source.encode('utf-8'))[0].decode('utf-8')\n\nI hope you've got lynx!\n", "Well, it depends how good the solution has to be. I had a similar problem, importing hundreds of old html pages into a new website. I basically did\n# remove all that crap around the body and let BS fix the tags\nnewhtml = \"<html><body>%s</body></html>\" % (\n u''.join( unicode( tag ) for tag in BeautifulSoup( oldhtml ).body.contents ))\n# use html2text to turn it into text\ntext = html2text( newhtml )\n\nand it worked out, but of course the documents could be so bad that even BS can't salvage much.\n", "BeautifulSoup will do bad with malformed HTML. What about some regex-fu?\n>>> import re\n>>> \n>>> html = \"\"\"<p>This is paragraph with a bunch of lines\n... from a news story.</p>\"\"\"\n>>> \n>>> pattern = re.compile('(?<=p>).+(?=</p)', re.DOTALL)\n>>> pattern.search(html).group()\n'This is paragraph with a bunch of lines\\nfrom a news story.'\n\nYou can then assembly a list of valid tags from which you want to extract information.\n" ]
[ 5, 0, 0 ]
[]
[]
[ "html", "html_content_extraction", "python", "text" ]
stackoverflow_0001615072_html_html_content_extraction_python_text.txt
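One more BeautifulSoup variation in the same spirit as the question's code - pull every text node and drop the ones whose parent is a script or style tag (or that are comments), instead of extracting unwanted nodes one tag type at a time. A sketch against BeautifulSoup 3:

from BeautifulSoup import BeautifulSoup, Comment

def visible_text(html):
    soup = BeautifulSoup(html)
    parts = []
    for node in soup.findAll(text=True):
        if isinstance(node, Comment):
            continue
        if node.parent.name in ('script', 'style'):
            continue
        parts.append(node)
    return ' '.join(parts)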
Q: Are there any problems with this symlink traversal code for Windows? In my efforts to resolve Python issue 1578269, I've been working on trying to resolve the target of a symlink in a robust way. I started by using GetFinalPathNameByHandle as recommended here on stackoverflow and by Microsoft, but it turns out that technique fails when the target is in use (such as with pagefile.sys).
So, I've written a new routine to accomplish this using CreateFile and DeviceIoControl (as it appears this is what Explorer does). The relevant code from jaraco.windows.filesystem is included below.
The question is, is there a better technique for reliably resolving symlinks in Windows? Can you identify any issues with this implementation?
def relpath(path, start=os.path.curdir):
    """
    Like os.path.relpath, but actually honors the start path if supplied.
    See http://bugs.python.org/issue7195
    """
    return os.path.normpath(os.path.join(start, path))

def trace_symlink_target(link):
    """
    Given a file that is known to be a symlink, trace it to its ultimate
    target.
    Raises TargetNotPresent when the target cannot be determined.
    Raises ValueError when the specified link is not a symlink.
    """
    if not is_symlink(link):
        raise ValueError("link must point to a symlink on the system")
    while is_symlink(link):
        orig = os.path.dirname(link)
        link = _trace_symlink_immediate_target(link)
        link = relpath(link, orig)
    return link

def _trace_symlink_immediate_target(link):
    handle = CreateFile(
        link,
        0,
        FILE_SHARE_READ|FILE_SHARE_WRITE|FILE_SHARE_DELETE,
        None,
        OPEN_EXISTING,
        FILE_FLAG_OPEN_REPARSE_POINT|FILE_FLAG_BACKUP_SEMANTICS,
        None,
        )
    res = DeviceIoControl(handle, FSCTL_GET_REPARSE_POINT, None, 10240)

    bytes = create_string_buffer(res)
    p_rdb = cast(bytes, POINTER(REPARSE_DATA_BUFFER))
    rdb = p_rdb.contents
    if not rdb.tag == IO_REPARSE_TAG_SYMLINK:
        raise RuntimeError("Expected IO_REPARSE_TAG_SYMLINK, but got %d" % rdb.tag)
    return rdb.get_print_name()

A: Unfortunately I can't test with Vista until next week, but GetFinalPathNameByHandle should work, even for files in use - what's the problem you noticed?
In your code above, you forget to close the file handle.
Are there any problems with this symlink traversal code for Windows?
In my efforts to resolve Python issue 1578269, I've been working on trying to resolve the target of a symlink in a robust way. I started by using GetFinalPathNameByHandle as recommended here on stackoverflow and by Microsoft, but it turns out that technique fails when the target is in use (such as with pagefile.sys). So, I've written a new routine to accomplish this using CreateFile and DeviceIoControl (as it appears this is what Explorer does). The relevant code from jaraco.windows.filesystem is included below. The question is, is there a better technique for reliably resolving symlinks in Windows? Can you identify any issues with this implementation? def relpath(path, start=os.path.curdir): """ Like os.path.relpath, but actually honors the start path if supplied. See http://bugs.python.org/issue7195 """ return os.path.normpath(os.path.join(start, path)) def trace_symlink_target(link): """ Given a file that is known to be a symlink, trace it to its ultimate target. Raises TargetNotPresent when the target cannot be determined. Raises ValueError when the specified link is not a symlink. """ if not is_symlink(link): raise ValueError("link must point to a symlink on the system") while is_symlink(link): orig = os.path.dirname(link) link = _trace_symlink_immediate_target(link) link = relpath(link, orig) return link def _trace_symlink_immediate_target(link): handle = CreateFile( link, 0, FILE_SHARE_READ|FILE_SHARE_WRITE|FILE_SHARE_DELETE, None, OPEN_EXISTING, FILE_FLAG_OPEN_REPARSE_POINT|FILE_FLAG_BACKUP_SEMANTICS, None, ) res = DeviceIoControl(handle, FSCTL_GET_REPARSE_POINT, None, 10240) bytes = create_string_buffer(res) p_rdb = cast(bytes, POINTER(REPARSE_DATA_BUFFER)) rdb = p_rdb.contents if not rdb.tag == IO_REPARSE_TAG_SYMLINK: raise RuntimeError("Expected IO_REPARSE_TAG_SYMLINK, but got %d" % rdb.tag) return rdb.get_print_name()
[ "Unfortunately I can't test with Vista until next week, but GetFinalPathNameByHandle should work, even for files in use - what's the problem you noticed?\nIn your code above, you forget to close the file handle.\n" ]
[ 0 ]
[]
[]
[ "filesystems", "python", "symlink_traversal", "winapi", "windows" ]
stackoverflow_0001616245_filesystems_python_symlink_traversal_winapi_windows.txt
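On the answer's last point, a sketch of the fix - guaranteeing the handle is released even if DeviceIoControl raises. This assumes a pywin32-style handle object with a Close() method; with a raw ctypes handle you would call kernel32.CloseHandle instead:

handle = CreateFile(
    link, 0,
    FILE_SHARE_READ|FILE_SHARE_WRITE|FILE_SHARE_DELETE,
    None, OPEN_EXISTING,
    FILE_FLAG_OPEN_REPARSE_POINT|FILE_FLAG_BACKUP_SEMANTICS,
    None,
    )
try:
    res = DeviceIoControl(handle, FSCTL_GET_REPARSE_POINT, None, 10240)
finally:
    handle.Close()   # release the handle no matter what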
Q: Using Python, how do I get an array of file info objects, based on a search of a file system? Currently I have a bash script which runs the find command, like so:
find /storage/disk-1/Media/Video/TV -name *.avi -mtime -7

This gets a list of TV shows that were added to my system in the last 7 days. I then go on to create some symbolic links so I can get to my newest TV shows.
I'm looking to re-code this in Python, but I have a few questions I can't seem to find the answers for using Google (maybe I'm not searching for the right thing). I think the best way to sum this up is to ask the question:
How do I perform a search on my file system (should I call find?) which gives me an array of file info objects (containing the modify date, file name, etc) so that I may sort them based on date, and other such things?
A: import os, time

allfiles = []
now = time.time()

# walk will return triples (current dir, list of subdirs, list of regular files)
# file names are relative to dir at first
for dir, subdirs, files in os.walk("/storage/disk-1/Media/Video/TV"):
    for f in files:
        if not f.endswith(".avi"):
            continue
        # compute full path name
        f = os.path.join(dir, f)
        st = os.stat(f)
        if st.st_mtime < now - 3600*24*7:
            # too old
            continue
        allfiles.append((f, st))

This will return all files that find also returned, as a list of pairs (filename, stat result).
A: look into module os: os.walk is the function which walks the file system, os.path is the module which gives the file mtime and other file informations. also os.path defines a lot of functions for parsing and splitting filenames. 
also of interest, module glob defines a functions for "globbing" strings (matching a string using unix wildcards rules)
from this, building a list of file matching some criterion should be easy. 
A: 
You can use "find" through the "subprocess" module. 
Afterwards, use the "split" string function to dissect each line
For each file, use the OS module (e.g. getmtime etc.) to get file information

or

Use the "walk" and "glob" modules to get the file paths in objects
Using Python, how do I get an array of file info objects, based on a search of a file system?
Currently I have a bash script which runs the find command, like so: find /storage/disk-1/Media/Video/TV -name *.avi -mtime -7 This gets a list of TV shows that were added to my system in the last 7 days. I then go on to create some symbolic links so I can get to my newest TV shows. I'm looking to re-code this in Python, but I have a few questions I can't seem to find the answers for using Google (maybe I'm not searching for the right thing). I think the best way to sum this up is to ask the question: How do I perform a search on my file system (should I call find?) which gives me an array of file info objects (containing the modify date, file name, etc) so that I may sort them based on date, and other such things?
[ "import os, time\n\nallfiles = []\nnow = time.time()\n\n# walk will return triples (current dir, list of subdirs, list of regular files)\n# file names are relative to dir at first\nfor dir, subdirs, files in os.walk(\"/storage/disk-1/Media/Video/TV\"):\n for f in files:\n if not f.endswith(\".avi\"):\n continue\n # compute full path name\n f = os.path.join(dir, f)\n st = os.stat(f)\n if st.st_mtime < now - 3600*24*7:\n # too old\n continue\n allfiles.append((f, st))\n\nThis will return all files that find also returned, as a list of pairs (filename, stat result).\n", "look into module os: os.walk is the function which walks the file system, os.path is the module which gives the file mtime and other file informations. also os.path defines a lot of functions for parsing and splitting filenames. \nalso of interest, module glob defines a functions for \"globbing\" strings (matching a string using unix wildcards rules)\nfrom this, building a list of file matching some criterion should be easy. \n", "\nYou can use \"find\" through the \"subprocess\" module. \nAfterwards, use the \"split\" string function to dissect each line\nFor each file, use the OS module (e.g. getmtime etc.) to get file information\n\nor\n\nUse the \"walk\" and \"glob\" modules to get the file paths in objects\n\n" ]
[ 3, 2, 1 ]
[]
[]
[ "fileinfo", "find", "python" ]
stackoverflow_0001617666_fileinfo_find_python.txt
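And for the sorting step the question asks about, the (filename, stat) pairs returned by the first answer sort by modification time in one line:

# newest files first, using the mtime stored in each stat result
allfiles.sort(key=lambda pair: pair[1].st_mtime, reverse=True)
for name, st in allfiles:
    print name, time.ctime(st.st_mtime)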
Q: Vim Python omni-completion failing to work on system modules I'm noticing that even for system modules, code completion doesn't work too well. For example, if I have a simple file that does:
import re
p = re.compile(pattern)
m = p.search(line)

If I type p., I don't get completion for methods I'd expect to see (I don't see search() for example, but I do see others, such as func_closure(), func_code()). If I type m., I don't get any completion whatsoever (I'd expect .groups(), in this case).
This doesn't seem to affect all modules. Has anyone seen this behaviour and knows how to correct it?
I'm running Vim 7.2 on WinXP, with the latest pythoncomplete.vim from vim.org (0.9), running python 2.6.2.
A: Completion for this kind of things is tricky, because it would need to execute the actual code to work.
For example p.search() could return None or a MatchObject, depending on the data that is passed to it.
This is why omni-completion does not work here, and probably never will. It works for things that can be statically determined, for example a module's contents.
A: I never got the builtin omnicomplete to work for any languages. I had the most success with pysmell (which seems to have been updated slightly more recently on github than in the official repo). I still didn't find it to be reliable enough to use consistently but I can't remember exactly why.
I've resorted to building an extensive set of snipMate snippets for my primary libraries and using the default tab completion to supplement.
Vim Python omni-completion failing to work on system modules
I'm noticing that even for system modules, code completion doesn't work too well. For example, if I have a simple file that does: import re p = re.compile(pattern) m = p.search(line) If I type p., I don't get completion for methods I'd expect to see (I don't see search() for example, but I do see others, such as func_closure(), func_code()). If I type m., I don't get any completion whatsoever (I'd expect .groups(), in this case). This doesn't seem to affect all modules. Has anyone seen this behaviour and knows how to correct it? I'm running Vim 7.2 on WinXP, with the latest pythoncomplete.vim from vim.org (0.9), running python 2.6.2.
[ "Completion for this kind of things is tricky, because it would need to execute the actual code to work.\nFor example p.search() could return None or a MatchObject, depending on the data that is passed to it.\nThis is why omni-completion does not work here, and probably never will. It works for things that can be statically determined, for example a module's contents.\n", "I never got the builtin omnicomplete to work for any languages. I had the most success with pysmell (which seems to have been updated slightly more recently on github than in the official repo). I still didn't find it to be reliable enough to use consistently but I can't remember exactly why.\nI've resorted to building an extensive set of snipMate snippets for my primary libraries and using the default tab completion to supplement.\n" ]
[ 2, 0 ]
[]
[]
[ "omnicomplete", "python", "vim" ]
stackoverflow_0001617415_omnicomplete_python_vim.txt
Q: How do I validate a Django form containing a file on App Engine with google-app-engine-django? I have a model and form like so:
class Image(BaseModel):
    original = db.BlobProperty()

class ImageForm(ModelForm):
    class Meta:
        model = Image

I do the following in my view:
form = ImageForm(request.POST, request.FILES, instance=image)
if form.is_valid():

And I get:
AttributeError at /image/add/
'NoneType' object has no attribute 'validate'

Traced to /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/db/djangoforms.py in property_clean, around line 606:
      value: The value to validate.

    Raises:
      forms.ValidationError if the value cannot be validated.
    """
    if value is not None:
      try:
        prop.validate(prop.make_value_from_form(value))
      except (db.BadValueError, ValueError), e:
        raise forms.ValidationError(unicode(e))

with local vars:
prop = None
value = InMemoryUploadedFile: Nearby.jpg (image/jpeg)

Any ideas how to get it to validate? It looks like FileField does not have a validate method, which Django expects...
A: It doesn't work with the default django version, and the bug is reported. 
BTW, the default installation raises a different exception, so the problem is possibly with another part of your code.
A: The docs for the constructor say:

files: dict of file upload values; Django 0.97 or later only

Are you using Django 0.97? The bundled Django is 0.96, unless you explicitly select 1.0 or 1.1. Have you tried validating the form without the files parameter?
How do I validate a Django form containing a file on App Engine with google-app-engine-django?
I have a model and form like so: class Image(BaseModel): original = db.BlobProperty() class ImageForm(ModelForm): class Meta: model = Image I do the following in my view: form = ImageForm(request.POST, request.FILES, instance=image) if form.is_valid(): And I get: AttributeError at /image/add/ 'NoneType' object has no attribute 'validate' Traced to: /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/db/djangoforms.py in property_clean value: The value to validate. 606. Raises: forms.ValidationError if the value cannot be validated. """ if value is not None: try: prop.validate(prop.make_value_from_form(value)) ... except (db.BadValueError, ValueError), e: raise forms.ValidationError(unicode(e)) 615. 616. class ModelFormOptions(object): """A simple class to hold internal options for a ModelForm class. ▼ Local vars prop None value InMemoryUploadedFile: Nearby.jpg (image/jpeg) Any ideas how to get it to validate? It looks like FileField does not have a validate method, which Django expects...
[ "It doesn't work with default django version. And bug is reported. \nBTW, default installation raise other exeption. So possible problem with other part of your code.\n", "The docs for the constructor say:\n\nfiles: dict of file upload values;\n Django 0.97 or later only\n\nAre you using Django 0.97? The bundled Django is 0.96, unless you explicitly select 1.0 or 1.1. Have you tried validating the form without the files parameter?\n" ]
[ 4, 0 ]
[]
[]
[ "django", "django_forms", "google_app_engine", "python", "validation" ]
stackoverflow_0001573913_django_django_forms_google_app_engine_python_validation.txt
Q: What's wrong with this `setup.py`? I've been having problems with getting setup.py to do the sdist thing correctly. I boiled it down to this. I have the following directory structure:
my_package\
    my_subpackage\
        __init__.py
        deep_module.py
    __init__.py
    module.py
    setup.py

And here's what I have in setup.py:
#!/usr/bin/env python

from distutils.core import setup
import distutils

setup(
    name='a',
    version='0.1',
    description='a',
    author='a',
    author_email='[email protected]',
    url='http://a.org',
    packages=['my_package','my_package.my_subpackage'],
    package_dir={'': '..'},
    license= "a",
    long_description = 'aaa',
)

(The 'aaa' stuff is just placeholder.)
Anyway, it works okay when I do setup.py install, but when I try to do setup.py sdist, a few curious things happen:

A MANIFEST file is created.
A copy of the my_package folder is created inside the existing my_package folder (though it misses a few of the setup-related files I think.)
A dist folder is created, inside it a zipfile, inside that a folder with the package name, but inside that folder there isn't the whole package like I hoped but only two files, setup.py and PKG-INFO.

What am I doing wrong? How do I make sdist work?
A: The problem is well explained here:

Setuptools has many silent failure modes. One of them is failure to include all files in sdist release (well not exactly a failure, you could RTFM, but the default behavior is unexpected). This post will serve as a google-yourself-answer for this problem, until we get new, shinier, Distribute solving all of our problems.

As comments point out, the bug (misdesign) is actually in distutils -- setuptools just fails to fix it (if you're using svn, things are actually a bit better).
I can reproduce your problem as you observe it, i.e., shortening file names a bit, I have:
$ ls -lR
total 8
-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 __init__.py
-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 modu.py
drwxr-xr-x 4 aleax eng 136 Oct 24 11:25 mysub
-rw-r--r-- 1 aleax eng 323 Oct 24 11:26 setup.py

./mysub:
total 0
-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 __init__.py
-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 deepmod.py

and running python setup.py sdist produces (as well as warnings):
$ ls -lR
total 16
-rw-r--r-- 1 aleax eng 104 Oct 24 11:35 MANIFEST
-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 __init__.py
drwxr-xr-x 3 aleax eng 102 Oct 24 11:35 dist
-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 modu.py
drwxr-xr-x 5 aleax eng 170 Oct 24 11:35 mypack
drwxr-xr-x 4 aleax eng 136 Oct 24 11:25 mysub
-rw-r--r-- 1 aleax eng 323 Oct 24 11:26 setup.py

./dist:
total 8
-rw-r--r-- 1 aleax eng 483 Oct 24 11:35 a-0.1.tar.gz

./mypack:
total 0
-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 __init__.py
-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 modu.py
drwxr-xr-x 4 aleax eng 136 Oct 24 11:35 mysub

./mypack/mysub:
total 0
-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 __init__.py
-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 deepmod.py

./mysub:
total 0
-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 __init__.py
-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 deepmod.py

One solution is to change the directory layout as follows (from the current mypack dir):
$ mkdir mypack
$ mv __init__.py modu.py mysub/ mypack
$ touch README.txt

so getting:
$ ls -lR
total 8
-rw-r--r-- 1 aleax eng 0 Oct 24 11:37 README.txt
drwxr-xr-x 5 aleax eng 170 Oct 24 11:37 mypack
-rw-r--r-- 1 aleax eng 323 Oct 24 11:26 setup.py

./mypack:
total 0
-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 __init__.py
-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 modu.py
drwxr-xr-x 4 aleax eng 136 Oct 24 11:25 mysub

./mypack/mysub:
total 0
-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 __init__.py
-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 deepmod.py

(and getting rid of one of the warnings, the one about README -- the one about missing MANIFEST.in clearly remains;-). 
What's wrong with this `setup.py`?
I've been having problems with getting setup.py to do the sdist thing correctly. I boiled it down to this. I have the following directory structure: my_package\ my_subpackage\ __init__.py deep_module.py __init__.py module.py setup.py And here's what I have in setup.py: #!/usr/bin/env python from distutils.core import setup import distutils setup( name='a', version='0.1', description='a', author='a', author_email='[email protected]', url='http://a.org', packages=['my_package','my_package.my_subpackage'], package_dir={'': '..'}, license= "a", long_description = 'aaa', ) (The 'aaa' stuff is just a placeholder.) Anyway, it works okay when I do setup.py install, but when I try to do setup.py sdist, a few curious things happen: A MANIFEST file is created. A copy of the my_package folder is created inside the existing my_package folder (though it misses a few of the setup-related files I think.) A dist folder is created, inside it a zipfile, inside that a folder with the package name, but inside that folder there isn't the whole package like I hoped but only two files, setup.py and PKG-INFO. What am I doing wrong? How do I make sdist work?
[ "The problem is well explained here:\n\nSetuptools has many silent failure\n modes. One of them is failure to\n include all files in sdist release\n (well not exactly a failure, you could\n RTFM, but the default behavior is\n unexpected). This post will serve as a\n google-yourself-answer for this\n problem, until we get new, shinier,\n Distribute solving all of our\n problems.\n\nAs comments point out, the bug (misdesign) is actually in distutils -- setuptools just fails to fix it (if you're using svn, things are actually a bit better).\nI can reproduce your problem as you observe it, i.e., shortening file names a bit, I have:\n$ ls -lR\ntotal 8\n-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 __init__.py\n-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 modu.py\ndrwxr-xr-x 4 aleax eng 136 Oct 24 11:25 mysub\n-rw-r--r-- 1 aleax eng 323 Oct 24 11:26 setup.py\n\n./mysub:\ntotal 0\n-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 __init__.py\n-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 deepmod.py\n\nand running python setup.py sdist produces (as well as warnings):\n$ ls -lR\ntotal 16\n-rw-r--r-- 1 aleax eng 104 Oct 24 11:35 MANIFEST\n-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 __init__.py\ndrwxr-xr-x 3 aleax eng 102 Oct 24 11:35 dist\n-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 modu.py\ndrwxr-xr-x 5 aleax eng 170 Oct 24 11:35 mypack\ndrwxr-xr-x 4 aleax eng 136 Oct 24 11:25 mysub\n-rw-r--r-- 1 aleax eng 323 Oct 24 11:26 setup.py\n\n./dist:\ntotal 8\n-rw-r--r-- 1 aleax eng 483 Oct 24 11:35 a-0.1.tar.gz\n\n./mypack:\ntotal 0\n-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 __init__.py\n-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 modu.py\ndrwxr-xr-x 4 aleax eng 136 Oct 24 11:35 mysub\n\n./mypack/mysub:\ntotal 0\n-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 __init__.py\n-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 deepmod.py\n\n./mysub:\ntotal 0\n-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 __init__.py\n-rw-r--r-- 2 aleax eng 0 Oct 24 11:25 deepmod.py\n\nOne solution is to change the directory layout as follows (from the current mypack dir):\n$ mkdir mypack\n$ mv __init__.py modu.py mysub/ mypack\n$ touch README.txt\n\nso getting:\n$ ls -lR\ntotal 8\n-rw-r--r-- 1 aleax eng 0 Oct 24 11:37 README.txt\ndrwxr-xr-x 5 aleax eng 170 Oct 24 11:37 mypack\n-rw-r--r-- 1 aleax eng 323 Oct 24 11:26 setup.py\n\n./mypack:\ntotal 0\n-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 __init__.py\n-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 modu.py\ndrwxr-xr-x 4 aleax eng 136 Oct 24 11:25 mysub\n\n./mypack/mysub:\ntotal 0\n-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 __init__.py\n-rw-r--r-- 1 aleax eng 0 Oct 24 11:25 deepmod.py\n\n(and getting rid of one of the warnings, the one about README -- the one about missing MANIFEST.in clearly remains;-). 
Also change one line of setup.py to:\npackage_dir={'': '.'},\n\nNow, after python setup.py sdist, you do get a decent tarball:\n$ tar tvf dist/a-0.1.tar.gz \ndrwxr-xr-x aleax/eng 0 2009-10-24 11:40:05 a-0.1/\ndrwxr-xr-x aleax/eng 0 2009-10-24 11:40:05 a-0.1/mypack/\n-rw-r--r-- aleax/eng 0 2009-10-24 11:25:30 a-0.1/mypack/__init__.py\n-rw-r--r-- aleax/eng 0 2009-10-24 11:25:30 a-0.1/mypack/modu.py\ndrwxr-xr-x aleax/eng 0 2009-10-24 11:40:05 a-0.1/mypack/mysub/\n-rw-r--r-- aleax/eng 0 2009-10-24 11:25:30 a-0.1/mypack/mysub/__init__.py\n-rw-r--r-- aleax/eng 0 2009-10-24 11:25:30 a-0.1/mypack/mysub/deepmod.py\n-rw-r--r-- aleax/eng 156 2009-10-24 11:40:05 a-0.1/PKG-INFO\n-rw-r--r-- aleax/eng 0 2009-10-24 11:37:41 a-0.1/README.txt\n-rw-r--r-- aleax/eng 322 2009-10-24 11:39:46 a-0.1/setup.py\n\nthe MANIFEST file is still created in your current directory of course, but I hope that's not a problem.\n", "Instead of this:\nmy_package\\\n my_subpackage\\\n __init__.py\n deep_module.py\n __init__.py\n module.py\n setup.py\n\nTry this:\nmy_package_source\\\n setup.py\n README.txt\n my_package\\\n my_subpackage\\\n __init__.py\n deep_module.py\n __init__.py\n module.py\n\nYou don't actually need a README, it's just for illustrative purpose for what kind of things sit in the root directory of your project's folder.\n=== EDIT ======================================\nI should elaborate. After you run it your directory should then look something like this:\nmy_package_source\\\n setup.py\n README.txt\n MANIFEST\n PKG-INFO\n dist\\\n my_package_0.X.tar.gz (or .zip on windows I believe)\n my_package\\\n my_subpackage\\\n __init__.py\n deep_module.py\n __init__.py\n module.py\n\nUse the package under the dist directory to distribute.\n" ]
[ 6, 4 ]
[]
[]
[ "distribution", "distutils", "python" ]
stackoverflow_0001618674_distribution_distutils_python.txt
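To make the accepted fix concrete, here is a minimal sketch of the corrected layout, reusing the names from the question (my_package, my_subpackage); everything else is a placeholder. setup.py sits in the project root, one level above the package directory, and package_dir points at '.' (or is simply omitted):

project_root/
    setup.py
    README.txt
    my_package/
        __init__.py
        module.py
        my_subpackage/
            __init__.py
            deep_module.py

#!/usr/bin/env python
# setup.py -- run "python setup.py sdist" from project_root
from distutils.core import setup

setup(
    name='a',
    version='0.1',
    packages=['my_package', 'my_package.my_subpackage'],
    package_dir={'': '.'},  # the packages live next to setup.py, not above it
)

The tarball produced under dist/ should then contain both packages along with setup.py and PKG-INFO.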
Q: Mapping complex python objects to django models I have a certain class structure in my app that currently utilizes django for presentation. I was not using the model layer at all; the database interaction routines are hand-written. I am, however, considering the possibility of actually using django to its full potential and actually using the database-abstraction layer. The question is how to best integrate my existing class structure with the model layer. An example class: class UpperClass(base): def __init__(self, attr1, attr2): self.attr1 = attr1 self.attr2 = attr2 # attr1 and attr2 are actually instances of, say, # CustomType1 and CustomType2 So here is how I am going to map this to a django model: class UpperClass(models.Model): attr1 = CustomType1Field(...) attr2 = CustomType2Field(...) That's straightforward enough - all the serialization and validation stuff is already written, so it will not be hard at all to conjure up the custom field classes for CustomType1 and CustomType2. The real question is, where do I put the custom (not-database-related) behaviour of the actual UpperClass. In my understanding, models are there for "getting things in and out of the database", but where does the behaviour go then? Do I embed the non-database-related methods in the Model instances of the UpperClass? Really, I am at a loss here. Hope this makes at least partial sense to you. A: It somewhat depends on what specific behavior you want to encode. In most cases, you should try to put per-object behavior into the model class. It's a regular Python class, after all, so you can give it any methods you desire. You do need to account for the persistent nature, of course, e.g. by avoiding additional member data beyond those specified in the schema. A: This is largely a philosophical question re: MVC and how you choose to implement it. Arguably there is no right way here, but assuming that you want the model objects to all behave in a certain way regardless of the view that interacts with them, it makes sense to attach this behavior to the model. If the behavior is specific to only a certain view, and there are many other views that interact with the model, then it might make more sense to attach it to the view.
Mapping complex python objects to django models
I have a certain class structure in my app that currently utilizes django for presentation. I was not using the model layer at all; the database interaction routines are hand-written. I am, however, considering the possibility of actually using django to its full potential and actually using the database-abstraction layer. The question is how to best integrate my existing class structure with the model layer. An example class: class UpperClass(base): def __init__(self, attr1, attr2): self.attr1 = attr1 self.attr2 = attr2 # attr1 and attr2 are actually instances of, say, # CustomType1 and CustomType2 So here is how I am going to map this to a django model: class UpperClass(models.Model): attr1 = CustomType1Field(...) attr2 = CustomType2Field(...) That's straightforward enough - all the serialization and validation stuff is already written, so it will not be hard at all to conjure up the custom field classes for CustomType1 and CustomType2. The real question is, where do I put the custom (not-database-related) behaviour of the actual UpperClass. In my understanding, models are there for "getting things in and out of the database", but where does the behaviour go then? Do I embed the non-database-related methods in the Model instances of the UpperClass? Really, I am at a loss here. Hope this makes at least partial sense to you.
[ "It somewhat depends on what specific behavior you want to encode. In most cases, you should try to put per-object behavior into the model class. It's a regular Python class, after all, so you can give it any methods you desire. You do need to account for the persistent nature, of course, e.g. by avoiding additional member data beyond those specified in the schema.\n", "This is largely a philosophical question re: MVC and how you choose to implement it. Arguably there is no right way here, but assuming that you want the model objects to all behave in a certain way regardless of the view that interacts with them, it makes sense to attach this behavior to the model. If the behavior is specific to only a certain view, and there are many other views that interact with the model, then it might make more sense to attach it to the view.\n" ]
[ 4, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001618773_django_python.txt
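A minimal sketch of the pattern both answers point at: plain Python methods on the model class carry the non-database behaviour, and no instance state is added beyond the schema fields. The field names follow the question; the custom field classes are assumed to be defined elsewhere, and the method body is purely illustrative.

from django.db import models

class UpperClass(models.Model):
    # CustomType1Field/CustomType2Field are the hand-written custom
    # fields from the question, assumed to exist in this project
    attr1 = CustomType1Field()
    attr2 = CustomType2Field()

    def describe(self):
        # an ordinary method: the behaviour travels with the model instance
        return '%s / %s' % (self.attr1, self.attr2)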
Q: how to reduce size of exe using py2exe I developed a small program using python and wxwidgets. It is a very simple program that uses only a mini frame to display some information when needed, and the rest of the time it shows nothing, only an icon in the taskbar. When I build the exe using py2exe (single file exe mode, optimized), I get a 6MB size file! I tried not including some libraries or dll that were not needed but still, I can't see why I get such a big file for just a mini frame and an icon in the taskbar. Is there any way to reduce the size of the exe generated with py2exe? Here is what I did to reduce it a bit myself: options = {"py2exe":{"excludes" : ['_gtkagg', '_tkagg', 'bsddb', 'curses', 'email', 'pywin.debugger', 'pywin.debugger.dbgcon', 'pywin.dialogs', 'tcl', 'Tkconstants', 'Tkinter'], "dll_excludes": ['libgdk-win32-2.0-0.dll', 'libgobject-2.0-0.dll', 'tcl84.dll', 'tk84.dll'], Thanks. A: The fact that your program is simple does not mean that it's small. Your program has many dependencies through the wxWidgets stack and 6 Mb does not look that big with all this in mind. But, back to the question. To shrink a py2exe generated program, you can do a few obvious things: Distribute less stuff: it seems you already started this road. Look at everything that is distributed along with your program and eliminate it if it's not needed. DllDepend can tell you why a DLL is distributed with your program (probably because a pyd needs it). For other modules, remove them and see if it still works... Compress it better: run upx on each of your dll. Compress your final program/archive with 7zip maximum compression level. A: Most of the space is the Python runtime itself. py2exe doesn't "compile" your program to native x86 instructions, or anything like that. It just bundles Python, your *.pyc files, and any modules your program uses up into a bundle that runs on its own. You could therefore choose to distribute only your *.pyc files and leave it up to the user to provide their own Python distribution and install any needed modules. This isn't a very popular option on Windows, but it's what usually happens everywhere else.
how to reduce size of exe using py2exe
I developed a small program using python and wxwidgets. It is a very simple program that uses only a mini frame to display some information when needed, and the rest of the time it shows nothing, only an icon in the taskbar. When I build the exe using py2exe (single file exe mode, optimized), I get a 6MB size file! I tried not including some libraries or dll that were not needed but still, I can't see why I get such a big file for just a mini frame and an icon in the taskbar. Is there any way to reduce the size of the exe generated with py2exe? Here is what I did to reduce it a bit myself: options = {"py2exe":{"excludes" : ['_gtkagg', '_tkagg', 'bsddb', 'curses', 'email', 'pywin.debugger', 'pywin.debugger.dbgcon', 'pywin.dialogs', 'tcl', 'Tkconstants', 'Tkinter'], "dll_excludes": ['libgdk-win32-2.0-0.dll', 'libgobject-2.0-0.dll', 'tcl84.dll', 'tk84.dll'], Thanks.
[ "The fact that your program is simple does not mean that it's small. Your program has many dependencies through the wxWidgets stack and 6 Mb does not look that big with all this in mind.\nBut, back to the question. To shrink a py2exe generated program, you can do a few obvious things:\n\nDistribute less stuff: it seems you already started this road. Look at everything that is distributed along with your program and eliminate it if it's not needed. DllDepend can tell you why a DLL is distributed with your program (probably because a pyd needs it). For other modules, remove them and see if it still works...\nCompress it better: run upx on each of your dll. Compress your final program/archive with 7zip maximum compression level.\n\n", "Most of the space is the Python runtime itself. py2exe doesn't \"compile\" your program to native x86 instructions, or anything like that. It just bundles Python, your *.pyc files, and any modules your program uses up into a bundle that runs on its own.\nYou could therefore choose to distribute only your *.pyc files and leave it up to the user to provide their own Python distribution and install any needed modules. This isn't a very popular option on Windows, but it's what usually happens everywhere else.\n" ]
[ 5, 3 ]
[]
[]
[ "executable", "filesize", "py2exe", "python" ]
stackoverflow_0001617868_executable_filesize_py2exe_python.txt
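As a sketch of the "distribute less, compress better" advice, here is one way to combine the exclusions from the question with py2exe's own shrinking options; app.py is a hypothetical entry script, and running UPX over the remaining DLLs is a separate, optional step.

from distutils.core import setup
import py2exe  # importing it registers the py2exe command

setup(
    windows=['app.py'],
    zipfile=None,  # fold the library archive into the exe itself
    options={'py2exe': {
        'compressed': True,   # zlib-compress the bundled library
        'optimize': 2,        # like python -OO: strips docstrings
        'bundle_files': 1,    # single-file exe mode, as in the question
        'excludes': ['_gtkagg', '_tkagg', 'bsddb', 'curses', 'email',
                     'tcl', 'Tkconstants', 'Tkinter'],
        'dll_excludes': ['tcl84.dll', 'tk84.dll'],
    }},
)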
Q: regular expression in python I am sorry if this question is too simple but I can't seem to reason this out. I want to parse a string. I want to extract the words between 'The final score is: ' and '.'. In other words, if I have a string "The final score is: 25." I want it to extract 25. I don't know how to do this. Should I use match? or split?? Thanks A: If you know your information is always going to be formatted like that then you can simply use split: s = "The final score is: 25" score = s.split(':')[1].strip() This will result in score being 25. I use .strip() at the end to remove any whitespace as a safety measure. A: You're talking about "capturing" some characters between "The final score is:" and ".". This means you have a group. A group requires ()'s. See http://docs.python.org/library/re.html for all the rules. Since this smells like homework, I won't provide everything. The RE will have the form matcher = r'something:(something).' The ()'s define a group which is saved in the match object and can be retrieved. You have RE rules to match specific letters 'T', 'h', 'e', etc. You have RE rules to match digits '\d' A: If you're trying to understand regular expressions and not just trying to get a value out of a string, you may find this helpful. The first concept you need is that of grouping. Parentheses in a regular expression delimit a group; re.match() has a groups() method that returns a tuple of the text matched by the groups in the pattern. For instance: >>> re.match('foo(bar)baz', 'foobarbaz').groups() ('bar',) So in your case, you would create a pattern that matched text up to the colon, then a group that matched the text you're searching for. And here we get to the second part of the problem: what patterns should you search for? For instance, this pattern will definitely work: The final score is: (25). But it's not exceptionally useful, since it will only return a match (and 25 in the first group) if the string you're matching is The final score is: 25.. It won't match any other string. When you're composing a regular expression, the question you ask yourself is: "What parts of the input string can change, and how?" That tells you what kind of patterns to write. For instance, if your source always contains one and only one colon, the first part of your pattern can be [^:]*:. You're defining a class of characters that's everything other than a colon ([^:]), saying that you want to match it zero or more times (*), and then saying that you want to match the colon (:). If you know that your source always ends with a period, you can formulate the pattern used for the group the same way: "match every character that's not a period", or [^.]*. And you'll end up with this: s = 'The final score is: 25.' >>> re.match(r'[^:]*:([^.]*)', s).groups() (' 25',) This breaks if the value you're trying to capture contains a period, though. For a pattern that captures everything except the terminal period, you can define your group as ([^\$]*) (inside a character class the \$ is just a literal dollar sign, so this group greedily matches the remaining characters in the line) followed by .$. The terminal .$ means that in order for the pattern to match, it has to match the period at the end of the line. The group captures as many characters as it can right up until the point that grabbing any more will cause the pattern to not match. That means that this works: >>> s = "The final score is: this.is.something.different."
>>> re.match(r'[^:]*:([^\$]*).$', s).groups() (' this.is.something.different',) Okay, now let's look at another possible approach. Let's suppose that we don't know anything about the input except that there's going to be a colon, then somewhere after that a number, which may or may not be at the end of the string. In this case, our capturing group is clearly going to be ([\d]*), which grabs all of the digits it finds. But how do we formulate a pattern that correctly matches the widest range of possible inputs? Like this: >>> s = '9. The answer is: 25 or thereabouts.' >>> re.match(r'[^:]*[^\d]*([\d]*)', s).groups() ('25',) Left to right, that pattern says: first, match everything that's not a colon. Then, once you hit the colon, match everything that's not a digit. Then grab all the digits. I hope that helps. I'm still trying to learn regular expressions myself, which is why I'm bothering to write an answer as detailed as this.
regular expression in python
I am sorry if this question is too simple but I can't seem to reason this out. I want to parse a string. I want to extract the words between 'The final score is: ' and '.'. In other words, if I have a string "The final score is: 25." I want it to extract 25. I don't know how to do this. Should I use match? or split?? Thanks
[ "If you know your information is always going to be formatted like that then you can simply use split:\ns = \"The final score is: 25\"\nscore = s.split(':')[1].strip()\n\nThis will result in score being 25. I use .strip() at the end to remove any whitespace as a safety measure.\n", "You're talking about \"capturing\" some characters between \"The final score is:\" and \".\". This means you have a group. A group requires ()'s.\nSee http://docs.python.org/library/re.html for all the rules.\nSince this smells like homework, I won't provide everything. The RE will have the form\nmatcher = r'something:(something).'\n\nThe ()'s define a group which is saved in the match object and can be retrieved.\nYou have RE rules to match specific letters 'T', 'h', 'e', etc.\nYou have RE rules to match digits '\\d'\n", "If you're trying to understand regular expressions and not just trying to get a value out of a string, you may find this helpful.\nThe first concept you need is that of grouping. Parentheses in a regular expression delimit a group; re.match() has a groups() method that returns a tuple of the text matched by the groups in the pattern. For instance:\n>>> re.match('foo(bar)baz', 'foobarbaz').groups()\n('bar',)\n\nSo in your case, you would create a pattern that matched text up to the colon, then a group that matched the text you're searching for. And here we get to the second part of the problem: what patterns should you search for? For instance, this pattern will definitely work:\nThe final score is: (25).\n\nBut it's not exceptionally useful, since it will only return a match (and 25 in the first group) if the string you're matching is The final score is: 25.. It won't match any other string.\nWhen you're composing a regular expression, the question you ask yourself is: \"What parts of the input string can change, and how?\" That tells you what kind of patterns to write. \nFor instance, if your source always contains one and only one colon, the first part of your pattern can be [^:]*:. You're defining a class of characters that's everything other than a colon ([^:]), saying that you want to match it zero or more times (*), and then saying that you want to match the colon (:).\nIf you know that your source always ends with a period, you can formulate the pattern used for the group the same way: \"match every character that's not a period\", or [^.]*. And you'll end up with this:\ns = 'The final score is: 25.'\n>>> re.match(r'[^:]*:([^.]*)', s).groups()\n(' 25',)\n\nThis breaks if the value you're trying to capture contains a period, though. For a pattern that captures everything except the terminal period, you can define your group as ([^\\$]*) (inside a character class the \\$ is just a literal dollar sign, so this group greedily matches the remaining characters in the line) followed by .$. The terminal .$ means that in order for the pattern to match, it has to match the period at the end of the line. The group captures as many characters as it can right up until the point that grabbing any more will cause the pattern to not match.\nThat means that this works:\n>>> s = \"The final score is: this.is.something.different.\"\n>>> re.match(r'[^:]*:([^\\$]*).$', s).groups()\n(' this.is.something.different',)\n\nOkay, now let's look at another possible approach. Let's suppose that we don't know anything about the input except that there's going to be a colon, then somewhere after that a number, which may or may not be at the end of the string. 
In this case, our capturing group is clearly going to be ([\\d]*), which grabs all of the digits it finds. But how do we formulate a pattern that correctly matches the widest range of possible inputs? Like this:\n>>> s = '9. The answer is: 25 or thereabouts.'\n>>> re.match(r'[^:]*[^\\d]*([\\d]*)', s).groups()\n('25',)\n\nLeft to right, that pattern says: first, match everything that's not a colon. Then, once you hit the colon, match everything that's not a digit. Then grab all the digits.\nI hope that helps. I'm still trying to learn regular expressions myself, which is why I'm bothering to write an answer as detailed as this.\n" ]
[ 3, 2, 2 ]
[ "you don't need a regex to do this... do the split on \":\" , then get the last element. OR , just use string slicing \n>>> \"The final score is: 25.\"[-3:-1]\n'25'\n\n>>> s.split(\":\")[-1].strip() #use -1 to always get the last element\n'25'\n\n" ]
[ -1 ]
[ "python", "regex", "string" ]
stackoverflow_0001617808_python_regex_string.txt
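Pulling the answers together, a tiny runnable comparison of the two approaches, using the exact sample string from the question (Python 2 syntax, matching the thread):

import re

s = "The final score is: 25."
m = re.match(r'The final score is: (\d+)\.', s)
if m:
    print m.group(1)   # -> 25, captured by the parenthesized group

# the split-based alternative from the first answer:
print s.split(':')[1].strip().rstrip('.')   # -> 25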
Q: Openoffice3.1 pyuno confusing errors I'm trying to get the sample and other sample codes i find for pyuno running with openoffice 3.1.1 and python 2.5 with no luck. Unfortunately, pyuno does not give any clues about what goes wrong. In [1]: import uno In [2]: local = uno.getComponentContext() In [3]: resolver = local.ServiceManager.createInstanceWithContext("com.sun.star.bridge.UnoUrlResolver", local) --------------------------------------------------------------------------- com.sun.star.uno.RuntimeException Traceback (most recent call last) /opt/openoffice.org/basis3.1/program/ in () com.sun.star.uno.RuntimeException: : 'tuple' object has no attribute 'getTypes', traceback follows no traceback available below is the output of execution of /opt/openoffice.org/basis3.1/program/officehelper.py which basically boots the headless office instance and returns a related context object. den@ev:/opt/openoffice.org/basis3.1/program > python officehelper.py Traceback (most recent call last): File "officehelper.py", line 42, in from com.sun.star.connection import NoConnectException File "uno.py", line 273, in _uno_import RuntimeException = pyuno.getClass( "com.sun.star.uno.RuntimeException" ) RuntimeError: pyuno.getClass: expecting one string argument pyuno.getClass takes only 1 argument and it has to be a string, as defined in http://udk.openoffice.org/source/browse/udk/pyuno/source/module/pyuno_module.cxx?rev=1.14&view=markup i could not manage to get pyuno.getClass to work anyway. any suggestions about how to get pyuno working? A: In [1]: import uno In [2]: local = uno.getComponentContext() In [3]: resolver = local.ServiceManager.createInstanceWithContext("com.sun.star.bridge.UnoUrlResolver", local) OOP gone wrong, imho. i know its OT, but i tried getting uno to work before, and gave up. this is pure Steve Yegge Prose (read up on http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html). when you translate those lines into a more speakable form, they come out roughly as: "Let 'local' be the result of calling method 'get component context' of 'uno'. Let the 'service manager' be the attribute 'service manager' of 'local'. Let 'resolver' be the result of calling the 'service manager' method 'create instance with context', using arguments 'com sun star bridge uno url resolver' and 'local'." omg. no surprise something is wrong with a program that is so atrociously over-specific, convoluted, and self-referential while being not self-aware... you call a sub-method of 'local' and that sub-method has to be told what 'local' means? say what? hats off to the fearless developers who can cut through this. happy debugging. ADDED: thx for comment and points. the pyuno problem i cannot do anything about in fact, but i encourage you to pursue a patient trytrytry approach with a clear deadline. i also suggest to file an outright B.U.G. with the pyuno people (if they are in fact active—i got the impression that this was a rather silent project) because of the nonsense error message: the method in question appears to request one string argument, and it gets one, and it complains it did. this is so not helpful to the degree it becomes reasonable to declare a code fault. in this kind of situation i often look into the sources. but you already did that, right? i hate people to ask back ‘why do you want to do this?’ when i ask for help. 
however, sometimes someone (maybe you) does come up with another workable path in the process, one that does not include a solution to the particular problem, but helps to solve the superordinate one. so, if i may ask: what is the big picture?
Openoffice3.1 pyuno confusing errors
I'm trying to get the sample and other sample codes i find for pyuno running with openoffice 3.1.1 and python 2.5 with no luck. Unfortunately, pyuno does not give any clues about what goes wrong. In [1]: import uno In [2]: local = uno.getComponentContext() In [3]: resolver = local.ServiceManager.createInstanceWithContext("com.sun.star.bridge.UnoUrlResolver", local) --------------------------------------------------------------------------- com.sun.star.uno.RuntimeException Traceback (most recent call last) /opt/openoffice.org/basis3.1/program/ in () com.sun.star.uno.RuntimeException: : 'tuple' object has no attribute 'getTypes', traceback follows no traceback available below is the output of execution of /opt/openoffice.org/basis3.1/program/officehelper.py which basically boots the headless office instance and returns a related context object. den@ev:/opt/openoffice.org/basis3.1/program > python officehelper.py Traceback (most recent call last): File "officehelper.py", line 42, in from com.sun.star.connection import NoConnectException File "uno.py", line 273, in _uno_import RuntimeException = pyuno.getClass( "com.sun.star.uno.RuntimeException" ) RuntimeError: pyuno.getClass: expecting one string argument pyuno.getClass takes only 1 argument and it has to be a string, as defined in http://udk.openoffice.org/source/browse/udk/pyuno/source/module/pyuno_module.cxx?rev=1.14&view=markup i could not manage to get pyuno.getClass to work anyway. any suggestions about how to get pyuno working?
[ "In [1]: import uno\nIn [2]: local = uno.getComponentContext()\nIn [3]: resolver = local.ServiceManager.createInstanceWithContext(\"com.sun.star.bridge.UnoUrlResolver\", local)\nOOP gone wrong, imho. i know its OT, but i tried getting uno to work before, and gave up. this is pure Steve Yegge Prose (read up on http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html).\nwhen you translate those lines into a more speakable form, they come out roughly as:\n\"Let 'local' be the result of calling method 'get component context' of 'uno'. Let the 'service manager' be the attribute 'service manager' of 'local'. Let 'resolver' be the result of calling the 'service manager' method 'create instance with context', using arguments 'com sun star bridge uno url resolver' and 'local'.\"\nomg. no surprise something is wrong with a program that is so atrociously over-specific, convoluted, and self-referential while being not self-aware... you call a sub-method of 'local' and that sub-method has to be told what 'local' means? say what? hats off to the fearless developers who can cut through this. happy debugging.\nADDED:\nthx for comment and points.\nthe pyuno problem i cannot do anything about in fact, but i encourage you to pursue a patient trytrytry approach with a clear deadline. \ni also suggest to file an outright B.U.G. with the pyuno people (if they are in fact active—i got the impression that this was a rather silent project) because of the nonsense error message: the method in question appears to request one string argument, and it gets one, and it complains it did. this is so not helpful to the degree it becomes reasonable to declare a code fault. \nin this kind of situation i often look into the sources. but you already did that, right?\ni hate people to ask back ‘why do you want to do this?' when i ask for help. however, sometimes someone (maybe you) does come up with another workable path in the process, so, if i may ask: what is the big picture?\n" ]
[ 5 ]
[]
[]
[ "openoffice.org", "python", "pyuno", "uno" ]
stackoverflow_0001618833_openoffice.org_python_pyuno_uno.txt
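For reference, a sketch of the canonical connection boilerplate the question is attempting; it only works when run with the Python interpreter that ships with OpenOffice, and after the office has been started listening on a socket. The host and port below are conventional examples, not values from the question.

soffice -headless -accept="socket,host=localhost,port=2002;urp;"

import uno

local = uno.getComponentContext()
resolver = local.ServiceManager.createInstanceWithContext(
    "com.sun.star.bridge.UnoUrlResolver", local)
# resolve() reaches the running office over the socket opened above
ctx = resolver.resolve(
    "uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext")

If uno.getClass and friends fail before any connection is even attempted, as in the tracebacks above, the usual suspect is a system Python picking up OpenOffice's uno.py without the matching pyuno binary module.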
Q: Python + SVN + Windows/Mac = Invalid syntax? I'm pretty sure the following error is related to the fact that I'm sharing code via SVN with a colleague that is using a Windows system. Myself, I use Python on Mac, editing with TextMate. #!/usr/bin/python import os from google.appengine.api import users from google.appengine.ext import webapp ... When running that code, I get a syntax error: events.py:2 invalid syntax Is there an end-of-line issue when using SVN? Thankful for every hint. edit The problem seems not to be caused by SVN. Interestingly, executing directly at the Shell, there's no Syntax Error. But both validating with Textmate (using PyCheckMate) and trying to launch with GoogleAppEngineLauncher return the error. A: While Python shouldn't care about line endings, your Mac won't like having a CRLF line ending on the first line, so this may be your problem. 0000000 # ! / u s r / b i n / p y t h o n \r \n ^^ SVN can be told to translate line endings by setting the svn:eol-style property to native. It will then translate your LF endings to CRLF when the file is checked out in Windows, and translate your colleague's CRLF endings to LF when you check out on your Mac. A: Byte order mark? No, native EOL needed. I would be surprised if Python didn't just silently ignore the usual variations on record terminator styles. A shell or kernel might not, but then you would see something like python: bad interpreter. It's possible that there is a BOM, or byte order mark at the top of the file. Check it with od -c events.py. The BOM is not needed and not recommended in UTF-8, but for some reason Windows Notepad has the annoying habit of inserting one as the very first character in a file. So, we figured it out in the comments by noting that python events.py worked fine, making it clear that the \r was confusing the kernel's script execution. (#! interp [arg] is actually processed by the kernel, if that fails, the shell will try it, leading to the error seen.) The solution is in the svn manual in the property svn:eol-style. A: While it's true that line endings are (usually) different between Windows and the rest of the computing world, Python is designed to be agnostic about the issue. Normally Python has no trouble with varied line endings. I tried running a Python script on my Mac with a wide variety of line endings and had no problems. Note that I was running my script using the command: python test.py instead of ./test.py It might be worth trying both forms to see whether your problem is really Python or whether it is related to your shell/kernel. I know that some environments do have trouble with CRLF endings on the shebang line. A: Subversion ignores all end-of-line(EOL)-styles. It will never touch your files, unless you told it so. How can you tell Subversion to use a different EOL-Style? By setting property svn:eol-style to files in question: svn propset svn:eol-style LF /path/to/my/file/in/workingcopy You can check if your files have these properties set by using: svn proplist /path/to/my/file/in/workingcopy
Python + SVN + Windows/Mac = Invalid syntax?
I'm pretty sure the following error is related to the fact that I'm sharing code via SVN with a colleague that is using a Windows system. Myself, I use Python on Mac, editing with TextMate. #!/usr/bin/python import os from google.appengine.api import users from google.appengine.ext import webapp ... When running that code, I get a syntax error: events.py:2 invalid syntax Is there an end-of-line issue when using SVN? Thankful for every hint. edit The problem seems not to be caused by SVN. Interestingly, executing directly at the Shell, there's no Syntax Error. But both validating with Textmate (using PyCheckMate) and trying to launch with GoogleAppEngineLauncher return the error.
[ "While Python shouldn't care about line endings, your Mac won't like having a CRLF line ending on the first line, so this may be your problem.\n0000000 # ! / u s r / b i n / p y t h o n \\r \\n\n ^^\n\nSVN can be told to translate line endings by setting the svn:eol-style property to native. It will then translate your LF endings to CRLF when the file is checked out in Windows, and translate your colleague's CRLF endings to LF when you check out on your Mac.\n", "Byte order mark? No, native EOL needed.\nI would be surprised if Python didn't just silently ignore the usual variations on record terminator styles.\nA shell or kernel might not, but then you would see something like python: bad interpreter.\nIt's possible that there is a BOM, or byte order mark at the top of the file. Check it with od -c events.py.\nThe BOM is not needed and not recommended in UTF-8, but for some reason Windows Notepad has the annoying habit of inserting one as the very first character in a file.\nSo, we figured it out in the comments by noting that python events.py worked fine, making it clear that the \\r was confusing the kernel's script execution. (#! interp [arg] is actually processed by the kernel, if that fails, the shell will try it, leading to the error seen.) The solution is in the svn manual in the property svn:eol-style.\n", "While it's true that line endings are (usually) different between Windows and the rest of the computing world, Python is designed to be agnostic about the issue. Normally Python has no trouble with varied line endings.\nI tried running a Python script on my Mac with a wide variety of line endings and had no problems. Note that I was running my script using the command:\n\npython test.py\n\ninstead of\n\n./test.py\n\nIt might be worth trying both forms to see whether your problem is really Python or whether it is related to your shell/kernel. I know that some environments do have trouble with CRLF endings on the shebang line.\n", "Subversion ignores all end-of-line(EOL)-styles. It will never touch your files, unless you told it so.\nHow can you tell Subversion to use a different EOL-Style?\nBy setting property svn:eol-style to files in question: \nsvn propset svn:eol-style LF /path/to/my/file/in/workingcopy\n\nYou can check if your files have these properties set by using:\nsvn proplist /path/to/my/file/in/workingcopy\n\n" ]
[ 4, 1, 1, 1 ]
[]
[]
[ "google_app_engine", "operating_system", "python", "svn", "textmate" ]
stackoverflow_0001619229_google_app_engine_operating_system_python_svn_textmate.txt
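A quick way to confirm the diagnosis before touching any svn properties is to look at the raw first line of the file; reading in binary mode keeps Python from hiding the line ending (Python 2 style, matching the thread):

f = open('events.py', 'rb')
print repr(f.readline())   # '#!/usr/bin/python\r\n' betrays CRLF endings
f.close()

If the \r is there, fix the file once (or set svn:eol-style to native as described above) and the kernel will accept the shebang line again.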
Q: Grouping on an attribute using itertools.groupby I have a list of instances of the class 'Chromosome'. The list is sorted on the Chromosome attribute 'equation'. I now want to remove instances where the attribute 'equation' is the same, leaving only one. I don't know how to pass the key, i.e. the ?, so that it groups on 'equation'. b = [a for a,b in groupby(list, ?)] A: import operator [a for a, b in groupby(thelist, operator.attrgetter('equation'))] Btw, don't use builtin type names (such as list, file, etc) for your own identifiers, it's a confusing and best-avoided practice that will eventually byte you with peculiar bugs unless you wean yourself off it (i.e. one day you'll be maintaining your code, and find yourself using list(sometuple) to make a list out of some tuple, or the like... and get weird errors if you've used list to mean something different than list in this scope!-). A: Here is how to do it: b = [a for a,b in groupby(list, key=lambda a: a.equation)]
Grouping on an attribute using itertools.groupby
I have a list of instances of the class 'Chromosome'. The list is sorted on the Chromosome attribute 'equation'. I now want to remove instances where the attribute 'equation' is the same, leaving only one. I don't know how to pass the key, i.e. the ?, so that it groups on 'equation'. b = [a for a,b in groupby(list, ?)]
[ "import operator\n\n[a for a, b in groupby(thelist, operator.attrgetter('equation'))]\n\nBtw, don't use builtin type names (such as list, file, etc) for your own identifiers, it's a confusing and best-avoided practice that will eventually byte you with peculiar bugs unless you wean yourself off it (i.e. one day you'll be maintaining your code, and find yourself using list(sometuple) to make a list out of some tuple, or the like... and get weird errors if you've used list to mean something different than list in this scope!-).\n", "Here is how to do it:\nb = [a for a,b in groupby(list, key=lambda a: a.equation)]\n\n" ]
[ 7, 3 ]
[]
[]
[ "python" ]
stackoverflow_0001619227_python.txt
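A self-contained sketch of the deduplication; note that keeping `a` from each (key, group) pair yields the equation strings themselves, while taking the first element of each group keeps one Chromosome instance per equation. The Chromosome class here is a stand-in for the real one (Python 2 style, matching the thread).

from itertools import groupby
from operator import attrgetter

class Chromosome(object):
    def __init__(self, equation):
        self.equation = equation

population = [Chromosome('x+1'), Chromosome('x+1'), Chromosome('x*2')]
population.sort(key=attrgetter('equation'))  # groupby needs sorted input

# one instance per distinct 'equation' value
unique = [group.next() for key, group in
          groupby(population, key=attrgetter('equation'))]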
Q: python locale currency and negative numbers Why? C:\path\>manage.py shell Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import locale >>> locale.getlocale() ('Spanish_Colombia', '1252') >>> locale.currency( 1885, grouping=True ) '$ 1.885,00' >>> locale.currency( -1885, grouping=True ) '($ 1.885,00)' Can't it return $ -1.885,00? A: In your locale, surrounding a number with brackets indicates it's a negative number. Check this in Control Panel/Regional and Language settings, Python's probably picking it up from there. A: Parentheses around a number to indicate it's a debit (i.e. negative), not a credit (positive), are a common convention in accounting (I guess because the parentheses are more visible than a small minus/dash in front, and it's crucial to distinguish debits from credits;-). So, it's not surprising that many locales express that convention as the "proper way" to format negative numbers. If you want to use some parts of your locale's conventions, such as the $ sign and the commas, but not others, such as the parentheses, you'll have to use abs(yournumber) instead of just yournumber as the input to locale.currency, then, if yournumber < 0, do a little string manipulation to find the first digit and form a new string with the dash in front of it (or other string manipulations, depending on your desired way to express negative amounts -- e.g. the sign "minus" [dash] could go before the currency symbol, or at the right of the whole string). Why do you think that whoever's going to be reading that output wants some but not all of the locale's conventions to apply to it, by the way?
python locale currency and negative numbers
Why? C:\path\>manage.py shell Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import locale >>> locale.getlocale() ('Spanish_Colombia', '1252') >>> locale.currency( 1885, grouping=True ) '$ 1.885,00' >>> locale.currency( -1885, grouping=True ) '($ 1.885,00)' Can't it return $ -1.885,00?
[ "In your locale, surrounding a number with brackets indicates it's a negative number. Check this in Control Panel/Regional and Language settings, Python's probably picking it up from there.\n", "Parentheses around a number to indicate it's a debit (i.e. negative), not a credit (positive), are a common convention in accounting (I guess because the parentheses are more visible than a small minus/dash in front, and it's crucial to distinguish debits from credits;-).\nSo, it's not surprising that many locales express that convention as the \"proper way\" to format negative numbers. If you want to use some parts of your locale's conventions, such as the $ sign and the commas, but not others, such as the parentheses, you'll have to use abs(yournumber) instead of just yournumber as the input to locale.currency, then, if yournumber < 0, do a little string manipulation to find the first digit and form a new string with the dash in front of it (or other string manipulations, depending on your desired way to express negative amounts -- e.g. the sign \"minus\" [dash] could go before the currency symbol, or at the right of the whole string).\nWhy do you think that whoever's going to be reading that output wants some but not all of the locale's conventions to apply to it, by the way?\n" ]
[ 2, 2 ]
[]
[]
[ "currency", "django", "locale", "python" ]
stackoverflow_0001618975_currency_django_locale_python.txt
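The abs()-plus-string-manipulation idea from the second answer can be sketched like this; where exactly the sign should sit relative to the currency symbol is a choice, not something the locale will tell you, so the leading-minus placement below is just one option:

import locale
locale.setlocale(locale.LC_ALL, '')  # pick up the system locale, as in the shell session

def currency_with_minus(value):
    # format the magnitude with the locale's symbol and grouping...
    formatted = locale.currency(abs(value), grouping=True)
    # ...then re-attach the sign ourselves instead of using parentheses
    return '-' + formatted if value < 0 else formatted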
Q: Python zlib output, how to recover out of mysql utf-8 table? In python, I compressed a string using zlib, and then inserted it into a mysql column that is of type blob, using the utf-8 encoding. The string comes back as utf-8, but it's not clear how to get it back into a format where I can decompress it. Here is some pseduo-output: valueInserted = zlib.compress('a') = 'x\x9cK\x04\x00\x00b\x00b' valueFromSqlColumn = u'x\x9cK\x04\x00\x00b\x00b' zlib.decompress(valueFromSqlColumn) UnicodeEncodeError: 'ascii' codec can't encode character u'\x9c' in position 1: ordinal not in range(128) if i do this, it inserts some extra characters: valueFromSqlColumn.encode('utf-8') = 'x\xc2\x9cK\x04\x00\x00b\x00b' Any suggestions? A: Unicode is designed to be compatible with latin-1, so try: >>> import zlib >>> u = zlib.compress("test").decode('latin1') >>> u u'x\x9c+I-.\x01\x00\x04]\x01\xc1' And then >>> zlib.decompress(u.encode('latin1')) 'test' EDIT: Fixed typo, latin-1 isn't designed to be compatible with unicode, it's the other way around. A: You have a unicode object that is really encoding bytes. That's unfortunate, since unicode strings should really only be coding text, right? Anyway, what we want to do is to construct a byte string.. this is a str in Python 2.x. We see by the printed string you gave u'x\x9cK\x04\x00\x00b\x00b' that the byte values are encoded as unicode codepoints. We can get the numerical value of a codepoint by using the function ord(..). Then we can get the byte string representation of that number with the function chr(..). Let's try this: >>> ord(u"A") 65 >>> chr(_) 'A' So we can decode the string ourselves: >>> udata = u'x\x9cK\x04\x00\x00b\x00b' >>> bdata = "".join(chr(ord(uc)) for uc in udata) >>> bdata 'x\x9cK\x04\x00\x00b\x00b' (Wait, what does the above code do? The join stuff? What we first do is create a list of the code points in the string: >>> [ord(uc) for uc in udata] [120, 156, 75, 4, 0, 0, 98, 0, 98] Then we intepret the numbers as bytes, converting them individually: >>> [chr(ord(uc)) for uc in udata] ['x', '\x9c', 'K', '\x04', '\x00', '\x00', 'b', '\x00', 'b'] Finally, we join them with "" as separator using "".join(list-of-strings) End of Wait..) However, cls cleverly notes that the Latin-1 encoding has the property that a character's byte value in the Latin-1 encoding is equal to the character's codepoint in Unicode. Given, of course, that the character is inside the range 0 to 255 where Latin-1 is defined. This means we can do the byte conversion directly with Latin-1: >>> udata = u'x\x9cK\x04\x00\x00b\x00b' >>> udata.encode("latin-1") 'x\x9cK\x04\x00\x00b\x00b' Which as you can see, gives the same result. A: valueInserted = zlib.compress('a') = 'x\x9cK\x04\x00\x00b\x00b' Note that this is an str object. You say that you "inserted it into a mysql column that is of type blob, using the utf-8 encoding". AS the compressed string is binary, not text, "blob" is an appropriate type of column, but ANY encoding or other transformation is a very bad idea. You need to be able to recover from the database EXACTLY right down to the last bit what you inserted, otherwise the decompression will fail, either by raising an error or (less likely, but worse) silently producing garbage. You say that you get back after whatever process you go through in inserting it and extracting it again is: valueFromSqlColumn = u'x\x9cK\x04\x00\x00b\x00b' Note carefully that there is only one tiny visual difference: u'something' instead of 'something'. That makes it a unicode object. 
Based on your own evidence so far, "comes back as UTF-8" is not correct. A unicode object and a str object encoded in utf8 are not the same thing. Guess 1: insert as raw string, extract with latin1 decode. Guess 2: insert as compressed.decode('latin1').encode('utf8'), extract with utf8 decode. You really need to understand the process of inserting and extracting, including what encodes and decodes happen by default. Then you really need to fix your code. However in the meantime you can probably kludge up what you've got. Note this: >>> valueFromSqlColumn = u'x\x9cK\x04\x00\x00b\x00b' >>> all(ord(char) <= 255 for char in valueFromSqlColumn) True Do some trials with more complicated input than 'a'. If, as I guess, you see that all of the unicode characters have an ordinal in range(256), then you have a simple kludge: >>> compressed = valueFromSqlColumn.encode('latin1') >>> compressed 'x\x9cK\x04\x00\x00b\x00b' >>> zlib.decompress(compressed) 'a' Why this works is that Latin1 encoding/decoding doesn't change the ordinal. You could recover the original compressed value by: >>> compressed2 = ''.join(chr(ord(uc)) for uc in valueFromSqlColumn) >>> compressed2 'x\x9cK\x04\x00\x00b\x00b' >>> compressed2 == compressed True if you think using .encode('latin1') is too much like voodoo. If the above doesn't work (i.e. some ordinals are not in range(256)), then you will need to produce a small runnable script that shows exactly and reproducibly how you are compressing, inserting into the database, and retrieving from the database ... sprinkle lots of print "variable", repr(variable) around your code so that you can see what is happening.
Python zlib output, how to recover out of mysql utf-8 table?
In python, I compressed a string using zlib, and then inserted it into a mysql column that is of type blob, using the utf-8 encoding. The string comes back as utf-8, but it's not clear how to get it back into a format where I can decompress it. Here is some pseudo-output: valueInserted = zlib.compress('a') = 'x\x9cK\x04\x00\x00b\x00b' valueFromSqlColumn = u'x\x9cK\x04\x00\x00b\x00b' zlib.decompress(valueFromSqlColumn) UnicodeEncodeError: 'ascii' codec can't encode character u'\x9c' in position 1: ordinal not in range(128) If I do this, it inserts some extra characters: valueFromSqlColumn.encode('utf-8') = 'x\xc2\x9cK\x04\x00\x00b\x00b' Any suggestions?
[ "Unicode is designed to be compatible with latin-1, so try:\n>>> import zlib\n>>> u = zlib.compress(\"test\").decode('latin1')\n>>> u\nu'x\\x9c+I-.\\x01\\x00\\x04]\\x01\\xc1'\n\nAnd then\n>>> zlib.decompress(u.encode('latin1'))\n'test'\n\nEDIT: Fixed typo, latin-1 isn't designed to be compatible with unicode, it's the other way around.\n", "You have a unicode object that is really encoding bytes. That's unfortunate, since unicode strings should really only be coding text, right?\nAnyway, what we want to do is to construct a byte string.. this is a str in Python 2.x. We see by the printed string you gave u'x\\x9cK\\x04\\x00\\x00b\\x00b' that the byte values are encoded as unicode codepoints. We can get the numerical value of a codepoint by using the function ord(..). Then we can get the byte string representation of that number with the function chr(..). Let's try this:\n>>> ord(u\"A\")\n65\n>>> chr(_)\n'A'\n\nSo we can decode the string ourselves:\n>>> udata = u'x\\x9cK\\x04\\x00\\x00b\\x00b'\n>>> bdata = \"\".join(chr(ord(uc)) for uc in udata)\n>>> bdata\n'x\\x9cK\\x04\\x00\\x00b\\x00b'\n\n(Wait, what does the above code do? The join stuff? What we first do is create a list of the code points in the string:\n>>> [ord(uc) for uc in udata]\n[120, 156, 75, 4, 0, 0, 98, 0, 98]\n\nThen we intepret the numbers as bytes, converting them individually:\n>>> [chr(ord(uc)) for uc in udata]\n['x', '\\x9c', 'K', '\\x04', '\\x00', '\\x00', 'b', '\\x00', 'b']\n\nFinally, we join them with \"\" as separator using \"\".join(list-of-strings)\nEnd of Wait..)\nHowever, cls cleverly notes that the Latin-1 encoding has the property that a character's byte value in the Latin-1 encoding is equal to the character's codepoint in Unicode. Given, of course, that the character is inside the range 0 to 255 where Latin-1 is defined. This means we can do the byte conversion directly with Latin-1:\n>>> udata = u'x\\x9cK\\x04\\x00\\x00b\\x00b'\n>>> udata.encode(\"latin-1\")\n'x\\x9cK\\x04\\x00\\x00b\\x00b'\n\nWhich as you can see, gives the same result.\n", "valueInserted = zlib.compress('a') = 'x\\x9cK\\x04\\x00\\x00b\\x00b'\n\nNote that this is an str object. You say that you \"inserted it into a mysql column that is of type blob, using the utf-8 encoding\". AS the compressed string is binary, not text, \"blob\" is an appropriate type of column, but ANY encoding or other transformation is a very bad idea. You need to be able to recover from the database EXACTLY right down to the last bit what you inserted, otherwise the decompression will fail, either by raising an error or (less likely, but worse) silently producing garbage. \nYou say that you get back after whatever process you go through in inserting it and extracting it again is:\nvalueFromSqlColumn = u'x\\x9cK\\x04\\x00\\x00b\\x00b'\n\nNote carefully that there is only one tiny visual difference: u'something' instead of 'something'. That makes it a unicode object. Based on your own evidence so far, \"comes back as UTF-8\" is not correct. A unicode object and a str object encoded in utf8 are not the same thing.\nGuess 1: insert as raw string, extract with latin1 decode.\nGuess 2: insert as compressed.decode('latin1').encode('utf8'), extract with utf8 decode.\nYou really need to understand the process of inserting and extracting, including what encodes and decodes happen by default. \nThen you really need to fix your code. 
However in the meantime you can probably kludge up what you've got.\nNote this:\n>>> valueFromSqlColumn = u'x\\x9cK\\x04\\x00\\x00b\\x00b'\n>>> all(ord(char) <= 255 for char in valueFromSqlColumn)\nTrue\n\nDo some trials with more complicated input than 'a'. If, as I guess, you see that all of the unicode characters have an ordinal in range(256), then you have a simple kludge:\n>>> compressed = valueFromSqlColumn.encode('latin1')\n>>> compressed\n'x\\x9cK\\x04\\x00\\x00b\\x00b'\n>>> zlib.decompress(compressed)\n'a'\n\nWhy this works is that Latin1 encoding/decoding doesn't change the ordinal. You could recover the original compressed value by:\n>>> compressed2 = ''.join(chr(ord(uc)) for uc in valueFromSqlColumn)\n>>> compressed2\n'x\\x9cK\\x04\\x00\\x00b\\x00b'\n>>> compressed2 == compressed\nTrue\n\nif you think using .encode('latin1') is too much like voodoo.\nIf the above doesn't work (i.e. some ordinals are not in range(256)), then you will need to produce a small runnable script that shows exactly and reproducibly how you are compressing, inserting into the database, and retrieving from the database ... sprinkle lots of print \"variable\", repr(variable) around your code so that you can see what is happening.\n" ]
[ 7, 2, 1 ]
[]
[]
[ "mysql", "python", "unicode", "utf_8", "zlib" ]
stackoverflow_0001618926_mysql_python_unicode_utf_8_zlib.txt
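The whole thread boils down to a byte-for-byte round trip, which can be checked in isolation; this assumes the database layer preserved every codepoint below 256, as in the question's output (Python 2 syntax):

import zlib

compressed = zlib.compress('a')           # str: 'x\x9cK\x04\x00\x00b\x00b'
from_db = compressed.decode('latin-1')    # the unicode object MySQL hands back
restored = from_db.encode('latin-1')      # identical bytes again
assert restored == compressed
print zlib.decompress(restored)           # -> 'a'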
Q: with python's win32com and parsing html problem I'm new to python. I want to extract some text from the CNN website. I want to use the python win32com module. EDIT: on [why win32com] Because of javascript in website... I thought of using win32com; I have looked for other solutions but without success in regard to my requirement. In fact, I wanted to use mechanize or a similar solution but this didn't work [for me]. Is it possible to use beautifulsoup or lxml with win32com? Anyone who knows how to extract some text from the cnn website, please help me! Specifically I want to extract text in the cnn website from 'Sponsored links' and 'Money' import win32com.client from time import sleep from win32com.client import Dispatch import urllib,urllib2 from BeautifulSoup import BeautifulSoup ie = Dispatch("InternetExplorer.Application") ie.Visible = 1 ie.Navigate("http://www.cnn.com") sleep(15) ie.Quit() A: Are you trying to parse some text on cnn's web site? You can get the page with import urllib f = urllib.urlopen('http://www.cnn.com') page = f.read() f.close() You can then use BeautifulSoup to find whatever it is you are looking for on page. Why win32com, dispatch, etc.?
with python's win32com and parsing html problem
I'm new to python. I want to extract some text from the CNN website. I want to use the python win32com module. EDIT: on [why win32com] Because of javascript in the website... I thought of using win32com; I have looked for other solutions but without success with regard to my requirement. In fact, I wanted to use mechanize or a similar solution but this didn't work [for me]. Is it possible to use beautifulsoup or lxml with win32com? Anyone who knows how to extract some text from the CNN website, please help me! Specifically, I want to extract text on the CNN website from 'Sponsored links' and 'Money' import win32com.client from time import sleep from win32com.client import Dispatch import urllib,urllib2 from BeautifulSoup import BeautifulSoup ie = Dispatch("InternetExplorer.Application") ie.Visible = 1 ie.Navigate("http://www.cnn.com") sleep(15) ie.Quit()
[ "Are you trying to parse some text on cnn's web site?\nYou can get the page with\nimport urllib\nf = urllib.urlopen('http://www.cnn.com')\npage = f.read()\nf.close()\n\nYou can then use BeautifulSoup to find whatever it is you are looking for on page.\nWhy win32com, dispatch, etc.?\n" ]
[ 1 ]
[]
[]
[ "parsing", "python" ]
stackoverflow_0001619489_parsing_python.txt
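A hedged, IE-free sketch of the answer's suggestion, using Python 2 urllib with BeautifulSoup 3; the div class passed to findAll() is hypothetical, so inspect the real page source for the section you want:

    import urllib
    from BeautifulSoup import BeautifulSoup

    page = urllib.urlopen('http://www.cnn.com').read()
    soup = BeautifulSoup(page)
    # 'cnn_sponsored' is a made-up class name standing in for the real one
    for div in soup.findAll('div', {'class': 'cnn_sponsored'}):
        print ''.join(div.findAll(text=True))

Note that content injected by JavaScript will not be in the fetched HTML at all; for that you would still need a browser-driving approach like the win32com one in the question.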
Q: Using Python Regular Expression in Django I have a web address: http://www.example.com/org/companyA I want to be able to pass CompanyA to a view using regular expressions. This is what I have: (r'^org/?P<company_name>\w+/$',"orgman.views.orgman") and it doesn't match. Ideally all URLs that look like example.com/org/X would pass X to the view. Thanks in advance! A: You need to wrap the group name in parentheses. The syntax for named groups is (?P<name>regex), not ?P<name>regex. Also, if you don't want to require a trailing slash, you should make it optional. It's easy to test regular expression matching with the Python interpreter, for example: >>> import re >>> re.match(r'^org/?P<company_name>\w+/$', 'org/companyA') >>> re.match(r'^org/(?P<company_name>\w+)/?$', 'org/companyA') <_sre.SRE_Match object at 0x10049c378> >>> re.match(r'^org/(?P<company_name>\w+)/?$', 'org/companyA').groupdict() {'company_name': 'companyA'} A: Your regex isn't valid. It should probably look like r'^org/(?P<company_name>\w+)/$' A: It should look more like r'^org/(?P<company_name>\w+)' >>> r = re.compile(r'^org/(?P<company_name>\w+)') >>> r.match('org/companyA').groups() ('companyA',)
Using Python Regular Expression in Django
I have a web address: http://www.example.com/org/companyA I want to be able to pass CompanyA to a view using regular expressions. This is what I have: (r'^org/?P<company_name>\w+/$',"orgman.views.orgman") and it doesn't match. Ideally all URLs that look like example.com/org/X would pass X to the view. Thanks in advance!
[ "You need to wrap the group name in parentheses. The syntax for named groups is (?P<name>regex), not ?P<name>regex. Also, if you don't want to require a trailing slash, you should make it optional.\nIt's easy to test regular expression matching with the Python interpreter, for example:\n>>> import re\n>>> re.match(r'^org/?P<company_name>\\w+/$', 'org/companyA')\n>>> re.match(r'^org/(?P<company_name>\\w+)/?$', 'org/companyA')\n<_sre.SRE_Match object at 0x10049c378>\n>>> re.match(r'^org/(?P<company_name>\\w+)/?$', 'org/companyA').groupdict()\n{'company_name': 'companyA'}\n\n", "Your regex isn't valid. It should probably look like\nr'^org/(?P<company_name>\\w+)/$'\n\n", "It should look more like r'^org/(?P<company_name>\\w+)'\n>>> r = re.compile(r'^org/(?P<company_name>\\w+)')\n>>> r.match('org/companyA').groups()\n('companyA',)\n\n" ]
[ 20, 2, 1 ]
[]
[]
[ "django", "django_urls", "python", "regex" ]
stackoverflow_0001619554_django_django_urls_python_regex.txt
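A hedged sketch of how the corrected pattern would be wired up in a Django 1.x-era URLconf (string view paths, as in the question; the view body is purely illustrative):

    # urls.py
    from django.conf.urls.defaults import patterns

    urlpatterns = patterns('',
        (r'^org/(?P<company_name>\w+)/$', 'orgman.views.orgman'),
    )

    # orgman/views.py
    from django.http import HttpResponse

    def orgman(request, company_name):
        return HttpResponse('Company: %s' % company_name)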
Q: How to import win32api I'm trying to use some python-2.1 code to control another program (ArcGIS). The version of python I am using is 2.5. I am getting the following error message when I run the code. <type'exceptions.ImportError'>: No module named win32api Failed to execute (polyline2geonetwork2). I tried installing pywin32-214.win32-py2.5.exe but I still get the same error message. I can't figure out if I need to do anything to my original python install so it knows that I have installed this. I think the problematic part of my code is the following: import win32com.client, sys, string, os, re, time, math gp = win32com.client.Dispatch("esriGeoprocessing.GpDispatch.1") conn = win32com.client.Dispatch(r'ADODB.Connection') Thanks for your help - I am quite new to python. A: Your sys.path is ['C:\\Documents and Settings\\david\\My Documents\\GIS_References\\public\\funconn_public', 'C:\\Python25\\Lib\\idlelib', 'C:\\Program Files\\ArcGIS\\bin', 'C:\\WINDOWS\\system32\\python25.zip', 'C:\\Python25\\DLLs', 'C:\\Python25\\lib', 'C:\\Python25\\lib\\plat-win', 'C:\\Python25\\lib\\lib-tk', 'C:\\Python25', 'C:\\Python25\\lib\\site-packages', 'C:\\Python25\\lib\\site-packages\\win32', 'C:\\Python25\\lib\\site-packages\\win32\\lib', 'C:\\Python25\\lib\\site-packages\\Pythonwin'] and winapi.py is located in C:\Python25\Lib\site-packages\isapi\test\build\bdist.win32\winexe\temp. Notice that this directory is not listed in your sys.path. To get things working, you'll need to put C:\Python25\Lib\site-packages\isapi\test\build\bdist.win32\winexe\temp in your sys.path. It appears winapi.py is not yet installed. It is in a test\build...\temp directory. I don't know much about Windows+Python. Maybe there is documentation that came with winapi.py which explains how the installation is supposed to be achieved. A quick (but ugly) fix is to manually insert the needed directory into sys.path. By this I mean, you can edit polyline2geonetwork.py and put import sys sys.path.append(r'C:\Python25\Lib\site-packages\isapi\test\build\bdist.win32\winexe\temp') near the top of the file. A: Print out sys.path right before the import and make sure the path to win32com is in there.
How to import win32api
I'm trying to use some python-2.1 code to control another program (ArcGIS). The version of python I am using is 2.5. I am getting the following error message when I run the code. <type'exceptions.ImportError'>: No module named win32api Failed to execute (polyline2geonetwork2). I tried installing pywin32-214.win32-py2.5.exe but I still get the same error message. I can't figure out if I need to do anything to my original python install so it knows that I have installed this. I think the problematic part of my code is the following: import win32com.client, sys, string, os, re, time, math gp = win32com.client.Dispatch("esriGeoprocessing.GpDispatch.1") conn = win32com.client.Dispatch(r'ADODB.Connection') Thanks for your help - I am quite new to python.
[ "Your sys.path is \n['C:\\\\Documents and Settings\\\\david\\\\My Documents\\\\GIS_References\\\\public\\\\funconn_public', 'C:\\\\Python25\\\\Lib\\\\idlelib', 'C:\\\\Program Files\\\\ArcGIS\\\\bin', 'C:\\\\WINDOWS\\\\system32\\\\python25.zip', 'C:\\\\Python25\\\\DLLs', 'C:\\\\Python25\\\\lib', 'C:\\\\Python25\\\\lib\\\\plat-win', 'C:\\\\Python25\\\\lib\\\\lib-tk', 'C:\\\\Python25', 'C:\\\\Python25\\\\lib\\\\site-packages', 'C:\\\\Python25\\\\lib\\\\site-packages\\\\win32', 'C:\\\\Python25\\\\lib\\\\site-packages\\\\win32\\\\lib', 'C:\\\\Python25\\\\lib\\\\site-packages\\\\Pythonwin']\n\nand winapi.py is located in C:\\Python25\\Lib\\site-packages\\isapi\\test\\build\\bdist.win32\\winexe\\temp.\nNotice that this directory is not listed in your sys.path. To get things working, you'll need to put C:\\Python25\\Lib\\site-packages\\isapi\\test\\build\\bdist.win32\\winexe\\temp in your sys.path.\nIt appears winapi.py is not yet installed. It is in a test\\build...\\temp directory.\nI don't know much about Windows+Python. Maybe there is documentation that came with winapi.py which explains how the installation is suppose to be achieved.\nA quick (but ugly) fix is to manually insert the needed directory into sys.path.\nBy this I mean, you can edit polyline2geonetwork.py and put\nimport sys\nsys.path.append(r'C:\\Python25\\Lib\\site-packages\\isapi\\test\\build\\bdist.win32\\winexe\\temp')\n\nnear the top of the file. \n", "print out sys.path right before the import and make sure the path to win32com is in there\n" ]
[ 2, 1 ]
[]
[]
[ "import", "python", "python_2.5" ]
stackoverflow_0001619469_import_python_python_2.5.txt
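A hedged diagnostic snippet in the spirit of the second answer: drop it near the top of the failing script to see which interpreter and search path are in effect, and to append the pywin32 directories by hand if they are missing. The paths below match a default Python 2.5 install and may need adjusting:

    import sys
    print sys.executable
    print sys.path

    for p in (r'C:\Python25\Lib\site-packages\win32',
              r'C:\Python25\Lib\site-packages\win32\lib',
              r'C:\Python25\Lib\site-packages\Pythonwin'):
        if p not in sys.path:
            sys.path.append(p)

    import win32api  # should now resolve if pywin32 is installed there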
Q: Extracting decimals from a number in Python I am writing a function to extract decimals from a number. Ignore the exception and its syntax, I am working on 2.5.2 (default Leopard version). My function does not yet handle 0's. My issue is, the function produces random errors with certain numbers, and I don't understand the reason. I will post an error readout after the code. Function: def extractDecimals(num): try: if(num > int(num)): decimals = num - int(num) while(decimals > int(decimals)): print 'decimal: ' + str(decimals) print 'int: ' + str(int(decimals)) decimals *= 10 decimals = int(decimals) return decimals else: raise DecimalError(num) except DecimalError, e: e.printErrorMessage() Exception Class: class DecimalError(Exception): def __init__(self, value): self.value = value def printErrorMessage(self): print 'The number, ' + str(self.value) + ', is not a decimal.' Here is error output when I input the number 1.988: decimal: 0.988 int: 0 decimal: 9.88 int: 9 decimal: 98.8 int: 98 decimal: 988.0 int: 987 decimal: 9880.0 int: 9879 decimal: 98800.0 int: 98799 decimal: 988000.0 int: 987999 decimal: 9880000.0 int: 9879999 decimal: 98800000.0 int: 98799999 decimal: 988000000.0 int: 987999999 decimal: 9880000000.0 int: 9879999999 decimal: 98800000000.0 int: 98799999999 decimal: 988000000000.0 int: 987999999999 decimal: 9.88e+12 int: 9879999999999 decimal: 9.88e+13 int: 98799999999999 decimal: 9.88e+14 int: 987999999999999 9879999999999998 I do not know why this error is popping up. Hopefully you guys can help me out. A: The problem is that (binary) floating point numbers aren't precisely representable as decimals. See Why can't decimal numbers be represented exactly in binary? for more information. A: As Ned Batchelder said, not all decimals are exactly representable as floats. A float is represented by a certain number of binary digits which are used to approximate the decimal as closely as possible. You can never assume a float is exactly equal to a decimal. In [49]: num Out[49]: 1.988 In [50]: decimals=num - int(num) In [51]: decimals Out[51]: 0.98799999999999999 In [52]: print decimals # Notice that print rounds the result, masking the inaccuracy. 0.988 See http://en.wikipedia.org/wiki/Floating_point for more info on the binary representation of floats. There are other ways to achieve your goal. Here is one way, using string operations: def extractDecimals(num): try: numstr=str(num) return int(numstr[numstr.find('.')+1:]) except ValueError, e: print 'The number, %s, is not a decimal.'%num A: As others have already pointed out, the issue you are seeing is due to the inexact representation of floating point numbers Try your program with Python's Decimal from decimal import Decimal extractDecimals(Decimal("0.988")) A: As has already been said, floating point numbers are not exactly equal to decimals. You can see this by using the modulus operator like so: >>> 0.988 % 1 0.98799999999999999 >>> 9.88 % 1 0.88000000000000078 >>> 98.8 % 1 0.79999999999999716 This gives the remainder of division by 1, or the decimal. A: As others have said in their answers, arithmetic with floats doesn't always result in what you expect due to rounding errors. In this case, perhaps converting the float into a string and back is better? In [1]: num = 1.988 In [2]: num_str = str(num) In [3]: decimal = num_str.split('.')[1] In [4]: decimal = int(decimal) In [5]: decimal Out[5]: 988
Extracting decimals from a number in Python
I am writing a function to extract decimals from a number. Ignore the exception and its syntax, I am working on 2.5.2 (default Leopard version). My function does not yet handle 0's. My issue is, the function produces random errors with certain numbers, and I don't understand the reason. I will post an error readout after the code. Function: def extractDecimals(num): try: if(num > int(num)): decimals = num - int(num) while(decimals > int(decimals)): print 'decimal: ' + str(decimals) print 'int: ' + str(int(decimals)) decimals *= 10 decimals = int(decimals) return decimals else: raise DecimalError(num) except DecimalError, e: e.printErrorMessage() Exception Class: class DecimalError(Exception): def __init__(self, value): self.value = value def printErrorMessage(self): print 'The number, ' + str(self.value) + ', is not a decimal.' Here is error output when I input the number 1.988: decimal: 0.988 int: 0 decimal: 9.88 int: 9 decimal: 98.8 int: 98 decimal: 988.0 int: 987 decimal: 9880.0 int: 9879 decimal: 98800.0 int: 98799 decimal: 988000.0 int: 987999 decimal: 9880000.0 int: 9879999 decimal: 98800000.0 int: 98799999 decimal: 988000000.0 int: 987999999 decimal: 9880000000.0 int: 9879999999 decimal: 98800000000.0 int: 98799999999 decimal: 988000000000.0 int: 987999999999 decimal: 9.88e+12 int: 9879999999999 decimal: 9.88e+13 int: 98799999999999 decimal: 9.88e+14 int: 987999999999999 9879999999999998 I do not know why this error is popping up. Hopefully you guys can help me out.
[ "The problem is that (binary) floating point numbers aren't precisely representable as decimals. See Why can't decimal numbers be represented exactly in binary? for more information.\n", "As Ned Batchelder said, not all decimals are exactly representable as floats. A float is represented by a certain number of binary digits which are used to approximate the decimal as closely as possible. You can never assume a float is exactly equal to a decimal.\nIn [49]: num\nOut[49]: 1.988\n\nIn [50]: decimals=num - int(num)\n\nIn [51]: decimals\nOut[51]: 0.98799999999999999\n\nIn [52]: print decimals # Notice that print rounds the result, masking the inaccuracy.\n0.988\n\nSee http://en.wikipedia.org/wiki/Floating_point for more info on the binary representation of floats.\nThere are other ways to achieve you goal. Here is one way, using string operations:\ndef extractDecimals(num):\n try:\n numstr=str(num)\n return int(numstr[numstr.find('.')+1:])\n except ValueError, e:\n print 'The number, %s is not a decimal.'%num\n\n", "As others have already pointed out, the issue you are seeing is due to the inexact representation of floating point numbers\nTry your program with Python's Decimal\nfrom decimal import Decimal\nextractDecimals(Decimal(\"0.988\"))\n\n", "As has already been said, floating point numbers are not exactly equal to decimals. You can see this by using the modulus operator like so:\n>>> 0.988 % 1\n0.98799999999999999\n>>> 9.88 % 1\n0.88000000000000078\n>>> 98.8 % 1\n0.79999999999999716\n\nThis gives the remainder of division by 1, or the decimal.\n", "As others have said in their answers, arithmetic with floats doesn't always result in what you expect due to rounding errors. In this case, perhaps converting the float into a string and back is better?\nIn [1]: num = 1.988\n\nIn [2]: num_str = str(num)\n\nIn [3]: decimal = num_str.split('.')[1]\n\nIn [4]: decimal = int(decimal)\n\nIn [5]: decimal\nOut[5]: 988\n\n" ]
[ 5, 1, 1, 1, 0 ]
[]
[]
[ "python", "python_2.5" ]
stackoverflow_0001619818_python_python_2.5.txt
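A hedged variant using the decimal module, as the third answer suggests; it sidesteps binary floating-point error entirely when the value starts life as a string. Note that, as in the original function, leading zeros in the fractional part are lost:

    from decimal import Decimal

    def extract_decimals(num_str):
        d = Decimal(num_str)
        fraction = d - int(d)
        if fraction == 0:
            raise ValueError('The number, %s, is not a decimal.' % num_str)
        sign, digits, exponent = fraction.as_tuple()
        return int(''.join(str(digit) for digit in digits))

    print extract_decimals('1.988')   # prints 988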
Q: What is causing this Python exception? I have a C++ app that uses Python to load some scripts. It calls some functions in the scripts, and everything works fine until the app exits and calls Py_Finalize. Then it displays the following: (GetName is a function in one of the scripts) Exception AttributeError: "'module' object has no attribute 'GetName'" in 'garbage collection' ignored Fatal Python error: unexpected exception during garbage collection Then the app crashes. I'm using Python 3.1 on Windows. Any advice would be appreciated. A: From the docs to Py_Finalize(): Bugs and caveats: The destruction of modules and objects in modules is done in random order; this may cause destructors (__del__() methods) to fail when they depend on other objects (even functions) or modules. Dynamically loaded extension modules loaded by Python are not unloaded. Small amounts of memory allocated by the Python interpreter may not be freed (if you find a leak, please report it). Memory tied up in circular references between objects is not freed. Some memory allocated by extension modules may not be freed. Some extensions may not work properly if their initialization routine is called more than once; this can happen if an application calls Py_Initialize() and Py_Finalize() more than once. Most likely a __del__ contains a call to <somemodule>.GetName(), but that module has already been destroyed by the time __del__ is called.
What is causing this Python exception?
I have a C++ app that uses Python to load some scripts. It calls some functions in the scripts, and everything works fine until the app exits and calls Py_Finalize. Then it displays the following: (GetName is a function in one of the scripts) Exception AttributeError: "'module' object has no attribute 'GetName'" in 'garbage collection' ignored Fatal Python error: unexpected exception during garbage collection Then the app crashes. I'm using Python 3.1 on Windows. Any advice would be appreciated.
[ "From the docs to Py_Finalize():\n\nBugs and caveats: The destruction of\n modules and objects in modules is done\n in random order; this may cause\n destructors (__del__() methods) to\n fail when they depend on other objects\n (even functions) or modules.\n Dynamically loaded extension modules\n loaded by Python are not unloaded.\n Small amounts of memory allocated by\n the Python interpreter may not be\n freed (if you find a leak, please\n report it). Memory tied up in circular\n references between objects is not\n freed. Some memory allocated by\n extension modules may not be freed.\n Some extensions may not work properly\n if their initialization routine is\n called more than once; this can happen\n if an application calls\n Py_Initialize() and Py_Finalize() more\n than once.\n\nMost likely a __del__ contains a call to <somemodule>.GetName(), but that module has already been destroyed by the time __del__ is called.\n" ]
[ 4 ]
[]
[]
[ "attributes", "exception", "python" ]
stackoverflow_0001619908_attributes_exception_python.txt
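Since the report above points at a __del__ running during Py_Finalize, a common Python-side workaround (an assumption about the scripts, which are not shown) is to bind whatever the destructor needs as a default argument at definition time, so it no longer looks names up in a module whose globals may already be torn down. A minimal sketch with a stand-in stdlib function, where os.path.basename plays the role of the script's GetName:

    import os.path

    class Tracked(object):
        def __init__(self, path):
            self.path = path

        # basename is captured when the class body executes; a plain
        # os.path.basename(...) lookup here could fail at interpreter shutdown.
        def __del__(self, basename=os.path.basename):
            print 'releasing', basename(self.path)

    t = Tracked('/tmp/example.txt')
    del t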
Q: name a column and then call items with name instead of index I have a list of columns that I will be analyzing. Instead of referring back to the specific index such as data[1][2], I'd like to assign a name to a column and then loop through the rows of the columns for different tasks. How do I assign a name to a column, and is the following the correct format for referring back to it? for x in range (len(data)): if [column_name][x] .... A: There's a bunch of different ways of doing this. If you know the association between names and columns at the time you write your code, the easiest way by far is this: for row in data: (foo, bar, baz, bat) = row ...assuming that you don't need to update row (and data). If you need to update the row, copying the values to variables won't work. You need to manipulate the items via their indexes. In that case, aviraldg's approach is simplest: (foo, bar, baz, bat) = range(4) for row in data: row[foo] = row[bar] If you want to reference the columns by a string that contains their name, you'll need to copy the row to a dictionary whose key is the column name. Then, when you're done manipulating the dictionary, you need to update the original list, or replace it with a new one (which is what this code does): columns = ("foo", "bar", "baz", "bat") for i in range(len(data)): d = dict(zip(columns, data[i])) d["foo"] = d["bar"] data[i] = [d[col] for col in columns] A: The easiest way to use names instead of integers to access your data is to use a dictionary data = {'pig':[1, 2], 'cow':[3, 4], 'dog':[5,6]} for x in range(2): if data['cow'][x]==4:... There are other ways as well. For example, you could make a class and override __getitem__; or you could just assign variable names to column numbers in a 2d array, like dog=2, etc; it all depends on exactly what you want to do. A: I believe what you want to do is store a column in a variable and then reference it, instead of always using the subscript. Simple enough: varName=data[columnName] For subsequent accesses to that column, varName[rowName] should do the trick.
name a column and then call items with name instead of index
I have a list of columns that I will be analyzing. Instead of referring back to the specific index such as data[1][2], I'd like to assign a name to a column and then loop through the rows of the columns for different tasks. How do I assign a name to a column, and is the following the correct format for referring back to it? for x in range (len(data)): if [column_name][x] ....
[ "There's a bunch of different ways of doing this. If you know the association between names and columns at the time you write your code, the easiest way by far is this:\nfor row in data:\n (foo, bar, baz, bat) = row\n\n...assuming that you don't need to update row (and data). \nIf you need to update the row, copying the values to variables won't work. You need to manipulate the items via their indexes. In that case, aviraldg's approach is simplest:\n(foo, bar, baz, bat) = range(4)\nfor row in data:\n row[foo] = row[bar]\n\nIf you want to reference the columns by a string that contains their name, you'll need to copy the row to a dictionary whose key is the column name. Then, when you're done manipulating the dictionary, you need to update the original list, or replace it with a new one (which is what this code does):\ncolumns = (\"foo\", \"bar\", \"baz\", \"bat\")\nfor i in range(len(data)):\n d = dict(zip(columns, data[i]))\n d[\"foo\"] = d[\"bar\"]\n data[i] = [d[col] for col in columns]\n\n", "The easiest way to use names instead of integers to access your data is to use a dictionary\ndata = {'pig':[1, 2], 'cow':[3, 4], 'dog':[5,6]}\n\nif data in range(2):\n if data['dog'][1]==4:...\n\nThere are other ways as well. For example, you could make a class and override __getitem__; or you could just assign variable names to column numbers in a 2d array, like dog=2, etc; it all depends on exactly what you want to do.\n", "I believe what you want to do is store a column in a variable and then reference it, instead of always using the subscript. Simple enough:\nvarName=data[columnName]\n\nFor consequent accesses to that column, varName[rowName] should do the trick\n" ]
[ 3, 2, 0 ]
[]
[]
[ "names", "python" ]
stackoverflow_0001619880_names_python.txt
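One more hedged idiom in the same spirit (Python 2.6+): collections.namedtuple gives every column an attribute name while the rows stay plain tuples; the field and row values below are made up for illustration:

    from collections import namedtuple

    Row = namedtuple('Row', ['company', 'revenue', 'year'])
    data = [Row('Acme', 120, 2008), Row('Globex', 95, 2009)]

    for row in data:
        if row.year == 2009:
            print row.company, row.revenue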
Q: In Django, how can I embed something in CSS? background: url({{ MEDIA_URL }}/bg.jpg); That does not work, because this Django template function only works in .html! A: Django template function only works in .html! This is totally wrong. Django templates are independent of the format. They can easily be used for any other format, and in fact for many other purposes. how can I embed something in CSS? You serve your css from a static server like nginx or apache (configured for statics); Django does not even access these files. Only during development, for sheer convenience, you can ask django to serve statics too. In django templates, you need to load the template and render a context. For convenience, render to response does that, and returns it as an HTTP response. If you need to render a css file, you will need to do the loading and rendering as well. When you do this, it might be easier to embed an inline style within the page than importing from an external css file. A: Actually, an image URL in CSS can be relative to the CSS file. So you don't really need to put the MEDIA_URL in a CSS file.
In Django, how can I embed something in CSS?
background: url({{ MEDIA_URL }}/bg.jpg); That does not work, because this Django template function only works in .html!
[ "\nDjango template function only works in .html!\n\nThis is totally wrong. Django templates are independent of the format. It can be easily used on any other format, infact for many other purposes.\n\nhow can I embed something in CSS?\n\nYou serve your css from a static server like nginx or apache (configured for statics); Django does not even access these files. Only during development, for sheer convenience, you can ask django to serve statics too.\nIn django templates, you need to load the template and render a context. For convenience, render to response does that, and returns it as a http response. If you need to edit a css, you will need to do the loading and rendering as well. When you do this, it might be easier to embed an inline style within the page than importing from an external css file.\n", "Actualy image in CSS can be relative to the CSS file.\nSo you don't really need to put the MEDIA_URL in a CSS file.\n" ]
[ 9, 9 ]
[]
[]
[ "css", "django", "python", "templates" ]
stackoverflow_0001620122_css_django_python_templates.txt
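A hedged sketch of actually rendering CSS through Django when template variables are unavoidable (Django 1.0-era API; the template name, URL pattern, and media path are illustrative):

    # urls.py entry (illustrative):
    #     (r'^styles/dynamic\.css$', 'myapp.views.dynamic_css'),

    from django.shortcuts import render_to_response

    def dynamic_css(request):
        # 'dynamic.css' is a template containing {{ MEDIA_URL }} placeholders
        return render_to_response('dynamic.css',
                                  {'MEDIA_URL': '/media/'},
                                  mimetype='text/css')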
Q: Which key:value store to use with Python? So I'm looking at various key:value (where value is either strictly a single value or possibly an object) stores for use with Python, and have found a few promising ones. I have no specific requirement as of yet because I am in the evaluation phase. I'm looking for what's good, what's bad, what are the corner cases these things handle well or don't, etc. I'm sure some of you have already tried them out so I'd love to hear your findings/problems/etc. on the various key:value stores with Python. I'm looking primarily at: memcached - http://www.danga.com/memcached/ python clients: http://pypi.python.org/pypi/python-memcached/1.40 http://www.tummy.com/Community/software/python-memcached/ CouchDB - http://couchdb.apache.org/ python clients: http://code.google.com/p/couchdb-python/ Tokyo Tyrant - http://1978th.net/tokyotyrant/ python clients: http://code.google.com/p/pytyrant/ Lightcloud - http://opensource.plurk.com/LightCloud/ Based on Tokyo Tyrant, written in Python Redis - http://redis.io/ python clients: http://pypi.python.org/pypi/txredis/0.1.1 MemcacheDB - http://memcachedb.org/ So I started benchmarking (simply inserting keys and reading them) using a simple count to generate numeric keys and a value of "A short string of text": memcached: CentOS 5.3/python-2.4.3-24.el5_3.6, libevent 1.4.12-stable, memcached 1.4.2 with default settings, 1 gig memory, 14,000 inserts per second, 16,000 reads per second. No real optimization, nice. memcachedb claims on the order of 17,000 to 23,000 inserts per second, 44,000 to 64,000 reads per second. I'm also wondering how the others stack up speed-wise. A: That mostly depends on your need. Read Caveats of Evaluating Databases to understand how to evaluate them. A: shelve (storing dictionaries in a file / standard python module) ZODB - persistence object database (python objects database, no SQL) More persistence tools: http://wiki.python.org/moin/PersistenceTools A: My 5 cents: Do you need distributed systems with terabyte-sized data or massive write performance? Well, then you need one of the big key:value/BigTable/Dynamo type things. That would be Cassandra, Tokyo Tyrant, Redis, etc. You need to make sure that the client library supports sharding so you can have multiple databases to write to. Which one to use here can only be decided by you after testing with data that looks like what you think you need. Do you need the data to be accessible from other systems/languages than Python? Since these databases have no structure to their data at all, whether it's accessible from languages/clients other than yours depends on what you store in it. But if you need this, CouchDB is a good choice, as it stores its data as JSON documents, so you get interoperability. How good CouchDB is with really massive data and sharding is unclear though. Do you need neither interoperability with languages other than Python nor distributed multi-server storage? Use ZODB.
Which key:value store to use with Python?
So I'm looking at various key:value (where value is either strictly a single value or possibly an object) stores for use with Python, and have found a few promising ones. I have no specific requirement as of yet because I am in the evaluation phase. I'm looking for what's good, what's bad, what are the corner cases these things handle well or don't, etc. I'm sure some of you have already tried them out so I'd love to hear your findings/problems/etc. on the various key:value stores with Python. I'm looking primarily at: memcached - http://www.danga.com/memcached/ python clients: http://pypi.python.org/pypi/python-memcached/1.40 http://www.tummy.com/Community/software/python-memcached/ CouchDB - http://couchdb.apache.org/ python clients: http://code.google.com/p/couchdb-python/ Tokyo Tyrant - http://1978th.net/tokyotyrant/ python clients: http://code.google.com/p/pytyrant/ Lightcloud - http://opensource.plurk.com/LightCloud/ Based on Tokyo Tyrant, written in Python Redis - http://redis.io/ python clients: http://pypi.python.org/pypi/txredis/0.1.1 MemcacheDB - http://memcachedb.org/ So I started benchmarking (simply inserting keys and reading them) using a simple count to generate numeric keys and a value of "A short string of text": memcached: CentOS 5.3/python-2.4.3-24.el5_3.6, libevent 1.4.12-stable, memcached 1.4.2 with default settings, 1 gig memory, 14,000 inserts per second, 16,000 reads per second. No real optimization, nice. memcachedb claims on the order of 17,000 to 23,000 inserts per second, 44,000 to 64,000 reads per second. I'm also wondering how the others stack up speed-wise.
[ "That mostly depends on your need.\nRead Caveats of Evaluating Databases to understand how to evaluate them. \n", "shelve (storing dictonaris in file / standard python module) \nZODB - persisatnce object database (python objects database, no SQL)\nMore persistance tools:\nhttp://wiki.python.org/moin/PersistenceTools\n", "My 5 cents:\nDo you need distributed systems with tera byte sized data or massive write performance? \nWell, they you need one of the big key:value/BigTable/Dynamo type things. That would by Cassandra, Tokyo Tyrant, Redis, etc. You need to make sure that the client library supports sharding so you can have multiple databases to write to. Which one to use here can only be decided by you after testing with data that looks like what you think you need.\nDo you need the data to be accessible from other systems/languages than Python?\nSince these databases have no structure to their data at all, if it's accessible from other languages/clients that yours depends on what you store in it. But if you need this CouchDB is a good choice, as it stores it's data a JSON documents, so you get interoperability. How good CouchDB is on really massive data and sharding is unclear though.\nDo you need neither interoperability with other languages than Python or distributed multi-server storage?\nUse ZODB.\n" ]
[ 7, 3, 3 ]
[ "How about Amazon's SimpleDB?\nThere is an open-source python library called boto for python interfacing Amazon web services.\n" ]
[ -1 ]
[ "couchdb", "memcached", "python", "tokyo_tyrant" ]
stackoverflow_0001617309_couchdb_memcached_python_tokyo_tyrant.txt
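A hedged version of the micro-benchmark described in the question, using python-memcached; the server address and key count are arbitrary, and real numbers will vary with hardware and client settings:

    import time
    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])
    N = 100000
    value = 'A short string of text'

    start = time.time()
    for i in xrange(N):
        mc.set(str(i), value)
    print 'inserts/sec: %.0f' % (N / (time.time() - start))

    start = time.time()
    for i in xrange(N):
        mc.get(str(i))
    print 'reads/sec: %.0f' % (N / (time.time() - start))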
Q: Best resources for learning python in a 'class type setting'? One of the best classes I took in college was Programming Languages where the prof would introduce a language or language concept, do a little playing with it in real time, and send us home with like 10 small little functions or programs to write that used what we learned in class and stretched it just enough to make sure you really understood what was going on. I found that this style of learning was really enjoyable and engaging for me personally. What I'm looking for is a resource, ideally one that's online, that is in the same vein. Introduce basic operators -> make me use them. Introduce functions -> make me use them. Introduce recursion -> make me use it. Ideally there are ~3 or so questions with answers not in plain view on the site so I won't cheat :) While resources like this are good, they're not really what I'm looking for. Thanks for any resources! A: I am teaching Python to graduate students at the University of Paris, and I chose exactly the kind of approach you like! I could not agree more with how useful it can be. I thus had to ask myself the same question as the one you ask here: I would recommend the following sources, in the given order: Instant Python: for a quick overview Learn Python in 10 minutes: another overview The official tutorial: to be skimmed through, but with examples that you can try by yourself in the Python or IPython shell Building Skills in Python: A Programmer's Introduction to Python, by Stack Overflow contributor S. Lott (this book contains exercises) Dive into Python is quite good too, but is limited to the now quite old Python 2.3. Update: the book now also exists for Python 3. You can certainly find other online books, and I did look at all of them a few months ago (while preparing my class!); but beware: some of them contain examples that are not examples of good practice. The references above are a solid mix of theory and hands-on practice, and they cover a lot of material. A: Look at these: Instant Hacking Learning to Program How to Think Like a Computer Scientist: Learning with Python A: You can play around with my PythonTurtle. Check out the help screen. A: Dive, do not walk, into Python. A: As with any programming language, do the Project Euler problems. But don't just hack together a solution - try and come up with a solution that is Pythonic - i.e. it uses the strengths of the language. A: You can use sage for on-screen demonstrations. You can use pure python with it, but have the advantage of the notebook interface. As a bonus, you can publish your sessions on the net, so the students can play with them.
Best resources for learning python in a 'class type setting'?
One of the best classes I took in college was Programming Languages where the prof would introduce a language or language concept, do a little playing with it in real time, and send us home with like 10 small little functions or programs to write that used what we learned in class and stretched it just enough to make sure you really understood what was going on. I found that this style of learning was really enjoyable and engaging for me personally. What I'm looking for is a resource, ideally one that's online, that is in the same vein. Introduce basic operators -> make me use them. Introduce functions -> make me use them. Introduce recursion -> make me use it. Ideally there are ~3 or so questions with answers not in plain view on the site so I won't cheat :) While resources like this are good, they're not really what I'm looking for. Thanks for any resources!
[ "I am teaching Python to graduate students at the University of Paris, and I exactly chose the kind of approach you like! I could not agree more with how useful it can be.\nI thus had to ask myself the same question as the one you ask here: I would recommend the following sources, in the given order:\n\nInstant Python: for a quick overview\nLearn Python in 10 minutes: another overview\nThe official tutorial: to be skimmed through, but with examples that you can try by yourself in the Python or IPython shell\nBuilding Skills in Python: A Programmer's Introduction to Python, by Stack Overflow contributor S. Lott (this book contains exercices)\nDive into Python is quite good too, but is limited to the now quite old Python 2.3. Update: the book now also exists for Python 3.\n\nYou can certainly find other online books, and I did look at all of them a few months ago (while preparing my class!); but beware: some of them contain examples that are not examples of good practice. The references above are a solid mix of theory and hands-on practice, and they cover a lot of material. \n", "Look at these:\n\nInstant Hacking\nLearning to Program\nHow to Think Like a Computer Scientist: Learning with Python\n\n", "You can play around with my PythonTurtle. Check out the help screen.\n", "Dive, do not walk, into Python.\n", "As with any programming language, do the Project Euler problems. But don't just hack together a solution - try and come up with a solution that is Pythonic - i.e. it uses the strengths of the language.\n", "You can use sage for on-screen demonstrations. You can use pure python with it, but have the advantage of notebook interface. As a bonus, you can publish your sessions on the net, so the students can play with them.\n" ]
[ 4, 1, 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001619018_python.txt
Q: What is a light python library that can eliminate HTML tags? (and only text) I know that NLTK has it. But is there anything else? A: The Python standard module html.parser should allow you to parse simple HTML content and eliminate tags. You only have to subclass HTMLParser, then override the handle_*() methods so that they output or discard content, depending on the surrounding element tags. A: BeautifulSoup: http://www.crummy.com/software/BeautifulSoup/ From the home page: Beautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping. Three features make it powerful: Beautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away. Beautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application. Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't autodetect one. Then you just have to specify the original encoding. A: You may want to take a look at the Strip-o-Gram HTML Conversion Library: http://pypi.python.org/pypi/stripogram/1.5 Example usage from readme.txt: from stripogram import html2text, html2safehtml mylumpofdodgyhtml # a lump of dodgy html ;-) # Only allow <b>, <a>, <i>, <br>, and <p> tags mylumpofcoolcleancollectedhtml = html2safehtml(mylumpofdodgyhtml,valid_tags=("b", "a", "i", "br", "p")) # Don't process <img> tags, just strip them out. Use an indent of 4 spaces # and a page that's 80 characters wide. mylumpoftext = html2text(mylumpofcoolcleancollectedhtml,ignore_tags=("img",),indent_width=4,page_width=80) A: If your licensing permits it, you could use html2text (the asciinator) (GPL).
What is a light python library that can eliminate HTML tags? (and only text)
I know that NLTK has it. But is there anything else?
[ "python standard module html.parser should allow you to parse simple html content and eliminate tags. you only have to derive HTMLParser, then overload all handle_*() methods so that they output or discard content, depending on the surrounding element tags.\n", "BeautifulSoup: http://www.crummy.com/software/BeautifulSoup/ \nFrom the home page:\nBeautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping. Three features make it powerful:\n\nBeautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away.\nBeautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application.\nBeautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't autodetect one. Then you just have to specify the original encoding. \n\n", "You may want to a look at Strip-o-Gram HTML Conversion Library: http://pypi.python.org/pypi/stripogram/1.5\nexample usage from readme.txt:\n from stripogram import html2text, html2safehtml\n mylumpofdodgyhtml # a lump of dodgy html ;-)\n # Only allow <b>, <a>, <i>, <br>, and <p> tags\n mylumpofcoolcleancollectedhtml = html2safehtml(mylumpofdodgyhtml,valid_tags=(\"b\", \"a\", \"i\", \"br\", \"p\"))\n # Don't process <img> tags, just strip them out. Use an indent of 4 spaces \n # and a page that's 80 characters wide.\n mylumpoftext = html2text(mylumpofcoolcleancollectedhtml,ignore_tags=(\"img\",),indent_width=4,page_width=80)\n\n", "If your licensing permits it, you could use html2text (the asciinator) (GPL).\n" ]
[ 4, 4, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001620363_python.txt
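A hedged sketch of the subclassing approach from the first answer, using the Python 2 module name HTMLParser (the module is html.parser in Python 3):

    from HTMLParser import HTMLParser

    class TextExtractor(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.chunks = []

        def handle_data(self, data):
            # Called for the text between tags; the tags themselves are dropped.
            self.chunks.append(data)

        def get_text(self):
            return ''.join(self.chunks)

    p = TextExtractor()
    p.feed('<p>Hello <b>world</b>!</p>')
    print p.get_text()   # Hello world!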
Q: How to start a long-running process from a Django view? I need to run a process that might take hours to complete from a Django view. I don't need to know the state or communicate with it but I need that view to redirect away right after starting the process. I've tried using subprocess.Popen, using it within a new threading.Thread, multiprocessing.Process. However, the parent process keeps hanging until child terminates. The only way that almost gets it done is using a fork. Obviously that isn't good as it leaves a zombie process behind until parent terminates. That's what I'm trying to do when using fork: if os.fork() == 0: subprocess.Popen(["/usr/bin/python", script_path, "-v"]) else: return HttpResponseRedirect(reverse('view_to_redirect')) So, is there a way to run a completely independent process from a Django view with minimal casualties? Or am I doing something wrong? A: I don't know if this will be suitable for your case, nevertheless here is what I do: I use a task queue (via a django model); when the view is called, it enters a new record in the tasks and redirects happily. Tasks in turn are executed by cron on a regular basis independently from django. Edit: cron calls the relevant (and custom) django command to execute the task. A: First of all - try using cron for your task, as shanyu said earlier. If that doesn't suit you - then try the Celery project, a task queue for Django. It uses RabbitMQ under the hood. And here is a little overview of its basic features. A: http://code.google.com/p/django-command-extensions/wiki/JobsScheduling This is a nice library that you can use to accomplish this task. A: Take a look at the code in kronos.py to see one solution to this problem. http://www.razorvine.net/download/kronos.py
How to start a long-running process from a Django view?
I need to run a process that might take hours to complete from a Django view. I don't need to know the state or communicate with it but I need that view to redirect away right after starting the process. I've tried using subprocess.Popen, using it within a new threading.Thread, multiprocessing.Process. However, the parent process keeps hanging until child terminates. The only way that almost gets it done is using a fork. Obviously that isn't good as it leaves a zombie process behind until parent terminates. That's what I'm trying to do when using fork: if os.fork() == 0: subprocess.Popen(["/usr/bin/python", script_path, "-v"]) else: return HttpResponseRedirect(reverse('view_to_redirect')) So, is there a way to run a completely independent process from a Django view with minimal casualties? Or am I doing something wrong?
[ "I don't know if this will be suitable for your case, nevertheless here is what I do: I use a task queue (via a django model); when the view is called, it enters a new record in the tasks and redirects happily. Tasks in turn are executed by cron on a regular basis independently from django. \nEdit: cron calls the relevant (and custom) django command to execute the task.\n", "First of all - try to using cron for you task, as early say shanyu. \nIf it doesn't suit you - then try to use CeleryProject, for task Queue for Django. For working it uses RabbitMQ. And here is a little overview for simple using of basing futures \n", "http://code.google.com/p/django-command-extensions/wiki/JobsScheduling\nIs a nice library that that you can use to accomplish this task.\n", "Take a look at the code in kronos.py to see one solution to this problem.\nhttp://www.razorvine.net/download/kronos.py\n" ]
[ 10, 5, 2, 2 ]
[]
[]
[ "django", "long_running_processes", "python" ]
stackoverflow_0001619397_django_long_running_processes_python.txt
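A hedged sketch of the classic double fork, which addresses the zombie problem in the question's snippet: the short-lived first child is reaped immediately, so the grandchild running the long job is adopted by init and never lingers as a zombie. The script path is the question's own placeholder:

    import os
    import subprocess

    def spawn_detached(script_path):
        pid = os.fork()
        if pid > 0:
            os.waitpid(pid, 0)      # reap the short-lived first child
            return                  # the view can now redirect
        # First child: start a new session, then fork again.
        os.setsid()
        if os.fork() > 0:
            os._exit(0)             # first child exits at once
        # Grandchild: now owned by init; run the long job.
        subprocess.call(['/usr/bin/python', script_path, '-v'])
        os._exit(0)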
Q: Friendlier error messages on import for missing modules I want to implement some friendlier error messages if the user tries to run a python script which tries to import modules that have not been installed. This includes printing out instructions on how to install the missing module. One way to do this would be to put a try..except block around the imports, but this is a bit ugly since it would turn something simple like import some_module into try: import some_module except ImportError, e: handle_error(e) and it would have to be added to every file. Additionally, ImportError doesn't seem to store the name of the missing module anywhere (except for in the message) so you would have to put a separate try..except around each import if you need to know the name (like I do). Parsing the name of the module is not an option since the message carried by ImportError might change from Python version to version and depending on the user's locale. I guess I could use sys.excepthook to catch all exceptions and pass those except ImportError along. Or would it be possible to define something like safe_import some_module that would behave like I want? Does anyone know of any existing solutions to this problem? A: You can put, somewhere that it will always be executed (e.g. in site.py or sitecustomize.py): import __builtin__ realimport = __builtin__.__import__ def myimport(modname, *a): try: return realimport(modname, *a) except ImportError, e: print "Oops, couldn't import %s (%s)" % (modname, e) print "Here: add nice directions and whatever else" raise __builtin__.__import__ = myimport See the __import__ docs here. A: You don't have to catch ImportError for every import. You can use a global try..except block in the entry point of your script. You can get the name of the missing module from an ImportError instance using the message property. For example, if the entry point of your script is main.py: if __name__ == '__main__': try: import module1 import module2 module1.main() except ImportError as error: print "You don't have module {0} installed".format(error.message[16:]) Don't import anything outside the try..except block. This will cover module1 and module2 and all the modules imported by them and so on. As you said, you could define your own import_safe function: def import_safe(module): try: return __import__(module) except ImportError as error: print "You don't have module {0} installed".format(error.message[16:]) Then you can use the function: sys = import_safe('sys') gtk = import_safe('gtk') In my opinion, the first strategy is better. The second one would make your code hard to read, and will change an essential part of the language. A: I would put additional modules into the package which, when imported, print out the more helpful message, and then raise a regular ImportError. When the true module is installed, your modules will get shadowed (make sure you add the directory where they live at the end of sys.path). A: Replace sys.excepthook: orig_excepthook = sys.excepthook def my_excepthook(type, value, tb): orig_excepthook(type, value, tb) if issubclass(type, ImportError): # print some friendly message sys.excepthook = my_excepthook
Friendlier error messages on import for missing modules
I want to implement some friendlier error messages if the user tries to run a python script which tries to import modules that have not been installed. This includes printing out instructions on how to install the missing module. One way to do this would be to put a try..except block around the imports, but this is a bit ugly since it would turn something simple like import some_module into try: import some_module except ImportError, e: handle_error(e) and it would have to be added to every file. Additionally, ImportError doesn't seem to store the name of the missing module anywhere (except for in the message) so you would have to put a separate try..except around each import if you need to know the name (like I do). Parsing the name of the module is not an option since the message carried by ImportError might change from Python version to version and depending on the user's locale. I guess I could use sys.excepthook to catch all exceptions and pass those except ImportError along. Or would it be possible to define something like safe_import some_module that would behave like I want? Does anyone know of any existing solutions to this problem?
[ "You can put, somewhere that it will always be executed (e.g. in site.py or sitecustomize.py):\nimport __builtin__\n\nrealimport = __builtin__.__import__\n\ndef myimport(modname, *a):\n try:\n return realimport(modname, *a)\n except ImportError, e:\n print \"Oops, couldn't import %s (%s)\" % (modname, e)\n print \"Here: add nice directions and whatever else\"\n raise\n\n__builtin__.__import__ = myimport\n\nSee the __import__ docs here.\n", "You don't have to catch ImportError for every import. You can use a global try..except block in the entry point of your script.\nYou can get the name of the missing module from a ImportError instance using the message property.\nFor example, if the entry point of your script is main.py:\nif __name__ == '__main__':\n try:\n import module1\n import module2\n\n module1.main()\n except ImportError as error:\n print \"You don't have module {0} installed\".format(error.message[16:])\n\nDon't import anything outside the try..except block. This will cover module1 and module2 and all the modules imported by them and so on.\nAs you said, you could define your own import_safe function:\ndef import_safe(module):\n try:\n return __import__(module)\n except ImportError as error:\n print \"You don't have module {0} installed\".format(error.message[16:])\n\nThen you can use the function:\nsys = import_safe('sys')\ngtk = import_safe('gkt')\n\nIn my opinion, the fist strategy is better. The second one would make your code hard to read. And will change an essential part of the language. \n", "I would put additional modules into the package which, when imported, print out the more helpful message, and then raise a regular ImportError. When the true module is installed, your modules will get shadowed (make sure you add the directory where they live at the end of sys.path).\n", "Replace sys.excepthook:\norig_excepthook = sys.excepthook\n\ndef my_excepthook(type, value, tb):\n orig_excepthook(type, value, tb)\n if issubclass(type, ImportError):\n # print some friendly message\n\nsys.excepthook = my_excepthook\n\n" ]
[ 6, 3, 2, 1 ]
[]
[]
[ "exception_handling", "python" ]
stackoverflow_0001618762_exception_handling_python.txt
Q: Installing satchmo without ssh access I want to install satchmo in my virtual hosting, but they don't provide ssh access for it. I want to know if this is possible. As I can see, adding some Satchmo requirements (http://www.satchmoproject.com/docs/svn/requirements.html) to the pythonpath in my .fcgi file seems to be working, but some requirements like pycrypto and trml2pdf look like they need to be built & installed. Is this so? Can I write some kind of script that executes this installation over the web? What can I do at all if they will not work without being built? Alan A: Sure you can. In the past I wrote a cgi to download and build Python on hosting that only allowed ftp access. If you know the target platform it will be easier to set up a virtual machine locally, build the files you need there and upload the compiled versions. Make sure you statically link them if the hosting is missing libraries. The best idea might be to switch to hosting with ssh access though. A: I doubt you can install only over the web (e.g. what about permissions - maybe some could be set over ftp, but others not). Is it possible to ask your hosting provider to include satchmo in their services?
Installing satchmo without ssh access
I want to install satchmo in my virtual hosting, but they don't provide ssh access for it. I want to know if this is possible. As I can see, adding some Satchmo requirements (http://www.satchmoproject.com/docs/svn/requirements.html) to the pythonpath in my .fcgi file seems to be working, but some requirements like pycrypto and trml2pdf look like they need to be built & installed. Is this so? Can I write some kind of script that executes this installation over the web? What can I do at all if they will not work without being built? Alan
[ "Sure you can, In the past I wrote a cgi to download and build Python on hosting that only allowed ftp access.\nIf you know the target platform it will be easier to set up a virtual machine locally, build the files you need there and upload the compiled versions. Make sure you statically link them if the hosting is missing libraries.\nThe best idea might be to switch to hosting with ssh access though.\n", "i doubt you can install only over web (e.g. what about permissions - maybe some could be set over ftp, but other not)\nis it possible to ask your hosting provider to include satchmo in their services?\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python", "satchmo", "ssh" ]
stackoverflow_0001620620_django_python_satchmo_ssh.txt
Q: place labels between ticks in matplotlib, how do I place tick labels between ticks (not below ticks)? for example: when plotting the stock price over time I would like the x axis minor ticks to display months and the years to show up between consecutive x axis major ticks (not just below the major ticks) ---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- jan feb mar apr may jun jul aug sep oct nov dec jan feb mar apr may jun jul aug sep 2008 2009 A: Will this do the trick? x = 'j f m a m j j a s o n d j f m a m j j a s o n d'.split() y = abs(randn(24)) x[6] = 'j\n2008' # replace "j" (January) with 'j' plus the appropriate year x[18] = 'j\n2009' bar(xrange(len(x)), y, width=0.1) xticks(xrange(len(x)), x, ha='center') A: Do you mean something like this: - http://matplotlib.sourceforge.net/examples/pylab_examples/barchart_demo.html ? You must use xticks and the ha (or 'horizontalalignment') parameter: >>> x = 'j f m a m j j a s o n d'.split() >>> y = abs(randn(12)) >>> bar(xrange(len(x)), y, width=0.1) >>> xticks(xrange(len(x)), x, ha='center') Look at help(xticks) and help(matplotlib.text.Text) for more options. edit: sorry, I didn't see you are also asking how to put the year labels below the ticks. I think you must do it manually; have a look at the example I have linked to see how to do it.
place labels between ticks
in matplotlib, how do I place tick labels between ticks (not below ticks)? for example: when plotting the stock price over time I would like the x axis minor ticks to display months and the years to show up between consecutive x axis major ticks (not just below the major ticks) ---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- jan feb mar apr may jun jul aug sep oct nov dec jan feb mar apr may jun jul aug sep 2008 2009
[ "Will this do the trick?\nenter code here\nx = 'j f m a m j j a s o n d j f m a m j j a s o n d'.split()\ny = abs(randn(24))\nx[6] = 'j\\n2008' # replace \"j\" (January) with ('j' and the appropriate year\nx[18] = 'j\\n2009'\nbar(xrange(len(x)), y, width=0.1)\nbar(xrange(len(x)), y, width=0.1)\nxticks(xrange(len(x)), x, ha='center')\n\n\n", "Do you mean something like this:\n- http://matplotlib.sourceforge.net/examples/pylab_examples/barchart_demo.html ??\nYou must use xticks and the ha (or 'horizontalalignment') parameter:\n>>> x = 'j f m a m j j a s o n d'.split()\n>>> y = abs(randn(12))\n>>> bar(xrange(len(x)), y, width=0.1)\n\n>>> xticks(xrange(len(x)), x, ha='center')\n\nlook at help(xticks) and help(matplotlib.text.Text) for more options\nedit: sorry, I didn't see you are also asking for how to put the years labels below ticks. I think you must do it manually, have a look at the example I have linked to see how to do it.\n" ]
[ 5, 1 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0001620927_matplotlib_python.txt
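A hedged sketch using matplotlib's date machinery, which was not mentioned in the answers: mid-month minor ticks carry the month labels (so they sit between month boundaries), their tick marks are hidden, and the '\n%Y' major formatter drops the year to a second row under each January tick rather than truly centering it between year ticks, which is usually close enough:

    import datetime
    import matplotlib.pyplot as plt
    import matplotlib.dates as mdates

    days = [datetime.date(2008, 1, 1) + datetime.timedelta(d)
            for d in range(0, 630, 10)]
    fig, ax = plt.subplots()
    ax.plot(days, range(len(days)))

    # Major ticks at year boundaries; the year label goes on a second row.
    ax.xaxis.set_major_locator(mdates.YearLocator())
    ax.xaxis.set_major_formatter(mdates.DateFormatter('\n%Y'))
    # Month labels placed at mid-month, with the mid-month marks hidden.
    ax.xaxis.set_minor_locator(mdates.MonthLocator(bymonthday=15))
    ax.xaxis.set_minor_formatter(mdates.DateFormatter('%b'))
    ax.tick_params(which='minor', length=0)

    plt.show()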
Q: Python script to print all function definitions of a C/C++ file I want a python script to print a list of all functions defined in a C/C++ file. e.g. abc.c defines two functions as: void func1() { } int func2(int i) { printf("%d", i); return 1; } I just want to search the file (abc.c) and print all the functions defined in it (function names only). In the example above, I would like to print func1, func2 using a python script. A: I would suggest using the PLY lex/yacc tool. There's a prebuilt C parser, and the parser itself is quite fast. Once you have the file parsed, it shouldn't be too hard to find all of the functions. http://www.dabeaz.com/ply/ A: antlr is your tool A: To do this reliably, you'd need to parse the C or C++ code, and then grab the function definitions from the AST the parser produces. C is fairly easy to parse. As pavpanchekha mentions, the parser PLY comes with a C parser, and has been used to make the following relevant projects: pycparser CppHeaderParser Parsing C++ code is more complicated. "Is there a good Python library that can parse C++" should be of help: C++ is notoriously hard to parse. Most people who try to do this properly end up taking apart a compiler. In fact this is (in part) why LLVM started: Apple needed a way they could parse C++ for use in XCode that matched the way the compiler parsed it. That's why there are projects like GCC_XML which you could combine with a python xml library. Finally, if your code doesn't need to be robust at all, you could run the code through a code-reformatter, like indent (for C code) to even things out, then use regular expressions to match the function definition. Yes this is a bad, hacky, error-prone idea, and you'll probably find function definitions in multiline comments and such, but it might work well enough. A: This page, Parsing C++, mentions a couple of ANTLR grammars for C++. Since ANTLR has a Python API this seems like a reasonable way to proceed. Even though parsing may seem a lot more complex than regular expressions, this is a case where someone else has done almost all the work for you and you just need to interface to it from Python. Another alternative, where someone else has done the work of parsing C++ for you, is pygccxml which leverages GCCXML, an output extension for GCC to produce XML from the compiler's internal representation. Since Python has great XML support, you just need to extract the information of interest to you.
Python script to print all function definitions of a C/C++ file
I want a Python script to print a list of all functions defined in a C/C++ file. e.g. abc.c defines two functions as: void func1() { } int func2(int i) { printf("%d", i); return 1; } I just want to search the file (abc.c) and print all the functions defined in it (function names only). In the example above, I would like to print func1, func2 using a Python script.
[ "I would suggest using the PLY lex/yacc tool. There's a prebuilt C parser, and the parser itself is quite fast. Once you have the file parsed, it shouldn't be too hard to find all of the functions.\nhttp://www.dabeaz.com/ply/\n", "antlr is your tool\n", "To do this reliably, you'd need to parse the C or C++ code, and then grab the function definitions from the AST the parser produces.\nC is fairly easy to parse. As pavpanchekha mentions, the parser PLY comes with a C parser, and has been used to make the following relevant projects:\n\npycparser\nCppHeaderParser\n\nParsing C++ code is more complicated.. \"Is there a good Python library that can parse C++\" should be of help:\n\nC++ is notoriously hard to parse. Most people who try to do this properly end up taking apart a compiler. In fact this is (in part) why LLVM started: Apple needed a way they could parse C++ for use in XCode that matched the way the compiler parsed it.\nThat's why there are projects like GCC_XML which you could combine with a python xml library.\n\nFinally, if your code doesn't need to be robust at all, you could run the code though a code-reformatter, like indent (for C code) to even things out, then use regular expressions to match the function definition. Yes this is a bad, hacky, error-prone idea, and you'll probably find function definitions in multiline comments and such, but it might work well enough..\n", "This page, Parsing C++, mentions a couple of ANTLR grammars for C++. Since ANTLR has a Python API this seems like a reasonable way to proceed.\nEven though parsing may seem a lot more complex that regular expressions, this is a case where someone else has done almost all the work for you and you just need to interface to it from Python.\nAnother alternative, where someone else has done the work of parsing C++ for you, is pygccxml which leverages GCCXML, an output extension for GCC to produce XML from the compilers internal representation. Since Python has great XML support, you just need to extract the information of interest to you.\n" ]
[ 3, 2, 2, 1 ]
[]
[]
[ "parsing", "python", "string" ]
stackoverflow_0001620851_parsing_python_string.txt
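For the pycparser suggestion above, a minimal sketch of listing function names; this is essentially pycparser's own visitor pattern, and it assumes abc.c is plain preprocessed C (use_cpp=True and a cpp on the PATH are needed if the file has #include lines or macros).

from pycparser import c_ast, parse_file

class FuncDefVisitor(c_ast.NodeVisitor):
    def visit_FuncDef(self, node):
        # called once per function definition in the translation unit
        print(node.decl.name)

ast = parse_file('abc.c')   # add use_cpp=True for files with preprocessor directives
FuncDefVisitor().visit(ast)

Run against the abc.c shown in the question, this prints func1 and func2.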
Q: Nesting generator expressions in the argument list for a python function call I like to use the following idiom for combining lists together, sometimes: >>> list(itertools.chain(*[[(e, n) for e in l] for n, l in (('a', [1,2]),('b',[3,4]))])) [(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')] (I know there are easier ways to get this particular result, but it comes comes in handy when you want to iterate over the elements in lists of lists of lists, or something like that. The trouble is, when you use generator expressions, this becomes error prone. E.g. >>> list(itertools.chain(*(((e, n) for e in l) for n, l in (('a', [1,2]),('b',[3,4]))))) [(1, 'b'), (2, 'b'), (3, 'b'), (4, 'b')] What's happening here is that the inner generator expressions get passed as arguments to itertools.chain, so at the the time they're evaluated, the outer generator expression has finished, and n is fixed at its final value, 'b'. I'm wondering whether anyone has thought of ways to avoid this kind of error, beyond "don't do that." A: wouldn't a nested list comprehension be more appropriate? >>> tt = (('a', [1,2]),('b',[3,4])) >>> [(s, i) for i, l in tt for s in l] [(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')] A: Your approach almost works, you just need to flatten the generators. See how the for e in l is moved to the very right >>> list(itertools.chain((e, n) for n, l in (('a', [1,2]),('b',[3,4])) for e in l )) [(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')] Here is how to do the same thing using itertools.product >>> X=itertools.chain(*(itertools.product(*i[::-1]) for i in (('a', [1,2]),('b',[3,4])))) >>> print list(X) [(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')] or if you are allowed to switch the tuples around >>> X=itertools.chain(*(itertools.product(*i) for i in (([1,2],'a'),([3,4],'b')))) >>> print list(X) [(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')] A: I'm going to suggest data = (('a', [1,2]), ('b', [3,4])) result = [] for letter, numbers in data: for number in numbers: result.append((number, letter)) It's a lot more readable than your solution family. Fancy is not a good thing.
Nesting generator expressions in the argument list for a python function call
I like to use the following idiom for combining lists together, sometimes: >>> list(itertools.chain(*[[(e, n) for e in l] for n, l in (('a', [1,2]),('b',[3,4]))])) [(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')] (I know there are easier ways to get this particular result, but it comes in handy when you want to iterate over the elements in lists of lists of lists, or something like that.) The trouble is, when you use generator expressions, this becomes error prone. E.g. >>> list(itertools.chain(*(((e, n) for e in l) for n, l in (('a', [1,2]),('b',[3,4]))))) [(1, 'b'), (2, 'b'), (3, 'b'), (4, 'b')] What's happening here is that the inner generator expressions get passed as arguments to itertools.chain, so at the time they're evaluated, the outer generator expression has finished, and n is fixed at its final value, 'b'. I'm wondering whether anyone has thought of ways to avoid this kind of error, beyond "don't do that."
[ "wouldn't a nested list comprehension be more appropriate?\n>>> tt = (('a', [1,2]),('b',[3,4]))\n>>> [(s, i) for i, l in tt for s in l]\n[(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')]\n\n", "Your approach almost works, you just need to flatten the generators. See how the for e in l is moved to the very right\n>>> list(itertools.chain((e, n) for n, l in (('a', [1,2]),('b',[3,4])) for e in l ))\n[(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')]\n\nHere is how to do the same thing using itertools.product\n>>> X=itertools.chain(*(itertools.product(*i[::-1]) for i in (('a', [1,2]),('b',[3,4]))))\n>>> print list(X)\n[(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')]\n\nor if you are allowed to switch the tuples around\n>>> X=itertools.chain(*(itertools.product(*i) for i in (([1,2],'a'),([3,4],'b'))))\n>>> print list(X)\n[(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')]\n\n", "I'm going to suggest\ndata = (('a', [1,2]), ('b', [3,4]))\n\nresult = []\nfor letter, numbers in data:\n for number in numbers:\n result.append((number, letter))\n\nIt's a lot more readable than your solution family. Fancy is not a good thing.\n" ]
[ 6, 2, 0 ]
[]
[]
[ "generator", "list_comprehension", "python" ]
stackoverflow_0001620957_generator_list_comprehension_python.txt
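One more option the answers do not mention, sketched here on the question's own data: itertools.chain.from_iterable (Python 2.6+) avoids the eager * unpacking that breaks the generator version, because each inner generator is exhausted before the outer expression advances and rebinds n.

import itertools

data = (('a', [1, 2]), ('b', [3, 4]))

# each inner generator runs while n is still bound to its own letter,
# so the late-binding bug from the question does not occur
result = list(itertools.chain.from_iterable(
    ((e, n) for e in l) for n, l in data))
print(result)  # [(1, 'a'), (2, 'a'), (3, 'b'), (4, 'b')]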
Q: Python RegEx skipping the first few characters? Hey I have a fairly basic question about regular expressions. I want to just return the text inside (and including) the body tags, and I know the following isn't right because it'll also match all the characters before the opening body tag. I was wondering how you would go about skipping those? x = re.match('(.*<body).*?(</body>)', fileString) Thanks! A: I don't know Python, but here's a quick example thrown together using Beautiful Soup, which I often see recommended for Python HTML parsing. import BeautifulSoup soup = BeautifulSoup(fileString) bodyTag = soup.html.body.string That will (in theory) deal with all the complexities of HTML, which is very difficult with pure regex-based answers, because it's not what regex was designed for. A: Here is some example code which uses regex to find all the text between <body>...</body> tags. Although this demonstrates some features of python's re module, note that the Beautiful Soup module is very easy to use and is a better tool to use if you plan on parsing HTML or XML. (See below for an example of how you could parse this using BeautifulSoup.) #!/usr/bin/env python import re # Here we have a string with a multiline <body>...</body> fileString='''baz<body>foo baby foo baby foo baby foo </body><body>bar</body>''' # re.DOTALL tells re that '.' should match any character, including newlines. x = re.search('(<body>.*?</body>)', fileString, re.DOTALL) for match in x.groups(): print(match) # <body>foo # baby foo # baby foo # baby foo # </body> If you wish to collect all matches, you could use re.findall: print(re.findall('(<body>.*?</body>)', fileString, re.DOTALL)) # ['<body>foo\nbaby foo\nbaby foo\nbaby foo\n</body>', '<body>bar</body>'] and if you plan to use this pattern more than once, you can pre-compile it: pat=re.compile('(<body>.*?</body>)', re.DOTALL) print(pat.findall(fileString)) # ['<body>foo\nbaby foo\nbaby foo\nbaby foo\n</body>', '<body>bar</body>'] And here is how you could do it with BeautifulSoup: #!/usr/bin/env python from BeautifulSoup import BeautifulSoup fileString='''baz<body>foo baby foo baby foo baby foo </body><body>bar</body>''' soup = BeautifulSoup(fileString) print(soup.body) # <body>foo # baby foo # baby foo # baby foo # </body> print(soup.findAll('body')) # [<body>foo # baby foo # baby foo # baby foo # </body>, <body>bar</body>] A: You cannot parse HTML with regex. HTML is not a regular language. Use an HTML parser like lxml instead.
Python RegEx skipping the first few characters?
Hey I have a fairly basic question about regular expressions. I want to just return the text inside (and including) the body tags, and I know the following isn't right because it'll also match all the characters before the opening body tag. I was wondering how you would go about skipping those? x = re.match('(.*<body).*?(</body>)', fileString) Thanks!
[ "I don't know Python, but here's a quick example thrown together using Beautiful Soup, which I often see recommended for Python HTML parsing.\nimport BeautifulSoup\n\nsoup = BeautifulSoup(fileString)\n\nbodyTag = soup.html.body.string\n\nThat will (in theory) deal with all the complexities of HTML, which is very difficult with pure regex-based answers, because it's not what regex was designed for.\n", "Here is some example code which uses regex to find all the text between <body>...</body> tags. Although this demonstrates some features of python's re module, note that the Beautiful Soup module is very easy to use and is a better tool to use if you plan on parsing HTML or XML. (See below for an example of how you could parse this using BeautifulSoup.)\n#!/usr/bin/env python\nimport re\n\n# Here we have a string with a multiline <body>...</body>\nfileString='''baz<body>foo\nbaby foo\nbaby foo\nbaby foo\n</body><body>bar</body>'''\n\n# re.DOTALL tells re that '.' should match any character, including newlines.\nx = re.search('(<body>.*?</body>)', fileString, re.DOTALL)\nfor match in x.groups():\n print(match)\n# <body>foo\n# baby foo\n# baby foo\n# baby foo\n# </body>\n\nIf you wish to collect all matches, you could use re.findall:\nprint(re.findall('(<body>.*?</body>)', fileString, re.DOTALL))\n# ['<body>foo\\nbaby foo\\nbaby foo\\nbaby foo\\n</body>', '<body>bar</body>']\n\nand if you plan to use this pattern more than once, you can pre-compile it:\npat=re.compile('(<body>.*?</body>)', re.DOTALL)\nprint(pat.findall(fileString))\n# ['<body>foo\\nbaby foo\\nbaby foo\\nbaby foo\\n</body>', '<body>bar</body>']\n\nAnd here is how you could do it with BeautifulSoup:\n#!/usr/bin/env python\nfrom BeautifulSoup import BeautifulSoup\n\nfileString='''baz<body>foo\nbaby foo\nbaby foo\nbaby foo\n</body><body>bar</body>'''\nsoup = BeautifulSoup(fileString)\nprint(soup.body)\n# <body>foo\n# baby foo\n# baby foo\n# baby foo\n# </body>\n\nprint(soup.findAll('body'))\n# [<body>foo\n# baby foo\n# baby foo\n# baby foo\n# </body>, <body>bar</body>]\n\n", "You cannot parse HTML with regex. HTML is not a regular language. Use an HTML parser like lxml instead.\n" ]
[ 9, 2, 0 ]
[ " x = re.match('.*(<body>.*?</body>)', fileString)\n\nConsider minidom for HTML parsing.\n", "x = re.search('(<body>.*</body>)', fileString)\nx.group(1)\n\nLess typing than the match answers\n", "Does your fileString contain multiple lines? In that case you may need to specify it or skip the lines explicitly:\nx = re.match(r\"(?:.|\\n)*(<body>(?:.|\\n)*</body>)\", fileString)\n\nor, more simply with the re module:\nx = re.match(r\".*(<body>.*</body>)\", fileString, re.DOTALL)\n\nx.groups()[0] should contain your string if x is not None.\n" ]
[ -2, -2, -2 ]
[ "html_parsing", "python", "regex" ]
stackoverflow_0001620889_html_parsing_python_regex.txt
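A minimal sketch of the lxml route from the last answer, with made-up markup; lxml.html tolerates broken HTML and hands back the body element directly.

import lxml.html

fileString = '<html><head><title>t</title></head><body>Hello <b>world</b></body></html>'
doc = lxml.html.fromstring(fileString)
body = doc.body                      # the <body> element
print(lxml.html.tostring(body))      # markup including the body tags
print(body.text_content())           # just the text: Hello world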
Q: dynamically adding functions to a Python module Our framework requires wrapping certain functions in some ugly boilerplate code: def prefix_myname_suffix(obj): def actual(): print 'hello world' obj.register(actual) return obj I figured this might be simplified with a decorator: @register def myname(): print 'hello world' However, that turned out to be rather tricky, mainly because the framework looks for a certain pattern of function names at module level. I've tried the following within the decorator, to no avail: current_module = __import__(__name__) new_name = prefix + func.__name__ + suffix # method A current_module[new_name] = func # method B func.__name__ = new_name current_module += func Any help would be appreciated! A: use either current_module.new_name = func or setattr(current_module, new_name, func) A: It seems the solution to your problem would be to make the decorated function act as the original function. Try using the function mergeFunctionMetadata from Twisted, found here: twisted/python/util.py It makes your decorated function act as the original, hopefully making the framework pick it up.
dynamically adding functions to a Python module
Our framework requires wrapping certain functions in some ugly boilerplate code: def prefix_myname_suffix(obj): def actual(): print 'hello world' obj.register(actual) return obj I figured this might be simplified with a decorator: @register def myname(): print 'hello world' However, that turned out to be rather tricky, mainly because the framework looks for a certain pattern of function names at module level. I've tried the following within the decorator, to no avail: current_module = __import__(__name__) new_name = prefix + func.__name__ + suffix # method A current_module[new_name] = func # method B func.__name__ = new_name current_module += func Any help would be appreciated!
[ "use either \ncurrent_module.new_name = func\n\nor \nsetattr(current_module, new_name, func)\n\n", "It seems the solution to your problem would be to make the decorated function act as the original function. \nTry using the function mergeFunctionMetadata from Twisted, found here:\ntwisted/python/util.py\nIt makes your decorated function act as the original, hopefully making the framework pick it up.\n" ]
[ 22, 0 ]
[]
[]
[ "decorator", "metaprogramming", "python" ]
stackoverflow_0001621350_decorator_metaprogramming_python.txt
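Pulling the accepted answer's setattr into the decorator shape the question sketches; the prefix/suffix strings are assumptions standing in for whatever pattern the framework scans for, and the obj.register step from the original boilerplate is left out.

import sys

PREFIX, SUFFIX = 'prefix_', '_suffix'    # assumed naming pattern

def register(func):
    module = sys.modules[func.__module__]
    new_name = PREFIX + func.__name__ + SUFFIX
    func.__name__ = new_name
    setattr(module, new_name, func)      # module-level name the framework can find
    return func

@register
def myname():
    print('hello world')

prefix_myname_suffix()                   # the prefixed name now exists at module level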
Q: How to make easy_install execute custom commands in setup.py? I want my setup.py to do some custom actions besides just installing the Python package (like installing an init.d script, creating directories and files, etc.) I know I can customize the distutils/setuptools classes to do my own actions. The problem I am having is that everything works when I cd to the package directory and do "python setup.py install", but my custom classes don't seem to be executed when I do "easy_install mypackage.tar.gz". Here's my setup.py file (create an empty myfoobar.py file in the same dir to test): import setuptools from setuptools.command import install as _install class install(_install.install): def initialize_options(self): _install.install.initialize_options(self) def finalize_options(self): _install.install.finalize_options(self) def run(self): # Why is this never executed when tarball installed with easy_install? # It does work with: python setup.py install import pdb;pdb.set_trace() _install.install.run(self) setuptools.setup( name = 'myfoobar', version = '0.1', platforms = ['any'], description = 'Test package', author = 'Someone', py_modules = ['myfoobar'], cmdclass = {'install': install}, ) The same thing happens even if I import "setup" and "install" from distutils. Any ideas how I could make easy_install execute my custom classes? To clarify, I don't want to use anything extra, like Buildout or Paver. A: Paver takes setuptools to the next level and lets you write custom tasks. It allows you to extend the typical setup.py file and provides a simple way to bootstrap the Paver environment. A: It cannot be done. Enthought has a custom version of setuptools that does support this, but otherwise it is in the bug tracker as a wishlist item that has been under discussion since June. However, there are ways to trick the system and you might consider them. One way is to have your most important module, the one that is always imported first when using your package, do the post install actions the first time it is called. Then you have to clean up after yourself, and consider the case where you cannot write into the library because an admin installed the package and the first user is someone other than admin. In the worst case, this would involve creating a ~/.mypackage directory for ever user who uses the package, and rerunning the postinstall once for each new user. Every time the module is imported, it checks for the existence of ~/.mypackage. If it is not there, it runs the postinstall and creates it. If it is there, it skips the postinstall.
How to make easy_install execute custom commands in setup.py?
I want my setup.py to do some custom actions besides just installing the Python package (like installing an init.d script, creating directories and files, etc.) I know I can customize the distutils/setuptools classes to do my own actions. The problem I am having is that everything works when I cd to the package directory and do "python setup.py install", but my custom classes don't seem to be executed when I do "easy_install mypackage.tar.gz". Here's my setup.py file (create an empty myfoobar.py file in the same dir to test): import setuptools from setuptools.command import install as _install class install(_install.install): def initialize_options(self): _install.install.initialize_options(self) def finalize_options(self): _install.install.finalize_options(self) def run(self): # Why is this never executed when tarball installed with easy_install? # It does work with: python setup.py install import pdb;pdb.set_trace() _install.install.run(self) setuptools.setup( name = 'myfoobar', version = '0.1', platforms = ['any'], description = 'Test package', author = 'Someone', py_modules = ['myfoobar'], cmdclass = {'install': install}, ) The same thing happens even if I import "setup" and "install" from distutils. Any ideas how I could make easy_install execute my custom classes? To clarify, I don't want to use anything extra, like Buildout or Paver.
[ "Paver takes setuptools to the next level and lets you write custom tasks. It allows you to extend the typical setup.py file and provides a simple way to bootstrap the Paver environment.\n", "It cannot be done. Enthought has a custom version of setuptools that does support this, but otherwise it is in the bug tracker as a wishlist item that has been under discussion since June.\nHowever, there are ways to trick the system and you might consider them. One way is to have your most important module, the one that is always imported first when using your package, do the post install actions the first time it is called. Then you have to clean up after yourself, and consider the case where you cannot write into the library because an admin installed the package and the first user is someone other than admin.\nIn the worst case, this would involve creating a ~/.mypackage directory for ever user who uses the package, and rerunning the postinstall once for each new user. Every time the module is imported, it checks for the existence of ~/.mypackage. If it is not there, it runs the postinstall and creates it. If it is there, it skips the postinstall.\n" ]
[ 5, 4 ]
[]
[]
[ "easy_install", "python", "setuptools" ]
stackoverflow_0001446682_easy_install_python_setuptools.txt
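A sketch of the "run the post-install on first import" workaround described in the second answer; the marker path and the actions are illustrative only. Put something like this in the package's main module:

import os

MARKER = os.path.expanduser('~/.mypackage')

def _post_install():
    # e.g. install an init.d script, create data files, etc.
    os.makedirs(MARKER)

# executed at import time, once per user owning the marker directory
if not os.path.isdir(MARKER):
    _post_install()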
Q: Fastest way to convert '(-1,0)' into tuple(-1, 0)? I've got a huge tuple of strings, which are being returned from a program. An example tuple being returned might look like this: ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)') I can convert these strings to real tuples (with integers inside), but i am hoping someone knows a nice trick to speed this up. Anything i've come up with feels like i am doing it a relatively "slow" way. And as i have mentioned, these lists can be big, so a fast way would be much appreciated! Thanks edit one Alright, so its seeming that eval is a slower method of doing this. But so far i've got 4 methods tested, thanks for any comments and submissions! :) Also, someone asked on the size of my tuples. It will range anywhere from a few, to hopefully no more than a few million. Not "too" big, but big enough that speed is an important factor. I'm not here to micro-optimize, just learn any new nifty tricks i might not be aware of. Eg, eval() is something i often forget about, even though it doesn't seem to do so well in this case. edit two I also wanted to note that the string format shouldn't change. So no need to check the format. Also, this is an embedded Python v2.6.2, so anything requiring 2.6 is fine. 3.0 on the other hand, not so much ;) Looking great guys, again, thanks for all the input :) edit 3 Yet another note. I noticed i had been returning code that didn't result in a "tuple", this is ok, and sorry if anyone thought the end result "had" to be a tuple. Something of like format is fine. import timeit test_tuple = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)', '(7,0)',) def timeit_a(): '''''' def convert_tup_strings(tup_string): first_int, last_int = tup_string[1:-1].split(',') return (int(first_int), int(last_int)) return map(convert_tup_strings, test_tuple) def timeit_a_1(): '''''' def convert_tup_strings(tup_string): return map(int, tup_string[1:-1].split(',')) return map(convert_tup_strings, test_tuple) def timeit_b(): converted = [] for tup_string in test_tuple: first_int, last_int = tup_string[1:-1].split(',') converted.append((int(first_int), int(last_int))) return converted def timeit_b_1(): converted = [] for tup_string in test_tuple: converted.append(map(int, tup_string[1:-1].split(','))) return converted def timeit_c(): '''''' return [eval(t) for t in test_tuple] def timeit_d(): '''''' return map(eval, test_tuple) def timeit_e(): '''''' return map(lambda a: tuple(map(int, a[1:-1].split(','))), test_tuple) print 'Timeit timeit_a: %s' % timeit.timeit(timeit_a) print 'Timeit timeit_a_1: %s' % timeit.timeit(timeit_a_1) print 'Timeit timeit_b: %s' % timeit.timeit(timeit_b) print 'Timeit timeit_b_1: %s' % timeit.timeit(timeit_b_1) print 'Timeit timeit_c: %s' % timeit.timeit(timeit_c) print 'Timeit timeit_d: %s' % timeit.timeit(timeit_d) print 'Timeit timeit_e: %s' % timeit.timeit(timeit_e) Results in: Timeit timeit_a: 15.8954099772 Timeit timeit_a_1: 18.5484214589 Timeit timeit_b: 15.3137666465 Timeit timeit_b_1: 17.8405181116 Timeit timeit_c: 91.9587832802 Timeit timeit_d: 89.8858157489 Timeit timeit_e: 20.1564312947 A: I don't advice you to use eval at all. It is slow and insecure. 
You can do this: result = map(lambda a: tuple(map(int, a[1:-1].split(','))), s) The numbers speak for themselves: timeit.Timer("map(lambda a: tuple(map(int, a[1:-1].split(','))), s)", "s = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')").timeit(100000) 1.8787779808044434 timeit.Timer("map(eval, s)", "s = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')").timeit(100000) 11.571426868438721 A: map(eval, tuples) This won't account for the case where one of the tuples isn't syntactically correct. For that, I'd recommend something like: def do(tup): try: return eval(tup) except: return None map(do, tuples) Both methods tested for speed: >>> tuples = ["(1,0)"] * 1000000 >>> # map eval >>> st = time.time(); parsed = map(eval, tuples); print "%.2f s" % (time.time() - st) 16.02 s >>> # map do >>> >>> st = time.time(); parsed = map(do, tuples); print "%.2f s" % (time.time() - st) 18.46 s For 1,000,000 tuples that's not bad (but isn't great either). The overhead, presumably, is in parsing Python one million times by using eval. However, it is the easiest way to do what you're after. The answer using list comprehension instead of map is about as slow as my try/except case (interesting in itself): >>> st = time.time(); parsed = [eval(t) for t in tuples]; print "%.2f s" % (time.time() - st) 18.13 s All that being said, I'm going to venture premature optimization is at work here -- parsing strings is always slow. How many tuples are you expecting? A: I'd do string parsing if you know the format. Faster than eval(). >>> tuples = ["(1,0)"] * 1000000 >>> import time >>> st = time.time(); parsed = map(eval, tuples); print "%.2f s" % (time.time() - st) 32.71 s >>> def parse(s) : ... return s[1:-1].split(",") ... >>> parse("(1,0)") ['1', '0'] >>> st = time.time(); parsed = map(parse, tuples); print "%.2f s" % (time.time() - st) 5.05 s if you need ints >>> def parse(s) : ... return map(int, s[1:-1].split(",")) ... >>> parse("(1,0)") [1, 0] >>> st = time.time(); parsed = map(parse, tuples); print "%.2f s" % (time.time() - st) 9.62 s A: My computer is slower than Nadia's, however this runs faster >>> timeit.Timer( "list((int(a),int(c)) for a,b,c in (x[1:-1].partition(',') for x in s))", "s = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')").timeit(100000) 3.2250211238861084 than this >>> timeit.Timer( "map(lambda a: tuple(map(int, a[1:-1].split(','))), s)", "s = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')").timeit(100000) 3.8979239463806152 using a list comprehension is faster still >>> timeit.Timer( "[(int(a),int(c)) for a,b,c in (x[1:-1].partition(',') for x in s)]", "s = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')").timeit(100000) 2.452484130859375 A: If you're sure the input is well formed: tuples = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)') result = [eval(t) for t in tuples] A: You can get a parser up and running pretty quickly with YAPPS. A: you can just use yaml or json to parse it into tuples for you. A: import ast list_of_tuples = map(ast.literal_eval, tuple_of_strings)
Fastest way to convert '(-1,0)' into tuple(-1, 0)?
I've got a huge tuple of strings, which are being returned from a program. An example tuple being returned might look like this: ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)') I can convert these strings to real tuples (with integers inside), but i am hoping someone knows a nice trick to speed this up. Anything i've come up with feels like i am doing it a relatively "slow" way. And as i have mentioned, these lists can be big, so a fast way would be much appreciated! Thanks edit one Alright, so its seeming that eval is a slower method of doing this. But so far i've got 4 methods tested, thanks for any comments and submissions! :) Also, someone asked on the size of my tuples. It will range anywhere from a few, to hopefully no more than a few million. Not "too" big, but big enough that speed is an important factor. I'm not here to micro-optimize, just learn any new nifty tricks i might not be aware of. Eg, eval() is something i often forget about, even though it doesn't seem to do so well in this case. edit two I also wanted to note that the string format shouldn't change. So no need to check the format. Also, this is an embedded Python v2.6.2, so anything requiring 2.6 is fine. 3.0 on the other hand, not so much ;) Looking great guys, again, thanks for all the input :) edit 3 Yet another note. I noticed i had been returning code that didn't result in a "tuple", this is ok, and sorry if anyone thought the end result "had" to be a tuple. Something of like format is fine. import timeit test_tuple = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)', '(7,0)',) def timeit_a(): '''''' def convert_tup_strings(tup_string): first_int, last_int = tup_string[1:-1].split(',') return (int(first_int), int(last_int)) return map(convert_tup_strings, test_tuple) def timeit_a_1(): '''''' def convert_tup_strings(tup_string): return map(int, tup_string[1:-1].split(',')) return map(convert_tup_strings, test_tuple) def timeit_b(): converted = [] for tup_string in test_tuple: first_int, last_int = tup_string[1:-1].split(',') converted.append((int(first_int), int(last_int))) return converted def timeit_b_1(): converted = [] for tup_string in test_tuple: converted.append(map(int, tup_string[1:-1].split(','))) return converted def timeit_c(): '''''' return [eval(t) for t in test_tuple] def timeit_d(): '''''' return map(eval, test_tuple) def timeit_e(): '''''' return map(lambda a: tuple(map(int, a[1:-1].split(','))), test_tuple) print 'Timeit timeit_a: %s' % timeit.timeit(timeit_a) print 'Timeit timeit_a_1: %s' % timeit.timeit(timeit_a_1) print 'Timeit timeit_b: %s' % timeit.timeit(timeit_b) print 'Timeit timeit_b_1: %s' % timeit.timeit(timeit_b_1) print 'Timeit timeit_c: %s' % timeit.timeit(timeit_c) print 'Timeit timeit_d: %s' % timeit.timeit(timeit_d) print 'Timeit timeit_e: %s' % timeit.timeit(timeit_e) Results in: Timeit timeit_a: 15.8954099772 Timeit timeit_a_1: 18.5484214589 Timeit timeit_b: 15.3137666465 Timeit timeit_b_1: 17.8405181116 Timeit timeit_c: 91.9587832802 Timeit timeit_d: 89.8858157489 Timeit timeit_e: 20.1564312947
[ "I don't advice you to use eval at all. It is slow and insecure. You can do this:\nresult = map(lambda a: tuple(map(int, a[1:-1].split(','))), s)\n\nThe numbers speak for themselves:\ntimeit.Timer(\"map(lambda a: tuple(map(int, a[1:-1].split(','))), s)\", \"s = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')\").timeit(100000)\n\n1.8787779808044434\n\ntimeit.Timer(\"map(eval, s)\", \"s = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')\").timeit(100000)\n\n11.571426868438721\n\n", "map(eval, tuples)\n\nThis won't account for the case where one of the tuples isn't syntactically correct. For that, I'd recommend something like:\ndef do(tup):\n try: return eval(tup)\n except: return None\n\nmap(do, tuples)\n\nBoth methods tested for speed:\n>>> tuples = [\"(1,0)\"] * 1000000\n\n>>> # map eval\n>>> st = time.time(); parsed = map(eval, tuples); print \"%.2f s\" % (time.time() - st)\n16.02 s\n\n>>> # map do\n>>> >>> st = time.time(); parsed = map(do, tuples); print \"%.2f s\" % (time.time() - st)\n18.46 s\n\nFor 1,000,000 tuples that's not bad (but isn't great either). The overhead, presumably, is in parsing Python one million times by using eval. However, it is the easiest way to do what you're after.\nThe answer using list comprehension instead of map is about as slow as my try/except case (interesting in itself):\n>>> st = time.time(); parsed = [eval(t) for t in tuples]; print \"%.2f s\" % (time.time() - st)\n18.13 s\n\nAll that being said, I'm going to venture premature optimization is at work here -- parsing strings is always slow. How many tuples are you expecting?\n", "I'd do string parsing if you know the format. Faster than eval().\n>>> tuples = [\"(1,0)\"] * 1000000\n>>> import time\n>>> st = time.time(); parsed = map(eval, tuples); print \"%.2f s\" % (time.time() - st)\n32.71 s\n>>> def parse(s) :\n... return s[1:-1].split(\",\")\n...\n>>> parse(\"(1,0)\")\n['1', '0']\n>>> st = time.time(); parsed = map(parse, tuples); print \"%.2f s\" % (time.time() - st)\n5.05 s\n\nif you need ints\n>>> def parse(s) :\n... return map(int, s[1:-1].split(\",\"))\n...\n>>> parse(\"(1,0)\")\n[1, 0]\n>>> st = time.time(); parsed = map(parse, tuples); print \"%.2f s\" % (time.time() - st)\n9.62 s\n\n", "My computer is slower than Nadia's, however this runs faster\n>>> timeit.Timer(\n \"list((int(a),int(c)) for a,b,c in (x[1:-1].partition(',') for x in s))\", \n \"s = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')\").timeit(100000)\n3.2250211238861084\n\nthan this\n>>> timeit.Timer(\n \"map(lambda a: tuple(map(int, a[1:-1].split(','))), s)\", \n \"s = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')\").timeit(100000)\n3.8979239463806152\n\nusing a list comprehension is faster still\n>>> timeit.Timer(\n \"[(int(a),int(c)) for a,b,c in (x[1:-1].partition(',') for x in s)]\", \n \"s = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')\").timeit(100000)\n2.452484130859375\n\n", "If you're sure the input is well formed:\ntuples = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)', '(4,0)', '(5,0)', '(6,0)')\nresult = [eval(t) for t in tuples]\n\n", "You can get a parser up and running pretty quickly with YAPPS. \n", "you can just use yaml or json to parse it into tuples for you.\n", "import ast\n\nlist_of_tuples = map(ast.literal_eval, tuple_of_strings)\n\n" ]
[ 10, 3, 2, 2, 1, 1, 1, 0 ]
[]
[]
[ "python", "string", "tuples" ]
stackoverflow_0001618965_python_string_tuples.txt
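The last answer's ast.literal_eval deserves a line of context: it is available in the asker's Python 2.6, parses only literals (so none of eval's code-execution risk), and can be timed with the same harness as the question; expect it to land between the split() variants and eval.

import ast
import timeit

test_tuple = ('(-1,0)', '(1,0)', '(2,0)', '(3,0)')
print([ast.literal_eval(t) for t in test_tuple])   # [(-1, 0), (1, 0), (2, 0), (3, 0)]

# timeit accepts a callable directly (Python 2.6+)
print(timeit.timeit(lambda: [ast.literal_eval(t) for t in test_tuple],
                    number=10000))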
Q: how to extract some text by use lxml? i want to extract some text in certain website. here is web address what i want to extract some text to make scraper. http://news.search.naver.com/search.naver?sm=tab_hty&where=news&query=times&x=0&y=0 in this page, i want to extract some text with subject and content field separately. for example,if you open that page, you can see some text in page, JAPAN TOKYO INTERNATIONAL FILM FESTIVAL EPA연합뉴스 세계 | 2009.10.25 (일) 오후 7:21 Japan, 25 October 2009. Gayet won the Best Actress Award for her role in the film 'Eight Times Up' directed by French filmmaker Xabi Molia. EPA/DAI KUROKAWA JAPAN TOKYO INTERNATIONAL FILM FESTIVAL EPA연합뉴스 세계 | 2009.10.25 (일) 오후 7:18 she learns that she won the Best Actress Award for her role in the film 'Eight Times Up' by French film director Xabi Molia during the award ceremony of the 22nd Tokyo ... and so on ,,,, and finally i want to extract text such like format SUBJECT:JAPAN TOKYO INTERNATIONAL FILM FESTIVAL CONTENT:EPA연합뉴스 세계 | 2009.10.25 (일) 오후 7:21 Japan, 25 October 2009. Gayet won the Best Actress Award for her role in the film 'Eight Times Up' directed by French filmmaker Xabi Molia. EPA/DAI KUROKAWA SUBJECT: ... CONTENT: ... AND SO ON.. if anyone help,really appreciate. thanks in advance. A: In general, to solve such problems you must first download the page of interest as text (use urllib.urlopen or anything else, even external utilities such as curl or wget, but not a browser since you want to see how the page looks before any Javascript has had a chance to run) and study it to understand its structure. In this case, after some study, you'll find the relevant parts are (snipping some irrelevant parts in head and breaking lines up for readability)...: <body onload=nx_init();> <dl> <dt> <a href="http://news.naver.com/main/read.nhn?mode=LSD&mid=sec&sid1=&oid=091&aid=0002497340" [[snipping other attributes of this tag]]> JAPAN TOKYO INTERNATIONAL FILM FESTIVAL</a> </dt> <dd class="txt_inline"> EPA¿¬ÇÕ´º½º ¼¼°è <span class="bar"> |</span> 2009.10.25 (ÀÏ) ¿ÀÈÄ 7:21</dd> <dd class="sh_news_passage"> Japan, 25 October 2009. Gayet won the Best Actress Award for her role in the film 'Eight <b> Times</b> Up' directed by French filmmaker Xabi Molia. EPA/DAI KUROKAWA</dd> and so forth. So, you want as "subject" the content of an <a> tag within a <dt>, and as "content" the content of <dd> tags following it (in the same <dl>). The headers you get contain: Content-Type: text/html; charset=ks_c_5601-1987 so you must also find a way to interpret that encoding into Unicode -- I believe that encoding is also known as 'euc_kr' and my Python installation appears to come with a codec for it, but you should check yours, too. Once you've determined all of these aspects, you try to lxml.etree.parse the URL -- and, just like so many other web pages, it doesn't parse -- it doesn't really present well formed HTML (try w3c's validators on it to find out about some of the ways it's broken). Because badly-formed HTML is so common on the web, there exist "tolerant parsers" that try to compensate for common errors. The most popular in Python is BeautifulSoup, and indeed lxml comes with it -- with lxml 2.0.3 or later, you can use BeautifulSoup as the underlying parser, then proceed "just as if" the document had parsed correctly -- but I find it simpler to use BeautifulSoup directly. For example, here's a script to emit the first few subject/content pairs at that URL (they've changed currently, originally they were being the same as you give;-). 
You need a terminal that supports Unicode output (for example, I run this without problem on a Mac's Terminal.App set to utf-8) -- of course, instead of the prints you can otherwise collect the Unicode fragments (e.g. append them to a list and ''.join them when you have all the required pieces), encode them however you wish, etc, etc. from BeautifulSoup import BeautifulSoup import urllib def getit(pagetext, howmany=0): soup = BeautifulSoup(pagetext) results = [] dls = soup.findAll('dl') for adl in dls: thedt = adl.dt while thedt: thea = thedt.a if thea: print 'SUBJECT:', thea.string thedd = thedt.findNextSibling('dd') if thedd: print 'CONTENT:', while thedd: for x in thedd.findAll(text=True): print x, thedd = thedd.findNextSibling('dd') print howmany -= 1 if not howmany: return print thedt = thedt.findNextSibling('dt') theurl = ('http://news.search.naver.com/search.naver?' 'sm=tab%5Fhty&where=news&query=times&x=0&y=0') thepage = urllib.urlopen(theurl).read() getit(thepage, 3) The logic in lxml, or "BeautifulSoup in lxml clothing", is not very different, just the spelling and capitalization of the various navigational operations changes a bit.
how to extract some text by using lxml?
I want to extract some text from a certain website. Here is the web address from which I want to extract text for my scraper: http://news.search.naver.com/search.naver?sm=tab_hty&where=news&query=times&x=0&y=0 On this page, I want to extract text into separate subject and content fields. For example, if you open that page, you can see text like this: JAPAN TOKYO INTERNATIONAL FILM FESTIVAL EPA연합뉴스 세계 | 2009.10.25 (일) 오후 7:21 Japan, 25 October 2009. Gayet won the Best Actress Award for her role in the film 'Eight Times Up' directed by French filmmaker Xabi Molia. EPA/DAI KUROKAWA JAPAN TOKYO INTERNATIONAL FILM FESTIVAL EPA연합뉴스 세계 | 2009.10.25 (일) 오후 7:18 she learns that she won the Best Actress Award for her role in the film 'Eight Times Up' by French film director Xabi Molia during the award ceremony of the 22nd Tokyo ... and so on. Finally, I want to extract the text in a format such as: SUBJECT:JAPAN TOKYO INTERNATIONAL FILM FESTIVAL CONTENT:EPA연합뉴스 세계 | 2009.10.25 (일) 오후 7:21 Japan, 25 October 2009. Gayet won the Best Actress Award for her role in the film 'Eight Times Up' directed by French filmmaker Xabi Molia. EPA/DAI KUROKAWA SUBJECT: ... CONTENT: ... and so on. If anyone can help, I'd really appreciate it. Thanks in advance.
[ "In general, to solve such problems you must first download the page of interest as text (use urllib.urlopen or anything else, even external utilities such as curl or wget, but not a browser since you want to see how the page looks before any Javascript has had a chance to run) and study it to understand its structure. In this case, after some study, you'll find the relevant parts are (snipping some irrelevant parts in head and breaking lines up for readability)...:\n<body onload=nx_init();>\n <dl>\n <dt>\n<a href=\"http://news.naver.com/main/read.nhn?mode=LSD&mid=sec&sid1=&oid=091&aid=0002497340\"\n [[snipping other attributes of this tag]]>\nJAPAN TOKYO INTERNATIONAL FILM FESTIVAL</a>\n</dt>\n <dd class=\"txt_inline\">\nEPA¿¬ÇÕ´º½º ¼¼°è <span class=\"bar\">\n|</span>\n 2009.10.25 (ÀÏ) ¿ÀÈÄ 7:21</dd>\n <dd class=\"sh_news_passage\">\n Japan, 25 October 2009. Gayet won the Best Actress Award for her role in the film 'Eight <b>\nTimes</b>\n Up' directed by French filmmaker Xabi Molia. EPA/DAI KUROKAWA</dd>\n\nand so forth. So, you want as \"subject\" the content of an <a> tag within a <dt>, and as \"content\" the content of <dd> tags following it (in the same <dl>).\nThe headers you get contain:\nContent-Type: text/html; charset=ks_c_5601-1987\n\nso you must also find a way to interpret that encoding into Unicode -- I believe that encoding is also known as 'euc_kr' and my Python installation appears to come with a codec for it, but you should check yours, too.\nOnce you've determined all of these aspects, you try to lxml.etree.parse the URL -- and, just like so many other web pages, it doesn't parse -- it doesn't really present well formed HTML (try w3c's validators on it to find out about some of the ways it's broken).\nBecause badly-formed HTML is so common on the web, there exist \"tolerant parsers\" that try to compensate for common errors. The most popular in Python is BeautifulSoup, and indeed lxml comes with it -- with lxml 2.0.3 or later, you can use BeautifulSoup as the underlying parser, then proceed \"just as if\" the document had parsed correctly -- but I find it simpler to use BeautifulSoup directly.\nFor example, here's a script to emit the first few subject/content pairs at that URL (they've changed currently, originally they were being the same as you give;-). You need a terminal that supports Unicode output (for example, I run this without problem on a Mac's Terminal.App set to utf-8) -- of course, instead of the prints you can otherwise collect the Unicode fragments (e.g. append them to a list and ''.join them when you have all the required pieces), encode them however you wish, etc, etc.\nfrom BeautifulSoup import BeautifulSoup\nimport urllib\n\ndef getit(pagetext, howmany=0):\n soup = BeautifulSoup(pagetext)\n results = []\n dls = soup.findAll('dl')\n for adl in dls:\n thedt = adl.dt\n while thedt:\n thea = thedt.a\n if thea:\n print 'SUBJECT:', thea.string\n thedd = thedt.findNextSibling('dd')\n if thedd:\n print 'CONTENT:',\n while thedd:\n for x in thedd.findAll(text=True):\n print x,\n thedd = thedd.findNextSibling('dd')\n print\n howmany -= 1\n if not howmany: return\n print\n thedt = thedt.findNextSibling('dt')\n\ntheurl = ('http://news.search.naver.com/search.naver?'\n 'sm=tab%5Fhty&where=news&query=times&x=0&y=0')\nthepage = urllib.urlopen(theurl).read()\ngetit(thepage, 3)\n\nThe logic in lxml, or \"BeautifulSoup in lxml clothing\", is not very different, just the spelling and capitalization of the various navigational operations changes a bit.\n" ]
[ 2 ]
[]
[]
[ "lxml", "parsing", "python" ]
stackoverflow_0001621410_lxml_parsing_python.txt
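A sketch of the "lxml clothing" variant the answer mentions at the end, run against the same dl/dt/dd structure; written for Python 2 like the answer, and the page layout may well have changed since 2009.

import lxml.html

theurl = ('http://news.search.naver.com/search.naver?'
          'sm=tab%5Fhty&where=news&query=times&x=0&y=0')
doc = lxml.html.parse(theurl).getroot()   # lxml fetches the URL and sniffs the euc-kr charset

for dl in doc.xpath('//dl'):
    a = dl.find('dt/a')                   # the subject link inside the <dt>
    if a is None:
        continue
    print('SUBJECT: ' + a.text_content())
    print('CONTENT: ' + ' '.join(dd.text_content() for dd in dl.findall('dd')))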
Q: Finding and adding twitter users? Any suggestions for a good twitter library (preferably in Ruby or Python)? I have a list of usernames, and I need to be able to programmatically follow these users. I tried twitter4r in Ruby, but finding users doesn't seem to work. When I do twitter = Twitter::Client.new(:login => 'mylogin', :password => 'mypassword') user = Twitter::User.find('ev', twitter) ...the user returned always seems to be some guy named "Jose Italo", no matter what username I try. Similary, I tried python-twitter, but following users doesn't seem to work. When I do api = twitter.Api(username='mylogin', password='mypassword') user = api.GetUser('ev') api.CreateFriendship(user) ...I get this error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "build/bdist.macosx-10.5-i386/egg/twitter.py", line 1769, in CreateFriendship File "build/bdist.macosx-10.5-i386/egg/simplejson/__init__.py", line 307, in loads File "build/bdist.macosx-10.5-i386/egg/simplejson/decoder.py", line 335, in decode File "build/bdist.macosx-10.5-i386/egg/simplejson/decoder.py", line 353, in raw_decode ValueError: No JSON object could be decoded So any suggestions for a working library, or how to get twitter4r or python-twitter working? A: http://github.com/jnunemaker/twitter/ has been working pretty good for me. Although, If I am just doing something simple, I usually resort to the bare HTTP API. In this case it would be this one: http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-friendships%C2%A0create Using Ruby with RestClient that would look something like this: require "rest_client" require "json" r = RestClient.post "http://username:[email protected]/friendships/create.json", :screen_name => "user_to_follow" j = JSON.parse(r) And you have the response as a hash. Easy.
Finding and adding twitter users?
Any suggestions for a good twitter library (preferably in Ruby or Python)? I have a list of usernames, and I need to be able to programmatically follow these users. I tried twitter4r in Ruby, but finding users doesn't seem to work. When I do twitter = Twitter::Client.new(:login => 'mylogin', :password => 'mypassword') user = Twitter::User.find('ev', twitter) ...the user returned always seems to be some guy named "Jose Italo", no matter what username I try. Similarly, I tried python-twitter, but following users doesn't seem to work. When I do api = twitter.Api(username='mylogin', password='mypassword') user = api.GetUser('ev') api.CreateFriendship(user) ...I get this error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "build/bdist.macosx-10.5-i386/egg/twitter.py", line 1769, in CreateFriendship File "build/bdist.macosx-10.5-i386/egg/simplejson/__init__.py", line 307, in loads File "build/bdist.macosx-10.5-i386/egg/simplejson/decoder.py", line 335, in decode File "build/bdist.macosx-10.5-i386/egg/simplejson/decoder.py", line 353, in raw_decode ValueError: No JSON object could be decoded So any suggestions for a working library, or how to get twitter4r or python-twitter working?
[ "http://github.com/jnunemaker/twitter/ has been working pretty good for me.\nAlthough, If I am just doing something simple, I usually resort to the bare HTTP API. In this case it would be this one: http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-friendships%C2%A0create\nUsing Ruby with RestClient that would look something like this:\nrequire \"rest_client\"\nrequire \"json\"\n\nr = RestClient.post \"http://username:[email protected]/friendships/create.json\",\n :screen_name => \"user_to_follow\"\nj = JSON.parse(r)\n\nAnd you have the response as a hash. Easy.\n" ]
[ 1 ]
[]
[]
[ "python", "ruby", "twitter" ]
stackoverflow_0001621863_python_ruby_twitter.txt
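Built only from the python-twitter calls already shown in the question, a sketch that walks a username list with per-user error handling; note that the basic-auth API this relies on was retired by Twitter long ago, so treat it as historical.

import twitter

api = twitter.Api(username='mylogin', password='mypassword')
usernames = ['ev', 'biz']                 # made-up list of accounts to follow

for name in usernames:
    try:
        user = api.GetUser(name)
        api.CreateFriendship(user)
    except Exception as e:
        # keep going on a bad username or a decode error like the one above
        print('could not follow %s: %s' % (name, e))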
Q: Is there a way to split a string by every nth separator in Python? For example, if I had the following string: "this-is-a-string" Could I split it by every 2nd "-" rather than every "-" so that it returns two values ("this-is" and "a-string") rather than returning four? A: Here’s another solution: span = 2 words = "this-is-a-string".split("-") print ["-".join(words[i:i+span]) for i in range(0, len(words), span)] A: >>> s="a-b-c-d-e-f-g-h-i-j-k-l" # use zip(*[i]*n) >>> i=iter(s.split('-')) # for the nth case >>> map("-".join,zip(i,i)) ['a-b', 'c-d', 'e-f', 'g-h', 'i-j', 'k-l'] >>> i=iter(s.split('-')) >>> map("-".join,zip(*[i]*3)) ['a-b-c', 'd-e-f', 'g-h-i', 'j-k-l'] >>> i=iter(s.split('-')) >>> map("-".join,zip(*[i]*4)) ['a-b-c-d', 'e-f-g-h', 'i-j-k-l'] Sometimes itertools.izip is faster as you can see in the results >>> from itertools import izip >>> s="a-b-c-d-e-f-g-h-i-j-k-l" >>> i=iter(s.split("-")) >>> ["-".join(x) for x in izip(i,i)] ['a-b', 'c-d', 'e-f', 'g-h', 'i-j', 'k-l'] Here is a version that sort of works with an odd number of parts depending what output you desire in that case. You might prefer to trim the '-' off the end of the last element with .rstrip('-') for example. >>> from itertools import izip_longest >>> s="a-b-c-d-e-f-g-h-i-j-k-l-m" >>> i=iter(s.split('-')) >>> map("-".join,izip_longest(i,i,fillvalue="")) ['a-b', 'c-d', 'e-f', 'g-h', 'i-j', 'k-l', 'm-'] Here are some timings $ python -m timeit -s 'import re;r=re.compile("[^-]+-[^-]+");s="a-b-c-d-e-f-g-h-i-j-k-l"' 'r.findall(s)' 100000 loops, best of 3: 4.31 usec per loop $ python -m timeit -s 'from itertools import izip;s="a-b-c-d-e-f-g-h-i-j-k-l"' 'i=iter(s.split("-"));["-".join(x) for x in izip(i,i)]' 100000 loops, best of 3: 5.41 usec per loop $ python -m timeit -s 's="a-b-c-d-e-f-g-h-i-j-k-l"' 'i=iter(s.split("-"));["-".join(x) for x in zip(i,i)]' 100000 loops, best of 3: 7.3 usec per loop $ python -m timeit -s 's="a-b-c-d-e-f-g-h-i-j-k-l"' 't=s.split("-");["-".join(t[i:i+2]) for i in range(0, len(t), 2)]' 100000 loops, best of 3: 7.49 usec per loop $ python -m timeit -s 's="a-b-c-d-e-f-g-h-i-j-k-l"' '["-".join([x,y]) for x,y in zip(s.split("-")[::2], s.split("-")[1::2])]' 100000 loops, best of 3: 9.51 usec per loop A: Regular expressions handle this easily: import re s = "aaaa-aa-bbbb-bb-c-ccccc-d-ddddd" print re.findall("[^-]+-[^-]+", s) Output: ['aaaa-aa', 'bbbb-bb', 'c-ccccc', 'd-ddddd'] Update for Nick D: n = 3 print re.findall("-".join(["[^-]+"] * n), s) Output: ['aaaa-aa-bbbb', 'bb-c-ccccc'] A: EDIT: The original code I posted didn't work. This version does: I don't think you can split on every other one, but you could split on every - and join every pair. chunks = [] content = "this-is-a-string" split_string = content.split('-') for i in range(0, len(split_string) - 1,2) : if i < len(split_string) - 1: chunks.append("-".join([split_string[i], split_string[i+1]])) else: chunks.append(split_string[i]) A: I think several of the already given solutions are good enough, but just for fun, I did this version: def twosplit(s,sep): first=s.find(sep) if first>=0: second=s.find(sep,first+1) if second>=0: return [s[0:second]] + twosplit(s[second+1:],sep) else: return [s] else: return [s] print twosplit("this-is-a-string","-")
Is there a way to split a string by every nth separator in Python?
For example, if I had the following string: "this-is-a-string" Could I split it by every 2nd "-" rather than every "-" so that it returns two values ("this-is" and "a-string") rather than returning four?
[ "Here’s another solution:\nspan = 2\nwords = \"this-is-a-string\".split(\"-\")\nprint [\"-\".join(words[i:i+span]) for i in range(0, len(words), span)]\n\n", ">>> s=\"a-b-c-d-e-f-g-h-i-j-k-l\" # use zip(*[i]*n)\n>>> i=iter(s.split('-')) # for the nth case \n>>> map(\"-\".join,zip(i,i)) \n['a-b', 'c-d', 'e-f', 'g-h', 'i-j', 'k-l']\n\n>>> i=iter(s.split('-'))\n>>> map(\"-\".join,zip(*[i]*3))\n['a-b-c', 'd-e-f', 'g-h-i', 'j-k-l']\n>>> i=iter(s.split('-'))\n>>> map(\"-\".join,zip(*[i]*4))\n['a-b-c-d', 'e-f-g-h', 'i-j-k-l']\n\nSometimes itertools.izip is faster as you can see in the results\n>>> from itertools import izip\n>>> s=\"a-b-c-d-e-f-g-h-i-j-k-l\"\n>>> i=iter(s.split(\"-\"))\n>>> [\"-\".join(x) for x in izip(i,i)]\n['a-b', 'c-d', 'e-f', 'g-h', 'i-j', 'k-l']\n\nHere is a version that sort of works with an odd number of parts depending what output you desire in that case. You might prefer to trim the '-' off the end of the last element with .rstrip('-') for example.\n>>> from itertools import izip_longest\n>>> s=\"a-b-c-d-e-f-g-h-i-j-k-l-m\"\n>>> i=iter(s.split('-'))\n>>> map(\"-\".join,izip_longest(i,i,fillvalue=\"\"))\n['a-b', 'c-d', 'e-f', 'g-h', 'i-j', 'k-l', 'm-']\n\nHere are some timings\n$ python -m timeit -s 'import re;r=re.compile(\"[^-]+-[^-]+\");s=\"a-b-c-d-e-f-g-h-i-j-k-l\"' 'r.findall(s)'\n100000 loops, best of 3: 4.31 usec per loop\n\n$ python -m timeit -s 'from itertools import izip;s=\"a-b-c-d-e-f-g-h-i-j-k-l\"' 'i=iter(s.split(\"-\"));[\"-\".join(x) for x in izip(i,i)]'\n100000 loops, best of 3: 5.41 usec per loop\n\n$ python -m timeit -s 's=\"a-b-c-d-e-f-g-h-i-j-k-l\"' 'i=iter(s.split(\"-\"));[\"-\".join(x) for x in zip(i,i)]'\n100000 loops, best of 3: 7.3 usec per loop\n\n$ python -m timeit -s 's=\"a-b-c-d-e-f-g-h-i-j-k-l\"' 't=s.split(\"-\");[\"-\".join(t[i:i+2]) for i in range(0, len(t), 2)]'\n100000 loops, best of 3: 7.49 usec per loop\n\n$ python -m timeit -s 's=\"a-b-c-d-e-f-g-h-i-j-k-l\"' '[\"-\".join([x,y]) for x,y in zip(s.split(\"-\")[::2], s.split(\"-\")[1::2])]'\n100000 loops, best of 3: 9.51 usec per loop\n\n", "Regular expressions handle this easily:\nimport re\ns = \"aaaa-aa-bbbb-bb-c-ccccc-d-ddddd\"\nprint re.findall(\"[^-]+-[^-]+\", s)\n\nOutput:\n['aaaa-aa', 'bbbb-bb', 'c-ccccc', 'd-ddddd']\n\nUpdate for Nick D:\nn = 3\nprint re.findall(\"-\".join([\"[^-]+\"] * n), s)\n\nOutput:\n['aaaa-aa-bbbb', 'bb-c-ccccc']\n\n", "EDIT: The original code I posted didn't work. This version does:\nI don't think you can split on every other one, but you could split on every - and join every pair.\nchunks = []\ncontent = \"this-is-a-string\"\nsplit_string = content.split('-')\n\nfor i in range(0, len(split_string) - 1,2) :\n if i < len(split_string) - 1:\n chunks.append(\"-\".join([split_string[i], split_string[i+1]]))\n else:\n chunks.append(split_string[i])\n\n", "I think several of the already given solutions are good enough, but just for fun, I did this version:\ndef twosplit(s,sep):\n first=s.find(sep)\n if first>=0:\n second=s.find(sep,first+1)\n if second>=0:\n return [s[0:second]] + twosplit(s[second+1:],sep)\n else:\n return [s]\n else:\n return [s]\n print twosplit(\"this-is-a-string\",\"-\")\n\n" ]
[ 46, 16, 12, 1, 0 ]
[ "l = 'this-is-a-string'.split()\nnl = []\nss = \"\"\nc = 0\nfor s in l:\n c += 1\n if c%2 == 0:\n ss = s\n else:\n ss = \"%s-%s\"%(ss,s)\n nl.insert(ss)\n\nprint nl\n\n" ]
[ -1 ]
[ "python", "split", "string" ]
stackoverflow_0001621906_python_split_string.txt
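The split/slice/join answers generalise into one small helper; a sketch with the separator and group size as parameters.

def split_every_nth(s, sep, n):
    parts = s.split(sep)
    return [sep.join(parts[i:i + n]) for i in range(0, len(parts), n)]

print(split_every_nth('this-is-a-string', '-', 2))   # ['this-is', 'a-string']
print(split_every_nth('a-b-c-d-e-f', '-', 3))        # ['a-b-c', 'd-e-f']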
Q: Python Selenium handling Timeout Exceptions with a long list of URLs I am using selenium RC to cycle through a long list of URLs, sequentially writing the HTML from each URL to a csv file. Problem: the program frequently exits at various points in list due to URL "Timed out after 30000ms" exceptions. Instead of stopping the program when it hits a URL time-out, I was trying to have the program simply write a note of the time-out in the CSV file (in the row where the HTML for the URL would have gone) and move on to the next URL in the list. I attempted to add an 'else' clause to my program but it doesnt seem to help (see below) -- ie: the program still stops every time it hits a timeout. I also seem to get 30000ms timeout exceptions even when I open selenium-server with a 60000ms timeout window --eg: "java -jar selenium-server.jar -timeout 600000" ??? Any advice would be much appreciated. Thank you. from selenium import selenium import unittest, time, re, csv, logging class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://www.MainDomain.com") self.selenium.start() def test_untitled(self): sel = self.selenium spamReader = csv.reader(open('SubDomainList.csv', 'rb')) for row in spamReader: sel.open(row[0]) sel.wait_for_page_to_load("400000") time.sleep(5) html = sel.get_html_source() ofile = open('output4001-5000.csv', 'ab') ofile.write(html + '\n') ofile.close else: ofile = open('outputTest.csv', 'ab') ofile.write("URL Timeout" + '\n') ofile.close def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() A: Try the following: from selenium import selenium import unittest, time, re, csv, logging class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://example.com") self.selenium.start() self.selenium.set_timeout("60000") def test_untitled(self): sel = self.selenium spamReader = csv.reader(open('SubDomainList.csv', 'rb')) for row in spamReader: try: sel.open(row[0]) except Exception, e: ofile = open('outputTest.csv', 'ab') ofile.write("error on %s: %s" % (row[0],e)) else: time.sleep(5) html = sel.get_html_source() ofile = open('output4001-5000.csv', 'ab') ofile.write(html.encode('utf-8') + '\n') ofile.close() def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() Some comments: You don't need a wait_for_page_to_load after an open, that will cause you timeouts because once the page is loaded after the opeen, it will start waiting again and the page will not be loading. Most of the failures you get from selenium (timeouts, object not found) can be caught with try-except statements You should set the timeout in your tests withing the test itself (using set_timeout), that way it doesn't depend on the way you start the server, it will always wait the time you wanted
Python Selenium handling Timeout Exceptions with a long list of URLs
I am using selenium RC to cycle through a long list of URLs, sequentially writing the HTML from each URL to a csv file. Problem: the program frequently exits at various points in list due to URL "Timed out after 30000ms" exceptions. Instead of stopping the program when it hits a URL time-out, I was trying to have the program simply write a note of the time-out in the CSV file (in the row where the HTML for the URL would have gone) and move on to the next URL in the list. I attempted to add an 'else' clause to my program but it doesnt seem to help (see below) -- ie: the program still stops every time it hits a timeout. I also seem to get 30000ms timeout exceptions even when I open selenium-server with a 60000ms timeout window --eg: "java -jar selenium-server.jar -timeout 600000" ??? Any advice would be much appreciated. Thank you. from selenium import selenium import unittest, time, re, csv, logging class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://www.MainDomain.com") self.selenium.start() def test_untitled(self): sel = self.selenium spamReader = csv.reader(open('SubDomainList.csv', 'rb')) for row in spamReader: sel.open(row[0]) sel.wait_for_page_to_load("400000") time.sleep(5) html = sel.get_html_source() ofile = open('output4001-5000.csv', 'ab') ofile.write(html + '\n') ofile.close else: ofile = open('outputTest.csv', 'ab') ofile.write("URL Timeout" + '\n') ofile.close def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main()
[ "Try the following:\nfrom selenium import selenium\nimport unittest, time, re, csv, logging\n\nclass Untitled(unittest.TestCase):\n def setUp(self):\n self.verificationErrors = []\n self.selenium = selenium(\"localhost\", 4444, \"*firefox\", \"http://example.com\")\n self.selenium.start()\n self.selenium.set_timeout(\"60000\")\n\n def test_untitled(self):\n sel = self.selenium\n spamReader = csv.reader(open('SubDomainList.csv', 'rb'))\n for row in spamReader:\n try:\n sel.open(row[0])\n except Exception, e:\n ofile = open('outputTest.csv', 'ab')\n ofile.write(\"error on %s: %s\" % (row[0],e))\n else:\n time.sleep(5)\n html = sel.get_html_source()\n ofile = open('output4001-5000.csv', 'ab')\n ofile.write(html.encode('utf-8') + '\\n')\n ofile.close()\n\n def tearDown(self):\n self.selenium.stop()\n self.assertEqual([], self.verificationErrors)\n\nif __name__ == \"__main__\":\n unittest.main()\n\nSome comments:\n\nYou don't need a wait_for_page_to_load after an open, that will cause you timeouts because once the page is loaded after the opeen, it will start waiting again and the page will not be loading.\nMost of the failures you get from selenium (timeouts, object not found) can be caught with try-except statements\nYou should set the timeout in your tests withing the test itself (using set_timeout), that way it doesn't depend on the way you start the server, it will always wait the time you wanted\n\n" ]
[ 3 ]
[]
[]
[ "exception", "loops", "python", "selenium", "timeout" ]
stackoverflow_0001621806_exception_loops_python_selenium_timeout.txt
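A note on the pattern in the accepted answer above: the per-item try/except/else loop generalizes beyond Selenium RC (which is long since deprecated). Below is a minimal standard-library sketch of the same idea — fetch each URL, log a failure row instead of crashing, and continue. The file names and the 60-second timeout are placeholder assumptions, not from the original post:

import csv, urllib.request

def fetch_all(url_csv='SubDomainList.csv', out_csv='output.csv', timeout=60):
    with open(url_csv, newline='') as inf, open(out_csv, 'a', newline='') as outf:
        writer = csv.writer(outf)
        for row in csv.reader(inf):
            url = row[0]
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    html = resp.read().decode('utf-8', errors='replace')
            except Exception as e:      # timeout, DNS failure, HTTP error, ...
                writer.writerow([url, 'error: %s' % e])
            else:
                writer.writerow([url, html])

The structure is the same as the answer's: the except branch records the failure and the else branch records the payload, so one bad URL never aborts the whole run.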
Q: White listing certain HTML tags in python? Let's say allowed_bits = ['a', 'p'] re.compile(r'<(%s)[^>]*(/>|.*?</\1>)' % ('|'.join(allowed_bits))) matches: <a href="blah blah">blah</a> <p /> and not: <html>blah blah blah</html> What I want to do is turn it on its head, so that it matches <html>blah blah</html> <script type="text/javascript">blah blah</script> and not: <p>Hello</p> My thinking was to do something like: re.compile(r'<(^%s)[^>]*(/>|.*?</\1>)' % ('|'.join(allowed_bits))) but this doesn't work. Any ideas? I want to negatively match. A: Use a negative lookahead assertion (?! … ): re.compile(r'<(?!%s)[^>]*(/>|.*?)' % ('|'.join(allowed_bits)))
White listing certain HTML tags in python?
Let's say allowed_bits = ['a', 'p'] re.compile(r'<(%s)[^>]*(/>|.*?</\1>)' % ('|'.join(allowed_bits))) matches: <a href="blah blah">blah</a> <p /> and not: <html>blah blah blah</html> What I want to do is turn it on its head, so that it matches <html>blah blah</html> <script type="text/javascript">blah blah</script> and not: <p>Hello</p> My thinking was to do something like: re.compile(r'<(^%s)[^>]*(/>|.*?</\1>)' % ('|'.join(allowed_bits))) but this doesn't work. Any ideas? I want to negatively match.
[ "Use a negative lookahead assertion (?! … ):\nre.compile(r'<(?!%s)[^>](/>|.?)' % ('|'.join(allowed_bits)))\n\n" ]
[ 2 ]
[]
[]
[ "python", "regex", "regex_negation" ]
stackoverflow_0001622314_python_regex_regex_negation.txt
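To make the negative-lookahead answer above concrete, here is a small runnable check. This is not the answerer's exact regex: it adds a capture group for the tag name and a \b word boundary (so that 'p' does not also veto 'pre'), both my own additions for the sake of a working demo:

import re

allowed_bits = ['a', 'p']
# (?!...) vetoes the allowed tag names; \b stops 'p' from also vetoing 'pre'.
pattern = re.compile(r'<(?!(?:%s)\b)(\w+)[^>]*(?:/>|>.*?</\1>)'
                     % '|'.join(allowed_bits), re.S)

print(bool(pattern.search('<html>blah blah</html>')))          # True
print(bool(pattern.search('<script type="x">blah</script>')))  # True
print(bool(pattern.search('<p>Hello</p>')))                    # False
print(bool(pattern.search('<a href="x">blah</a>')))            # False

The question's original attempt fails because ^ inside (...) is not negation in regex syntax — it only means negation inside a character class [^...]; for alternations, the lookahead is the right tool.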
Q: How do I trim the number of words in Python? For example... s = 'The fox jumped over a big brown log.' k = FUNCTION(s,4) k should be... "The fox jumped over" I can write my own function that splits on whitespace, cuts the list and then joins the list. But that is too much work. Does anyone else know a simpler way? A: Something like this: def f(s, n): return ' '.join(s.split()[:n]) But it is still splitting, slicing and joining... A: x = "Hello how are you?" ' '.join(x.split()[1:3]) >>> "how are" you can then change the numbers 1 and 3 in the list to get which words you want it to return def split_me(string, start, end, skip): return ' '.join(string.split()[start:end:skip]) remember lists are indexed starting at 0 and the skip is so you can skip words: split_me("Hello how are you", 0, 3, 2) >>> 'Hello are' A: Why do you say "that is too much work"? Too much work for you, or too much for the computer? Is that code part of some performance critical code? Just do what works and move on to something more important. A: Try something like this: def f(s, n): return " ".join( s.split()[:n] )
How do I trim the number of words in Python?
For example... s = 'The fox jumped over a big brown log.' k = FUNCTION(s,4) k should be... "The fox jumped over" I can write my own function that splits on whitespace, cuts the list and then joins the list. But that is too much work. Does anyone else know a simpler way?
[ "Something like this:\ndef f(s, n):\n return ' '.join(s.split()[:n])\n\nBut it is still splitting, slicing and joining...\n", "x = \"Hello how are you?\"\n' '.join(x.split()[1:3])\n>>> \"how are\"\n\nyou can then change the numbers 1 and 3 in the list to get which words you want it to return\ndef split_me(string, start, end, skip):\n return ' '.join(string.split()[start:end:skip])\n\nremember lists are indexed starting at 0 and the skip is so you can skip words:\nsplit_me(\"Hello how are you\", 0, 3, 2)\n>>> 'hello are'\n\n", "Why do you say \"that is too much work\"? Too much work for you, or too much for the computer? Is that code part of some performance critical code? \nJust do what works and move on to something more important.\n", "Try something like this:\ndef f(s, n):\n return \" \".join( s.split()[:n] )\n\n" ]
[ 8, 3, 1, 1 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001622421_python_string.txt
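One detail worth showing about the split/slice/join answers above: slicing is forgiving past the end of a list, so no length check is needed, and split() with no argument also collapses runs of whitespace. A quick demonstration using the f from the answers:

def f(s, n):
    return ' '.join(s.split()[:n])

print(f('The fox jumped over a big brown log.', 4))  # 'The fox jumped over'
print(f('Too short', 10))                            # 'Too short' (no IndexError)
print(f('  extra   spaces  here ', 3))               # 'extra spaces here'

The whitespace collapsing in the last case is a side effect to be aware of: the trimmed string is rebuilt with single spaces, not cut out of the original.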
Q: Werkzeug in General, and in Python 3.1 I've been looking really hard at all of the way(s) one can develop web applications using Python. For reference, we are using RHEL 64bit, apache, mod_wsgi. History: PHP + MySQL years ago PHP + Python 2.x + MySQL recently and current Python + PostgreSQL working on it We use a great library for communicating between PHP and Python (interface in PHP, backend in Python)... However, with a larger upcoming project starting, using 100% python may be very advantageous. We typically prefer not to have a monolithic framework dictating how things are done. A collection of useful helpers and utilities are much preferred (be it PHP or Python). Question 1: In reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) why they use it, why they like it, and anything to watch out for. Question 2: Is there a version of Werkzeug that supports Python 3.1.1. I've successfully installed mod_wsgi on Apache 2.2 with Python 3.1.1. If there is not a version, what would it take to upgrade it to work on Python 3.1? Note: I've run 2to3 on the Werkzeug source code, and it does python-compile without errors. Edit: The project that we are starting is not slated to be finished until nearly a year from now. At which point, I'm guessing Python 3.X will be a lot more mainstream. Furthermore, considering that we are running the App (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more-or-less already there? Thoughts appreciated! A: mod_wsgi for Python 3.x is also not ready. There is no satisfactory definition of WSGI for Python 3.x yet; the WEB-SIG are still bashing out the issues. mod_wsgi targets a guess at what might be in it, but there are very likely to be changes to both the spec and to standard libraries. Any web application you write today in Python 3.1 is likely to break in the future. It's a bit of a shambles. Today, for webapps you can only realistically use Python 2.x. A: I haven't used Werkzeug, so I can only answer question 2: No, Werkzeug does not work on Python 3. In fact, very little works on Python 3 as of today. Porting is not difficult, but you can't port until all your third-party libraries have been ported, so progress is slow. One big stopper has been setuptools, which is a very popular package to use. Setuptools is unmaintained, but there is a maintained fork called Distribute. Distribute was released with Python 3 support just a week or two ago. I hope package support for Python 3 will pick up now. But it will still be a long time, at least months probably a year or so, before any major project like Werkzeug will be ported to Python 3. A: I can only answer question one: I started using it for some small webstuff but now moved on to rework larger apps with it. Why Werkzeug? The modular concept is really helpful. You can hook in modules as you like, make stuff easily context aware and you get good request file handling for free which is able to cope with 300mb+ files by not storing it in memory. Disadvantages... Well sometimes modularity needs some upfront thought (django f.ex. gives you everything all at once, stripping stuff out is hard to do there though) but for me it works fine.
Werkzeug in General, and in Python 3.1
I've been looking really hard at all of the way(s) one can develop web applications using Python. For reference, we are using RHEL 64bit, apache, mod_wsgi. History: PHP + MySQL years ago PHP + Python 2.x + MySQL recently and current Python + PostgreSQL working on it We use a great library for communicating between PHP and Python (interface in PHP, backend in Python)... However, with a larger upcoming project starting, using 100% python may be very advantageous. We typically prefer not to have a monolithic framework dictating how things are done. A collection of useful helpers and utilities are much preferred (be it PHP or Python). Question 1: In reading a number of answers from experienced Python users, I've seen Werkzeug recommended a number of times. I would love it if several people with direct experience using Werkzeug to develop professional web applications could comment (in as much detail as their fingers feel like) why they use it, why they like it, and anything to watch out for. Question 2: Is there a version of Werkzeug that supports Python 3.1.1. I've successfully installed mod_wsgi on Apache 2.2 with Python 3.1.1. If there is not a version, what would it take to upgrade it to work on Python 3.1? Note: I've run 2to3 on the Werkzeug source code, and it does python-compile without errors. Edit: The project that we are starting is not slated to be finished until nearly a year from now. At which point, I'm guessing Python 3.X will be a lot more mainstream. Furthermore, considering that we are running the App (not distributing it), can anyone comment on the viability of bashing through some of the Python 3 issues now, so that when a year from now arrives, we are more-or-less already there? Thoughts appreciated!
[ "mod_wsgi for Python 3.x is also not ready. There is no satisfactory definition of WSGI for Python 3.x yet; the WEB-SIG are still bashing out the issues. mod_wsgi targets a guess at what might be in it, but there are very likely to be changes to both the spec and to standard libraries. Any web application you write today in Python 3.1 is likely to break in the future.\nIt's a bit of a shambles. Today, for webapps you can only realistically use Python 2.x.\n", "I haven't used Werkzeug, so I can only answer question 2:\nNo, Werkzeug does not work on Python 3. In fact, very little works on Python 3 as of today. Porting is not difficult, but you can't port until all your third-party libraries have been ported, so progress is slow.\nOne big stopper has been setuptools, which is a very popular package to use. Setuptools is unmaintained, but there is a maintained fork called Distribute. Distribute was released with Python 3 support just a week or two ago. I hope package support for Python 3 will pick up now. But it will still be a long time, at least months probably a year or so, before any major project like Werkzeug will be ported to Python 3.\n", "I can only answer question one:\nI started using it for some small webstuff but now moved on to rework larger apps with it. Why Werkzeug? The modular concept is really helpful. You can hook in modules as you like, make stuff easily context aware and you get good request file handling for free which is able to cope with 300mb+ files by not storing it in memory.\nDisadvantages... Well sometimes modularity needs some upfront thought (django f.ex. gives you everything all at once, stripping stuff out is hard to do there though) but for me it works fine.\n" ]
[ 3, 1, 1 ]
[]
[]
[ "python", "python_3.x", "werkzeug" ]
stackoverflow_0001523706_python_python_3.x_werkzeug.txt
Q: what exactly is a "register machine"? From http://code.google.com/p/unladen-swallow/wiki/ProjectPlan I quote: "Using a JIT will also allow us to move Python from a stack-based machine to a register machine, which has been shown to improve performance in other similar languages (Ierusalimschy et al, 2005; Shi et al, 2005)." In college I built a simple compiler for a language with recursive procedures - which maintained stack frames for each procedure called - so that they can be called recursively and so that parameters and return values would work.... 2 things: 1) Am I right in thinking that what I implemented would be considered a "stack-based machine" given the terminology used in the quotation above? 2) If my assumption in point (1) was right, how does a "register machine" work? i.e. how is it different from a stack-based machine? Thanks! A: A register machine is a hardware or software unit that when working with data takes it from memory, puts it in a location where it can work with it quickly, and then returns the result. For example, a regular CPU is a register machine, since the ALU (the unit that works with numbers in a CPU) can only work with numbers in a register. A stack based machine adds the data onto a stack and then either pops or pushes stuff onto it. For example, adding two numbers would be Push 2 // Push 2 onto the stack Push 3 // Push 3 onto the stack Add // Add the top two things on the stack. When in a register machine it would be something like this. Load x, r0 // Load x onto register 0 Load y, r1 // Load y onto register 1 Add r0, r1, r2 // Add 1 and 2 and store the result in register 2 A: A register machine almost always has a stack, also. But a stack machine rarely has architecturally visible registers, or it may only have one or two. A register machine may have some stack ops and may even have a stack addressing mode. The difference is one of orientation. The register machine will mostly have instructions that operate on registers, and will have a handful of ops for loading and storing between the registers and the stack or memory. A stack machine .. and these are very rare as actual hardware devices .. will operate directly on the stack with its instructions and will have a handful of ops for loading and storing between the stack and memory. Now, the reasons that hardware register machines are faster than hardware stack machines are possibly unrelated to the reasons that software "register" VM's are faster, according to the cited paper, than software "stack" machines. For the software VM's, it's apparently the case that fewer instructions need to be executed. This was determined empirically according to claims in the cited paper, but I imagine it's because far fewer overhead instructions like push, pop, and exchange need to be done in the register machine, and because the register machine can reuse operands easily if they are still lying around in the register file, without needing load or push ops. Of course, it's all just memory really; they are virtual registers. A: A register machine uses a fixed number of registers or buckets for storing intermediate values for computation. For example the "add" instruction could add the values in two specific registers and store the result in another register. A stack based machine uses a stack for storing intermediate values during computation. For example, to add two numbers the "add" instruction pops off two values from the stack, adds them, and pushes the result back onto the stack. A: 1) Am I right in thinking that what I implemented would be considered a "stack-based machine" given the terminology used in the quotation above? Not really. A stack of some sort is pretty much the only way to implement recursive function calls. But a "stack-based machine" goes much further in doing everything via the stack. Not just function calls, but also arithmetic operations. In a way, they behave as if every machine instruction is a function call handled via the stack. It makes for a very simple machine design, but rather hard-to-write assembler/machine code. 2) If my assumption in point (1) was right, how does a "register machine" work? i.e. how is it different from a stack-based machine? A register machine has some fast internal storage (registers) and performs most of its operations on data in these registers. There are additional machine instructions for copying data between registers and main memory. IIRC there are two kinds of stack machines: Accumulator machines have an "accumulator", which is basically a single register that holds the result of calculations (and may also supply an operand), with most machine instructions operating on the accumulator. "Pure" stack machines put the result of calculations on top of the stack after consuming the operands. A: A register machine is an abstract machine whose opcodes are defined by reference to their operation on a set of named registers, rather than by their operation on the top portion of a stack. In a register machine: add could be defined to take three register names as operands, add the contents of the first two, and place the result in the third. (More common is the design where only one or two are named and the result always goes in a special accumulator register, but that's not the point.) In a stack machine: add could be defined to pop two operands from the stack, add them, and push the result onto the stack. A: Did your compiler generate machine code? If so, then its target was a register machine (nearly all CPU designs are register machines). Stack machines store all values on a stack, whereas register machines have a fixed number of storage slots whose "addresses" do not change (unlike stack machines).
what exactly is a "register machine"?
From http://code.google.com/p/unladen-swallow/wiki/ProjectPlan I quote: "Using a JIT will also allow us to move Python from a stack-based machine to a register machine, which has been shown to improve performance in other similar languages (Ierusalimschy et al, 2005; Shi et al, 2005)." In college I built a simple compiler for a language with recursive procedures - which maintained stack frames for each procedure called - so that they can be called recursively and so that parameters and return values would work.... 2 things: 1) Am I right in thinking that what I implemented would be considered a "stack-based machine" given the terminology used in the quotation above? 2) If my assumption in point (1) was right, how does a "register machine" work? i.e. how is it different from a stack-based machine? Thanks!
[ "A register machine is a hardware or software unit that when working with data takes it from memory, puts it in a location where it can work with it quickly, and then returns the result.\nFor example a regular CPU is a register machine. Since the ALU (the unit that works with numbers in a CPU) can only work with numbers in a register.\nA stack based machine adds the data onto a stack and then either pops or pushes stuff onto it.\nFor example, adding two numbers would be\nPush 2 // Push 2 onto the stack\nPush 3 // Push 3 onto the stack\nAdd // Add the top two things on the stack.\n\nWhen in a register machine it would be something like this.\nLoad x, r0 // Load x onto register 0\nLoad y, r1 // Load y onto register 1\nAdd r0, r1, r2 // Add 1 and 2 and store the result in register 2\n\n", "A register machine almost always has a stack, also.\nBut a stack machine rarely has architecturally visible registers, or it may only have one or two.\nA register machine may have some stack ops and may even have a stack addressing mode.\nThe difference is one of orientation. The register machine will mostly have instructions that operate on registers, and will have a handful of ops for loading and storing between the registers and the stack or memory.\nA stack machine .. and these are very rare as actual hardware devices .. will operate directly on the stack with its instructions and wll have a handlful of ops for loading and storing between the stack and memory.\nNow, the reasons that hardware register machines are faster than hardware stack machines are possibly unrelated to the reasons that software \"register\" VM's are faster, according to the cited paper, than software \"stack\" machines.\nFor the software VM's, it's apparently the case that fewer instructions need to be executed. This was determined empirically according to claims in the cited paper, but I imagine it's because far fewer overhead instructions like push, pop, and exchange need to be done in the register machine, and because the register machine can reuse operands easily if they are still lying around in the register file, without needing load or push ops. Of course, it's all just memory really; they are virtual registers.\n", "A register machine uses a fixed number of registers or buckets for storing intermediate values for computation. For example the \"add\" instruction could add the values in two specific registers and store the result in another register.\nA stack based machine uses a stack for storing intermediate values during computation. For example, to add two numbers the \"add\" instructions pops off two values from the stack, adds them, and pushes the result back onto the stack.\n", "\n1) Am I right in thinking that what I\n implemented would be considered a\n \"stack-based machine\" given the\n terminology used in the quotation\n above?\n\nNot really. A stack of some sort is pretty much the only way to implement recursive function calls. But a \"stack-based machine\" goes much further in doing everything via the stack. Not just function calls, but also arithmetic operations. In a way, they behave as if every machine instruction is a function call handled via the stack. It makes for a very simple machine design, but rather hard-to-write assembler/machine code.\n\n2) If my assumption in point (1) was\n right, how does a \"register machine\"\n work? i.e. 
how is it different from a\n stack-based machine?\n\nA register machine has some fast internal storage (registers) and performs most of its operations on data in these registers. There are additional machine instructions for copying data between registers and main memory.\nIIRC there are two kinds of stack machines:\n\nAccumulator machines have an \"accumulator\", which is basically a single register that holds the result of calculations (and may also supply an operand), with most machine instructions operating on the accumulator.\n\"Pure\" stack machines put the result of calculations on top of the stack after consuming the operands.\n\n", "A register machine is an abstract machine whose opcodes are defined by reference to their operation on a set of named registers, rather than by their operation on the top portion of a stack.\nIn a register machine: add could be defined to take three register names as operands, add the contents of the first two, and place the result in the third. (More common is the design where only one or two are named and the result always goes in a special accumulator register, but that's not the point.)\nIn a stack machine: add could be defined to pop two operands from the stack, add them, and push the result onto the stack.\n", "Did your compiler generate machine code? If so, then its target was a register machine (nearly all CPU designs are register machines).\nStack machines store all values on a stack, whereas register machines have a fixed number of storage slots whose \"addresses\" do not change (unlike stack machines).\n" ]
[ 24, 11, 5, 4, 2, 2 ]
[]
[]
[ "language_design", "language_implementation", "language_theory", "python" ]
stackoverflow_0001622530_language_design_language_implementation_language_theory_python.txt
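The two instruction styles discussed above can be made concrete with a pair of toy interpreters. Everything here — the opcode names, the tuple encoding, the register count — is invented purely for illustration; it mirrors the Push 2 / Push 3 / Add and Load/Add examples from the first answer:

def run_stack(program):
    # Every operand lives on the stack; ADD pops two and pushes the sum.
    stack = []
    for op, *args in program:
        if op == 'PUSH':
            stack.append(args[0])
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

def run_register(program, nregs=4):
    # Operands live in named slots; ADD reads two registers, writes a third.
    regs = [0] * nregs
    for op, *args in program:
        if op == 'LOAD':                  # LOAD value, rdest
            regs[args[1]] = args[0]
        elif op == 'ADD':                 # ADD rsrc1, rsrc2, rdest
            regs[args[2]] = regs[args[0]] + regs[args[1]]
    return regs

print(run_stack([('PUSH', 2), ('PUSH', 3), ('ADD',)]))              # 5
print(run_register([('LOAD', 2, 0), ('LOAD', 3, 1),
                    ('ADD', 0, 1, 2)])[2])                          # 5

Note how the register version names its operands explicitly — which is exactly what lets a register VM reuse a value already sitting in a register instead of re-pushing it.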
Q: Overriding set methods in Python I want to create a custom set that will automatically convert objects into a different form for storage in the set (see Using a Python Dictionary as a key non-nested) for background. If I override add, remove, __contains__, __str__, update, __iter__, will that be sufficient to make the other operations behave properly, or do I need to override anything else? A: Working from collections's abstract classes, as @kaizer.se suggests, is the appropriate solution in 2.6 (not sure why you want to call super -- what functionality are you trying to delegate that can't best be done by containment rather than inheritance?!). It's true that you don't get update -- by providing the abstract methods, you do get __le__, __lt__, __eq__, __ne__, __gt__, __ge__, __and__, __or__ __sub__, __xor__, and isdisjoint (from collections.Set) plus clear, pop, remove, __ior__, __iand__, __ixor__, and __isub__ (from collections.MutableSet), which is far more than you'd get from subclassing set (where you'd have to override every method of interest). You'll just have to provide other set methods you desire. Note that the abstract base classes like collections.Set are a pretty different beast from concrete classes, including builtins such as set and (in 2.6) good old sets.Set, deprecated but still around (removed in Python 3). ABCs are meant to inherit from (and can then synthesize some methods for you once you implement all the abstract methods, as you must) and secondarily to "register" classes with so they look as if they inherited from them even when they don't (to make isinstance more useable and useful). Here's a working example for Python 3.1 and 2.6 (no good reason to use 3.0, as 3.1 only has advantages over it, no disadvantage): import collections class LowercasingSet(collections.MutableSet): def __init__(self, initvalue=()): self._theset = set() for x in initvalue: self.add(x) def add(self, item): self._theset.add(item.lower()) def discard(self, item): self._theset.discard(item.lower()) def __iter__(self): return iter(self._theset) def __len__(self): return len(self._theset) def __contains__(self, item): try: return item.lower() in self._theset except AttributeError: return False A: In Python 2.6: import collections print collections.MutableSet.__abstractmethods__ # prints: # frozenset(['discard', 'add', '__iter__', '__len__', '__contains__']) subclass collections.MutableSet and override the methods in the list above. the update method itself is very easy, given that the bare minimum above is implemented def update(self, iterable): for x in iterable: self.add(x)
Overriding set methods in Python
I want to create a custom set that will automatically convert objects into a different form for storage in the set (see Using a Python Dictionary as a key non-nested) for background. If I override add, remove, __contains__, __str__, update, __iter__, will that be sufficient to make the other operations behave properly, or do I need to override anything else?
[ "Working from collections's abstract classes, as @kaizer.se suggests, is the appropriate solution in 2.6 (not sure why you want to call super -- what functionality are you trying to delegate that can't best done by containment rather than inheritance?!).\nIt's true that you don't get update -- by providing the abstract methods, you do get __le__, __lt__, __eq__, __ne__, __gt__, __ge__, __and__, __or__ __sub__, __xor__, and isdisjoint (from collections.Set) plus clear, pop, remove, __ior__, __iand__, __ixor__, and __isub__ (from collections.MutableSet), which is far more than you'd get from subclassing set (where you'd have to override every method of interest). You'll just have to provide other set methods you desire.\nNote that the abstract base classes like collections.Set are a pretty different beast from concrete classes, including builtins such as set and (in 2.6) good old sets.Set, deprecated but still around (removed in Python 3). ABCs are meant to inherit from (and can then synthesize some methods from you once you implement all the abstract methods, as you must) and secondarily to \"register\" classes with so they look as if they inherited from them even when they don't (to make isinstance more useable and useful).\nHere's a working example for Python 3.1 and 2.6 (no good reason to use 3.0, as 3.1 only has advantages over it, no disadvantage):\nimport collections\n\nclass LowercasingSet(collections.MutableSet):\n def __init__(self, initvalue=()):\n self._theset = set()\n for x in initvalue: self.add(x)\n def add(self, item):\n self._theset.add(item.lower())\n def discard(self, item):\n self._theset.discard(item.lower())\n def __iter__(self):\n return iter(self._theset)\n def __len__(self):\n return len(self._theset)\n def __contains__(self, item):\n try:\n return item.lower() in self._theset\n except AttributeError:\n return False\n\n", "In Python 2.6:\nimport collections\nprint collections.MutableSet.__abstractmethods__\n# prints:\n# frozenset(['discard', 'add', '__iter__', '__len__', '__contains__'])\n\nsubclass collections.MutableSet and override the methods in the list above.\nthe update method itself is very easy, given that the bare minimum above is implemented\ndef update(self, iterable):\n for x in iterable:\n self.add(x)\n\n" ]
[ 9, 1 ]
[]
[]
[ "python", "python_3.x", "set" ]
stackoverflow_0001622722_python_python_3.x_set.txt
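Assuming the LowercasingSet class from the accepted answer above is defined, the operators synthesized by collections.MutableSet come along for free — a quick usage sketch (note that from Python 3.3 on, the ABC lives at collections.abc.MutableSet):

s = LowercasingSet(['Hello', 'WORLD'])
t = LowercasingSet(['world', 'python'])

print('HELLO' in s)       # True -- __contains__ lowercases the probe too
print(sorted(s | t))      # ['hello', 'python', 'world'] (__or__, synthesized)
print(sorted(s & t))      # ['world']                    (__and__, synthesized)
s -= t                    # __isub__, also synthesized by MutableSet
print(sorted(s))          # ['hello']

The union and intersection even return LowercasingSet instances, because the ABC's default _from_iterable builds new sets by calling the subclass constructor.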
Q: Execute and monitor multiple instances of external program in Python Main program is like this: PREPARE PARAMETERS FOR CHILD PROCESSES subprocess.Popen('python child.py param=example1'.split(' ')) subprocess.Popen('python child.py param=example2'.split(' ')) ... How do I make the main program monitor each instance of the child process it launched and restart it with its corresponding parameters if it's not running? The purpose of keeping multiple instances of the child process running, instead of implementing a multi-threaded architecture within the main process, is to utilize as much CPU and database throughput as possible. A: Keep a dict with the .pids of the child processes as keys, and the commandlines to restart them as corresponding values. i.e.: childid = {} for cmdline in cmdlines: p = subprocess.Popen(cmdline.split()) childid[p.pid] = cmdline os.wait will return whenever any child process terminates: it gives you (pid, exitstatus) of the child. So just restart appropriately and maintain childid. i.e.: while mustcontinue: pid, exitstat = os.wait() cmdline = childid.pop(pid) p = subprocess.Popen(cmdline.split()) childid[p.pid] = cmdline Presumably you have some criteria for when this infinite loop ends, I just used mustcontinue as the name for those criteria here;-).
Execute and monitor multiple instances of external program in Python
Main program is like this: PREPARE PARAMETERS FOR CHILD PROCESSES subprocess.Popen('python child.py param=example1'.split(' ')) subprocess.Popen('python child.py param=example2'.split(' ')) ... How do I make the main program monitor each instance of the child process it launched and restart it with its corresponding parameters if it's not running? The purpose of keeping multiple instances of the child process running, instead of implementing a multi-threaded architecture within the main process, is to utilize as much CPU and database throughput as possible.
[ "Keep a dict with the .pids of the child processes as keys, and the commandlines to restart them as corresponding values. i.e.:\nchildid = []\nfor cmdline in cmdlines:\n p = subprocess.Popen(cmdline.split())\n childid[p.pid] = cmdline\n\nos.wait will return whenever any child process terminates: it gives you (pid, exitstatus) of the child. So just restart appropriately and maintain childid. i.e.:\nwhile mustcontinue:\n pid, exitstat = os.wait()\n cmdline = childid.pop(pid)\n p = subprocess.Popen(cmdline.split())\n childid[p.pid] = cmdline\n\nPresumably you have some criteria for when this infinite loop ends, I just used mustcontinue as the name for those criteria here;-).\n" ]
[ 5 ]
[]
[]
[ "external_process", "python" ]
stackoverflow_0001623192_external_process_python.txt
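Putting the answer's two fragments above together into one runnable sketch, with the imports and a concrete stop condition added (the restart cap is an invented placeholder; os.wait makes this POSIX-only):

import os, subprocess

cmdlines = ['python child.py param=example1',
            'python child.py param=example2']

# pid -> command line, so a dead child can be relaunched with its own params
children = {}
for cmdline in cmdlines:
    p = subprocess.Popen(cmdline.split())
    children[p.pid] = cmdline

restarts = 0
MAX_RESTARTS = 100            # invented stop condition for the sketch
while children and restarts < MAX_RESTARTS:
    pid, exitstatus = os.wait()        # blocks until some child exits
    cmdline = children.pop(pid)
    p = subprocess.Popen(cmdline.split())
    children[p.pid] = cmdline
    restarts += 1

Because os.wait reports whichever child died first, the parent never polls: it sleeps in the kernel until there is actually something to restart.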
Q: Adventure game - walking around inside a room I'm working on an adventure game in Python using Pygame. My main problem is how I am going to define the boundaries of the room and make the main character walk around without hitting a boundary every time. Sadly, I have never studied algorithms so I have no clue on how to calculate a path. I know this question is quite general and hard to answer but a point in the right direction would be very appreciated. Thanks! A: There are two easy ways of defining your boundaries which are appropriate for such a game. The simpler method is to divide your area into a grid, and use a 2D array to keep track of which squares in the grid are passable. Usually, this array stores your map information too, so in each position, there is a number that indicates whether that square contains grass or wall or road or mountain etc. (and therefore what picture to display). To give you a rough picture: ###### #.# # # ## # # # ###### A more complex method which is necessary if you want a sort of "maze" look, with thin walls, is to use a 2D array that indicates whether there is a vertical wall in between grid squares, and also whether there is a horizontal wall between grid squares. A rough picture (it looks a bit stretched in ASCII but hopefully you'll get the point): - - - - | | | - - | | - - - - The next thing to decide is what directions your character may move in (up/down/left/right is easiest, but diagonals are not too much harder). Then the program basically has to "mentally" explore the area, starting from your current position, hoping to come across the destination. A simple search that is easy to implement for up/down/left/right and will find you the shortest path, if there is one, is called Breadth-First search. Here is some pseudocode: queue = new Queue #just a simple first-in-first-out queue.push(startNode) while not queue.empty(): exploreNode = queue.pop() if isWalkable(exploreNode): #this doesn't work if you use #"thin walls". The check must go #where the pushes are instead if isTarget(exploreNode): #success!!! else: #push all neighbours queue.push( exploreNode.up ) queue.push( exploreNode.down ) queue.push( exploreNode.left ) queue.push( exploreNode.right ) This algorithm is slow for large maps, but it will get you used to some graph-search and pathfinding concepts. Once you've verified that it works properly, you can try replacing it with A* or something similar, which should give the same results in less time! A* and many other searching algorithms use a priority queue instead of a FIFO queue. This lets them consider "more likely" paths first, but get around to the roundabout paths if it turns out that the more direct paths are blocked. A: Sadly, I have never studied algorithms so I have no clue on how to calculate a path. Before you start writing games, you should educate yourself on those. This takes a little more effort at the beginning, but will save you much time later. A: I recommend you read up on the A* search algorithm as it is commonly used in games for pathing problems. If this game is two dimensional (or 2.5) I suggest you use a tile system as checking for collisions will be easier. There's lots of information online that can get you started with these. A: I am not familiar with pygame, but many applications commonly use bounding volumes to define the edge of some region. The idea is that as your character walks, you will check if the characters volume intersects with the volume of a wall. You can then either adjust the velocity or stop your character from moving. Use differing shapes to get a smooth wall so that your character doesn't get stuck on the pointy edges. These concepts can be used for any application which requires quick edge and bounds detection.
Adventure game - walking around inside a room
I'm working on an adventure game in Python using Pygame. My main problem is how I am going to define the boundaries of the room and make the main character walk around without hitting a boundary every time. Sadly, I have never studied algorithms so I have no clue on how to calculate a path. I know this question is quite general and hard to answer but a point in the right direction would be very appreciated. Thanks!
[ "There are two easy ways of defining your boundaries which are appropriate for such a game.\nThe simpler method is to divide your area into a grid, and use a 2D array to keep track of which squares in the grid are passable. Usually, this array stores your map information too, so in each position, there is a number that indicates whether that square contains grass or wall or road or mountain etc. (and therefore what picture to display). To give you a rough picture:\n######\n#.# #\n# ## #\n# #\n######\n\nA more complex method which is necessary if you want a sort of \"maze\" look, with thin walls, is to use a 2D array that indicates whether there is a vertical wall in between grid squares, and also whether there is a horizontal wall between grid squares. A rough picture (it looks a stretched in ASCII but hopefully you'll get the point):\n - - - -\n| | |\n - - \n| |\n - - - -\n\nThe next thing to decide is what directions your character may move in (up/down/left/right is easiest, but diagonals are not too much harder). Then the program basically has to \"mentally\" explore the area, starting from your current position, hoping to come across the destination.\nA simple search that is easy to implement for up/down/left/right and will find you the shortest path, if there is one, is called Breadth-First search. Here is some pseudocode:\nqueue = new Queue #just a simple first-in-first-out\nqueue.push(startNode)\nwhile not queue.empty():\n exploreNode = queue.pop()\n if isWalkable(exploreNode): #this doesn't work if you use\n #\"thin walls\". The check must go\n #where the pushes are instead\n\n if isTarget(exploreNode):\n #success!!!\n else:\n #push all neighbours\n queue.push( exploreNode.up )\n queue.push( exploreNode.down )\n queue.push( exploreNode.left )\n queue.push( exploreNode.right )\n\nThis algorithm is slow for large maps, but it will get you used to some graph-search and pathfinding concepts. Once you've verified that it works properly, you can try replacing it with A* or something similar, which should give the same results in less time!\nA* and many other searching algorithms use a priority queue instead of a FIFO queue. This lets them consider \"more likely\" paths first, but get around to the roundabout paths if it turns out that the more direct paths are blocked.\n", "\nSadly, I have never studied algorithms so I have no clue on how to calculate a path.\n\nBefore you start writing games, you should educate yourself on those. This takes a little more effort at the beginning, but will save you much time later. \n", "I recommend you read up on the A* search algorithm as it is commonly used in games for pathing problems. \nIf this game is two dimensional (or 2.5) I suggest you use a tile system as checking for collisions will be easier. Theres lots of information online that can get you started with these.\n", "I am not familiar with pygame, but many applications commonly use bounding volumes to define the edge of some region. The idea is that as your character walks, you will check if the characters volume intersects with the volume of a wall. You can then either adjust the velocity or stop your character from moving. Use differing shapes to get a smooth wall so that your character doesn't get stuck on the pointy edges.\nThese concepts can be used for any application which requires quick edge and bounds detection.\n" ]
[ 4, 3, 3, 1 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0001623331_pygame_python.txt
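The BFS pseudocode in the first answer above, made runnable for the '#'-walled grid representation it sketches. The grid, start, and target below are illustrative; the function returns the shortest up/down/left/right path length, or -1 if the target is unreachable:

from collections import deque

def bfs(grid, start, target):
    # Anything that is not '#' counts as walkable in this toy map.
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == target:
            return dist
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))          # mark on enqueue, not on dequeue
                queue.append(((nr, nc), dist + 1))
    return -1

grid = ["######",
        "#.#  #",
        "# ## #",
        "#    #",
        "######"]
print(bfs(grid, (1, 1), (1, 4)))   # 7

Because the queue is FIFO, nodes are explored in order of increasing distance, which is exactly why the first time BFS reaches the target is also the shortest route.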
Q: Does using Psyco with django make any sense? I know the benefits of Psyco for a Desktop app, but in a Web app where a process ( = a web page or an AJAX call) dies immediately after being fired, isn't it pointless? A: You should be using fastcgi or wsgi with django, so the process won't be starting up for each request. You really need to write your code to be psyco friendly if you want decent gains, and you will not benefit if your bottleneck is the database. A: First, as gribbler and Ibrahim mentioned, your process won't die unless you are using pure CGI... which you shouldn't be using. Secondly, the bottleneck in most web apps is database queries, for which Psyco won't help. If you happen to have some logic that is computationally intensive it can certainly make sense to use Psyco or Cython. In fact I read a report somewhere (sorry it's been a while so can't find a link now) by someone who was doing some complex calculations and had great results compiling their entire views.py with Cython. A: This guy got a performance increase out of it: http://www.alrond.com/en/2007/jan/25/performance-test-of-6-leading-frameworks/ It's a little bit outdated though.
Does using Psyco with django make any sense?
I know the benefits of Psyco for a Desktop app, but in a Web app where a process ( = a web page or an AJAX call) dies immediately after being fired, isn't it pointless?
[ "You should be using fastcgi or wsgi with django, so the process won't be starting up for each request.\nYou really need to write your code to be psyco friendly if you want decent gains, and you will not benefit if your bottleneck is the database.\n", "First, as gribbler and Ibrahim mentioned, your process won't die unless you are using pure CGI... which you shouldn't be using.\nSecondly, the bottleneck in most web apps are database queries, for which Psyco won't help.\nIf you happen to have a some logic that is computationally intensive it can certainly make sense to use Psyco or Cython. In fact I read a report somewhere (sorry it's been a while so can't find a link now) by someone who was doing some complex calculations and had great results compiling their entire views.py with Cython.\n", "This guy got a performance increase out of it:\nhttp://www.alrond.com/en/2007/jan/25/performance-test-of-6-leading-frameworks/\nIt's a little bit outdated though.\n" ]
[ 4, 4, 4 ]
[]
[]
[ "django", "psyco", "python" ]
stackoverflow_0001623538_django_psyco_python.txt
Q: Escaping '<' and '>' in xml when using xml.dom.minidom I am stuck while escaping "<" and ">" in the xml file using xml.dom.minidom. I tried to get the unicode hex value and use that instead http://slayeroffice.com/tools/unicode_lookup/ Tried to use the standard "<" and ">" but still with no success. from xml.dom.minidom import Document doc = Document() e = doc.createElement("abc") s1 = '<hello>bhaskar</hello>' text = doc.createTextNode(s1) e.appendChild(text) e.toxml() '<abc>&lt;hello&gt;bhaskar&lt;/hello&gt;</abc>' same result with writexml() Also tried by specifying encoding 'UTF-8', 'utf-8', 'utf' in the toxml() writexml() calls but with same results. from xml.dom.minidom import Document doc = Document() e = doc.createElement("abc") s1 = u'&lt;hello&gt;bhaskar&lt;/hello&gt;' text = doc.createTextNode(s1) e.appendChild(text) e.toxml() u'<abc>&amp;lt;hello&amp;gt;bhaskar&amp;lt;/hello&amp;gt;</abc>' Tried other ways but with same results. The only way I could work around it is by overriding the writer import xml.dom.minidom as md # XXX Hack to handle '<' and '>' def wd(writer, data): data = data.replace("&lt;", "<").replace("&gt;", ">") writer.write(data) md._write_data = wd Edit - This is the code. import xml.dom.minidom as md doc = md.Document() entity_descr = doc.createElement("EntityDescriptor") doc.appendChild(entity_descr) entity_descr.setAttribute('xmlns', 'urn:oasis:names:tc:SAML:2.0:metadata') entity_descr.setAttribute('xmlns:saml', 'urn:oasis:names:tc:SAML:2.0:assertion') entity_descr.setAttribute('xmlns:ds', 'http://www.w3.org/2000/09/xmldsig#') # Get the entity_id from saml20_idp_settings entity_descr.setAttribute('entityID', self.group['entity_id']) idpssodescr = doc.createElement('IDPSSODescriptor') idpssodescr.setAttribute('WantAuthnRequestsSigned', 'true') idpssodescr.setAttribute('protocolSupportEnumeration', 'urn:oasis:names:tc:SAML:2.0:protocol') entity_descr.appendChild(idpssodescr) keydescr = doc.createElement('KeyDescriptor') keydescr.setAttribute('use', 'signing') idpssodescr.appendChild(keydescr) keyinfo = doc.createElement('ds:KeyInfo') keyinfo.setAttribute('xmlns:ds', 'http://www.w3.org/2000/09/xmldsig#') keydescr.appendChild(keyinfo) x509data = doc.createElement('ds:X509Data') keyinfo.appendChild(x509data) # check this part s = "this is a cert blah blah" x509cert = doc.createElement('ds:X509Certificate') cert = doc.createTextNode(s) x509cert.appendChild(cert) x509data.appendChild(x509cert) sso = doc.createElement('SingleSignOnService') sso.setAttribute('Binding', 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect') sso.setAttribute('Location', 'http://googleapps/singleSignOn') idpssodescr.appendChild(sso) # Write the metadata file. fobj = open('metadata.xml', 'w') doc.writexml(fobj, " ", "", "\n", "UTF-8") fobj.close() This produces <?xml version="1.0" encoding="UTF-8"?> <EntityDescriptor entityID="skar" xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"> <IDPSSODescriptor WantAuthnRequestsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> <KeyDescriptor use="signing"> <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:X509Data> <ds:X509Certificate> this is a cert blah blah </ds:X509Certificate> </ds:X509Data> </ds:KeyInfo> </KeyDescriptor> <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http:///singleSignOn"/> </IDPSSODescriptor> </EntityDescriptor> Note the "This is a cert" comes separately. I have broken my head over this but with the same results. A: This is not a bug, it is a feature. To insert actual XML, insert DOM objects instead. Text inside an XML tag needs to be entity escaped though to be valid XML. from xml.dom.minidom import Document doc = Document() e = doc.createElement("abc") eh = doc.createElement("hello") s1 = 'bhaskar' text = doc.createTextNode(s1) eh.appendChild(text) e.appendChild(eh) e.toxml() EDIT: I don't know what Python's API is like, but it looks very similar to C#'s, so you might be able to do something like e.innerXml = s1 to do what you're trying to do... but that could be bad. The better thing to do is parse it and appendChild it as well. EDIT 2: I just ran this via Python locally, and there's definitely something wrong on your end, not in the libraries. Make sure that your string doesn't have any newlines or whitespace at the start of it. For reference, the test code I used was: Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from xml.dom.minidom import Document >>> cert = "---- START CERTIFICATE ----\n Hello world\n---- END CERTIFICATE ---" >>> doc = Document() >>> e = doc.createElement("cert") >>> certEl = doc.createTextNode(cert) >>> e.appendChild(certEl) <DOM Text node "'---- START'..."> >>> print e.toxml() <cert>---- START CERTIFICATE ---- Hello world ---- END CERTIFICATE ---</cert> >>> EDIT 3: The final edit. The problem is in your writexml call. Simply using the following fixes this: doc.writexml(fobj) # or doc.writexml(fobj, "", " ", "") Unfortunately, it seems that you won't be able to use the newline parameter to get pretty printing though... it seems that the Python library (or at least minidom) is written rather poorly and will modify TextNode's while printing them. Not so much a poor implementation as a naive one. A shame really... A: If you use "<" as text in XML, you need to escape it, else it is considered markup. So xml.dom is right in escaping it, since you've asked for a text node. Assuming you really want to insert a piece of XML, I recommend to use createElement("hello"). If you have a fragment of XML that you don't know the structure of, you should first parse it, and then move the nodes of that parse result into the other tree. If you want to hack, you can inherit from xml.dom.minidom.Text, and overwrite the writexml method. See the source of minidom for details.
Escaping '<' and '>' in xml when using xml.dom.minidom
I am stuck while escaping "<" and ">" in the xml file using xml.dom.minidom. I tried to get the unicode hex value and use that instead http://slayeroffice.com/tools/unicode_lookup/ Tried to use the standard "<" and ">" but still with no success. from xml.dom.minidom import Document doc = Document() e = doc.createElement("abc") s1 = '<hello>bhaskar</hello>' text = doc.createTextNode(s1) e.appendChild(text) e.toxml() '<abc>&lt;hello&gt;bhaskar&lt;/hello&gt;</abc>' same result with writexml() Also tried by specifying encoding 'UTF-8', 'utf-8', 'utf' in the toxml() writexml() calls but with same results. from xml.dom.minidom import Document doc = Document() e = doc.createElement("abc") s1 = u'&lt;hello&gt;bhaskar&lt;/hello&gt;' text = doc.createTextNode(s1) e.appendChild(text) e.toxml() u'<abc>&amp;lt;hello&amp;gt;bhaskar&amp;lt;/hello&amp;gt;</abc>' Tried other ways but with same results. The only way I could work around it is by overriding the writer import xml.dom.minidom as md # XXX Hack to handle '<' and '>' def wd(writer, data): data = data.replace("&lt;", "<").replace("&gt;", ">") writer.write(data) md._write_data = wd Edit - This is the code. import xml.dom.minidom as md doc = md.Document() entity_descr = doc.createElement("EntityDescriptor") doc.appendChild(entity_descr) entity_descr.setAttribute('xmlns', 'urn:oasis:names:tc:SAML:2.0:metadata') entity_descr.setAttribute('xmlns:saml', 'urn:oasis:names:tc:SAML:2.0:assertion') entity_descr.setAttribute('xmlns:ds', 'http://www.w3.org/2000/09/xmldsig#') # Get the entity_id from saml20_idp_settings entity_descr.setAttribute('entityID', self.group['entity_id']) idpssodescr = doc.createElement('IDPSSODescriptor') idpssodescr.setAttribute('WantAuthnRequestsSigned', 'true') idpssodescr.setAttribute('protocolSupportEnumeration', 'urn:oasis:names:tc:SAML:2.0:protocol') entity_descr.appendChild(idpssodescr) keydescr = doc.createElement('KeyDescriptor') keydescr.setAttribute('use', 'signing') idpssodescr.appendChild(keydescr) keyinfo = doc.createElement('ds:KeyInfo') keyinfo.setAttribute('xmlns:ds', 'http://www.w3.org/2000/09/xmldsig#') keydescr.appendChild(keyinfo) x509data = doc.createElement('ds:X509Data') keyinfo.appendChild(x509data) # check this part s = "this is a cert blah blah" x509cert = doc.createElement('ds:X509Certificate') cert = doc.createTextNode(s) x509cert.appendChild(cert) x509data.appendChild(x509cert) sso = doc.createElement('SingleSignOnService') sso.setAttribute('Binding', 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect') sso.setAttribute('Location', 'http://googleapps/singleSignOn') idpssodescr.appendChild(sso) # Write the metadata file. fobj = open('metadata.xml', 'w') doc.writexml(fobj, " ", "", "\n", "UTF-8") fobj.close() This produces <?xml version="1.0" encoding="UTF-8"?> <EntityDescriptor entityID="skar" xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"> <IDPSSODescriptor WantAuthnRequestsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> <KeyDescriptor use="signing"> <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:X509Data> <ds:X509Certificate> this is a cert blah blah </ds:X509Certificate> </ds:X509Data> </ds:KeyInfo> </KeyDescriptor> <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http:///singleSignOn"/> </IDPSSODescriptor> </EntityDescriptor> Note the "This is a cert" comes separately. I have broken my head over this but with the same results.
[ "This is not a bug, it is a feature. To insert actual XML, insert DOM objects instead. Text inside an XML tag needs to be entity escaped though to be valid XML.\nfrom xml.dom.minidom import Document\ndoc = Document()\ne = doc.createElement(\"abc\")\neh = doc.createElement(\"hello\")\ns1 = 'bhaskar'\ntext = doc.createTextNode(s1)\n\neh.appendChild(text)\ne.appendChild(eh)\n\ne.toxml()\n\nEDIT: I don't know what Python's API is like, but it looks very similar to C#'s, so you might be able to do something like e.innerXml = s1 to do what you're trying to do... but that could be bad. The better thing to do is parse it and appendChild it as well.\nEDIT 2: I just ran this via Python locally, and there's definitely something wrong on your end, not in the libraries. Make sure that your string doesn't have any newlines or whitespace at the start of it. For reference, the test code I used was:\nPython 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) \n[GCC 4.3.3] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from xml.dom.minidom import Document\n>>> cert = \"---- START CERTIFICATE ----\\n Hello world\\n---- END CERTIFICATE ---\"\n>>> doc = Document()\n>>> e = doc.createElement(\"cert\")\n>>> certEl = doc.createTextNode(cert)\n>>> e.appendChild(certEl)\n<DOM Text node \"'---- START'...\">\n>>> print e.toxml()\n<cert>---- START CERTIFICATE ----\n Hello world\n---- END CERTIFICATE ---</cert>\n>>> \n\nEDIT 3: The final edit. The problem is in your writexml call. Simply using the following fixes this:\ndoc.writexml(fobj)\n# or\ndoc.writexml(fobj, \"\", \" \", \"\")\n\nUnfortuanately, it seems that you won't be able to use the newline parameter to get pretty printing though... it seems that the Python library (or atleast minidom) is written rather poorly and will modify TextNode's while printing them. Not so much a poor implementation as a naive one. A shame really...\n", "If you use \"<\" as text in XML, you need to escape it, else it is considered markup. So xml.dom is right in escaping it, since you've asked for a text node.\nAssuming you really want to insert a piece of XML, I recommend to use createElement(\"hello\"). If you have a fragment of XML that you don't know the structure of, you should first parse it, and then move the nodes of that parse result into the other tree.\nIf you want to hack, you can inherit from xml.dom.minidom.Text, and overwrite the writexml method. See the source of minidom for details.\n" ]
[ 6, 3 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0001623607_python_xml.txt
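The "parse the fragment, then move its nodes" route that the second answer above recommends can be done with minidom's own parseString and importNode (both standard DOM Level 2 APIs); a short sketch:

from xml.dom.minidom import Document, parseString

doc = Document()
root = doc.createElement('abc')
doc.appendChild(root)

fragment = parseString('<hello>bhaskar</hello>')
# importNode with deep=True copies the node tree into doc's ownership
root.appendChild(doc.importNode(fragment.documentElement, True))

print(doc.documentElement.toxml())   # <abc><hello>bhaskar</hello></abc>

This keeps the text model intact — no entity escaping is needed because the fragment enters the tree as elements and text nodes, not as a string pretending to be markup.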
Q: SQLAlchemy: Shallow copy avoiding lazy loading I'm trying to automatically build a shallow copy of a SA-mapped object.. At the moment my function is just: newobj = src.__class__() for prop in class_mapper(src.__class__).iterate_properties: setattr(newobj, prop.key, getattr(src, prop.key)) but I'm having troubles with lazy relations... Obviously getattr triggers the lazy loading, but since I don't need their values right away, I'd like to just copy the "this should be lazy loaded"-state of the attribute... Is this possible? Edit: I need this for a "data logging" system.. That is, whenever someone updates a persisted entity, I have to generate a new record and then mark the old one as such. In order to do this I create a shallow copy of the entity (so SQLA issues an INSERT instead of an UPDATE) and work from there.. The system works quite nicely (it's been in production use for months) but now I'd like to enhance it so that it won't need that all the relations get lazy-loaded first.. A: What you need is to copy column properties only, which can be easily filtered using isinstance(prop, sqlalchemy.orm.ColumnProperty). Note, that you HAVE to copy externally stored relations (all many-to-many), since there is no columns corresponding to them in the main table. This can't be done with a high-level interface without lazy-loading, so I'd prefer to accept this trade-off. Many-to-many relations can be determined with isinstance(prop, RelationProperty) and prop.secondary test. The resulting code will look like the following: from sqlalchemy.orm import object_mapper, ColumnProperty, RelationProperty newobj = type(src)() for prop in object_mapper(src).iterate_properties: if (isinstance(prop, ColumnProperty) or isinstance(prop, RelationProperty) and prop.secondary): setattr(newobj, prop.key, getattr(src, prop.key)) Also note, that SQLAlchemy is designed to maintain a single object loaded for each identity, while your copy breaks this when identity (primary key) properties are copied too, but this is probably not your case if you are storing with new (versioned) identifier.
SQLAlchemy: Shallow copy avoiding lazy loading
I'm trying to automatically build a shallow copy of a SA-mapped object.. At the moment my function is just: newobj = src.__class__() for prop in class_mapper(src.__class__).iterate_properties: setattr(newobj, prop.key, getattr(src, prop.key)) but I'm having troubles with lazy relations... Obviously getattr triggers the lazy loading, but since I don't need their values right away, I'd like to just copy the "this should be lazy loaded"-state of the attribute... Is this possible? Edit: I need this for a "data logging" system.. That is, whenever someone updates a persisted entity, I have to generate a new record and then mark the old one as such. In order to do this I create a shallow copy of the entity (so SQLA issues an INSERT instead of an UPDATE) and work from there.. The system works quite nicely (it's been in production use for months) but now I'd like to enhance it so that it won't need that all the relations get lazy-loaded first..
[ "What you need is to copy column properties only, which can be easily filtered using isinstance(prop, sqlalchemy.orm.ColumnProperty). Note, that you HAVE to copy externally stored relations (all many-to-many), since there is no columns corresponding to them in the main table. This can't be done with high-level interface without lazy-loading, so I'd prefer to accept this trade-off. Many-to-many relations can be determined with isinstance(prop, RelationProperty) and prop.secondary test. The resulting code will look like the following:\nfrom sqlalchemy.orm import object_mapper, ColumnProperty, RelationProperty\n\nnewobj = type(src)()\nfor prop in object_mapper(src).iterate_properties:\n if (isinstance(prop, ColumnProperty) or\n isinstance(prop, RelationProperty) and prop.secondary):\n setattr(newobj, prop.key, getattr(src, prop.key))\n\nAlso note, that SQLAlchemy is designed to maintain single object loaded for each identity, while your copy breaks this when identity (primary key) properties are copied too, but this is probably not your case if your are storing with new (versioned) identifier.\n" ]
[ 6 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001623661_python_sqlalchemy.txt
Q: Restrict access to images on my website except through my own htmls On my website I store user pictures in a simple manner such as: "image/user_1.jpg". I don't want visitors to be able to view images on my server just by trying user_ids. (Ex: www.mydomain.com/images/user_2.jpg, www.mydomain.com/images/user_3.jpg, so on...) So far I have three solutions in mind: I tried using .htaccess to password protect the "images" folder. That helped me up to some point but some of the images started popping up a username and password request on my htmls (while amazingly some images did not) so this seems to be an unpredictable method. I can start converting my user_id's to an md5 hash with some salt. The images would be named as: /image/user_e4d909c290d0fb1ca068ffaddf22cbd0.jpg. I don't like this solution. It makes the file system way complicated. or I can user PHP's readfile() function or maybe something similar in Perl or Python. For instance I could pass a password using an md5 string to validate visitors as loggedin users with access to that image. I'm leaning towards option 3 but with a Perl or Python angle (assuming they would be faster than PHP). However I would like to see other ideas on the matter. Maybe there is a simple .htaccess trick to this? Basically all I want to make sure is that no one can view images from my website unless the images are directly called from within htmls hosted on my site. Thanks a lot, Haluk A: Any method you choose to determine the source of a request is only as reliable as the HTTP_REFERER information that is sent by the user's browser, which is not very. Requiring authentication is the only good way to protect content. A: Method #1 is not viable as it will ask for user name and password on each and every image requested. You probably got the prompt for some of the images and not for others due to caching issues. Method #2 looks the most appealing to me by being the least processor intensive, but with only the user_id passed through the md5 function the file name still quite easily guessable. You should go for md5('my secret string'.$user_id) for a better solution. Why are you picking #3 via Perl or Python? What's wrong with PHP's speed? Indeed if you're protecting your images this way you should go to the extra length of moving them out and above your webroot so they're only accessible via your script which first checks if the user is authenticated and then passes the avatar by reading it and outputting it. Alternatively, you could protect the directory with an htaccess file saying deny from all. Plus you should go for a HTTP_REFERER security either via PHP or via .htaccess. Good luck! A: You can look up "Hotlinking prevention" via htaccess and i think that should be a simple solution for the type of protection you need. However its not fool proof , people who will really want to get those images will find a work around by faking the referrer. http://altlab.com/htaccess_tutorial.html A: You are right considering option #3. Use service script that would validate user and readfile() an image. Be sure to set correct Content-Type HTTP header via header() function prior to serving an image. For better isolation images should be put above web root directory, or protected by well written .htaccess rules - there is definitely a way of protecting files and/or directories this way. A: As has been said hotlinking protection does not protect your files from listing just by altering their id. Plus Refferer can be easily faked. 
In this case I would recommend some kind of authentication. You must create a PHP script that will serve images only if it verifies the logged-in user via COOKIES or SESSION. (I wouldn't recommend using the md5 of the user password). Maybe you'll need some SQL table to save access permissions. Oh and to protect your images you can just place .htaccess with deny from all to the images folder.
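Since the asker is leaning towards option 3 with a Python angle, here is a minimal WSGI-style sketch of the same pattern the PHP suggestions describe: keep the images outside the web root and serve them only through an authenticated handler. IMAGE_DIR and authenticate() are hypothetical placeholders; this illustrates the idea rather than a drop-in implementation.

import os

IMAGE_DIR = '/srv/private/images'   # assumed location outside the web root

def serve_user_image(environ, start_response):
    user_id = authenticate(environ)    # hypothetical session/cookie check
    if user_id is None:
        start_response('403 Forbidden', [('Content-Type', 'text/plain')])
        return ['Forbidden']
    path = os.path.join(IMAGE_DIR, 'user_%d.jpg' % user_id)
    data = open(path, 'rb').read()
    start_response('200 OK', [('Content-Type', 'image/jpeg'),
                              ('Content-Length', str(len(data)))])
    return [data]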
Restrict access to images on my website except through my own htmls
On my website I store user pictures in a simple manner such as: "image/user_1.jpg". I don't want visitors to be able to view images on my server just by trying user_ids. (Ex: www.mydomain.com/images/user_2.jpg, www.mydomain.com/images/user_3.jpg, so on...) So far I have three solutions in mind: I tried using .htaccess to password protect the "images" folder. That helped me up to some point but some of the images started popping up a username and password request on my htmls (while amazingly some images did not) so this seems to be an unpredictable method. I can start converting my user_id's to an md5 hash with some salt. The images would be named as: /image/user_e4d909c290d0fb1ca068ffaddf22cbd0.jpg. I don't like this solution. It makes the file system way too complicated. or I can use PHP's readfile() function or maybe something similar in Perl or Python. For instance I could pass a password using an md5 string to validate visitors as logged-in users with access to that image. I'm leaning towards option 3 but with a Perl or Python angle (assuming they would be faster than PHP). However I would like to see other ideas on the matter. Maybe there is a simple .htaccess trick for this? Basically all I want to make sure is that no one can view images from my website unless the images are directly called from within htmls hosted on my site. Thanks a lot, Haluk
[ "Any method you choose to determine the source of a request is only as reliable as the HTTP_REFERER information that is sent by the user's browser, which is not very. Requiring authentication is the only good way to protect content. \n", "Method #1 is not viable as it will ask for user name and password on each and every image requested. You probably got the prompt for some of the images and not for others due to caching issues.\nMethod #2 looks the most appealing to me by being the least processor intensive, but with only the user_id passed through the md5 function the file name still quite easily guessable. You should go for md5('my secret string'.$user_id) for a better solution.\nWhy are you picking #3 via Perl or Python? What's wrong with PHP's speed? Indeed if you're protecting your images this way you should go to the extra length of moving them out and above your webroot so they're only accessible via your script which first checks if the user is authenticated and then passes the avatar by reading it and outputting it. Alternatively, you could protect the directory with an htaccess file saying deny from all.\nPlus you should go for a HTTP_REFERER security either via PHP or via .htaccess.\nGood luck!\n", "You can look up \"Hotlinking prevention\" via htaccess and i think that should be a simple solution for the type of protection you need. However its not fool proof , people who will really want to get those images will find a work around by faking the referrer.\nhttp://altlab.com/htaccess_tutorial.html\n", "You are right considering option #3. Use service script that would validate user and readfile() an image. Be sure to set correct Content-Type HTTP header via header() function prior to serving an image. For better isolation images should be put above web root directory, or protected by well written .htaccess rules - there is definitely a way of protecting files and/or directories this way.\n", "As has been said hotlinking protection does not protect your files from listing just by altering their id. Plus Refferer can be easily faked.\nIn this case I would recommend some kind of authentication. You must create PHP script that will serve images only if it verify logged user via COOKIES or SESSION. (I wouldn't recommend using md5 of user password).\nMaybe you'll need some SQL table to save access permissions.\nOh and to protect your images you can just place .htaccess with\ndeny from all\n\nto the images folder.\n" ]
[ 6, 5, 3, 2, 0 ]
[]
[]
[ "linux", "perl", "php", "python" ]
stackoverflow_0001623311_linux_perl_php_python.txt
Q: Python - RegExp - Modify text files Newbie to Python.... help requested with the following task :-) I have tree of various files, some of them are C source code. I would like to modify these C files with python script. The C code has 4 defines - #define ZR_LOG0(Id, Class, Seveity, Format) #define ZR_LOG1(Id, Class, Seveity, Format, Attr0) #define ZR_LOG2(Id, Class, Seveity, Format, Attr0, Attr1) #define ZR_LOG3(Id, Class, Seveity, Format, Attr0, Attr1, Attr2) there are various ZR_LOGn lines spread throughout the C source code. Example: ZR_LOG1 (1, LOG_CLASS_3, LOG_INFO, "hello world %d", 76); White spaces (spaces, tabs) may appear anywhere in the line (between the fields). The python script task is as follow: Replace any 'Id' field (which is an integer type that we don't care about its original value) with sequential counter. (The first 'LOG'... line we'll encounter the 'Id' field will get the value 0, the next one 1, and so on) In a separate output file, for each ZR_LOG line, we'll create an index line in the format { NewId, Format }, For the example above will get: { 0, "hello world %d" }, Appreciate your help with it.... I have started with the following code, you may either look at it or ignore it altogether. ''' Created on Oct 25, 2009 @author: Uri Shkolnik The following version does find & replace LOG Ids for all C source files in a dir (and below) with sequential counter, The files are assumed to be UTF-8 encoded. (which works fine if they are ASCII, because ASCII is a subset of UTF-8) It also assemble new index file, composed from all new IDs and format fields ''' import os, sys, re, shutil mydir= '/home/uri/proj1' searched_pattern0 = 'ZR_LOG0' def search_and_replace(filepath): ''' replaces all string by a regex substitution ''' backupName=filepath+'~re~' print 'reading:', filepath input = open(filepath,'rb') s=unicode(input.read(),'utf-8') input.close() m = re.match(ur'''[:space:]ZR_LOG[0-3].*\(.*[0-9]{0,10},LOG_''', s) print m def c_files_search(dummy, dirr, filess): ''' search directories for file pattern *.c ''' for child in filess: if '.c' == os.path.splitext(child)[1] and os.path.isfile(dirr+'/'+child): filepath = dirr+'/'+child search_and_replace(filepath) os.path.walk(mydir, c_files_search, 3) A: A few points: You can match whitespace with '\s'. The regexp 'capturing groups' are useful here. So, I would do something like this: output = '' counter = 1 for line in lines: # Match only ZR_LOG lines and capture everything surrounding "Id" match = re.match('^(.*\sZR_LOG[0-3]\s*\(\s*)' # group(1), before Id 'Id' '(,.*)$', # group(2), after Id line) if match: # Add everything before Id, the counter value and everything after Id output += match.group(1) + str(counter) + match.group(2) counter += 1 # And do extra logging etc. else: output += line
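To make the answer's idea concrete for both requirements (renumbering plus the index file), here is a hedged sketch using a re.sub callback. The regex assumes each ZR_LOG header sits on a single line with the four leading fields in order, as in the question's example; the mutable one-element counter is just a Python 2 trick for writing to an enclosing scope.

import re

log_re = re.compile(r'(\bZR_LOG[0-3]\s*\(\s*)\d+(\s*,[^,]*,[^,]*,\s*)("(?:[^"\\]|\\.)*")')

counter = [0]
index_lines = []

def repl(m):
    new_id = counter[0]
    counter[0] += 1
    # collect the { NewId, Format } line for the separate index file
    index_lines.append('{ %d, %s },' % (new_id, m.group(3)))
    return m.group(1) + str(new_id) + m.group(2) + m.group(3)

new_source = log_re.sub(repl, source)   # 'source' holds one C file's text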
Python - RegExp - Modify text files
Newbie to Python.... help requested with the following task :-) I have a tree of various files, some of them are C source code. I would like to modify these C files with a python script. The C code has 4 defines - #define ZR_LOG0(Id, Class, Severity, Format) #define ZR_LOG1(Id, Class, Severity, Format, Attr0) #define ZR_LOG2(Id, Class, Severity, Format, Attr0, Attr1) #define ZR_LOG3(Id, Class, Severity, Format, Attr0, Attr1, Attr2) there are various ZR_LOGn lines spread throughout the C source code. Example: ZR_LOG1 (1, LOG_CLASS_3, LOG_INFO, "hello world %d", 76); White spaces (spaces, tabs) may appear anywhere in the line (between the fields). The python script task is as follows: Replace any 'Id' field (which is an integer type whose original value we don't care about) with a sequential counter. (The first 'LOG'... line we encounter will get the value 0 for its 'Id' field, the next one 1, and so on) In a separate output file, for each ZR_LOG line, we'll create an index line in the format { NewId, Format }, For the example above we'll get: { 0, "hello world %d" }, Appreciate your help with it.... I have started with the following code, you may either look at it or ignore it altogether. ''' Created on Oct 25, 2009 @author: Uri Shkolnik The following version does find & replace LOG Ids for all C source files in a dir (and below) with a sequential counter, The files are assumed to be UTF-8 encoded. (which works fine if they are ASCII, because ASCII is a subset of UTF-8) It also assembles a new index file, composed from all new IDs and format fields ''' import os, sys, re, shutil mydir= '/home/uri/proj1' searched_pattern0 = 'ZR_LOG0' def search_and_replace(filepath): ''' replaces all strings by a regex substitution ''' backupName=filepath+'~re~' print 'reading:', filepath input = open(filepath,'rb') s=unicode(input.read(),'utf-8') input.close() m = re.match(ur'''[:space:]ZR_LOG[0-3].*\(.*[0-9]{0,10},LOG_''', s) print m def c_files_search(dummy, dirr, filess): ''' search directories for file pattern *.c ''' for child in filess: if '.c' == os.path.splitext(child)[1] and os.path.isfile(dirr+'/'+child): filepath = dirr+'/'+child search_and_replace(filepath) os.path.walk(mydir, c_files_search, 3)
[ "A few points:\n\nYou can match whitespace with '\\s'.\nThe regexp 'capturing groups' are useful here.\n\nSo, I would do something like this:\noutput = ''\ncounter = 1\nfor line in lines:\n # Match only ZR_LOG lines and capture everything surrounding \"Id\"\n match = re.match('^(.*\\sZR_LOG[0-3]\\s*\\(\\s*)' # group(1), before Id\n 'Id'\n '(,.*)$', # group(2), after Id\n line)\n if match:\n # Add everything before Id, the counter value and everything after Id\n output += match.group(1) + str(counter) + match.group(2)\n counter += 1\n # And do extra logging etc.\n else:\n output += line\n\n" ]
[ 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001624050_python_regex.txt
Q: How to make a private download area with django? I would like to implement a private download area on a website powered by django. The user would have to be logged in with the appropriate rights in order to be able to get some static files. What would you recommend for writing this feature. Any tips or tricks? Thanks in advance Update: Maybe because of my bad english or my lack of knowledge about this architecture (that's why I am asking) but my question is: how to make sure that static files (served by the regular webserver without any need of django) access is controlled by the django authentication. I will read the django docs more carefully but i don't remember of an out-of-the-box solution for that problem. Update2: My host provider only allows FastCgi. A: So, searching I found this discussion thread. There were three things said you might be interested in. First there is the mod_python method Then there is the mod_wsgi method Both of which don't seem all that great. Better is the X-Sendfile header which isn't fully standard, but works at least within apache, and lighttpd. kibbitzing from here, we have the following. @login_required def serve_file(request, context): if <check if they have access to the file>: filename = "/var/www/myfile.xyz" response = HttpResponse(mimetype='application/force-download') response['Content-Disposition']='attachment;filename="%s"'%filename response["X-Sendfile"] = filename response['Content-length'] = os.stat("debug.py").st_size return response return <error state> and that should be almost exactly what you want. Just make sure you turn on X-Sendfile support in whatever you happen to be using. A: The XSendfile seems to be the right approach but It looks to be a bit complex to setup. I've decided to use a simpler way. Based on emeryc answer and django snippets http://www.djangosnippets.org/snippets/365/, I have written the following code and it seems to make what I want: @login_required def serve_file(request, filename): fullname = myapp.settings.PRIVATE_AREA+filename try: f = file(fullname, "rb") except Exception, e: return page_not_found(request, template_name='404.html') try: wrapper = FileWrapper(f) response = HttpResponse(wrapper, mimetype=mimetypes.guess_type(filename)[0]) response['Content-Length'] = os.path.getsize(fullname) response['Content-Disposition'] = 'attachment; filename={0}'.format(filename) return response except Exception, e: return page_not_found(request, template_name='500.html') A: There's tons of tutorials on how to enable authentication in Django. Do you need help with that? If so, start here. The next step is to create a View which lists your files. So do that, this is all basic Django. If you have problems with this step, go back and go through the Django tutorial. You'll get this. Finally, refer back to the first link (here is is again: authentication docs) and take a close look at the LOGIN_REQUIRED decorator. Protect your view with this decorator. This is all pretty basic Django stuff. If you've done this and have a specific question, post it here. But you put a pretty open ended question on SO and that's not a great way to get assistance.
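One piece the answers leave implicit is the URL wiring for the serve_file view. With the 2009-era Django the answers are written against, it would look roughly like this; the pattern, app path and filename restriction are illustrative assumptions:

from django.conf.urls.defaults import patterns

urlpatterns = patterns('',
    # every request under /private/ goes through the authenticated view;
    # the restricted character class also blocks ../ path tricks
    (r'^private/(?P<filename>[\w.-]+)$', 'myapp.views.serve_file'),
)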
How to make a private download area with django?
I would like to implement a private download area on a website powered by django. The user would have to be logged in with the appropriate rights in order to be able to get some static files. What would you recommend for writing this feature? Any tips or tricks? Thanks in advance Update: Maybe because of my bad English or my lack of knowledge about this architecture (that's why I am asking) but my question is: how to make sure that access to static files (served by the regular webserver without any need of django) is controlled by the django authentication. I will read the django docs more carefully but I don't remember an out-of-the-box solution for that problem. Update2: My host provider only allows FastCgi.
[ "So, searching I found this discussion thread.\nThere were three things said you might be interested in.\nFirst there is the mod_python method\nThen there is the mod_wsgi method\nBoth of which don't seem all that great.\nBetter is the X-Sendfile header which isn't fully standard, but works at least within apache, and lighttpd.\nkibbitzing from here, we have the following.\n@login_required\ndef serve_file(request, context):\n if <check if they have access to the file>:\n filename = \"/var/www/myfile.xyz\" \n response = HttpResponse(mimetype='application/force-download') \n response['Content-Disposition']='attachment;filename=\"%s\"'%filename\n response[\"X-Sendfile\"] = filename\n response['Content-length'] = os.stat(\"debug.py\").st_size\n return response\n return <error state>\n\nand that should be almost exactly what you want. Just make sure you turn on X-Sendfile support in whatever you happen to be using.\n", "The XSendfile seems to be the right approach but It looks to be a bit complex to setup. I've decided to use a simpler way.\nBased on emeryc answer and django snippets http://www.djangosnippets.org/snippets/365/, I have written the following code and it seems to make what I want:\n@login_required\ndef serve_file(request, filename):\n fullname = myapp.settings.PRIVATE_AREA+filename\n try:\n f = file(fullname, \"rb\")\n except Exception, e:\n return page_not_found(request, template_name='404.html')\n try:\n wrapper = FileWrapper(f)\n response = HttpResponse(wrapper, mimetype=mimetypes.guess_type(filename)[0])\n response['Content-Length'] = os.path.getsize(fullname)\n response['Content-Disposition'] = 'attachment; filename={0}'.format(filename)\n return response\n except Exception, e:\n return page_not_found(request, template_name='500.html')\n\n", "There's tons of tutorials on how to enable authentication in Django. Do you need help with that? If so, start here. \nThe next step is to create a View which lists your files. So do that, this is all basic Django. If you have problems with this step, go back and go through the Django tutorial. You'll get this. \nFinally, refer back to the first link (here is is again: authentication docs) and take a close look at the LOGIN_REQUIRED decorator. Protect your view with this decorator.\nThis is all pretty basic Django stuff. If you've done this and have a specific question, post it here. But you put a pretty open ended question on SO and that's not a great way to get assistance.\n" ]
[ 11, 3, 1 ]
[]
[]
[ "authentication", "django", "download", "python" ]
stackoverflow_0001609273_authentication_django_download_python.txt
Q: How to avoid repetition of exception handling? I've moved IOError handling to a separate function to avoid boilerplate when opening files for reading. But what if IOError fires when the file is being read? If sshfs disconnects, or file is deleted by root, etc.? def safe_open(*args): try: return open(*args) except IOError: quit('Error when opening file \'{0}\'. Error #{1[0]}: {1[1]}'.format(\ args[0], sys.exc_info()[1].args)) ... with safe_open(myfile, 'r') as f: for i in f: print i with safe_open(anotherfile, 'r') as f: try: conf = ''.join(f).format(**args) except KeyError: quit('\nOops, your template \'{0}\' has placeholders for' + \ 'parameters\nthat were not supplied in the command line: - {1}\n' + '\nCan\'t proceed. Ending. Nothing has been changed yet.'.format( \ args['host_template'], '\n - '.join(sys.exc_info()[1].args)), 1) File is read in different ways, so I don't see a way to put it into the function and pass the changing part as arguments. [Added: thought of this solution, but it makes a generator that can't be closed. If a loop is stopped, the file is left open.] def reader(*args): try: with safe_open(*args) as f: for i in f: yield i except IOError: print('IOError when trying to read \'{0}\''.format(args[0])) for i in reader(myfile, 'r'): pass # do some job A: I would probably try move the file operations into separate functions and wrap those in a try... except actually I just got an even better idea... put the error handling into a decorator and apply the decorator to each of the functions that does file operations def catch_io_errors(fn): def decorator(*args, **kwargs): try: return fn(*args, **kwargs) except IOError: quit('whatever error text') return decorator then you can put all file ops into their own functions and apply the decorator @catch_io_errors def read_file(): with open(myfile, 'r') as f: for i in f: print i or if you need compatibility with python 2.3: def read_file(): f = open(myfile, 'r') for i in f: print i f.close() read_file = catch_io_errors(read_file)
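For completeness, this is what the decorator from the answer looks like at the two call sites from the question. The function names are made up; quit() and the format handling are the question's own, and the KeyError branch is elided:

@catch_io_errors
def show_file(myfile):
    with open(myfile, 'r') as f:
        for i in f:
            print i

@catch_io_errors
def load_template(anotherfile, args):
    with open(anotherfile, 'r') as f:
        return ''.join(f).format(**args)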
How to avoid repetition of exception handling?
I've moved IOError handling to a separate function to avoid boilerplate when opening files for reading. But what if IOError fires when the file is being read? If sshfs disconnects, or file is deleted by root, etc.? def safe_open(*args): try: return open(*args) except IOError: quit('Error when opening file \'{0}\'. Error #{1[0]}: {1[1]}'.format(\ args[0], sys.exc_info()[1].args)) ... with safe_open(myfile, 'r') as f: for i in f: print i with safe_open(anotherfile, 'r') as f: try: conf = ''.join(f).format(**args) except KeyError: quit('\nOops, your template \'{0}\' has placeholders for' + \ 'parameters\nthat were not supplied in the command line: - {1}\n' + '\nCan\'t proceed. Ending. Nothing has been changed yet.'.format( \ args['host_template'], '\n - '.join(sys.exc_info()[1].args)), 1) File is read in different ways, so I don't see a way to put it into the function and pass the changing part as arguments. [Added: thought of this solution, but it makes a generator that can't be closed. If a loop is stopped, the file is left open.] def reader(*args): try: with safe_open(*args) as f: for i in f: yield i except IOError: print('IOError when trying to read \'{0}\''.format(args[0])) for i in reader(myfile, 'r'): pass # do some job
[ "I would probably try move the file operations into separate functions and wrap those in a try... except\nactually I just got an even better idea... put the error handling into a decorator and apply the decorator to each of the functions that does file operations\ndef catch_io_errors(fn):\n def decorator(*args, **kwargs):\n try:\n return fn(*args, **kwargs)\n except IOError:\n quit('whatever error text')\n return decorator\n\nthen you can put all file ops into their own functions and apply the decorator\n@catch_io_errors\ndef read_file():\n with open(myfile, 'r') as f:\n for i in f:\n print i\n\nor if you need compatibility with python 2.3:\ndef read_file():\n f = open(myfile, 'r')\n for i in f:\n print i\n f.close()\n\nread_file = catch_io_errors(read_file)\n\n" ]
[ 7 ]
[]
[]
[ "exception_handling", "python" ]
stackoverflow_0001624518_exception_handling_python.txt
Q: Django or similar for composite primary keys I am writing a web application for my engineering company (warning: I am a programmer only by hobby) and was planning on using Django until I hit this snag. The models I want to use naturally have multi-column primary keys. Per http://code.djangoproject.com/ticket/373, I can't use Django, at least not a released version. Can anyone help me with a workaround, whether it be via another web framework (Python-based only, please) or by suggesting changes to the model so it will work with Django's limitations? I am really hoping for the latter, as I was hoping to use this as an opportunity to learn Django. Example: Table one has part_number and part_revision as two fields that should comprise a primary key. A P/N can exist at multiple revisions, but P/N + rev are unique. Table two has part_number, part_revision and dimension_number as its primary key. A P/N at a specific rev can have a number of dimensions, however, each is unique. Also, in this case, P/N + rev should be a ForeignKey of Table one. A: Why not add a normal primary key, and then specify that part_number and part_revision as unique_together? This essentially is the Djangoish (Djangonic?) way of doing what Mitch Wheat said. A: A work around is to create a surrogate key (an auto increment column) as the primary key column and place a unique index on your domain composite key. Foreign keys will then refer to the surrogate primary key column. A: I strongly suggest using a surrogate key. Not because it is "Djangoesque". Suppose that you use a composite key that includes part_number. What if some time later your company decides to change the format (and therefore values) of that field? Or in general terms, any field? You would not want to deal with changing primary keys. I don't know whatever benefit you see in using a composite key that consists of "real" values, but I reckon it isn't worth the hassle. Use meaningless, autoincremented keys (and that should probably render a composite key useless). A: SQLAlchemy has support for composite primary and foreign keys, so any SQLAlchemy based framework (Pylons and Werkzeug comes to mind) should suite your needs. But surrogate primary key is easier to use and better supported anyway.
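Spelling out the surrogate-key-plus-unique_together suggestion against the question's two tables, the models could look like this. The field types and lengths are guesses, and a plain ForeignKey to the surrogate key stands in for the composite one:

from django.db import models

class Part(models.Model):
    part_number = models.CharField(max_length=30)
    part_revision = models.CharField(max_length=10)

    class Meta:
        unique_together = (('part_number', 'part_revision'),)

class Dimension(models.Model):
    part = models.ForeignKey(Part)   # replaces part_number + part_revision
    dimension_number = models.IntegerField()

    class Meta:
        unique_together = (('part', 'dimension_number'),)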
Django or similar for composite primary keys
I am writing a web application for my engineering company (warning: I am a programmer only by hobby) and was planning on using Django until I hit this snag. The models I want to use naturally have multi-column primary keys. Per http://code.djangoproject.com/ticket/373, I can't use Django, at least not a released version. Can anyone help me with a workaround, whether it be via another web framework (Python-based only, please) or by suggesting changes to the model so it will work with Django's limitations? I am really hoping for the latter, as I was hoping to use this as an opportunity to learn Django. Example: Table one has part_number and part_revision as two fields that should comprise a primary key. A P/N can exist at multiple revisions, but P/N + rev are unique. Table two has part_number, part_revision and dimension_number as its primary key. A P/N at a specific rev can have a number of dimensions, however, each is unique. Also, in this case, P/N + rev should be a ForeignKey of Table one.
[ "Why not add a normal primary key, and then specify that part_number and part_revision as unique_together?\nThis essentially is the Djangoish (Djangonic?) way of doing what Mitch Wheat said.\n", "A work around is to create a surrogate key (an auto increment column) as the primary key column and place a unique index on your domain composite key.\nForeign keys will then refer to the surrogate primary key column.\n", "I strongly suggest using a surrogate key. Not because it is \"Djangoesque\". Suppose that you use a composite key that includes part_number. What if some time later your company decides to change the format (and therefore values) of that field? Or in general terms, any field? You would not want to deal with changing primary keys. I don't know whatever benefit you see in using a composite key that consists of \"real\" values, but I reckon it isn't worth the hassle. Use meaningless, autoincremented keys (and that should probably render a composite key useless).\n", "SQLAlchemy has support for composite primary and foreign keys, so any SQLAlchemy based framework (Pylons and Werkzeug comes to mind) should suite your needs. But surrogate primary key is easier to use and better supported anyway.\n" ]
[ 24, 20, 14, 2 ]
[]
[]
[ "django", "django_models", "python", "web_applications" ]
stackoverflow_0001624257_django_django_models_python_web_applications.txt
Q: Python package install using pip or easy_install from repos The simplest way to deal with python package installations, so far, to me, has been to check out the source from the source control system and then add a symbolic link in the python dist-packages folder. Clearly since source control provides the complete control to downgrade, upgrade to any branch, tag, it works very well. Is there a way using one of the Package installers (easy_install or pip or other), one can achieve the same. easy_install obtains the tar.gz and install them using the setup.py install which installs in the dist-packages folder in python2.6. Is there a way to configure it, or pip to use the source version control system (SVN/GIT/Hg/Bzr) instead. A: Using pip this is quite easy. For instance: pip install -e hg+http://bitbucket.org/andrewgodwin/south/#egg=South Pip will automatically clone the source repo and run "setup.py develop" for you to install it into your environment (which hopefully is a virtualenv). Git, Subversion, Bazaar and Mercurial are all supported. You can also then run "pip freeze" and it will output a list of your currently-installed packages with their exact versions (including, for develop-installs, the exact revision from the VCS). You can put this straight into a requirements file and later run pip install -r requirements.txt to install that same set of packages at the exact same versions. A: If you download or check out the source distribution of a package — the one that has its "setup.py" inside of it — then if the package is based on the "setuptools" (which also power easy_install), you can move into that directory and say: $ python setup.py develop and it will create the right symlinks in dist-packages so that the .py files in the source distribution are the ones that get imported, rather than copies installed separately (which is what "setup.py install" would do — create separate copies that don't change immediately when you edit the source code to try a change). As the other response indicates, you should try reading the "setuptools" documentation to learn more. "setup.py develop" is a really useful feature! Try using it in combination with a virtualenv, and you can "setup.py develop" painlessly and without messing up your system-wide Python with packages you are only developing on temporarily: http://pypi.python.org/pypi/virtualenv A: easy_install has support for downloading specific versions. For example: easy_install python-dateutil==1.4.0 Will install v1.4, while the latest version 1.4.1 would be picked if no version was specified. There is also support for svn checkouts, but using that doesn't give you much benefits from your manual version. See the manual for more information above. Being able to switch to specific branches is rarely useful unless you are developing the packages in question, and then it's typically not a good idea to install them in site-packages anyway. A: easy_install accepts a URL for the source tree too. Works at least when the sources are in Subversion.
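Tying the accepted answer together, a requirements file can mix pinned releases with editable VCS checkouts; the package names and revisions below are only examples of the syntax:

# requirements.txt
python-dateutil==1.4.0
-e hg+http://bitbucket.org/andrewgodwin/south/#egg=South
-e git+git://github.com/example/project.git@v0.3#egg=project

pip install -r requirements.txt reproduces the set, and pip freeze captures the currently installed one, including exact VCS revisions for develop-installs.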
Python package install using pip or easy_install from repos
The simplest way to deal with python package installations, so far, to me, has been to check out the source from the source control system and then add a symbolic link in the python dist-packages folder. Clearly, since source control provides the complete control to downgrade or upgrade to any branch or tag, it works very well. Is there a way, using one of the package installers (easy_install, pip or other), to achieve the same? easy_install obtains the tar.gz and installs it using setup.py install, which installs in the dist-packages folder in python2.6. Is there a way to configure it, or pip, to use the source version control system (SVN/GIT/Hg/Bzr) instead?
[ "Using pip this is quite easy. For instance:\npip install -e hg+http://bitbucket.org/andrewgodwin/south/#egg=South\n\nPip will automatically clone the source repo and run \"setup.py develop\" for you to install it into your environment (which hopefully is a virtualenv). Git, Subversion, Bazaar and Mercurial are all supported. \nYou can also then run \"pip freeze\" and it will output a list of your currently-installed packages with their exact versions (including, for develop-installs, the exact revision from the VCS). You can put this straight into a requirements file and later run\npip install -r requirements.txt\n\nto install that same set of packages at the exact same versions.\n", "If you download or check out the source distribution of a package — the one that has its \"setup.py\" inside of it — then if the package is based on the \"setuptools\" (which also power easy_install), you can move into that directory and say:\n$ python setup.py develop\n\nand it will create the right symlinks in dist-packages so that the .py files in the source distribution are the ones that get imported, rather than copies installed separately (which is what \"setup.py install\" would do — create separate copies that don't change immediately when you edit the source code to try a change).\nAs the other response indicates, you should try reading the \"setuptools\" documentation to learn more. \"setup.py develop\" is a really useful feature! Try using it in combination with a virtualenv, and you can \"setup.py develop\" painlessly and without messing up your system-wide Python with packages you are only developing on temporarily:\nhttp://pypi.python.org/pypi/virtualenv\n\n", "easy_install has support for downloading specific versions. For example:\neasy_install python-dateutil==1.4.0\n\nWill install v1.4, while the latest version 1.4.1 would be picked if no version was specified.\nThere is also support for svn checkouts, but using that doesn't give you much benefits from your manual version. See the manual for more information above.\nBeing able to switch to specific branches is rarely useful unless you are developing the packages in question, and then it's typically not a good idea to install them in site-packages anyway.\n", "easy_install accepts a URL for the source tree too. Works at least when the sources are in Subversion.\n" ]
[ 26, 11, 4, 0 ]
[]
[]
[ "easy_install", "pip", "python", "svn", "version_control" ]
stackoverflow_0001033897_easy_install_pip_python_svn_version_control.txt
Q: How to determine default Accept-Language header based on IP (country code)? I want like to enhance spam protection on my site. I've found out that after being banned by ip bots don't change the Accept-Language and Accept-Charset http headers (so most of spam comes with windows-1251 accept-charset). I understand that there may be normal users with unordinary preferences, but anyways, how can I determine which charset and language headers are most popular in a particular country? TIA A: This answer has two parts: determining where your user comes from, and what language they speak. To determine where they come from, you can use a service such as hostip.info, which has an API which takes an IP address and returns a country code. Second, you would need a list such as this one to translate the country code into a language code. You could use either a full database or a simple dict to store the mapping.
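A small sketch of the answer's two steps in Python; the exact hostip.info endpoint and response format are assumptions based on their documented API, and the country-to-language table is obviously truncated:

import urllib

COUNTRY_TO_LANG = {'RU': 'ru', 'DE': 'de', 'FR': 'fr', 'US': 'en'}  # etc.

def expected_language(ip):
    # assumed endpoint returning a bare two-letter country code
    country = urllib.urlopen(
        'http://api.hostip.info/country.php?ip=%s' % ip).read().strip()
    return COUNTRY_TO_LANG.get(country, 'en')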
How to determine default Accept-Language header based on IP (country code)?
I would like to enhance spam protection on my site. I've found out that after being banned by IP, bots don't change the Accept-Language and Accept-Charset HTTP headers (so most of the spam comes with a windows-1251 accept-charset). I understand that there may be normal users with unusual preferences, but anyway, how can I determine which charset and language headers are most popular in a particular country? TIA
[ "This answer has two parts: determining where your user comes from, and what language they speak. To determine where they come from, you can use a service such as hostip.info, which has an API which takes an IP address and returns a country code. Second, you would need a list such as this one to translate the country code into a language code. You could use either a full database or a simple dict to store the mapping.\n" ]
[ 1 ]
[]
[]
[ "geoip", "python", "spam" ]
stackoverflow_0001624816_geoip_python_spam.txt
Q: Python Tornado Web - AttributeError: 'Connection' object has no attribute '_execute' I'm experiencing a strange behaviour working with the latest branch of tornadoweb when I deploy my app on my production server. I tested several times the code and it is fully working when I test it on my laptop (Archlinux) with python 2.6.3 and MySQLdb 1.2.3b2. As soon as I deploy on my production server (Ubuntu x64) with python 2.6.2, MySQLdb 1.2.3.c1 ('ve tested also with 1.2.1 version) and call for that page it raises this error: Traceback (most recent call last): File "/usr/local/lib/python2.6/dist-packages/tornado/web.py", line 688, in _execute getattr(self, self.request.method.lower())(*args, **kwargs) File "/var/www/app.py", line 122, in get self.store_db('cc',test) File "/var/www/app.py", line 82, in store_db self.db.execute(query) File "/usr/local/lib/python2.6/dist-packages/tornado/database.py", line 132, in execute self._execute(cursor, query, parameters) AttributeError: 'Connection' object has no attribute '_execute' The strange behaviour is also that testing the native demo (called blog) on my laptop it works fine, but as soon as I deploy it in production it stop working with the save trouble listed above. I have to add that db.get / db.query functions works fine. A: I finally fixed my problem moving to a fresh ubuntu x64 instead of using a i386 version.
Python Tornado Web - AttributeError: 'Connection' object has no attribute '_execute'
I'm experiencing a strange behaviour working with the latest branch of tornadoweb when I deploy my app on my production server. I tested the code several times and it is fully working when I test it on my laptop (Archlinux) with python 2.6.3 and MySQLdb 1.2.3b2. As soon as I deploy on my production server (Ubuntu x64) with python 2.6.2, MySQLdb 1.2.3.c1 (I've also tested with the 1.2.1 version) and call for that page it raises this error: Traceback (most recent call last): File "/usr/local/lib/python2.6/dist-packages/tornado/web.py", line 688, in _execute getattr(self, self.request.method.lower())(*args, **kwargs) File "/var/www/app.py", line 122, in get self.store_db('cc',test) File "/var/www/app.py", line 82, in store_db self.db.execute(query) File "/usr/local/lib/python2.6/dist-packages/tornado/database.py", line 132, in execute self._execute(cursor, query, parameters) AttributeError: 'Connection' object has no attribute '_execute' The strange thing is also that the native demo (called blog) works fine when I test it on my laptop, but as soon as I deploy it in production it stops working with the same trouble listed above. I have to add that the db.get / db.query functions work fine.
[ "I finally fixed my problem moving to a fresh ubuntu x64 instead of using a i386 version.\n" ]
[ -2 ]
[]
[]
[ "attributeerror", "mysql", "python", "tornado", "ubuntu" ]
stackoverflow_0001618339_attributeerror_mysql_python_tornado_ubuntu.txt
Q: Python loops with multiple lists? <edit> Thanks to everyone who has answered so far. The zip and os.path.join are really helpful. Any suggestions on ways to list the counter in front, without doing something like this: zip(range(len(files)), files, directories) </edit> Hi, I'm in the process of learning Python, but I come from a background where the following pseudocode is typical: directories = ['directory_0', 'directory_1', 'directory_2'] files = ['file_a', 'file_b', 'file_c'] for(i = 0; i < directories.length; i++) { print (i + 1) + '. ' + directories[i] + '/' + files[i] + '\n' } # Output: # 1. directory_0/file_a # 2. directory_1/file_b # 3. directory_2/file_c In Python, the way I would write the above right now, would be like this: directories = ['directory_0', 'directory_1', 'directory_2'] files = ['file_a', 'file_b', 'file_c'] for i in range(len(directories)): print '%s. %s/%s' % ((i + 1), directories[i], files[i] # Output: # 1. directory_0/file_a # 2. directory_1/file_b # 3. directory_2/file_c While reading Dive into Python, Mark Pilgrim says that using for loops for counters is "Visual Basic-style thinking" (Simple Counters). He goes on to show how to use loops with dictionaries, but never really addresses a python solution in regards to how for loop counters are typically used in other languages. I was hoping somebody could show me how to properly write the above scenario in Python. Is it possible to do it a different way? If I took out the incrementing line count, is it possible to just match the two lists together using some kind of list comprehension? For example, if all I wanted from the output was this (no counters, is that possible with list comprehension): # Output: # directory_0/file_a # directory_1/file_b # directory_2/file_c Thanks in advance for any help. A: import os.path for dir, file in zip(directories, files): print(os.path.join(dir, file)) # for directories, files you can have it as a list comprehension as well, creating list of string with print going after that] with counter: for i, (dir, file) in enumerate(zip(directories, files)): print(i, os.path.join(dir, file)) A: Try this: directories = ['directory_0', 'directory_1', 'directory_2'] files = ['file_a', 'file_b', 'file_c'] for file, dir in zip(files, directories): print dir + '/' + file To explain, the zip() function takes lists as input and returns a list of "zipped" tuples. so zip([1,2,3,4,5],[a,b,c,d,e]) would return [(1,a),(2,b) and so on. You can then assign the members of the tuples to variables with the python for <var> in <list> syntax. There are a million different ways to do what you are asking, but the above uses some more "pythonic" constructs to make the code a lot more readable (IMHO, anyway). A: If you want to add a counter to any for loop in Python you can use the enumerate() function: listA = ["A", "B", "C", "D", "E"] listB = ["a", "b", "c", "d", "e"] for i, (a, b) in enumerate(zip(listA, listB)): print "%d) %s, %s" % (i, a, b) gives the output: 0) A, a 1) B, b 2) C, c 3) D, d 4) E, e
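On the question's update about putting the counter in front without zip(range(len(files)), ...): from Python 2.6 on, enumerate() accepts a start argument, so the numbered output falls out of the same zip pattern:

import os.path

directories = ['directory_0', 'directory_1', 'directory_2']
files = ['file_a', 'file_b', 'file_c']

for i, (d, f) in enumerate(zip(directories, files), 1):
    print '%d. %s' % (i, os.path.join(d, f))   # 1. directory_0/file_a ...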
Python loops with multiple lists?
<edit> Thanks to everyone who has answered so far. The zip and os.path.join are really helpful. Any suggestions on ways to list the counter in front, without doing something like this: zip(range(len(files)), files, directories) </edit> Hi, I'm in the process of learning Python, but I come from a background where the following pseudocode is typical: directories = ['directory_0', 'directory_1', 'directory_2'] files = ['file_a', 'file_b', 'file_c'] for(i = 0; i < directories.length; i++) { print (i + 1) + '. ' + directories[i] + '/' + files[i] + '\n' } # Output: # 1. directory_0/file_a # 2. directory_1/file_b # 3. directory_2/file_c In Python, the way I would write the above right now, would be like this: directories = ['directory_0', 'directory_1', 'directory_2'] files = ['file_a', 'file_b', 'file_c'] for i in range(len(directories)): print '%s. %s/%s' % ((i + 1), directories[i], files[i] # Output: # 1. directory_0/file_a # 2. directory_1/file_b # 3. directory_2/file_c While reading Dive into Python, Mark Pilgrim says that using for loops for counters is "Visual Basic-style thinking" (Simple Counters). He goes on to show how to use loops with dictionaries, but never really addresses a python solution in regards to how for loop counters are typically used in other languages. I was hoping somebody could show me how to properly write the above scenario in Python. Is it possible to do it a different way? If I took out the incrementing line count, is it possible to just match the two lists together using some kind of list comprehension? For example, if all I wanted from the output was this (no counters, is that possible with list comprehension): # Output: # directory_0/file_a # directory_1/file_b # directory_2/file_c Thanks in advance for any help.
[ "import os.path\nfor dir, file in zip(directories, files):\n print(os.path.join(dir, file)) # for directories, files\n\nyou can have it as a list comprehension as well, creating list of string with print going after that]\nwith counter:\nfor i, (dir, file) in enumerate(zip(directories, files)):\n print(i, os.path.join(dir, file))\n\n", "Try this:\ndirectories = ['directory_0', 'directory_1', 'directory_2']\nfiles = ['file_a', 'file_b', 'file_c']\n\nfor file, dir in zip(files, directories):\n print dir + '/' + file\n\nTo explain, the zip() function takes lists as input and returns a list of \"zipped\" tuples. so zip([1,2,3,4,5],[a,b,c,d,e]) would return [(1,a),(2,b) and so on. \nYou can then assign the members of the tuples to variables with the python for <var> in <list> syntax. \nThere are a million different ways to do what you are asking, but the above uses some more \"pythonic\" constructs to make the code a lot more readable (IMHO, anyway). \n", "If you want to add a counter to any for loop in Python you can use the enumerate() function:\nlistA = [\"A\", \"B\", \"C\", \"D\", \"E\"]\nlistB = [\"a\", \"b\", \"c\", \"d\", \"e\"]\nfor i, (a, b) in enumerate(zip(listA, listB)):\n print \"%d) %s, %s\" % (i, a, b)\n\ngives the output:\n0) A, a\n1) B, b\n2) C, c\n3) D, d\n4) E, e\n\n" ]
[ 35, 10, 1 ]
[ "Building on Ryan's answer, you can do:\nfor fileDir in [dir + '/' + file for dir in directories for file in files]:\n print(fileDir)\n\n" ]
[ -1 ]
[ "arrays", "list", "loops", "python" ]
stackoverflow_0000521321_arrays_list_loops_python.txt
Q: Is there an easy way to create derived attributes in Django Model/Python classes? Every Django model has a default primary-key id created automatically. I want the model objects to have another attribute big_id which is calculated as: big_id = id * SOME_CONSTANT I want to access big_id as model_obj.big_id without the corresponding database table having a column called big_id. Is this possible? A: Well, django model instances are just python objects, or so I've been told anyway :P That is how I would do it: class MyModel(models.Model): CONSTANT = 1234 id = models.AutoField(primary_key=True) # not really needed, but hey @property def big_id(self): return self.pk * MyModel.CONSTANT Obviously, you will get an exception if you try to do it with an unsaved model. You can also precalculate the big_id value instead of calculating it every time it is accessed. A: class Person(models.Model): x = 5 name = .. email = .. def _get_y(self): if self.id: return x * self.id return None y = property(_get_y) A: You've got two options I can think of right now: Since you don't want the field in the database your best bet is to define a method on the model that returns self.id * SOME_CONSTANT, say you call it big_id(). You can access this method anytime as yourObj.big_id(), and it will be available in templates as yourObj.big_id (you might want to read about the "magic dot" in django templates). If you don't mind it being in the DB, you can override the save method on your object to calculate id * SOME_CONSTANT and store it in a big_id field. This would save you from having to calculate it every time, since I assume ID isn't going to change
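For the second option in the last answer (persisting big_id instead of deriving it), the save override could look roughly like this. The double save is needed because id is only assigned on the first INSERT; SOME_CONSTANT is the question's own placeholder:

from django.db import models

class MyModel(models.Model):
    big_id = models.IntegerField(null=True, editable=False)

    def save(self, *args, **kwargs):
        super(MyModel, self).save(*args, **kwargs)
        if self.big_id is None:
            self.big_id = self.id * SOME_CONSTANT
            super(MyModel, self).save(*args, **kwargs)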
Is there an easy way to create derived attributes in Django Model/Python classes?
Every Django model has a default primary-key id created automatically. I want the model objects to have another attribute big_id which is calculated as: big_id = id * SOME_CONSTANT I want to access big_id as model_obj.big_id without the corresponding database table having a column called big_id. Is this possible?
[ "Well, django model instances are just python objects, or so I've been told anyway :P\nThat is how I would do it:\nclass MyModel(models.Model):\n CONSTANT = 1234\n id = models.AutoField(primary_key=True) # not really needed, but hey\n\n @property\n def big_id(self):\n return self.pk * MyModel.CONSTANT\n\nObviously, you will get an exception if you try to do it with an unsaved model. You can also precalculate the big_id value instead of calculating it every time it is accessed.\n", "class Person(models.Model):\n x = 5\n name = ..\n email = ..\n def _get_y(self):\n if self.id:\n return x * self.id \n return None\n y = property(_get_y) \n\n", "You've got two options I can think of right now:\n\nSince you don't want the field in the database your best bet is to define a method on the model that returns self.id * SOME_CONSTANT, say you call it big_id(). You can access this method anytime as yourObj.big_id(), and it will be available in templates as yourObj.big_id (you might want to read about the \"magic dot\" in django templates).\nIf you don't mind it being in the DB, you can override the save method on your object to calculate id * SOME_CONSTANT and store it in a big_id field. This would save you from having to calculate it every time, since I assume ID isn't going to change\n\n" ]
[ 8, 1, 0 ]
[]
[]
[ "django", "django_models", "oop", "python" ]
stackoverflow_0001626155_django_django_models_oop_python.txt
Q: How can this behavior be acomplished? Python "short circuting" test I have the following code: def testGeodatabase(self): geodatabaseList = self.gp.ListWorkspaces("*","ALL") for x in geodatabaseList: if x == self.outputGeodatabase: return True else: pass return False What i need to know the following: in case the if condition evaluates to true, will the function stop looking in the list and never return False? Or do i need a break statement? def testGeodatabase(self): geodatabaseList = self.gp.ListWorkspaces("*","ALL") for x in geodatabaseList: if x == self.outputGeodatabase: return True break else: pass return False If the following code does not solve my problem, what can i use to do simulate that behavior? Thanks A: return is the end of the line, and nothing else will happen in that function afterwards. On the other hand, you could rewrite your function as def testGeodatabase(self): return self.outputGeodatabase in self.gp.ListWorkspaces("*","ALL") A: You don't need the break keyword in the code above. Actually, you don't need the else: pass either. The return True will exit the function. A: The return statement will indeed cause the function to be exited at that point. No further code is executed in the function. Here is a simple test which you could run to prove the point: def someFunction(nums): for i in nums: if i == 1: return "Found 1!" return "Never found 1" And running it: >>> someFunction([2]) 'Never found 1' >>> someFunction([2,1,3]) 'Found 1!' A: I think that using any() is the best choice: def testGeodatabase(self): geodatabaseList = self.gp.ListWorkspaces("*","ALL") return any(x == self.outputGeodatabase for x in geodatabaseList)
How can this behavior be accomplished? Python "short circuiting" test
I have the following code: def testGeodatabase(self): geodatabaseList = self.gp.ListWorkspaces("*","ALL") for x in geodatabaseList: if x == self.outputGeodatabase: return True else: pass return False What I need to know is the following: in case the if condition evaluates to true, will the function stop looking in the list and never return False? Or do I need a break statement? def testGeodatabase(self): geodatabaseList = self.gp.ListWorkspaces("*","ALL") for x in geodatabaseList: if x == self.outputGeodatabase: return True break else: pass return False If the following code does not solve my problem, what can I use to simulate that behavior? Thanks
[ "return is the end of the line, and nothing else will happen in that function afterwards. On the other hand, you could rewrite your function as\ndef testGeodatabase(self):\n return self.outputGeodatabase in self.gp.ListWorkspaces(\"*\",\"ALL\")\n\n", "You don't need the break keyword in the code above. Actually, you don't need the\nelse:\n pass\n\neither. The\nreturn True\n\nwill exit the function.\n", "The return statement will indeed cause the function to be exited at that point. No further code is executed in the function.\nHere is a simple test which you could run to prove the point:\ndef someFunction(nums):\n for i in nums:\n if i == 1:\n return \"Found 1!\"\n return \"Never found 1\"\n\nAnd running it:\n>>> someFunction([2]) \n'Never found 1' \n>>> someFunction([2,1,3]) \n'Found 1!' \n\n", "I think that using any() is the best choice:\ndef testGeodatabase(self):\n geodatabaseList = self.gp.ListWorkspaces(\"*\",\"ALL\")\n return any(x == self.outputGeodatabase for x in geodatabaseList)\n\n" ]
[ 8, 2, 1, 0 ]
[]
[]
[ "python", "short_circuiting" ]
stackoverflow_0001626267_python_short_circuiting.txt
Q: Python __getattribute__ (or __getattr__) to emulate php __call I would like to create a class that effectively does this (mixing a little PHP with Python) class Middle(object) : # self.apply is a function that applies a function to a list # e.g self.apply = [] ... self.apply.append(foobar) def __call(self, name, *args) : self.apply(name, *args) Thus allowing for code to say: m = Middle() m.process_foo(a, b, c) In this case __call() is the PHP __call() method which is invoked when a method is not found on an object. A: You need to define __getattr__, it is called if an attribute is not otherwise found on your object. Notice that getattr is called for any failed lookup, and that you don't get it like a function all, so you have to return the method that will be called. def __getattr__(self, attr): def default_method(*args): self.apply(attr, *args) return default_method A: Consider passing arguments to your methods as arguments, not encoded into the method name which will then be magically used as an argument. Where are you writing code that doesn't know what methods it will be calling? Why call c.do_Something(x) and then unpack the method name instead of just calling c.do('Something', x) ? In any case it's easy enough to handle unfound attributes: class Dispatcher(object): def __getattr__(self, key): try: return object.__getattr__(self, key) except AttributeError: return self.dispatch(key) def default(self, *args, **kw): print "Assuming default method" print args, kw def dispatch(self, key): print 'Looking for method: %s'%(key,) return self.default A test: >>> d = Dispatcher() >>> d.hello() Looking for method: hello Assuming default method () {} This seems to be fraught with "gotchas" - the thing returned by getattr is going to be presumed to be not just a function, but a bound method on that instance. So be sure to return that.
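A quick sketch of the first answer in action, with a stand-in apply() that just records the dispatch (the question leaves apply's real behaviour open, so this is illustration only):

class Middle(object):
    def apply(self, name, *args):
        print 'dispatching %s with %r' % (name, args)

    def __getattr__(self, attr):
        # only called for lookups that fail normally, e.g. process_foo
        def default_method(*args):
            return self.apply(attr, *args)
        return default_method

m = Middle()
m.process_foo(1, 2, 3)   # prints: dispatching process_foo with (1, 2, 3)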
Python __getattribute__ (or __getattr__) to emulate php __call
I would like to create a class that effectively does this (mixing a little PHP with Python) class Middle(object) : # self.apply is a function that applies a function to a list # e.g self.apply = [] ... self.apply.append(foobar) def __call(self, name, *args) : self.apply(name, *args) Thus allowing for code to say: m = Middle() m.process_foo(a, b, c) In this case __call() is the PHP __call() method which is invoked when a method is not found on an object.
[ "You need to define __getattr__, it is called if an attribute is not otherwise found on your object.\nNotice that getattr is called for any failed lookup, and that you don't get it like a function all, so you have to return the method that will be called.\ndef __getattr__(self, attr):\n def default_method(*args):\n self.apply(attr, *args)\n return default_method\n\n", "Consider passing arguments to your methods as arguments, not encoded into the method name which will then be magically used as an argument.\nWhere are you writing code that doesn't know what methods it will be calling?\nWhy call c.do_Something(x) and then unpack the method name instead of just calling c.do('Something', x) ?\nIn any case it's easy enough to handle unfound attributes:\nclass Dispatcher(object):\n def __getattr__(self, key):\n try:\n return object.__getattr__(self, key)\n except AttributeError:\n return self.dispatch(key)\n\n def default(self, *args, **kw):\n print \"Assuming default method\"\n print args, kw\n\n def dispatch(self, key):\n print 'Looking for method: %s'%(key,)\n return self.default\n\nA test:\n>>> d = Dispatcher()\n>>> d.hello()\nLooking for method: hello\nAssuming default method\n() {}\n\nThis seems to be fraught with \"gotchas\" - the thing returned by getattr is going to be presumed to be not just a function, but a bound method on that instance. So be sure to return that. \n" ]
[ 3, 2 ]
[ "I actually did this recently. Here's an example of how I solved it:\nclass Example:\n def FUNC_1(self, arg):\n return arg - 1\n\n def FUNC_2(self, arg):\n return arg - 2\n\n def decode(self, func, arg):\n try:\n exec( \"result = self.FUNC_%s(arg)\" % (func) )\n except AttributeError:\n # Call your default method here\n result = self.default(arg)\n\n return result\n\n def default(self, arg):\n return arg\n\nand the output:\n>>> dude = Example()\n>>> print dude.decode(1, 0)\n-1\n>>> print dude.decode(2, 10)\n8\n>>> print dude.decode(3, 5)\n5\n\n" ]
[ -1 ]
[ "class", "python" ]
stackoverflow_0001626478_class_python.txt
Q: pythonic approach to dynamic class variable names (ala PHP's $$var) The following code:
class Log:
    BAT_STATS = ['AB', 'R', 'H', 'HR']

    def __init__(self, type):
        for cat in Log.BAT_STATS:
            self.cat = 0

I want the loop there to create a class property of each key in BAT_STATS, so I can go:
log = Log()
print log.HR;

Similar to PHP with $this->$$foo = 'bar' where $foo would be 'HR'.
A: Maybe this?
class Log:
    BAT_STATS = ['AB', 'R', 'H', 'HR']

    def __init__(self, type):
        for cat in Log.BAT_STATS:
            setattr(self, cat, 0)

EDIT - Oops, indentation was a bit messed up.
@EOL: Are you suggesting putting it straight into the class definition? While for some applications it might be nice just to set these values once for the class rather than per-instance, I'm not sure how you'd do that. Inside the class definition you don't have a "self" or "klass" variable to call setattr on. At the end of the class definition Python parcels up the locals dictionary to use as the class's member dictionary. You can read this dictionary directly with locals(), but I don't think you have any guarantee that you can write back to it. I would guess that the easiest way to get the effect is to modify the class dictionary after it has been created, but that could be quite confusing because then the class's behaviour is no longer clear just from looking at its definition. It's not necessarily a bad idea, but I wouldn't like to recommend it without having a better understanding of the scenario it's going to be used in.
A: Explicit recommendation: DO NOT USE the following.
and please ease up on the downvotes...
Edit: I even found a quote from Alex Martelli expressing how bad an idea my snippet is.
(Short excerpt from Python in a Nutshell, 2nd ed. O'Reilly 2006)
Use exec only when it is really indispensable. Most often, it's best to avoid exec; choose more specific, well-controlled mechanisms instead: exec pries loose your control on your code's namespace, damages performance and exposes you to numerous, hard-to-find bugs.
Therefore here is my not so pythonic moment:
class Log:
    BAT_STATS = ['AB', 'R', 'H', 'HR']
    def __init__(self):
        for cat in Log.BAT_STATS:
            exec('self.' + cat + ' = 0')

Using setattr() is of course cleaner (I recommend it in this simple case), but it's nice to remember the power of exec... A bit like being reminded of a dangerous tool we have in the shed.
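A small sketch contrasting the two variants discussed above -- per-instance attributes via setattr in __init__, and the class-level assignment the first answer's EDIT speculates about, done on the class object after it has been created (Python 2; the unused type parameter from the original is dropped here):
class Log(object):
    BAT_STATS = ['AB', 'R', 'H', 'HR']

    def __init__(self):
        for cat in Log.BAT_STATS:
            setattr(self, cat, 0)   # instance attributes, one set per object

# class-level variant: assign onto the class object after it exists
for cat in Log.BAT_STATS:
    setattr(Log, cat, 0)

log = Log()
print log.HR   # 0 (instance attribute)
print Log.AB   # 0 (class attribute, a shared default)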
pythonic approach to dynamic class variable names (ala PHP's $$var)
The following code: class Log: BAT_STATS = ['AB', 'R', 'H', 'HR'] def __init__(self, type): for cat in Log.BAT_STATS: self.cat = 0 I want the loop there to create a class property of each key in BAT_STATS, so I can go: log = Log() print log.HR; Similar to PHP with $this->$$foo = 'bar' where $foo would be 'HR'.
[ "Maybe this?\nclass Log:\n BAT_STATS = ['AB', 'R', 'H', 'HR']\n\n def __init__(self, type):\n for cat in Log.BAT_STATS:\n setattr(self, cat, 0)\n\nEDIT - Oops, indentation was a bit messed up.\n@EOL: Are you suggesting putting it straight into the class definition? While for some applications it might be nice just to set these values once for the class rather than per-instance, I'm not sure how you'd do that. Inside the class definition you don't have a \"self\" or \"klass\" variable to call setattr on. At the end of the class definition Python parcels up the locals dictionary to use as the class's member dictionary. You can read this dictionary directly with locals(), but I don't think you have any guarantee that you can write back to it. I would guess that the easiest way to get the effect is to modify the class dictionary after it has been created, but that could be quite confusing because then the class's behaviour is no longer clear just from looking at its definition. It's not necessarily a bad idea, but I wouldn't like to recommend it without having a better understanding of the scenario it's going to be used in.\n", "Explicit recommendation: DO NOT USE the following.\nand please ease-up on the downvotes...\nEdit: I even found a quote from Alex Martelli expressing how bad an idea my snippet is.\n(Short excerpt from Python in a Nutshell, 2nd ed. O'Reilly 2006)\nUse exec only when it is really indispensable. MOst often, it's best to avoid exec choose more specific, well-well controlled mechanisms instead: exec pries loose your control on your code's namespace, damages performance and exposes you to numerours, hard-to-find bugs.\nTherefore here is my not so pythonic moment:\nclass Log:\n BAT_STATS = ['AB', 'R', 'H', 'HR']\n def __init__(self):\n for cat in Log.BAT_STATS:\n exec('self.' + cat + ' = 0')\n\nUsing setattr() is of course cleaner (I recommend it in this simple case), but it's nice to remember the power of exec... A bit like being reminded of a dangerous tool we have in the shed.\n" ]
[ 8, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001626842_python.txt
Q: Installing TurboGears on windows 7 I tried installing TurboGears 1.0 on Windows 7 using tgsetup.py and got the following error
error: Couldn't find a setup script in c:\users\sandre~1\appdata\local\temp\
easy_install-jimbkt\Cheetah-2.4.0.linux-i686.tar.gz

When looking into this folder I see the easy_install-jimbkt folder appearing and disappearing right away. Is this something Windows 7 does? Anybody know a workaround for it?
I can't use a newer version of TG (which actually installs fine) because I have to support a project written with TG 1.0
A: Looks like it's trying to install the Linux binaries of Cheetah, which definitely won't work on Windows. To get that piece of the puzzle to run in Windows, you can download and install the source version with the following command (I'll assume you already have Python and setuptools installed and working): 
easy_install http://pypi.python.org/packages/source/C/Cheetah/Cheetah-2.4.0.tar.gz#md5=873f5440676355512f176fc4ac01011e

I dunno about the TurboGears 1.0 stuff, though... However, instructions for Windows installation are available here (but you probably knew that already).
Installing TurboGears on windows 7
I tried installing TurboGears 1.0 on Windows 7 using tgsetup.py and got the following error
error: Couldn't find a setup script in c:\users\sandre~1\appdata\local\temp\
easy_install-jimbkt\Cheetah-2.4.0.linux-i686.tar.gz

When looking into this folder I see the easy_install-jimbkt folder appearing and disappearing right away. Is this something Windows 7 does? Anybody know a workaround for it?
I can't use a newer version of TG (which actually installs fine) because I have to support a project written with TG 1.0
[ "Looks like it's trying to install the Linux binaries of Cheetah, which definitely won't work on Windows. To get that piece of the puzzle to run in Windows, you can download and install the source version with the following command (I'll assume you already have Python and setuptools installed and working): \neasy_install install http://pypi.python.org/packages/source/C/Cheetah/Cheetah-2.4.0.tar.gz#md5=873f5440676355512f176fc4ac01011e\n\nI dunno about the TurboGears 1.0 stuff, though... However, instructions for Windows installation are available here (but you probably knew that already).\n" ]
[ 0 ]
[]
[]
[ "easy_install", "python", "turbogears" ]
stackoverflow_0001626619_easy_install_python_turbogears.txt
Q: How to convert dumbo sequence file input to tab separated text I have an input, which could be a single primitive or a list or tuple of primitives. I'd like to flatten it to just a list, like so:
def flatten(values):
    return list(values)

The normal case would be flatten(someiterablethatisn'tastring)
But if values = '1234', I'd get ['1', '2', '3', '4'], but I'd want ['1234']
And if values = 1, I'd get TypeError: 'int' object is not iterable, but I'd want [1]
Is there an elegant way to do this? What I really want to do in the end is just '\t'.join(flatten(values))
Edit: Let me explain this better...
I wish to convert a Hadoop binary sequence file to a flat tab separated text file using dumbo. Using the output format option, -outputformat text
Dumbo is a Python wrapper around Hadoop streaming.
In short I need to write a mapper function:
def mapper(key, values)
    #do some stuff
    yield k, v

where k is a string from the first part in the key, and value is a tab separated string containing the rest of the key and the values as strings.
eg:
input: (123, [1,2,3])
output: ('123', '1\t2\t3')

or more complicated:
input: ([123, 'abc'], [1,2,3])
output: ('123', 'abc\t1\t2\t3')

The input key or value can be a primitive or a list/tuple of primitives
I'd like a "flatten" function that can deal with anything, and return a list of values.
For the output value, I'll do something like this
v = '\t'.join(list(str(s) for s in flatten(seq)))
A: Sounds like you want itertools.chain(). You will need to special-case strings, though, since they are really just iterables of characters.
Update:
This is a much simpler problem if you do it as a recursive generator. Try this:
def flatten(*seq):
    for item in seq:
        if isinstance(item, basestring):
            yield item
        else:
            try:
                it = iter(item)
            except TypeError:
                yield item
                it = None
            if it is not None:
                for obj in flatten(*it):
                    yield obj

This returns an iterator instead of a list, but it's lazily evaluated, which is probably what you want anyway. If you really need a list, just use list(flatten(seq)) instead.
Update 2:
As others have pointed out, if what you really want is to pass this into str.join(), then you will need to convert all the elements to strings. To do that, you can either replace yield foo with yield str(foo) throughout my example above, or just use code like the following:
"\t".join(str(o) for o in flatten(seq))

A: Based on your restated question, this mapper function might do what you want:
def mapper(key, values):
    r"""Specification: do some stuff yield k, v where k is a string from the
    first part in the key, and value is a tab separated string containing the
    rest of the key and the values as strings.

    >>> mapper(123, [1,2,3])
    ('123', '1\t2\t3')

    >>> mapper([123, 'abc'], [1,2,3])
    ('123', 'abc\t1\t2\t3')
    """
    if not isinstance(key, list):
        key = [key]
    k, v = key[0], key[1:]
    v.extend(values)
    return str(k), '\t'.join(map(str, v))

if __name__ == '__main__':
    import doctest
    doctest.testmod()

It looks like you'll probably want to change that return to a yield. This also assumes that the input key will always be a single item or a list of items (not a list of lists) and that the input values will always be a list of items (again, not a list of lists).
Does that meet your requirements?
A: I must say that the stated requirements are odd, and I don't think flatten is the right name for this kind of operation. But if you're really sure that this is what you want, then this is what I can distil from your question:
>>> import itertools 
>>> def to_list_of_strings(input):
... 
if isinstance(input, basestring): # In Py3k: isinstance(input, str) ... return [input] ... try: ... return itertools.chain(*map(to_list_of_strings, input)) ... except TypeError: ... return [str(input)] ... >>> '\t'.join(to_list_of_strings(8)) '8' >>> '\t'.join(to_list_of_strings((1, 2))) '1\t2' >>> '\t'.join(to_list_of_strings("test")) 'test' >>> '\t'.join(to_list_of_strings(["test", "test2"])) 'test\ttest2' >>> '\t'.join(to_list_of_strings(range(4))) '0\t1\t2\t3' >>> '\t'.join(to_list_of_strings([1, 2, (3, 4)])) '1\t2\t3\t4'
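A quick check of the corrected recursive generator from the answers above (Python 2); note the * in flatten(*it), which recurses into the iterator's elements rather than into the iterator object itself:
def flatten(*seq):
    for item in seq:
        if isinstance(item, basestring):
            yield item           # strings pass through whole
        else:
            try:
                it = iter(item)
            except TypeError:
                yield item       # non-iterable primitive
            else:
                for obj in flatten(*it):
                    yield obj    # recurse into the elements

print '\t'.join(str(s) for s in flatten(1))        # 1
print '\t'.join(str(s) for s in flatten('1234'))   # 1234
print '\t'.join(str(s) for s in flatten([123, 'abc'], [1, 2, 3]))
# 123   abc 1   2   3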
How to convert dumbo sequence file input to tab separated text
I have an input, which could be a single primitive or a list or tuple of primitives. I'd like to flatten it to just a list, like so:
def flatten(values):
    return list(values)

The normal case would be flatten(someiterablethatisn'tastring)
But if values = '1234', I'd get ['1', '2', '3', '4'], but I'd want ['1234']
And if values = 1, I'd get TypeError: 'int' object is not iterable, but I'd want [1]
Is there an elegant way to do this? What I really want to do in the end is just '\t'.join(flatten(values))
Edit: Let me explain this better...
I wish to convert a Hadoop binary sequence file to a flat tab separated text file using dumbo. Using the output format option, -outputformat text
Dumbo is a Python wrapper around Hadoop streaming.
In short I need to write a mapper function:
def mapper(key, values)
    #do some stuff
    yield k, v

where k is a string from the first part in the key, and value is a tab separated string containing the rest of the key and the values as strings.
eg:
input: (123, [1,2,3])
output: ('123', '1\t2\t3')

or more complicated:
input: ([123, 'abc'], [1,2,3])
output: ('123', 'abc\t1\t2\t3')

The input key or value can be a primitive or a list/tuple of primitives
I'd like a "flatten" function that can deal with anything, and return a list of values.
For the output value, I'll do something like this
v = '\t'.join(list(str(s) for s in flatten(seq)))
[ "Sounds like you want itertools.chain(). You will need to special-case strings, though, since they are really just iterables of characters.\nUpdate:\nThis is a much simpler problem if you do it as a recursive generator. Try this:\ndef flatten(*seq):\n for item in seq:\n if isinstance(item, basestring):\n yield item\n else:\n try:\n it = iter(item)\n except TypeError:\n yield item\n it = None\n if it is not None:\n for obj in flatten(it):\n yield obj\n\nThis returns an iterator instead of a list, but it's lazily evaluated, which is probably what you want anyway. If you really need a list, just use list(flatten(seq)) instead.\nUpdate 2:\nAs others have pointed out, if what you really want is to pass this into str.join(), then you will need to convert all the elements to strings. To do that, you can either replace yield foo with yield str(foo) throughout my example above, or just use code like the following:\n\"\\t\".join(str(o) for o in flatten(seq))\n\n", "Based on your restated question, this mapper function might do what you want:\ndef mapper(key, values):\n r\"\"\"Specification: do some stuff yield k, v where k is a string from the\n first part in the key, and value is a tab separated string containing the\n rest of the key and the values as strings.\n\n >>> mapper(123, [1,2,3])\n ('123', '1\\t2\\t3')\n\n >>> mapper([123, 'abc'], [1,2,3])\n ('123', 'abc\\t1\\t2\\t3')\n \"\"\"\n if not isinstance(key, list):\n key = [key]\n k, v = key[0], key[1:]\n v.extend(values)\n return str(k), '\\t'.join(map(str, v))\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod()\n\nIt looks like you'll probably want to change that return to a yield. This also assumes that the input key will always be a single item or a list of items (not a list of lists) and that the input values will always be a list of items (again, not a list of lists).\nDoes that meet your requirements?\n", "I must say that the stated requirements are odd, and I don't think flatten is the right name for this kind of operation. But if you're really sure that this is what you want, then this is what I can distil from your question:\n>>> import itertools \n>>> def to_list_of_strings(input):\n... if isinstance(input, basestring): # In Py3k: isinstance(input, str)\n... return [input]\n... try:\n... return itertools.chain(*map(to_list_of_strings, input))\n... except TypeError:\n... return [str(input)]\n... \n>>> '\\t'.join(to_list_of_strings(8))\n'8'\n>>> '\\t'.join(to_list_of_strings((1, 2)))\n'1\\t2'\n>>> '\\t'.join(to_list_of_strings(\"test\"))\n'test'\n>>> '\\t'.join(to_list_of_strings([\"test\", \"test2\"]))\n'test\\ttest2'\n>>> '\\t'.join(to_list_of_strings(range(4)))\n'0\\t1\\t2\\t3'\n>>> '\\t'.join(to_list_of_strings([1, 2, (3, 4)]))\n'1\\t2\\t3\\t4'\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "hadoop", "python", "text" ]
stackoverflow_0001625757_hadoop_python_text.txt
Q: Would it be a good idea or bad idea to connect a VB.NET frontend with a Python backend using sockets? I have some really nice Python code to do what I need to do. I don't particularly like any of the Python GUI choices though. wxPython is nice, but for what I need, the speed on resizing, refreshing and dynamically adding controls just isn't there. I would like to create the GUI in VB.NET. I imagine I could use IronPython to link the two, but that creates a dependency on a rather large third-party product. I was perusing the MSDN documentation on Windows IPC and got the idea to use sockets. I copied the Python echo server code from the Python documentation and in under 5 minutes was able to create a client in VB.NET without even reading the System.Net.Sockets documentation, so this certainly doesn't seem too hard. The question I have is... is this a terrible idea? If so, what should I be doing instead? If this is a good idea, how do I go about it? A: It's not a terrible idea. In fact, if you write the Python code to have a RESTful interface, and then access that from VB.NET, it is a downright good idea. Later on you could reuse that Python server from any other application written in Python or VB.NET or something else. Because REST is standard and easy to test, people can even do GETs from a browser and maybe that will be useful in itself. Here is a Yahoo page that gives you code examples to do REST GET, POST and so on, in VB.NET. If you think REST has too much overhead and need something more lightweight, please don't try to invent your own protocol. Consider something like Google's Protocol Buffers which can also be used from VB.NET. A: I think this is an excellent idea. I'll second Michael Dillon's recommendation for a REST API, and I'll further recommend that you use Django to implement your REST server. I wrote a REST web service using Django, and Django made it really easy and fun. Django made it really simple to set up the URLs the way I wanted them, to run whatever code a URL called for, and to interact with the database as needed. My web service was rock solid reliable, and I was able to test it for debugging simply using a web browser. If you already have your code working in Python and just want to slap on a glue interface, and if REST doesn't seem like what you want, you could look at the Twisted networking framework. Here is a nice article on how to do networking in Python with both the standard Python modules and with Twisted.
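For reference, a minimal version of the stdlib-style echo server the question mentions copying (Python 2; the host and port are arbitrary choices, and a real service would likely want the REST layering suggested in the answers above):
import socket

HOST, PORT = '127.0.0.1', 50007   # arbitrary local address
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
conn, addr = server.accept()
print 'Connected by', addr
while True:
    data = conn.recv(1024)
    if not data:
        break
    conn.sendall(data)   # echo back whatever the VB.NET client sent
conn.close()
server.close()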
Would it be a good idea or bad idea to connect a VB.NET frontend with a Python backend using sockets?
I have some really nice Python code to do what I need to do. I don't particularly like any of the Python GUI choices though. wxPython is nice, but for what I need, the speed on resizing, refreshing and dynamically adding controls just isn't there. I would like to create the GUI in VB.NET. I imagine I could use IronPython to link the two, but that creates a dependency on a rather large third-party product. I was perusing the MSDN documentation on Windows IPC and got the idea to use sockets. I copied the Python echo server code from the Python documentation and in under 5 minutes was able to create a client in VB.NET without even reading the System.Net.Sockets documentation, so this certainly doesn't seem too hard. The question I have is... is this a terrible idea? If so, what should I be doing instead? If this is a good idea, how do I go about it?
[ "It's not a terrible idea. In fact, if you write the Python code to have a RESTful interface, and then access that from VB.NET, it is a downright good idea. Later on you could reuse that Python server from any other application written in Python or VB.NET or something else. Because REST is standard and easy to test, people can even do GETs from a browser and maybe that will be useful in itself.\nHere is a Yahoo page that gives you code examples to do REST GET, POST and so on, in VB.NET.\nIf you think REST has too much overhead and need something more lightweight, please don't try to invent your own protocol. Consider something like Google's Protocol Buffers which can also be used from VB.NET.\n", "I think this is an excellent idea. I'll second Michael Dillon's recommendation for a REST API, and I'll further recommend that you use Django to implement your REST server.\nI wrote a REST web service using Django, and Django made it really easy and fun. Django made it really simple to set up the URLs the way I wanted them, to run whatever code a URL called for, and to interact with the database as needed. My web service was rock solid reliable, and I was able to test it for debugging simply using a web browser.\nIf you already have your code working in Python and just want to slap on a glue interface, and if REST doesn't seem like what you want, you could look at the Twisted networking framework. Here is a nice article on how to do networking in Python with both the standard Python modules and with Twisted.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "sockets", "user_interface", "vb.net" ]
stackoverflow_0001626631_python_sockets_user_interface_vb.net.txt
Q: How do I extract an ieee-be binary file embedded in a zipfile? I have a set of zip files which contain several ieee-be encoded binary and text files. I have used Python's ZipFile module and can extract the contents of the text file
def readPropFile(myZipFile):
  zf = zipfile.ZipFile(myZipFile,'r') # Open zip file for reading
  zFileList=zf.namelist() # extract list of files embedded in _myZipFile_ 
  # text files in _myZipFile_ contain the word 'properties' so
  # get a list of the property files here
  for f in zFileList:
    if f.find('properties')>0:
      propFileList.append(f)
  # open first file in propFileList
  pp2 = cStringIO.StringIO(zf.read(propFileList[0]))
  fileLines = []
  for ll in pp2:
    fileLines.append(ll)
  # return the lines in the property text file
  return fileLines

Now I would like to do the same sort of thing except reading the data in the binary files and creating an array of floats. So how would I proceed?
Update 1 
The format of the binary files is such that in MATLAB, after extracting them to a temporary location, I can read them with the following
>>fid=fopen('dataFile.bin','r','ieee-be');
>>dat=fread(fid,[1 inf],'float');

Update 2 
I now have a simple function in an attempt to read the binary data, something like 
def readBinaryFile(myZipFile):
  zFile = zipfile.ZipFile(myZipFile,'r')
  dataFileName = 'dataFile.bin'
  stringData = zFile.read(dataFileName)
  ss=stringData[0:4]
  data=struct.unpack('>f',ss)

and the value I get is the same as the value reported in MATLAB.
Update 3 
first float in my binary file 
HEX value: BD 98 99 3D
float : -.07451103
A: Most of what you need is in this answer How do I convert a Python float to a hexadecimal string in python 2.5? Nonworking solution attached
See the stuff about struct.pack.
More details on struct are in the Python docs
A: You could also try the Numpy extension (here), which is a bit lighter than SciPy. Numpy has lots of I/O routines. For example,
import numpy
f = file ('example.dat')
data_type = numpy.dtype ('float32').newbyteorder ('>')
x = numpy.fromfile (f, dtype=data_type)

gives you a numpy array. There's probably a less-clunky way to specify the data type.
A: In the snippet from the question, the "properties files" (whatever that is) are detected, in a rather loose fashion, by the presence of the string 'properties' in their contents, when read as text. I don't know what the equivalent of this would be for binary IEEE-le files.
However, with Python, an easy way to read ieee-le (or other formats) files is with SciPy's io.fopen module.
Edit
Since anyway reading such a binary file requires one to know the structure, you can express this in a struct.pack() format as described in Michael Dillon's response! This only requires the std library, and is just as easy!
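A sketch extending the asker's readBinaryFile so it unpacks every 4-byte big-endian float in the buffer instead of only the first one (Python 2; assumes the embedded file holds nothing but IEEE big-endian singles, and 'archive.zip' is a hypothetical name):
import struct
import zipfile

def readBinaryFile(myZipFile, dataFileName='dataFile.bin'):
    zFile = zipfile.ZipFile(myZipFile, 'r')
    raw = zFile.read(dataFileName)
    count = len(raw) // 4                      # number of 4-byte floats
    return struct.unpack('>%df' % count, raw)  # tuple of Python floats

# dat = readBinaryFile('archive.zip')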
How do I extract an ieee-be binary file embedded in a zipfile?
I have a set of zip files which contain several ieee-be encoded binary and text files. I have used Python's ZipFile module and can extract the contents of the text file
def readPropFile(myZipFile):
  zf = zipfile.ZipFile(myZipFile,'r') # Open zip file for reading
  zFileList=zf.namelist() # extract list of files embedded in _myZipFile_ 
  # text files in _myZipFile_ contain the word 'properties' so
  # get a list of the property files here
  for f in zFileList:
    if f.find('properties')>0:
      propFileList.append(f)
  # open first file in propFileList
  pp2 = cStringIO.StringIO(zf.read(propFileList[0]))
  fileLines = []
  for ll in pp2:
    fileLines.append(ll)
  # return the lines in the property text file
  return fileLines

Now I would like to do the same sort of thing except reading the data in the binary files and creating an array of floats. So how would I proceed?
Update 1 
The format of the binary files is such that in MATLAB, after extracting them to a temporary location, I can read them with the following
>>fid=fopen('dataFile.bin','r','ieee-be');
>>dat=fread(fid,[1 inf],'float');

Update 2 
I now have a simple function in an attempt to read the binary data, something like 
def readBinaryFile(myZipFile):
  zFile = zipfile.ZipFile(myZipFile,'r')
  dataFileName = 'dataFile.bin'
  stringData = zFile.read(dataFileName)
  ss=stringData[0:4]
  data=struct.unpack('>f',ss)

and the value I get is the same as the value reported in MATLAB.
Update 3 
first float in my binary file 
HEX value: BD 98 99 3D
float : -.07451103
[ "Most of what you need is in this answer How do I convert a Python float to a hexadecimal string in python 2.5? Nonworking solution attached\nSee the stuff about struct.pack.\nMore details on struct are in the Python docs\n", "You could also try the Numpy extension (here), which is a bit lighter than SciPy. Numpy has lots of I/O routines. For example,\nimport numpy\nf = file ('example.dat')\ndata_type = numpy.dtype ('float32').newbyteorder ('>')\nx = numpy.fromfile (f, dtype=data_type)\n\ngives you a numpy array. There's probably a less-clunky way to specify the data type.\n", "In the snippet from the question, the \"properties files\" (whatever that is) are detected, in a rather loose fashion, by the presence of the string 'properties' in their contents, when read as text. I don't know what the equivalent of this would be for binary IEEE-le files.\nHowever, with Python, a easy way to read ieee-le (or other formats) files is with SciPy's io.fopen module.\nEdit\nSince anyway reading such a binary file requires one to know the structure, you can express this in a struct.pack() format as desribed in Michael Dillon's response! This only requires the std library, and is just as easy!\n" ]
[ 2, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001627376_python.txt
Q: Problem in importing classes from models file in GAE I am very new to Python and after a brief intro to Python + Google App Engine, I've started to work on a pilot project. I have bulkloaded 2 entities UserDetails and PhoneBook with data onto the app engine. Now in my UI I try to first take in the user name then query it to get the user id from UserDetails, then using the retrieved user id I query the PhoneBook to get his phone book entries. Here's my code for the UI,
#!/usr/bin/env python

import wsgiref.handlers

from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template

from models import UserDetails,PhoneBook
#import models

class showPhoneBook(db.Model):
    """ class to store the user_name to db """
    user_name = db.StringProperty(required=True)

class MyHandler(webapp.RequestHandler):
    def get(self):
        """ Query type 1 """
        #p = db.GqlQuery('SELECT * FROM UserDetails WHERE user_name = :1', user_name)
        """ Query type 2 """
        #p = UserDetails.gql('WHERE user_name = :1', user_name)
        """ Query type 3 """
        p = UserDetails.all().filter('user_name = ', user_name)

        result1 = p.get()

        for itr1 in result1:
            userId = itr.user_id

        """ Query type 1 """
        #q = db.GqlQuery('SELECT * FROM PhoneBook WHERE user_id = :1', userId)
        """ Query type 2 """
        #q = PhoneBook.gql('WHERE user_id = :1', userId)
        """ Query type 3 """
        q = PhoneBook.all().filter('user_id = ', userId)

        values = {
            'phoneBookValues': q
        }

        self.response.out.write(
            template.render('phonebook.html', values))

    def post(self):
        phoneBookuserID = showPhoneBook(
            user_name = self.request.get('username'))
        phonebookuserID.put()

        self.redirect('/')

def main():
    app = webapp.WSGIApplication([
        (r'.*',MyHandler)], debug=True)
    wsgiref.handlers.CGIHandler().run(app)

if __name__ == "__main__":
    main()

The problem is that I get this error in my logs that GAE can't import the UserDetails/phone class from models,
<type 'exceptions.ImportError'>: cannot import name UserDetails
Traceback (most recent call last):
  File "/base/data/home/apps/bulkloader160by2/1-4.337299749289926105/main.py", line 7, in <module>
    from models import UserDetails,PhoneBook

and when I remove from models import UserDetails,PhoneBook and use import models, I get,
global name 'UserDetails' is not defined
Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 507, in __call__
    handler.get(*groups)
  File "/base/data/home/apps/bulkloader160by2/1-4.337300095868541686/main.py", line 19, in get
    p = UserDetails.all().filter('user_name = ', user_name)
NameError: global name 'UserDetails' is not defined

This is my models.py file where I have my UserDetails and PhoneBook classes defined. I've stored this file in the root directory.
#!/usr/bin/env python

from google.appengine.ext import db

#Table structure of User Details table
class UserDetails(db.Model):
    user_id = db.IntegerProperty(required = True)
    user_name = db.StringProperty(required = True)
    mobile_number = db.PhoneNumberProperty(required = True)

#Table structure of Phone Book table
class PhoneBook(db.Model):
    contact_id = db.IntegerProperty(required=True)
    user_id = db.IntegerProperty(required=True)
    contact_name = db.StringProperty(required=True)
    contact_number = db.PhoneNumberProperty(required=True)

Being new to Python, I am not able to figure out why I am not able to import my classes from my models.py file and how to set/get the user_name from post(self) to get(self). 
Please forgive me if I'm being naive but your help is most appreciated if you can help me in setting my code right. Thanks. A: The first error you're getting occurs when it can import the module ('models') fine, but can't find the member ('UserDetails') in it. This would generally be the case if you'd mistyped the name of the class, or hadn't defined it in that module. The second error is expected, because you're importing the 'models' module, but then trying to refer to UserDetails by its unqualified name - you need to refer to it as 'models.UserDetails' if you import just the module. Based on the code you've provided, though, I can't see anything wrong, which leads me to believe it's something you've changed for the snippets you pasted. Double-check that UserDetails is defined correctly in your models.py file. You might also want to try the following: import models logging.warn(models.__file__) This will show you the path to the models module, which will let you verify there's not another module by the same name somewhere that you're accidentally importing instead. A: If models.py is in a subdirectory you might need to add it to the system path, such as: sys.path.append(os.path.join(os.path.dirname(__file__), "lib")) If models is the directory and the classes are in UserDetails.py etc. you may need to: sys.path.append(os.path.join(os.path.dirname(__file__), "models")) import UserDetails
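The two import styles the answers contrast, side by side (a fragment, assuming the App Engine SDK and the models.py shown above; user_name is a placeholder value):
user_name = 'some_user'  # placeholder

# style 1: import the names, use them unqualified
from models import UserDetails, PhoneBook
p = UserDetails.all().filter('user_name = ', user_name)

# style 2: import the module, qualify every reference
import models
p = models.UserDetails.all().filter('user_name = ', user_name)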
Problem in importing classes from models file in GAE
I am very new to Python and after a brief intro to Python + Google App Engine, I've started to work on a pilot project. I have bulkloaded 2 entities UserDetails and PhoneBook with data onto the app engine. Now in my UI I try to first take in the user name then query it to get the user id from UserDetails, then using the retrieved user id I query the PhoneBook to get his phone book entries. Here's my code for the UI,
#!/usr/bin/env python

import wsgiref.handlers

from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template

from models import UserDetails,PhoneBook
#import models

class showPhoneBook(db.Model):
    """ class to store the user_name to db """
    user_name = db.StringProperty(required=True)

class MyHandler(webapp.RequestHandler):
    def get(self):
        """ Query type 1 """
        #p = db.GqlQuery('SELECT * FROM UserDetails WHERE user_name = :1', user_name)
        """ Query type 2 """
        #p = UserDetails.gql('WHERE user_name = :1', user_name)
        """ Query type 3 """
        p = UserDetails.all().filter('user_name = ', user_name)

        result1 = p.get()

        for itr1 in result1:
            userId = itr.user_id

        """ Query type 1 """
        #q = db.GqlQuery('SELECT * FROM PhoneBook WHERE user_id = :1', userId)
        """ Query type 2 """
        #q = PhoneBook.gql('WHERE user_id = :1', userId)
        """ Query type 3 """
        q = PhoneBook.all().filter('user_id = ', userId)

        values = {
            'phoneBookValues': q
        }

        self.response.out.write(
            template.render('phonebook.html', values))

    def post(self):
        phoneBookuserID = showPhoneBook(
            user_name = self.request.get('username'))
        phonebookuserID.put()

        self.redirect('/')

def main():
    app = webapp.WSGIApplication([
        (r'.*',MyHandler)], debug=True)
    wsgiref.handlers.CGIHandler().run(app)

if __name__ == "__main__":
    main()

The problem is that I get this error in my logs that GAE can't import the UserDetails/phone class from models,
<type 'exceptions.ImportError'>: cannot import name UserDetails
Traceback (most recent call last):
  File "/base/data/home/apps/bulkloader160by2/1-4.337299749289926105/main.py", line 7, in <module>
    from models import UserDetails,PhoneBook

and when I remove from models import UserDetails,PhoneBook and use import models, I get,
global name 'UserDetails' is not defined
Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 507, in __call__
    handler.get(*groups)
  File "/base/data/home/apps/bulkloader160by2/1-4.337300095868541686/main.py", line 19, in get
    p = UserDetails.all().filter('user_name = ', user_name)
NameError: global name 'UserDetails' is not defined

This is my models.py file where I have my UserDetails and PhoneBook classes defined. I've stored this file in the root directory.
#!/usr/bin/env python

from google.appengine.ext import db

#Table structure of User Details table
class UserDetails(db.Model):
    user_id = db.IntegerProperty(required = True)
    user_name = db.StringProperty(required = True)
    mobile_number = db.PhoneNumberProperty(required = True)

#Table structure of Phone Book table
class PhoneBook(db.Model):
    contact_id = db.IntegerProperty(required=True)
    user_id = db.IntegerProperty(required=True)
    contact_name = db.StringProperty(required=True)
    contact_number = db.PhoneNumberProperty(required=True)

Being new to Python, I am not able to figure out why I am not able to import my classes from my models.py file and how to set/get the user_name from post(self) to get(self). Please forgive me if I'm being naive but your help is most appreciated if you can help me in setting my code right. Thanks.
[ "The first error you're getting occurs when it can import the module ('models') fine, but can't find the member ('UserDetails') in it. This would generally be the case if you'd mistyped the name of the class, or hadn't defined it in that module.\nThe second error is expected, because you're importing the 'models' module, but then trying to refer to UserDetails by its unqualified name - you need to refer to it as 'models.UserDetails' if you import just the module.\nBased on the code you've provided, though, I can't see anything wrong, which leads me to believe it's something you've changed for the snippets you pasted. Double-check that UserDetails is defined correctly in your models.py file. You might also want to try the following:\nimport models\nlogging.warn(models.__file__)\n\nThis will show you the path to the models module, which will let you verify there's not another module by the same name somewhere that you're accidentally importing instead.\n", "If models.py is in a subdirectory you might need to add it to the system path, such as:\nsys.path.append(os.path.join(os.path.dirname(__file__), \"lib\"))\n\nIf models is the directory and the classes are in UserDetails.py etc. you may need to:\nsys.path.append(os.path.join(os.path.dirname(__file__), \"models\"))\nimport UserDetails\n\n" ]
[ 1, 0 ]
[]
[]
[ "google_app_engine", "google_cloud_datastore", "python" ]
stackoverflow_0001623941_google_app_engine_google_cloud_datastore_python.txt
Q: Python, Perl And C/C++ With GUI I'm now thinking, is it possible to integrate Python, Perl and C/C++ and also do a GUI application with this very nice mix of languages?
A: Well, there is Wx, Inline::Python and Inline::C, but the question is why?
A: Anything is "possible", but whether it is necessary or beneficial is debatable and highly depends on your requirements. Don't mix if you don't need to. Use the language that best fits the domain or target requirements. 
I can't think of a scenario where one needs to mix Python and Perl as their domain is largely the same.
Using C/C++ can be beneficial in cases where you need hardcore system integration or specialized machine dependent services. Or when you need to extend Python or Perl itself (both are written in C/C++).
EDIT: if you want to do a GUI application, it is probably easier to choose a language that fits the OS you want your GUI to run in. I.e. something like (but not limited to) C# for Windows, Objective-C for iPhone or Mac, Qt + C++ for Linux etc.
A: There's always Parrot. Here's the Wikipedia page. It's a VM to allow you to access your favorite libraries from different languages in one application.
A: Everything is possible - but why add two and a half more levels of complexity?
A: Python & Perl? together?
I can only think of an editor.
Python, Perl And C/C++ With GUI
I'm now thinking, is it possible to integrate Python, Perl and C/C++ and also do a GUI application with this very nice mix of languages?
[ "Well, there is Wx, Inline::Python and Inline::C, but the question is why?\n", "Anything is \"possible\", but whether it is necessary or beneficial is debatable and highly depends on your requirements. Don't mix if you don't need to. Use the language that best fits the domain or target requirements. \nI can't think of a scenario where one needs to mix Python and Perl as their domain is largely the same.\nUsing C/C++ can be beneficial in cases where you need hardcore system integration or specialized machine dependent services. Or when you need to extend Python or Perl itself (both are written in C/C++).\nEDIT: if you want to do a GUI application, it is probably easier to choose a language that fits the OS you want your GUI to run in. I.e. something like (but not limited to) C# for Windows, Objective-C for iPhone or Mac, Qt + C++ for Linux etc.\n", "There's always Parrot. Here's the Wikipedia page. It's a vm to allow you access your favorite libraries from different languages in one application.\n", "Everything is possible - but why add two and a half more levels of complexity?\n", "Python & Perl? together?\nI can only think of an editor.\n" ]
[ 9, 5, 3, 1, 1 ]
[]
[]
[ "c", "c++", "integration", "perl", "python" ]
stackoverflow_0001628001_c_c++_integration_perl_python.txt
Q: Can someone explain this unexpected Python import behavior? I've run into some behavior from Python 2.6.1 that I didn't expect. Here is some trivial code to reproduce the problem:
---- ControlPointValue.py ------
class ControlPointValue:
   def __init__(self):
      pass

---- ControlPointValueSet.py ----
import ControlPointValue

---- main.py --------------------
from ControlPointValue import *
from ControlPointValueSet import *

val = ControlPointValue()

.... here is the error I get when I run main.py (under OS/X Snow Leopard, if it matters):
jeremy-friesners-mac-pro-3:~ jaf$ python main.py
Traceback (most recent call last):
  File "main.py", line 4, in <module>
    val = ControlPointValue()
TypeError: 'module' object is not callable

Can someone explain what is going on here? Is Python getting confused because the class name is the same as the file name? If so, what is the best way to fix the problem? (I'd prefer to have my python files named after the classes that are defined in them)
Thanks,
Jeremy
A: I don't think it's unexpected. What you are basically doing is:
1) the first import in main.py imports the contents of the ControlPointValue module into the global namespace. This produces a class bound to that name.
2) the second import in main.py imports the contents of the ControlPointValueSet module into the global namespace. This module imports the ControlPointValue module. This overwrites the binding in the global namespace, replacing the binding for that name from the class to the module.
To solve this, I would suggest that you not import *, ever. Always keep the last module prefix. For example, if you have foo/bar/baz/bruf.py containing a class Frobniz, do
from foo.bar.baz import bruf

and then use bruf.Frobniz()
A: In addition to the other suggestions about star imports, don't name your module and your class the same. Follow pep8's suggestions and give your modules short all lower case names and name your classes LikeThis. E.g.
---- controlpoints.py ------
class ControlPointValue:
    def __init__(self):
        pass

---- valuesets.py ----
from controlpoints import ControlPointValue

---- main.py --------------------
from controlpoints import ControlPointValue
from valuesets import *

val = ControlPointValue()
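A tiny repro of the rebinding described in the first answer, plus the qualification fix, keeping the original file names (Python 2):
from ControlPointValue import *      # binds ControlPointValue to the class
print ControlPointValue              # the class

from ControlPointValueSet import *   # that module did `import ControlPointValue`,
print ControlPointValue              # so the same name is now rebound to the module

import ControlPointValue as cpv_mod  # qualified access sidesteps the clash
val = cpv_mod.ControlPointValue()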
Can someone explain this unexpected Python import behavior?
I've run into some behavior from Python 2.6.1 that I didn't expect. Here is some trivial code to reproduce the problem: ---- ControlPointValue.py ------ class ControlPointValue: def __init__(self): pass ---- ControlPointValueSet.py ---- import ControlPointValue ---- main.py -------------------- from ControlPointValue import * from ControlPointValueSet import * val = ControlPointValue() .... here is the error I get when I run main.py (under OS/X Snow Leopard, if it matters): jeremy-friesners-mac-pro-3:~ jaf$ python main.py Traceback (most recent call last): File "main.py", line 4, in <module> val = ControlPointValue() TypeError: 'module' object is not callable Can someone explain what is going on here? Is Python getting confused because the class name is the same as the file name? If so, what is the best way to fix the problem? (I'd prefer to have my python files named after the classes that are defined in them) Thanks, Jeremy
[ "I don't think it's unexpected. What you are basically doing is:\n1) the first import in main.py imports the contents of ControlPointValue module into the global namespace. this produces a class bound to that name.\n2) the second import in main.py imports the contents of ControlPointValueSet module into the global namespace. This module imports ControlPointValue module. This overwrites the binding in the global namespace, replacing the binding for that name from the class to the module.\nTo solve, I would suggest you not to import *, ever. Always keep the last module prefix. For example, if you have foo/bar/baz/bruf.py containing a class Frobniz, do\nfrom foo.bar.baz import bruf\n\nand then use bruf.Frobniz()\n", "In addition to the other suggestions about star imports, don't name your module and your class the same. Follow pep8's suggestions and give your modules short all lower case names and name your classes LikeThis. E.g.\n---- controlpoints.py ------\nclass ControlPointValue:\n def __init__(self):\n pass\n\n---- valuesets.py ----\nfrom controlpoints import ControlPointValue\n\n---- main.py --------------------\nfrom controlpoints import ControlPointValue\nfrom valuesets import *\n\nval = ControlPointValue()\n\n" ]
[ 10, 4 ]
[]
[]
[ "import", "python" ]
stackoverflow_0001628085_import_python.txt
Q: The advantages of having static functions like len(), max(), and min() over inherited method calls I am a Python newbie, and I am not sure why Python implemented len(obj), max(obj), and min(obj) as static-like functions (I am from the Java language) over obj.len(), obj.max(), and obj.min()
what are the advantages and disadvantages (other than obvious inconsistency) of having len()... over the method calls?
why did Guido choose this over the method calls? (this could have been solved in Python 3 if needed, but it wasn't changed in Python 3, so there gotta be good reasons...I hope)
thanks!!
A: The big advantage is that built-in functions (and operators) can apply extra logic when appropriate, beyond simply calling the special methods. For example, min can look at several arguments and apply the appropriate inequality checks, or it can accept a single iterable argument and proceed similarly; abs when called on an object without a special method __abs__ could try comparing said object with 0 and using the object's change-sign method if needed (though it currently doesn't); and so forth.
So, for consistency, all operations with wide applicability must always go through built-ins and/or operators, and it's those built-ins' responsibility to look up and apply the appropriate special methods (on one or more of the arguments), use alternate logic where applicable, and so forth.
An example where this principle wasn't correctly applied (but the inconsistency was fixed in Python 3) is "step an iterator forward": in 2.5 and earlier, you needed to define and call the non-specially-named next method on the iterator. In 2.6 and later you can do it the right way: the iterator object defines __next__, the new next built-in can call it and apply extra logic, for example to supply a default value (in 2.6 you can still do it the bad old way, for backwards compatibility, though in 3.* you can't any more).
Another example: consider the expression x + y. In a traditional object-oriented language (able to dispatch only on the type of the leftmost argument -- like Python, Ruby, Java, C++, C#, &c) if x is of some built-in type and y is of your own fancy new type, you're sadly out of luck if the language insists on delegating all the logic to the method of type(x) that implements addition (assuming the language allows operator overloading;-).
In Python, the + operator (and similarly of course the builtin operator.add, if that's what you prefer) tries x's type's __add__, and if that one doesn't know what to do with y, then tries y's type's __radd__. So you can define your types that know how to add themselves to integers, floats, complex, etc etc, as well as ones that know how to add such built-in numeric types to themselves (i.e., you can code it so that x + y and y + x both work fine, when y is an instance of your fancy new type and x is an instance of some builtin numeric type).
"Generic functions" (as in PEAK) are a more elegant approach (allowing any overriding based on a combination of types, never with the crazy monomaniac focus on the leftmost arguments that OOP encourages!-), but (a) they were unfortunately not accepted for Python 3, and (b) they do of course require the generic function to be expressed as free-standing (it would be absolutely crazy to have to consider the function as "belonging" to any single type, where the whole POINT is that it can be differently overridden/overloaded based on arbitrary combination of its several arguments' types!-).
Anybody who's ever programmed in Common Lisp, Dylan, or PEAK, knows what I'm talking about;-).
So, free-standing functions and operators are just THE right, consistent way to go (even though the lack of generic functions, in bare-bones Python, does remove some fraction of the inherent elegance, it's still a reasonable mix of elegance and practicality!-).
A: It emphasizes the capabilities of an object, not its methods or type. Capabilities are declared by "helper" functions such as __iter__ and __len__ but they don't make up the interface. The interface is in the builtin functions, and besides this also in the built-in operators like + and [] for indexing and slicing.
Sometimes, it is not a one-to-one correspondence: For example, iter(obj) returns an iterator for an object, and will work even if __iter__ is not defined. If not defined, it goes on to look if the object defines __getitem__ and will return an iterator accessing the object index-wise (like an array).
This goes together with Python's Duck Typing, we care only about what we can do with an object, not that it is of a particular type.
A: Actually, those aren't "static" methods in the way you are thinking about them. They are built-in functions that really just alias to certain methods on python objects that implement them.
>>> class Foo(object):
... def __len__(self):
... return 42
... 
>>> f = Foo()
>>> len(f)
42

These are always available to be called whether the object implements them or not. The point is to have some consistency. Instead of some class having a method called length() and another called size(), the convention is to implement len and let the callers always access it by the more readable len(obj) instead of obj.methodThatDoesSomethingCommon
A: I thought the reason was so these basic operations could be done on iterators with the same interface as containers. However, it actually doesn't work with len:
def foo():
    for i in range(10):
        yield i
print len(foo())

... fails with TypeError. len() won't consume and count an iterator; it only works with objects that have a __len__ call.
So, as far as I'm concerned, len() shouldn't exist. It's much more natural to say obj.len than len(obj), and much more consistent with the rest of the language and the standard library. We don't say append(lst, 1); we say lst.append(1). Having a separate global method for length is an odd, inconsistent special case, and eats a very obvious name in the global namespace, which is a very bad habit of Python.
This is unrelated to duck typing; you can say getattr(obj, "len") to decide whether you can use len on an object just as easily--and much more consistently--than you can use getattr(obj, "__len__").
All that said, as language warts go--for those who consider this a wart--this is a very easy one to live with.
On the other hand, min and max do work on iterators, which gives them a use apart from any particular object. This is straightforward, so I'll just give an example:
import random
def foo():
    for i in range(10):
        yield random.randint(0, 100)
print max(foo())

However, there are no __min__ or __max__ methods to override its behavior, so there's no consistent way to provide efficient searching for sorted containers. If a container is sorted on the same key that you're searching, min/max are O(1) operations instead of O(n), and the only way to expose that is by a different, inconsistent method. (This could be fixed in the language relatively easily, of course.)
To follow up with another issue with this: it prevents use of Python's method binding. As a simple, contrived example, you can do this to supply a function to add values to a list: def add(f): f(1) f(2) f(3) lst = [] add(lst.append) print lst and this works on all member functions. You can't do that with min, max or len, though, since they're not methods of the object they operate on. Instead, you have to resort to functools.partial, a clumsy second-class workaround common in other languages. Of course, this is an uncommon case; but it's the uncommon cases that tell us about a language's consistency.
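A small sketch of the __add__/__radd__ double dispatch the first answer describes: a user-defined type that adds itself to built-in ints from either side of the + operator (Python 2; Fancy is an illustrative name):
class Fancy(object):
    def __init__(self, n):
        self.n = n
    def __add__(self, other):          # Fancy + int
        return Fancy(self.n + other)
    def __radd__(self, other):         # int + Fancy: int's __add__ returns
        return Fancy(other + self.n)   # NotImplemented, so our __radd__ runs
    def __repr__(self):
        return 'Fancy(%d)' % self.n

print Fancy(40) + 2    # Fancy(42), via __add__
print 2 + Fancy(40)    # Fancy(42), via __radd__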
The advantages of having static functions like len(), max(), and min() over inherited method calls
I am a Python newbie, and I am not sure why Python implemented len(obj), max(obj), and min(obj) as static-like functions (I am from the Java language) over obj.len(), obj.max(), and obj.min()
what are the advantages and disadvantages (other than obvious inconsistency) of having len()... over the method calls?
why did Guido choose this over the method calls? (this could have been solved in Python 3 if needed, but it wasn't changed in Python 3, so there gotta be good reasons...I hope)
thanks!!
[ "The big advantage is that built-in functions (and operators) can apply extra logic when appropriate, beyond simply calling the special methods. For example, min can look at several arguments and apply the appropriate inequality checks, or it can accept a single iterable argument and proceed similarly; abs when called on an object without a special method __abs__ could try comparing said object with 0 and using the object change sign method if needed (though it currently doesn't); and so forth.\nSo, for consistency, all operations with wide applicability must always go through built-ins and/or operators, and it's those built-ins responsibility to look up and apply the appropriate special methods (on one or more of the arguments), use alternate logic where applicable, and so forth.\nAn example where this principle wasn't correctly applied (but the inconsistency was fixed in Python 3) is \"step an iterator forward\": in 2.5 and earlier, you needed to define and call the non-specially-named next method on the iterator. In 2.6 and later you can do it the right way: the iterator object defines __next__, the new next built-in can call it and apply extra logic, for example to supply a default value (in 2.6 you can still do it the bad old way, for backwards compatibility, though in 3.* you can't any more).\nAnother example: consider the expression x + y. In a traditional object-oriented language (able to dispatch only on the type of the leftmost argument -- like Python, Ruby, Java, C++, C#, &c) if x is of some built-in type and y is of your own fancy new type, you're sadly out of luck if the language insists on delegating all the logic to the method of type(x) that implements addition (assuming the language allows operator overloading;-).\nIn Python, the + operator (and similarly of course the builtin operator.add, if that's what you prefer) tries x's type's __add__, and if that one doesn't know what to do with y, then tries y's type's __radd__. So you can define your types that know how to add themselves to integers, floats, complex, etc etc, as well as ones that know how to add such built-in numeric types to themselves (i.e., you can code it so that x + y and y + x both work fine, when y is an instance of your fancy new type and x is an instance of some builtin numeric type).\n\"Generic functions\" (as in PEAK) are a more elegant approach (allowing any overriding based on a combination of types, never with the crazy monomaniac focus on the leftmost arguments that OOP encourages!-), but (a) they were unfortunately not accepted for Python 3, and (b) they do of course require the generic function to be expressed as free-standing (it would be absolutely crazy to have to consider the function as \"belonging\" to any single type, where the whole POINT is that can be differently overridden/overloaded based on arbitrary combination of its several arguments' types!-). Anybody who's ever programmed in Common Lisp, Dylan, or PEAK, knows what I'm talking about;-).\nSo, free-standing functions and operators are just THE right, consistent way to go (even though the lack of generic functions, in bare-bones Python, does remove some fraction of the inherent elegance, it's still a reasonable mix of elegance and practicality!-).\n", "It emphasizes the capabilities of an object, not its methods or type. Capabilites are declared by \"helper\" functions such as __iter__ and __len__ but they don't make up the interface. 
The interface is in the builtin functions, and beside this also in the buit-in operators like + and [] for indexing and slicing.\nSometimes, it is not a one-to-one correspondance: For example, iter(obj) returns an iterator for an object, and will work even if __iter__ is not defined. If not defined, it goes on to look if the object defines __getitem__ and will return an iterator accessing the object index-wise (like an array).\nThis goes together with Python's Duck Typing, we care only about what we can do with an object, not that it is of a particular type.\n", "Actually, those aren't \"static\" methods in the way you are thinking about them. They are built-in functions that really just alias to certain methods on python objects that implement them.\n>>> class Foo(object):\n... def __len__(self):\n... return 42\n... \n>>> f = Foo()\n>>> len(f)\n42\n\nThese are always available to be called whether or not the object implements them or not. The point is to have some consistency. Instead of some class having a method called length() and another called size(), the convention is to implement len and let the callers always access it by the more readable len(obj) instead of obj.methodThatDoesSomethingCommon\n", "I thought the reason was so these basic operations could be done on iterators with the same interface as containers. However, it actually doesn't work with len:\ndef foo():\n for i in range(10):\n yield i\nprint len(foo())\n\n... fails with TypeError. len() won't consume and count an iterator; it only works with objects that have a __len__ call.\nSo, as far as I'm concerned, len() shouldn't exist. It's much more natural to say obj.len than len(obj), and much more consistent with the rest of the language and the standard library. We don't say append(lst, 1); we say lst.append(1). Having a separate global method for length is an odd, inconsistent special case, and eats a very obvious name in the global namespace, which is a very bad habit of Python.\nThis is unrelated to duck typing; you can say getattr(obj, \"len\") to decide whether you can use len on an object just as easily--and much more consistently--than you can use getattr(obj, \"__len__\").\nAll that said, as language warts go--for those who consider this a wart--this is a very easy one to live with.\nOn the other hand, min and max do work on iterators, which gives them a use apart from any particular object. This is straightforward, so I'll just give an example:\nimport random\ndef foo():\n for i in range(10):\n yield random.randint(0, 100)\nprint max(foo())\n\nHowever, there are no __min__ or __max__ methods to override its behavior, so there's no consistent way to provide efficient searching for sorted containers. If a container is sorted on the same key that you're searching, min/max are O(1) operations instead of O(n), and the only way to expose that is by a different, inconsistent method. (This could be fixed in the language relatively easily, of course.)\nTo follow up with another issue with this: it prevents use of Python's method binding. As a simple, contrived example, you can do this to supply a function to add values to a list:\ndef add(f):\n f(1)\n f(2)\n f(3)\nlst = []\nadd(lst.append)\nprint lst\n\nand this works on all member functions. You can't do that with min, max or len, though, since they're not methods of the object they operate on. 
Instead, you have to resort to functools.partial, a clumsy second-class workaround common in other languages.\nOf course, this is an uncommon case; but it's the uncommon cases that tell us about a language's consistency.\n" ]
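The dispatch machinery described in these answers is easy to demonstrate. Below is a minimal, self-contained sketch (the Wallet and Money classes are invented purely for illustration; only len(), __len__, __add__ and __radd__ are Python's own protocol):

    class Wallet(object):
        def __init__(self, coins):
            self.coins = list(coins)

        def __len__(self):
            # The built-in len() looks this special method up on the type.
            return len(self.coins)

    class Money(object):
        def __init__(self, amount):
            self.amount = amount

        def __add__(self, other):
            # Handles money + money and money + number.
            if isinstance(other, Money):
                return Money(self.amount + other.amount)
            if isinstance(other, (int, float)):
                return Money(self.amount + other)
            return NotImplemented  # tells Python to try the other operand

        def __radd__(self, other):
            # Handles number + money, after int.__add__ returns NotImplemented.
            return self.__add__(other)

        def __repr__(self):
            return 'Money(%r)' % self.amount

    print(len(Wallet([1, 2, 5])))   # 3, via Wallet.__len__
    print(Money(10) + 5)            # Money(15), via Money.__add__
    print(5 + Money(10))            # Money(15), via Money.__radd__

The last line is exactly the scenario the first answer describes: int.__add__ does not know about Money, so the free-standing + operator gives the right-hand operand a second chance via __radd__.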
[ 20, 3, 3, 1 ]
[]
[]
[ "language_design", "python" ]
stackoverflow_0001628222_language_design_python.txt
Q: Python mkdir giving me wrong permissions I'm trying to create a folder and create a file within it. Whenever I create that folder (via Python), it creates a folder that gives me no permissions at all and is in read-only mode. When I try to create the file I get an IOError. Error: <type 'exceptions.IOError'> I tried creating (and searching for) a description of all the other modes (besides 0770). Can anyone shed some light? What are the other mode codes? A: After you create the folder you can set the permissions with os.chmod The mode is written in base 8; if you convert it to binary it would be 000 111 111 000 rwx rwx rwx The first rwx is for the owner, the second is for the group and the third is for world r=read, w=write, x=execute The permissions you see most often are 7 read/write/execute - you need execute for directories to see the contents 6 read/write 4 readonly When you use os.chmod it makes most sense to use octal notation so os.chmod('myfile',0o666) # read/write by everyone os.chmod('myfile',0o644) # read/write by me, readable for everyone else Remember I said you usually want directories to be "executable" so you can see the contents. os.chmod('mydir',0o777) # read/write by everyone os.chmod('mydir',0o755) # read/write by me, readable for everyone else Note: The syntax of 0o777 is for Python 2.6 and 3+; otherwise for the 2 series it is 0777. 2.6 accepts either syntax so the one you choose will depend on whether you want to be forward or backward compatible. A: You've probably got a funky umask. Try os.umask(0002) before making your directory. A: The Python manual says: os.mkdir(path[, mode]) Create a directory named path with numeric mode mode. The default mode is 0777 (octal). On some systems, mode is ignored. Where it is used, the current umask value is first masked out. Availability: Unix, Windows. Have you specified a mode - which mode did you specify? Did you consider specifying a mode explicitly? And what is the program's umask value set to? A: Since you're on Windows, this might be a crapshoot. Make sure there aren't any wacky special permissions on the parent directory or with the policy settings that define the permissions any directories created by your account get. I doubt this is a Python problem as I haven't been able to recreate the problem on Windows with a relatively vanilla Vista install.
Python mkdir giving me wrong permissions
I'm trying to create a folder and create a file within it. Whenever I create that folder (via Python), it creates a folder that gives me no permissions at all and is in read-only mode. When I try to create the file I get an IOError. Error: <type 'exceptions.IOError'> I tried creating (and searching for) a description of all the other modes (besides 0770). Can anyone shed some light? What are the other mode codes?
[ "After you create the folder you can set the permissions with os.chmod\nThe mod is written in base 8, if you convert it to binary it would be\n000 111 111 000\n rwx rwx rwx\n\nThe first rwx is for owner, the second is for the group and the third is for world \nr=read,w=write,x=execute\nThe permissions you see most often are\n7 read/write/execute - you need execute for directories to see the contents\n6 read/write\n4 readonly \nWhen you use os.chmod it makes most sense to use octal notation\nso\nos.chmod('myfile',0o666) # read/write by everyone\nos.chmod('myfile',0o644) # read/write by me, readable for everone else\n\nRemember I said you usually want directories to be \"executable\" so you can see the contents.\nos.chmod('mydir',0o777) # read/write by everyone\nos.chmod('mydir',0o755) # read/write by me, readable for everone else\n\nNote: The syntax of 0o777 is for Python 2.6 and 3+. otherwise for the 2 series it is 0777. 2.6 accepts either syntax so the one you choose will depend on whether you want to be forward or backward compatible.\n", "You've probably got a funky umask. Try os.umask(0002) before making your directory.\n", "The Python manual says:\n\nos.mkdir(path[, mode])\n\nCreate a directory named path with numeric mode mode. The default mode is 0777 (octal). On some systems, mode is ignored. Where it is used, the current umask value is first masked out. Availability: Unix, Windows.\n\nHave you specified a mode - which mode did you specify. Did you consider specifying a mode explicitly? And what is the program's umask value set to\"\n", "Since your on Windows, this might be a crapshoot. Make sure there aren't any wacky special permissions on the parent directory or with the policy settings that defines the permissions any directories created by your account get. I doubt this is a python problem as I haven't been able to recreate the problem on Windows with a relatively vanilla Vista install.\n" ]
[ 23, 5, 3, 1 ]
[]
[]
[ "filesystems", "mkdir", "python", "windows" ]
stackoverflow_0001627198_filesystems_mkdir_python_windows.txt
Q: SimpleParse non-deterministic grammar until runtime I'm working on a basic networking protocol in Python, which should be able to transfer both ASCII strings (read: EOL-terminated) and binary data. For the latter to be possible, I chose to create the grammar such that it contains the number of bytes to come which are going to be binary. For SimpleParse, the grammar would look like this [1] so far: EOL := [\n] IDENTIFIER := [a-zA-Z0-9_-]+ SIZE_INTEGER := [1-9]*[0-9]+ ASCII_VALUE := [^\n\0]+, EOL BINARY_VALUE := .*+ value := (ASCII_VALUE/BINARY_VALUE) eol_attribute := IDENTIFIER, ':', value binary_attribute := IDENTIFIER, [\t], SIZE_INTEGER, ':', value attributes := (eol_attribute/binary_attribute)+ command := IDENTIFIER, EOL command := IDENTIFIER, '{', attributes, '}' The problem is I don't know how to instruct SimpleParse that the following is going to be a chunk of binary data of SIZE_INTEGER bytes at runtime. The cause for this is the definition of the terminal BINARY_VALUE, which fulfills my needs as it is now, so it cannot be changed. Thanks Edit I suppose the solution would be telling it to stop when it matches the production binary_attribute and let me populate the AST node manually (via socket.recv()), but how to do that? Edit 2 Base64-encoding or similar is not an option. [1] I haven't tested it, so I don't know if it practically works; it's only for you to get an idea A: If the grammar is as simple as the one you quoted, then perhaps using a parser generator is overkill? You might find that rolling your own recursive parser by hand is simpler and quicker. A: If you want your application to be portable and reliable I would suggest you pass only standard ASCII characters over the wire. Different computer architectures have different binary representations, different word sizes, different character sets. There are three approaches to dealing with this. First, you can ignore the issues and hope you only ever have to implement the protocol on a single platform. Second, you can go all computer-sciency and come up with a "cardinal form" for each possible data type a la CORBA. Third, you can be practical and use the magic of "sprintf" and "scanf" to translate your data to and from plain ASCII characters when sending data over the network. I would also suggest that your protocol include a message length at or near the beginning of the message. The commonest bug in home-made protocols is the receiving partner expecting more data than was sent and subsequently waiting forever for data that was never sent. A: I strongly recommend you consider using the construct library for parsing the binary data. It also has support for text (ASCII), so when it detects text you can pass that to your SimpleParse-based parser, but the binary data will be parsed with construct. It's very convenient and powerful.
SimpleParse non-deterministic grammar until runtime
I'm working on a basic networking protocol in Python, which should be able to transfer both ASCII strings (read: EOL-terminated) and binary data. For the latter to be possible, I chose to create the grammar such that it contains the number of bytes to come which are going to be binary. For SimpleParse, the grammar would look like this [1] so far: EOL := [\n] IDENTIFIER := [a-zA-Z0-9_-]+ SIZE_INTEGER := [1-9]*[0-9]+ ASCII_VALUE := [^\n\0]+, EOL BINARY_VALUE := .*+ value := (ASCII_VALUE/BINARY_VALUE) eol_attribute := IDENTIFIER, ':', value binary_attribute := IDENTIFIER, [\t], SIZE_INTEGER, ':', value attributes := (eol_attribute/binary_attribute)+ command := IDENTIFIER, EOL command := IDENTIFIER, '{', attributes, '}' The problem is I don't know how to instruct SimpleParse that the following is going to be a chunk of binary data of SIZE_INTEGER bytes at runtime. The cause for this is the definition of the terminal BINARY_VALUE, which fulfills my needs as it is now, so it cannot be changed. Thanks Edit I suppose the solution would be telling it to stop when it matches the production binary_attribute and let me populate the AST node manually (via socket.recv()), but how to do that? Edit 2 Base64-encoding or similar is not an option. [1] I haven't tested it, so I don't know if it practically works; it's only for you to get an idea
[ "If the grammar is as simple as the one you quoted, then perhaps using a parser generator is overkill? You might find that rolling your own recursive parser by hand is simpler and quicker.\n", "If you want your application to be portable and reliable I would suggest you pass only standard ASCII characters over the wire.\nDifferent computer architectures have different binary representaions, different word sizes, different character sets. There are three approaches to dealing with this.\nFIrst you can ignore the issues and hope you only ever have to implement the protocol on a single paltform.\nTwo you can go all computer sciency and come up with a \"cardinal form\" for each possible data type ala CORBA.\nYou can be practical and use the magic of \"sprintf\" and \"scanf\" to translate your data to and from plain ASCII characters when sending data over the network.\nI would also suggest that your protocol includes a message length at or near the begining of the message. The commonest bug in home made protocols is the receiving partner expecting more data than was sent and subsequntly waiting forever for data that was never sent. \n", "I strongly recommend you consider using the construct library for parsing the binary data. It also has support for text (ASCII), so when it detects text you can pass that to your SimpleParse-based parser, but the binary data will be parsed with construct. It's very convenient and powerful.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "parsing", "python", "text_parsing" ]
stackoverflow_0001537708_parsing_python_text_parsing.txt
Q: Using the same handler for multiple wx.TextCtrls? I'm having a bit of trouble with a panel that has two wxPython TextCtrls in it. I want either an EVT_CHAR or EVT_KEY_UP handler bound to both controls, and I want to be able to tell which TextCtrl generated the event. I would think that event.Id would tell me this, but in the following sample code it's always 0. Any thoughts? I've only tested this on OS X. This code simply checks that both TextCtrls have some text in them before enabling the Done button import wx class MyFrame(wx.Frame): def __init__(self, parent, ID, title): wx.Frame.__init__(self, parent, ID, title, wx.DefaultPosition, wx.Size(200, 150)) self.panel = BaseNameEntryPanel(self) class BaseNameEntryPanel(wx.Panel): def __init__(self, parent): wx.Panel.__init__(self, parent, -1) self.entry = wx.TextCtrl(self, wx.NewId()) self.entry2 = wx.TextCtrl(self, wx.NewId()) self.donebtn = wx.Button(self, wx.NewId(), "Done") self.donebtn.Disable() vsizer = wx.BoxSizer(wx.VERTICAL) vsizer.Add(self.entry, 1, wx.EXPAND|wx.GROW) vsizer.Add(self.entry2, 1, wx.EXPAND|wx.GROW) vsizer.Add(self.donebtn, 1, wx.EXPAND|wx.GROW) self.SetSizer(vsizer) self.Fit() self.entry.Bind(wx.EVT_KEY_UP, self.Handle) self.entry2.Bind(wx.EVT_KEY_UP, self.Handle) def Handle(self, event): keycode = event.GetKeyCode() print keycode, event.Id # <- event.Id is always 0! def checker(entry): return bool(entry.GetValue().strip()) self.donebtn.Enable(checker(self.entry) and checker(self.entry2)) class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, "Hello from wxPython") frame.Show(True) self.SetTopWindow(frame) return True app = MyApp(0) app.MainLoop() A: You could try event.GetId() or event.GetEventObject() and see if either of these works. Another approach to this is to use lambda or functools.partial to effectively pass a parameter to the handler. So, for example, sub in the lines below into your program: self.entry.Bind(wx.EVT_KEY_UP, functools.partial(self.Handle, ob=self.entry)) self.entry2.Bind(wx.EVT_KEY_UP, functools.partial(self.Handle, ob=self.entry2)) def Handle(self, event, ob=None): print ob And then ob will be either entry or entry2 depending on which control the event came from. But, of course, this shouldn't be necessary, and GetId and GetEventObject() should both work -- though I don't (yet) have a Mac to try these on.
Using the same handler for multiple wx.TextCtrls?
I'm having a bit of trouble with a panel that has two wxPython TextCtrls in it. I want either an EVT_CHAR or EVT_KEY_UP handler bound to both controls, and I want to be able to tell which TextCtrl generated the event. I would think that event.Id would tell me this, but in the following sample code it's always 0. Any thoughts? I've only tested this on OS X. This code simply checks that both TextCtrls have some text in them before enabling the Done button import wx class MyFrame(wx.Frame): def __init__(self, parent, ID, title): wx.Frame.__init__(self, parent, ID, title, wx.DefaultPosition, wx.Size(200, 150)) self.panel = BaseNameEntryPanel(self) class BaseNameEntryPanel(wx.Panel): def __init__(self, parent): wx.Panel.__init__(self, parent, -1) self.entry = wx.TextCtrl(self, wx.NewId()) self.entry2 = wx.TextCtrl(self, wx.NewId()) self.donebtn = wx.Button(self, wx.NewId(), "Done") self.donebtn.Disable() vsizer = wx.BoxSizer(wx.VERTICAL) vsizer.Add(self.entry, 1, wx.EXPAND|wx.GROW) vsizer.Add(self.entry2, 1, wx.EXPAND|wx.GROW) vsizer.Add(self.donebtn, 1, wx.EXPAND|wx.GROW) self.SetSizer(vsizer) self.Fit() self.entry.Bind(wx.EVT_KEY_UP, self.Handle) self.entry2.Bind(wx.EVT_KEY_UP, self.Handle) def Handle(self, event): keycode = event.GetKeyCode() print keycode, event.Id # <- event.Id is always 0! def checker(entry): return bool(entry.GetValue().strip()) self.donebtn.Enable(checker(self.entry) and checker(self.entry2)) class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, "Hello from wxPython") frame.Show(True) self.SetTopWindow(frame) return True app = MyApp(0) app.MainLoop()
[ "You could try event.GetId() or event.GetEventObject() and see if either of these work.\nAnother approach to this is to use lambda or functools.partial to effectively pass a parameter to the handler. So, for example, sub in the lines below into your program:\n self.entry.Bind(wx.EVT_KEY_UP, functools.partial(self.Handle, ob=self.entry))\n self.entry2.Bind(wx.EVT_KEY_UP, functools.partial(self.Handle, ob=self.entry2))\n\n def Handle(self, event, ob=None):\n print ob\n\nAnd then ob will be either entry or entry2 depending on which panel is clicked. But, of course, this shouldn't be necessary, and GetId and GetEventObject() should both work -- though I don't (yet) have a Mac to try these on.\n" ]
[ 4 ]
[]
[]
[ "python", "wxpython" ]
stackoverflow_0001628010_python_wxpython.txt
Q: chaining queries together in Django I have a query that gets me 32 avatar images from my avatar application: newUserAv = Avatar.objects.filter(valid=True)[:32] I'd like to combine this with a query to Django's Auth user model, so I can get the last 32 people who have avatar images, sorted by the date joined. What is the best way to chain these two together? The avatar application was a reusable app, and its model is: image = models.ImageField(upload_to="avatars/%Y/%b/%d", storage=storage) user = models.ForeignKey(User) date = models.DateTimeField(auto_now_add=True) valid = models.BooleanField() Note that the date field is the date the avatar was updated, so it is not suitable for my purpose A: Either you put a field in your own User class (you might have to subclass User or compose with django.contrib.auth.models.User) that indicates that the User has an avatar. Then you can make your query easily. Or do something like that: from django.utils.itercompat import groupby avatars = Avatar.objects.select_related("user").filter(valid=True).order_by("-user__date_joined")[:32] grouped_users = groupby(avatars, lambda x: x.user) user_list = [] for user, avatar_list in grouped_users: user.avatar = list(avatar_list)[0] user_list.append(user) # user_list is now what you asked for in the first place: # a list of users with their avatars This assumes that one user has one and only one avatar. Your model allows for more than one avatar per user so you have to watch out not to store more than one. Explanation of Code Snippet: The avatars of the 32 most recently joined users are requested together with the related user, so there doesn't have to be a database query for any of them in the upcoming code. The list of avatars is then grouped with the user as a key. The list gets all items from the generator avatar_list and the first item (there should only be one) is assigned to user.avatar. Note that this is not necessary, you could always do something like: for avatar in avatars: user = avatar.user But it might feel more natural to access the avatars by user.avatar. A: It's not possible to combine queries on two different base models. Django won't let you do this (it'll throw an error telling you exactly that). However, if you have a foreignkey from one model to the other, then adding select_related() to your query will fetch the related objects into memory in a single DB query so that you can access them without going back to the DB.
chaining queries together in Django
I have a query that gets me 32 avatar images from my avatar application: newUserAv = Avatar.objects.filter(valid=True)[:32] I'd like to combine this with a query to Django's Auth user model, so I can get the last 32 people who have avatar images, sorted by the date joined. What is the best way to chain these two together? The avatar application was a reusable app, and its model is: image = models.ImageField(upload_to="avatars/%Y/%b/%d", storage=storage) user = models.ForeignKey(User) date = models.DateTimeField(auto_now_add=True) valid = models.BooleanField() Note that the date field is the date the avatar was updated, so it is not suitable for my purpose
[ "Either you put a field in your own User class (you might have to subclass User or compose with django.contrib.auth.models.User) that indicates that the User has an avatar. Than you can make your query easily.\nOr do something like that:\nfrom django.utils.itercompat import groupby\navatars = Avatar.objects.select_related(\"user\").filter(valid=True).order_by(\"-user__date_joined\")[:32]\ngrouped_users = groupby(avatars, lambda x: x.user)\nuser_list = []\nfor user, avatar_list in grouped_users:\n user.avatar = list(avatar_list)[0]\n user_list.append(user)\n# user_list is now what you asked for in the first_place: \n# a list of users with their avatars\n\nThis assumes that one user has one and only one avatar. Your model allows for more than one avatar per user so you have to watch out not to store more than one.\nExplanation of Code Snippet:\nThe avatars of the most 32 recent joined users are requested together with the related user, so there doesn't have to be a database query for any of them in the upcoming code.\nThe list of avatars is then grouped with the user as a key. The list gets all items from the generator avatar_list and the first item (there should only be one) is assigned to user.avatar\nNote that this is not necessary, you could always do something like:\nfor avatar in avatars:\n user = avatar.user\n\nBut it might feel more naturally to access the avatars by user.avatar.\n", "It's not possible to combine queries on two different base models. Django won't let you do this (it'll throw an error telling you exactly that).\nHowever, if you have a foreignkey from one model to the other, then adding select_related() to your query will fetch the related objects into memory in a single DB query so that you can access them without going back to the DB.\n" ]
[ 2, 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001628162_django_django_models_python.txt