Dataset schema: content (string, 85 to 101k characters), title (string, 0 to 150 characters), question (string, 15 to 48k characters), answers (list), answers_scores (list), non_answers (list), non_answers_scores (list), tags (list), name (string, 35 to 137 characters).
Q: Mapping URL Pattern to a Single RequestHandler in a WSGIApplication Is it possible to map a URL pattern (regular expression or some other mapping) to a single RequestHandler? If so how can I accomplish this? Ideally I'd like to do something like this: application=WSGIApplication([('/*',MyRequestHandler),]) So that MyRequestHandler handles all requests made. Note that I'm working on a proof of concept app where by definition I won't know all URLs that will be coming to the domain. Also note that I'm doing this on Google App Engine if that matters. A: The pattern you describe will work fine. Also, any groups in the regular expression you specify will be passed as arguments to the handler methods (get, post, etc). For example: class MyRequestHandler(webapp.RequestHandler): def get(self, date, id): # Do stuff. Note that date and id are both strings, even if the groups are numeric. application = WSGIApplication([('/(\d{4}-\d{2}-\d{2})/(\d+)', MyRequestHandler)]) In the above example, the two groups (a date and an id) are broken out and passed as arguments to your handler functions. A: application=WSGIApplication([(r'.*',MyRequestHandler),]) for more see the AppEngine docs
Mapping URL Pattern to a Single RequestHandler in a WSGIApplication
Is it possible to map a URL pattern (regular expression or some other mapping) to a single RequestHandler? If so how can I accomplish this? Ideally I'd like to do something like this: application=WSGIApplication([('/*',MyRequestHandler),]) So that MyRequestHandler handles all requests made. Note that I'm working on a proof of concept app where by definition I won't know all URLs that will be coming to the domain. Also note that I'm doing this on Google App Engine if that matters.
[ "The pattern you describe will work fine. Also, any groups in the regular expression you specify will be passed as arguments to the handler methods (get, post, etc). For example:\nclass MyRequestHandler(webapp.RequestHandler):\n def get(self, date, id):\n # Do stuff. Note that date and id are both strings, even if the groups are numeric.\n\napplication = WSGIApplication([('/(\\d{4}-\\d{2}-\\d{2})/(\\d+)', MyRequestHandler)])\n\nIn the above example, the two groups (a date and an id) are broken out and passed as arguments to your handler functions.\n", "application=WSGIApplication([(r'.*',MyRequestHandler),])\n\nfor more see the AppEngine docs\n" ]
[ 8, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001010427_google_app_engine_python.txt
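A minimal, self-contained sketch of the catch-all pattern discussed above, assuming the Python 2 App Engine webapp framework; the regex group captures the whole path and is passed to get(), so one handler can inspect whichever URL was requested:

    import wsgiref.handlers
    from google.appengine.ext import webapp

    class MyRequestHandler(webapp.RequestHandler):
        def get(self, path):
            # 'path' receives whatever the regex group captured
            self.response.out.write('You requested: %s' % path)

    # r'(/.*)' matches every URL on the domain and hands the path over
    application = webapp.WSGIApplication([(r'(/.*)', MyRequestHandler)])

    if __name__ == '__main__':
        wsgiref.handlers.CGIHandler().run(application)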
Q: Will Dict Return Keys and Values in Same Order? Possible Duplicate: Python dictionary: are keys() and values() always the same order? If I have a dictionary in Python, will .keys and .values return the corresponding elements in the same order? E.g. foo = {'foobar' : 1, 'foobar2' : 4, 'kittty' : 34743} For the keys it returns: >>> foo.keys() ['foobar2', 'foobar', 'kittty'] Now, will foo.values() always return the elements in the same order as their corresponding keys? A: It's hard to improve on the Python documentation: Keys and values are listed in an arbitrary order which is non-random, varies across Python implementations, and depends on the dictionary’s history of insertions and deletions. If items(), keys(), values(), iteritems(), iterkeys(), and itervalues() are called with no intervening modifications to the dictionary, the lists will directly correspond. This allows the creation of (value, key) pairs using zip(): pairs = zip(d.values(), d.keys()). The same relationship holds for the iterkeys() and itervalues() methods: pairs = zip(d.itervalues(), d.iterkeys()) provides the same value for pairs. Another way to create the same list is pairs = [(v, k) for (k, v) in d.iteritems()] So, in short, "yes" with the caveat that you must not modify the dictionary in between your call to keys() and your call to values(). A: Yes, they will Just see the doc at Python doc : Keys and values are listed in an arbitrary order which is non-random, varies across Python implementations, and depends on the dictionary’s history of insertions and deletions. If items(), keys(), values(), iteritems(), iterkeys(), and itervalues() are called with no intervening modifications to the dictionary, the lists will directly correspond. Best thing to do is still to use dict.items() A: From the Python 2.6 documentation: Keys and values are listed in an arbitrary order which is non-random, varies across Python implementations, and depends on the dictionary’s history of insertions and deletions. If items(), keys(), values(), iteritems(), iterkeys(), and itervalues() are called with no intervening modifications to the dictionary, the lists will directly correspond. This allows the creation of (value, key) pairs using zip(): pairs = zip(d.values(), d.keys()). The same relationship holds for the iterkeys() and itervalues() methods: pairs = zip(d.itervalues(), d.iterkeys()) provides the same value for pairs. Another way to create the same list is pairs = [(v, k) for (k, v) in d.iteritems()]. I'm over 99% certain the same will hold true for Python 3.0.
Will Dict Return Keys and Values in Same Order?
Possible Duplicate: Python dictionary: are keys() and values() always the same order? If I have a dictionary in Python, will .keys and .values return the corresponding elements in the same order? E.g. foo = {'foobar' : 1, 'foobar2' : 4, 'kittty' : 34743} For the keys it returns: >>> foo.keys() ['foobar2', 'foobar', 'kittty'] Now, will foo.values() always return the elements in the same order as their corresponding keys?
[ "It's hard to improve on the Python documentation:\n\nKeys and values are listed in an arbitrary order which is non-random, varies across Python implementations, and depends on the dictionary’s history of insertions and deletions. If items(), keys(), values(), iteritems(), iterkeys(), and itervalues() are called with no intervening modifications to the dictionary, the lists will directly correspond. This allows the creation of (value, key) pairs using zip(): pairs = zip(d.values(), d.keys()). The same relationship holds for the iterkeys() and itervalues() methods: pairs = zip(d.itervalues(), d.iterkeys()) provides the same value for pairs. Another way to create the same list is pairs = [(v, k) for (k, v) in d.iteritems()]\n\nSo, in short, \"yes\" with the caveat that you must not modify the dictionary in between your call to keys() and your call to values().\n", "Yes, they will\nJust see the doc at Python doc :\n\nKeys and values are listed in an arbitrary order which is non-random, varies across Python implementations, and depends on the dictionary’s history of insertions and deletions. If items(), keys(), values(), iteritems(), iterkeys(), and itervalues() are called with no intervening modifications to the dictionary, the lists will directly correspond.\n\nBest thing to do is still to use dict.items()\n", "From the Python 2.6 documentation:\n\nKeys and values are listed in an arbitrary order which is non-random, varies across Python implementations, and depends on the dictionary’s history of insertions and deletions. If items(), keys(), values(), iteritems(), iterkeys(), and itervalues() are called with no intervening modifications to the dictionary, the lists will directly correspond. This allows the creation of (value, key) pairs using zip(): pairs = zip(d.values(), d.keys()). The same relationship holds for the iterkeys() and itervalues() methods: pairs = zip(d.itervalues(), d.iterkeys()) provides the same value for pairs. Another way to create the same list is pairs = [(v, k) for (k, v) in d.iteritems()].\n\nI'm over 99% certain the same will hold true for Python 3.0. \n" ]
[ 18, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001012354_python.txt
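A short runnable sketch of the guarantee quoted in the answers, for Python 2: as long as the dict is not modified in between the calls, keys() and values() line up, and dict.items() sidesteps the question entirely:

    foo = {'foobar': 1, 'foobar2': 4, 'kittty': 34743}

    # keys() and values() correspond as long as foo is untouched in between
    pairs = zip(foo.values(), foo.keys())
    print pairs

    # items() yields the (key, value) pairs in one call, which is safer
    for key, value in foo.items():
        print key, value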
Q: What is the purpose of the two colons in this Python string-slicing statement? For example, str = "hello" str[1::3] And where can I find this in Python documentation? A: in sequences' description: s[i:j:k] slice of s from i to j with step k The slice of s from i to j with step k is defined as the sequence of items with index x = i + n*k such that 0 <= n < (j-i)/k. In other words, the indices are i, i+k, i+2*k, i+3*k and so on, stopping when j is reached (but never including j). If i or j is greater than len(s), use len(s). If i or j are omitted or None, they become “end” values (which end depends on the sign of k). Note, k cannot be zero. If k is None, it is treated like 1.
What is the purpose of the two colons in this Python string-slicing statement?
For example, str = "hello" str[1::3] And where can I find this in Python documentation?
[ "in sequences' description:\ns[i:j:k] slice of s from i to j with step k\n\n\nThe slice of s from i to j with step k is defined as the sequence of items with index x = i + n*k such that 0 <= n < (j-i)/k. In other words, the indices are i, i+k, i+2*k, i+3*k and so on, stopping when j is reached (but never including j). If i or j is greater than len(s), use len(s). If i or j are omitted or None, they become “end” values (which end depends on the sign of k). Note, k cannot be zero. If k is None, it is treated like 1.\n\n" ]
[ 20 ]
[]
[]
[ "python", "slice" ]
stackoverflow_0001013272_python_slice.txt
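A few concrete cases of the s[i:j:k] rule quoted above; each selected index is i + n*k, stopping before j is reached:

    s = "hello"
    print s[1::3]    # 'eo'    -> indices 1 and 4
    print s[::2]     # 'hlo'   -> indices 0, 2 and 4
    print s[::-1]    # 'olleh' -> a step of -1 reverses the string
    print s[4:0:-2]  # 'ol'    -> indices 4 and 2, stopping before 0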
Q: Why does this python code hang on import/compile but work in the shell? I'm trying to use python to sftp a file, and the code works great in the interactive shell -- even pasting it in all at once. When I try to import the file (just to compile it), the code hangs with no exceptions or obvious errors. How do I get the code to compile, or does someone have working code that accomplishes sftp by some other method? This code hangs right at the ssh.connect() statement: """ ProblemDemo.py Chopped down from the paramiko demo file. This code works in the shell but hangs when I try to import it! """ from time import sleep import os import paramiko sOutputFilename = "redacted.htm" #-- The payload file hostname = "redacted.com" ####-- WARNING! Embedded passwords! Remove ASAP. sUsername = "redacted" sPassword = "redacted" sTargetDir = "redacted" #-- Get host key, if we know one. hostkeytype = None hostkey = None host_keys = {} try: host_keys = paramiko.util.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) except IOError: try: # try ~/ssh/ too, because windows can't have a folder named ~/.ssh/ host_keys = paramiko.util.load_host_keys(os.path.expanduser('~/ssh/known_hosts')) except IOError: print '*** Unable to open host keys file' host_keys = {} if host_keys.has_key(hostname): hostkeytype = host_keys[hostname].keys()[0] hostkey = host_keys[hostname][hostkeytype] print 'Using host key of type %s' % hostkeytype ssh = paramiko.Transport((hostname, 22)) ssh.connect(username=sUsername, password=sPassword, hostkey=hostkey) sftp = paramiko.SFTPClient.from_transport(ssh) sftp.chdir (sTargetDir) sftp.put (sOutputFilename, sOutputFilename) ssh.close() A: That's indeed a bad idea to execute this kind of code at import time, although I am not sure why it hangs - it may be that the import mechanism does something strange which interacts badly with paramiko (thread-related issues, maybe?). Anyway, the usual solution is to implement the functionality in a function: def my_expensive_function(args): pass if __name__ == '__main__': import sys my_expensive_function(sys.argv) This way, just importing the module will not do anything, but running the script will execute the function with the given arguments at the command line. A: This may not be a direct reason why, but rarely do you ever want to have "functionality" executed upon import. Normally you should define a class or function that you then call like this: import mymodule mymodule.run() The "global" code that you run in an import typically should be limited to imports, variable definitions, function and class definitions, and the like... A: Weirdness aside, I was just using import to compile the code. Turning the script into a function seems like an unnecessary complication for this kind of application. Searched for alternate means to compile and found: import py_compile py_compile.compile("ProblemDemo.py") This generated a pyc file that works as intended. So the lesson learned is that import is not a robust way to compile Python scripts.
Why does this python code hang on import/compile but work in the shell?
I'm trying to use python to sftp a file, and the code works great in the interactive shell -- even pasting it in all at once. When I try to import the file (just to compile it), the code hangs with no exceptions or obvious errors. How do I get the code to compile, or does someone have working code that accomplishes sftp by some other method? This code hangs right at the ssh.connect() statement: """ ProblemDemo.py Chopped down from the paramiko demo file. This code works in the shell but hangs when I try to import it! """ from time import sleep import os import paramiko sOutputFilename = "redacted.htm" #-- The payload file hostname = "redacted.com" ####-- WARNING! Embedded passwords! Remove ASAP. sUsername = "redacted" sPassword = "redacted" sTargetDir = "redacted" #-- Get host key, if we know one. hostkeytype = None hostkey = None host_keys = {} try: host_keys = paramiko.util.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) except IOError: try: # try ~/ssh/ too, because windows can't have a folder named ~/.ssh/ host_keys = paramiko.util.load_host_keys(os.path.expanduser('~/ssh/known_hosts')) except IOError: print '*** Unable to open host keys file' host_keys = {} if host_keys.has_key(hostname): hostkeytype = host_keys[hostname].keys()[0] hostkey = host_keys[hostname][hostkeytype] print 'Using host key of type %s' % hostkeytype ssh = paramiko.Transport((hostname, 22)) ssh.connect(username=sUsername, password=sPassword, hostkey=hostkey) sftp = paramiko.SFTPClient.from_transport(ssh) sftp.chdir (sTargetDir) sftp.put (sOutputFilename, sOutputFilename) ssh.close()
[ "That's indeed a bad idea to execute this kind of code at import time, although I am not sure why it hangs - it may be that import mechanism does something strange which interacts badly with paramiko (thread related issues maybe ?). Anyway, the usual solution is to implement the functionality into a function:\ndef my_expensive_function(args):\n pass\n\nif __name__ == '__main__':\n import sys\n my_expensive_functions(sys.args)\n\nThis way, just importing the module will not do anything, but running the script will execute the function with the given arguments at command line.\n", "This may not be a direct reason why, but rarely do you ever want to have \"functionality\" executed upon import. Normally you should define a class or function that you then call like this:\nimport mymodule\nmymodule.run()\n\nThe \"global\" code that you run in an import typically should be limited to imports, variable definitions, function and class definitions, and the like...\n", "Weirdness aside, I was just using import to compile the code. Turning the script into a function seems like an unnecessary complication for this kind of application.\nSearched for alternate means to compile and found:\n\nimport py_compile\npy_compile.compile(\"ProblemDemo.py\")\n\nThis generated a pyc file that works as intended.\nSo the lesson learned is that import is not a robust way to compile python scripts.\n" ]
[ 5, 1, 0 ]
[]
[]
[ "compilation", "python", "sftp", "shell" ]
stackoverflow_0001013064_compilation_python_sftp_shell.txt
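A hedged sketch combining the two suggestions above: the paramiko transfer lives in a function, so importing the module has no side effects, and py_compile can still byte-compile the file without running it. The host and credentials are placeholders:

    import py_compile

    def sftp_put(hostname, username, password, filename, target_dir):
        import paramiko
        transport = paramiko.Transport((hostname, 22))
        transport.connect(username=username, password=password)
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.chdir(target_dir)
        sftp.put(filename, filename)
        transport.close()

    if __name__ == '__main__':
        # Byte-compile without importing (and therefore without connecting)
        py_compile.compile('ProblemDemo.py')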
Q: Windows error and python I'm working on a bit of code that is supposed to run an exe file inside a folder on my system and getting an error saying... WindowsError: [Error 3] The system cannot find the path specified. Here's a bit of the code: exepath = os.path.join(EXE file location) exepath = '"' + os.path.normpath(exepath) + '"' cmd = [exepath, '-el', str(el), '-n', str(z)] print 'The python program is running this command:' print cmd process = Popen(cmd, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] I have imported subprocess and also from subprocess import * For example, This is what my exe file location looks like in the first line of the code I show: exepath= os.path.join('/Program Files','next folder','next folder','blah.exe') Am I missing something? A: You need to properly escape the space in the executable path A: Besides properly escaping spaces and other characters that could cause problems (such as /), you can also use the 8 character old DOS paths. For example, Program Files would be: Progra~1 , making sure to append ~1 for the last two characters. EDIT: You could add an r to the front of the string, making it a raw literal. Python would read the string character for character. Like this: r " \Program files" A: If I remember correctly, you don't need to quote your executable file path, like you do in the second line. EDIT: Well, just grabbed a nearby Windows box and tested this. Popen works the same regardless of whether the path is quoted or not. So this is not an issue. A: AFAIK, there is no need to surround the path in quotation marks unless cmd.exe is involved in running the program. In addition, you might want to use the environment variable ProgramFiles to find out the actual location of 'Program Files' because that depends on regional settings and can also be tweaked using TweakUI.
Windows error and python
I'm working on a bit of code that is supposed to run an exe file inside a folder on my system and getting an error saying... WindowsError: [Error 3] The system cannot find the path specified. Here's a bit of the code: exepath = os.path.join(EXE file location) exepath = '"' + os.path.normpath(exepath) + '"' cmd = [exepath, '-el', str(el), '-n', str(z)] print 'The python program is running this command:' print cmd process = Popen(cmd, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] I have imported subprocess and also from subprocess import * For example, This is what my exe file location looks like in the first line of the code I show: exepath= os.path.join('/Program Files','next folder','next folder','blah.exe') Am I missing something?
[ "You need to properly escape the space in the executable path\n", "Besides properly escaping spaces and other characters that could cause problems (such as /), you can also use the 8 character old DOS paths. \nFor example, Program Files would be:\nProgra~1 , making sure to append ~1 for the last two characters.\nEDIT: You could add an r to the front of the string, making it a raw literal. Python would read the string character for character. Like this:\nr \" \\Program files\"\n", "If I remember correctly, you don't need to quote your executuable file path, like you do in the second line.\nEDIT: Well, just grabbed nearby Windows box and tested this. Popen works the same regardless the path is quoted or not. So this is not an issue.\n", "AFAIK, there is no need to surround the path in quotation marks unless cmd.exe is involved in running the program.\nIn addition, you might want to use the environment variable ProgramFiles to find out the actual location of 'Program Files' because that depends on regional settings and can also be tweaked using TweakUI.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "popen", "python" ]
stackoverflow_0001013311_popen_python.txt
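A sketch of the point made in the answers: when Popen receives an argument list rather than a shell string, the executable path needs no added quotation marks even if it contains spaces, and %ProgramFiles% locates the real folder regardless of regional settings. The blah.exe path is a placeholder:

    import os
    from subprocess import Popen, PIPE, STDOUT

    # Resolve 'Program Files' from the environment instead of hard-coding it
    program_files = os.environ.get('ProgramFiles', r'C:\Program Files')
    exepath = os.path.join(program_files, 'next folder', 'blah.exe')

    # No manual quoting needed: each list element is passed as one argument
    cmd = [exepath, '-el', '5', '-n', '10']
    process = Popen(cmd, stderr=STDOUT, stdout=PIPE)
    print process.communicate()[0]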
Q: Swig bindings for python/lua do not initialize member data properly I'm trying to build a set of Lua bindings for a collection of C++ classes, but have been toying with Python to see if I get better results. In either language the bindings seem to work, however, when I initialize an instance of a class that contains members of other classes, those data members do not seem to be guaranteed to be initialized. For example, take the class: class MyClass : public ParentClass // (Obviously) not a real class { public: SomeClass sc; OtherClass oc; };//Note that none of my classes have a constructor or destructor; this is by design. When I generate bindings for a class like this, I am able to execute statements like: var = module_name.MyClass() print(var.sc.x, var.sc.y) And I get the expected junk values printed to the screen. However, if I try to print anything about the instance of OtherClass, it becomes obvious that it is "stubbed out" -- in Lua it has no metatable at all and in Python doing dir(var.oc) gives only the default functions. However, if I then do: var.oc = module_name.OtherClass() The oc metatable / dir(oc) call are what I would have hoped for and it can be treated as expected. Can anyone offer any insight into why only -some- of the member data are initialized? Thanks! A: Turns out this problem was related to another problem I was having. See this thread for the resolution.
Swig bindings for python/lua do not initialize member data properly
I'm trying to build a set of Lua bindings for a collection of C++ classes, but have been toying with Python to see if I get better results. In either language the bindings seem to work, however, when I initialize an instance of a class that contains members of other classes, those data members do not seem to be guaranteed to be initialized. For example, take the class: class MyClass : public ParentClass // (Obviously) not a real class { public: SomeClass sc; OtherClass oc; };//Note that none of my classes have a constructor or destructor; this is by design. When I generate bindings for a class like this, I am able to execute statements like: var = module_name.MyClass() print(var.sc.x, var.sc.y) And I get the expected junk values printed to the screen. However, if I try to print anything about the instance of OtherClass, it becomes obvious that it is "stubbed out" -- in Lua it has no metatable at all and in Python doing dir(var.oc) gives only the default functions. However, if I then do: var.oc = module_name.OtherClass() The oc metatable / dir(oc) call are what I would have hoped for and it can be treated as expected. Can anyone offer any insight into why only -some- of the member data are initialized? Thanks!
[ "Turns out this problem was related to another problem I was having. See this thread for the resolution.\n" ]
[ 0 ]
[]
[]
[ "initialization", "lua", "python", "swig" ]
stackoverflow_0000916555_initialization_lua_python_swig.txt
Q: Creating a logging handler to connect to Oracle? So right now I need to create and implement an extension of the Python logging module that will be used to log to our database. Basically we have several Python applications (that all run in the background) that currently log to a random mishmash of text files, which makes it almost impossible to find out if a certain application failed or not. The problem given to me is to move said logging from text files to an Oracle DB. The tables have already been defined, along with where things need to be logged to, but right now I'm looking at adding another logging handler that will log to the DB. I am using Python 2.5.4 and cx_Oracle, and the applications in general can either be run as a service/daemon or as a straight application. I'm just mainly curious about what would be the best possible way to go about this. Few questions: If any errors occur with cx_Oracle, where should these errors be logged to? If it's down, would it be best to have the logger fall back to the default text file? A while back we started enforcing that people use sys.stderr/stdout.write instead of print, so worst case scenario we wouldn't run into any issues with print becoming deprecated. Is there a way to seamlessly make all of the thousands of sys.std calls be piped directly into the logger, and have the logger pick up the slack? After every logged message, should the script automatically do a commit? (There are going to be several dozen a second.) What is the best way to implement a new handler for the logging system? Inheriting from the basic Handler class seems to be easiest. Any ideas / suggestions would be great. A: If errors occur with cx_Oracle, it's probably best to log these to a text file. You could try redirecting sys.stdout and sys.stderr to file-like objects which log whatever's written to them to a logger. I would guess you do want to commit after each event, unless you have strong reasons for not doing this. Alternatively, you can buffer several events and write them all in a single transaction every so often. Below is an example which uses mx.ODBC; you can probably adapt this to cx_Oracle without too much trouble. It's meant to be Python DB-API 2.0 compliant, I think. The standalone Python logging distribution (before logging was added to Python) is at http://www.red-dove.com/python_logging.html and although the logging package in Python is much more up to date, the standalone distribution contains a test directory which has a lot of useful examples of derived handler classes. #!/usr/bin/env python # # Copyright 2001-2009 by Vinay Sajip. All Rights Reserved. # # Permission to use, copy, modify, and distribute this software and its # documentation for any purpose and without fee is hereby granted, # provided that the above copyright notice appear in all copies and that # both that copyright notice and this permission notice appear in # supporting documentation, and that the name of Vinay Sajip # not be used in advertising or publicity pertaining to distribution # of the software without specific, written prior permission. # VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING # ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL # VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR # ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER # IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT # OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. # # This file is part of the standalone Python logging distribution. See # http://www.red-dove.com/python_logging.html # """ A test harness for the logging module. An example handler - DBHandler - which writes to a Python DB API 2.0 data source. You'll need to set this source up before you run the test. Copyright (C) 2001-2009 Vinay Sajip. All Rights Reserved. """ import sys, string, time, logging class DBHandler(logging.Handler): def __init__(self, dsn, uid='', pwd=''): logging.Handler.__init__(self) import mx.ODBC.Windows self.dsn = dsn self.uid = uid self.pwd = pwd self.conn = mx.ODBC.Windows.connect(self.dsn, self.uid, self.pwd) self.SQL = """INSERT INTO Events ( Created, RelativeCreated, Name, LogLevel, LevelText, Message, Filename, Pathname, Lineno, Milliseconds, Exception, Thread ) VALUES ( %(dbtime)s, %(relativeCreated)d, '%(name)s', %(levelno)d, '%(levelname)s', '%(message)s', '%(filename)s', '%(pathname)s', %(lineno)d, %(msecs)d, '%(exc_text)s', '%(thread)s' ); """ self.cursor = self.conn.cursor() def formatDBTime(self, record): record.dbtime = time.strftime("#%m/%d/%Y#", time.localtime(record.created)) def emit(self, record): try: #use default formatting self.format(record) #now set the database time up self.formatDBTime(record) if record.exc_info: record.exc_text = logging._defaultFormatter.formatException(record.exc_info) else: record.exc_text = "" sql = self.SQL % record.__dict__ self.cursor.execute(sql) self.conn.commit() except: import traceback ei = sys.exc_info() traceback.print_exception(ei[0], ei[1], ei[2], None, sys.stderr) del ei def close(self): self.cursor.close() self.conn.close() logging.Handler.close(self) dh = DBHandler('Logging') logger = logging.getLogger("") logger.setLevel(logging.DEBUG) logger.addHandler(dh) logger.info("Jackdaws love my big %s of %s", "sphinx", "quartz") logger.debug("Pack my %s with five dozen %s", "box", "liquor jugs") try: import math math.exp(1000) except: logger.exception("Problem with %s", "math.exp")
Creating a logging handler to connect to Oracle?
So right now I need to create and implement an extension of the Python logging module that will be used to log to our database. Basically we have several Python applications (that all run in the background) that currently log to a random mishmash of text files, which makes it almost impossible to find out if a certain application failed or not. The problem given to me is to move said logging from text files to an Oracle DB. The tables have already been defined, along with where things need to be logged to, but right now I'm looking at adding another logging handler that will log to the DB. I am using Python 2.5.4 and cx_Oracle, and the applications in general can either be run as a service/daemon or as a straight application. I'm just mainly curious about what would be the best possible way to go about this. Few questions: If any errors occur with cx_Oracle, where should these errors be logged to? If it's down, would it be best to have the logger fall back to the default text file? A while back we started enforcing that people use sys.stderr/stdout.write instead of print, so worst case scenario we wouldn't run into any issues with print becoming deprecated. Is there a way to seamlessly make all of the thousands of sys.std calls be piped directly into the logger, and have the logger pick up the slack? After every logged message, should the script automatically do a commit? (There are going to be several dozen a second.) What is the best way to implement a new handler for the logging system? Inheriting from the basic Handler class seems to be easiest. Any ideas / suggestions would be great.
[ "\nIf errors occur with cx_Oracle, it's probably best to log these to a text file.\nYou could try redirecting sys.stdout and sys.stderr to file-like objects which log whatever's written to them to a logger.\nI would guess you do want to commit after each event, unless you have strong reasons for not doing this. Alternatively, you can buffer several events and write them all in a single transaction every so often.\nBelow is an example which uses mx.ODBC, you can probably adapt this to cx_Oracle without too much trouble. It's meant to be Python DB-API 2.0 compliant, I think.\n\nThe standalone Python logging distribution (before logging was added to Python) is at http://www.red-dove.com/python_logging.html and although the logging package in Python is much more up to date, the standalone distribution contains a test directory which has a lot of useful examples of derived handler classes.\n#!/usr/bin/env python\n#\n# Copyright 2001-2009 by Vinay Sajip. All Rights Reserved.\n#\n# Permission to use, copy, modify, and distribute this software and its\n# documentation for any purpose and without fee is hereby granted,\n# provided that the above copyright notice appear in all copies and that\n# both that copyright notice and this permission notice appear in\n# supporting documentation, and that the name of Vinay Sajip\n# not be used in advertising or publicity pertaining to distribution\n# of the software without specific, written prior permission.\n# VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING\n# ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL\n# VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR\n# ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER\n# IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT\n# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n#\n# This file is part of the standalone Python logging distribution. See\n# http://www.red-dove.com/python_logging.html\n#\n\"\"\"\nA test harness for the logging module. An example handler - DBHandler -\nwhich writes to an Python DB API 2.0 data source. You'll need to set this\nsource up before you run the test.\n\nCopyright (C) 2001-2009 Vinay Sajip. 
All Rights Reserved.\n\"\"\"\nimport sys, string, time, logging\n\nclass DBHandler(logging.Handler):\n def __init__(self, dsn, uid='', pwd=''):\n logging.Handler.__init__(self)\n import mx.ODBC.Windows\n self.dsn = dsn\n self.uid = uid\n self.pwd = pwd\n self.conn = mx.ODBC.Windows.connect(self.dsn, self.uid, self.pwd)\n self.SQL = \"\"\"INSERT INTO Events (\n Created,\n RelativeCreated,\n Name,\n LogLevel,\n LevelText,\n Message,\n Filename,\n Pathname,\n Lineno,\n Milliseconds,\n Exception,\n Thread\n )\n VALUES (\n %(dbtime)s,\n %(relativeCreated)d,\n '%(name)s',\n %(levelno)d,\n '%(levelname)s',\n '%(message)s',\n '%(filename)s',\n '%(pathname)s',\n %(lineno)d,\n %(msecs)d,\n '%(exc_text)s',\n '%(thread)s'\n );\n \"\"\"\n self.cursor = self.conn.cursor()\n\n def formatDBTime(self, record):\n record.dbtime = time.strftime(\"#%m/%d/%Y#\", time.localtime(record.created))\n\n def emit(self, record):\n try:\n #use default formatting\n self.format(record)\n #now set the database time up\n self.formatDBTime(record)\n if record.exc_info:\n record.exc_text = logging._defaultFormatter.formatException(record.exc_info)\n else:\n record.exc_text = \"\"\n sql = self.SQL % record.__dict__\n self.cursor.execute(sql)\n self.conn.commit()\n except:\n import traceback\n ei = sys.exc_info()\n traceback.print_exception(ei[0], ei[1], ei[2], None, sys.stderr)\n del ei\n\n def close(self):\n self.cursor.close()\n self.conn.close()\n logging.Handler.close(self)\n\ndh = DBHandler('Logging')\nlogger = logging.getLogger(\"\")\nlogger.setLevel(logging.DEBUG)\nlogger.addHandler(dh)\nlogger.info(\"Jackdaws love my big %s of %s\", \"sphinx\", \"quartz\")\nlogger.debug(\"Pack my %s with five dozen %s\", \"box\", \"liquor jugs\")\ntry:\n import math\n math.exp(1000)\nexcept:\n logger.exception(\"Problem with %s\", \"math.exp\")\n\n" ]
[ 20 ]
[]
[]
[ "logging", "oracle", "python" ]
stackoverflow_0000935930_logging_oracle_python.txt
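A hedged, minimal cx_Oracle counterpart to the mx.ODBC handler above, with the text-file fallback the answer recommends. It assumes a table log_events with columns compatible with the bound values (created is stored as a plain number here); the connection details and fallback filename are placeholders:

    import logging

    class OracleHandler(logging.Handler):
        def __init__(self, user, password, dsn):
            logging.Handler.__init__(self)
            import cx_Oracle
            self.conn = cx_Oracle.connect(user, password, dsn)
            self.cursor = self.conn.cursor()
            # Fallback handler used if the database write fails
            self.fallback = logging.FileHandler('db_logging_fallback.log')

        def emit(self, record):
            try:
                msg = self.format(record)
                self.cursor.execute(
                    "INSERT INTO log_events (created, name, levelname, message) "
                    "VALUES (:1, :2, :3, :4)",
                    (record.created, record.name, record.levelname, msg))
                self.conn.commit()
            except Exception:
                self.fallback.emit(record)

        def close(self):
            self.cursor.close()
            self.conn.close()
            logging.Handler.close(self)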
Q: Traversing foreign key related tables in django templates View categories = Category.objects.all() t = loader.get_template('index.html') v = Context({ 'categories': categories }) return HttpResponse(t.render(v)) Template {% for category in categories %} <h1>{{ category.name }}</h1> {% endfor %} This works great. Now I'm trying to print each company in that category. The company table has a foreign key to the category table. I've tried {% for company in category.company_set.all() %} but it seems Django doesn't like () in templates. There's a maze of information on the Django site; I keep getting lost between the .96, 1.0 and dev versions. I'm running Django version 1.0.2. A: Just get rid of the parentheses: {% for company in category.company_set.all %} Here's the appropriate documentation. You can call methods that take 0 parameters this way.
Traversing foreign key related tables in django templates
View categories = Category.objects.all() t = loader.get_template('index.html') v = Context({ 'categories': categories }) return HttpResponse(t.render(v)) Template {% for category in categories %} <h1>{{ category.name }}</h1> {% endfor %} This works great. Now I'm trying to print each company in that category. The company table has a foreign key to the category table. I've tried {% for company in category.company_set.all() %} but it seems Django doesn't like () in templates. There's a maze of information on the Django site; I keep getting lost between the .96, 1.0 and dev versions. I'm running Django version 1.0.2.
[ "Just get rid of the parentheses:\n{% for company in category.company_set.all %}\n\nHere's the appropriate documentation. You can call methods that take 0 parameters this way.\n" ]
[ 52 ]
[]
[]
[ "django", "django_models", "django_templates", "python" ]
stackoverflow_0001014591_django_django_models_django_templates_python.txt
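A minimal sketch of the relationship the question describes, assuming models named Category and Company; the template then walks the reverse relation without parentheses, as the answer explains:

    # models.py
    from django.db import models

    class Category(models.Model):
        name = models.CharField(max_length=100)

    class Company(models.Model):
        name = models.CharField(max_length=100)
        category = models.ForeignKey(Category)

    # template
    {% for category in categories %}
        <h1>{{ category.name }}</h1>
        {% for company in category.company_set.all %}
            <p>{{ company.name }}</p>
        {% endfor %}
    {% endfor %}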
Q: Python Authentication with urllib2 So I'm trying to download a file from a site called vsearch.cisco.com with python [python] #Connects to the Cisco Server and Downloads files at the URL specified import urllib2 #Define Useful Variables url = 'http://vsearch.cisco.com' username = 'xxxxxxxx' password = 'xxxxxxxx' realm = 'CEC' # Begin Making connection # Create a Handler -- Also could be where the error lies handler = urllib2.HTTPDigestAuthHandler() handler.add_password(realm,url,username,password) # Create an Opener opener = urllib2.build_opener(handler) urllib2.install_opener(opener) try: urllib2.urlopen(url) print f.read() except urllib2.HTTPError, e: print e.code print e.header [/python] My error is ValueError: AbstractDigestAuthHandler doesn't know about basic I've tried using Basic HTML Authorization handlers and even HTTPS handlers. Nothing gives me access. This error is different from all the other errors however. The other errors are simply 401 HTML errors Any suggestions on how to do this? A: A "password manager" might help: mgr = urllib2.HTTPPasswordMgrWithDefaultRealm() mgr.add_password(None, url, user, password) urllib2.build_opener(urllib2.HTTPBasicAuthHandler(mgr), urllib2.HTTPDigestAuthHandler(mgr)) A: As for what I tried in my tests (http://devel.almad.net/trac/django-http-digest/browser/djangohttpdigest/tests/test_simple_digest.py), the error is probably in your URL - to make it work, I've included the http:// part, not only the host.
Python Authentication with urllib2
So I'm trying to download a file from a site called vsearch.cisco.com with python [python] #Connects to the Cisco Server and Downloads files at the URL specified import urllib2 #Define Useful Variables url = 'http://vsearch.cisco.com' username = 'xxxxxxxx' password = 'xxxxxxxx' realm = 'CEC' # Begin Making connection # Create a Handler -- Also could be where the error lies handler = urllib2.HTTPDigestAuthHandler() handler.add_password(realm,url,username,password) # Create an Opener opener = urllib2.build_opener(handler) urllib2.install_opener(opener) try: urllib2.urlopen(url) print f.read() except urllib2.HTTPError, e: print e.code print e.header [/python] My error is ValueError: AbstractDigestAuthHandler doesn't know about basic I've tried using Basic HTML Authorization handlers and even HTTPS handlers. Nothing gives me access. This error is different from all the other errors however. The other errors are simply 401 HTML errors Any suggestions on how to do this?
[ "A \"password manager\" might help:\n mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()\n mgr.add_password(None, url, user, password) \n urllib2.build_opener(urllib2.HTTPBasicAuthHandler(mgr),\n urllib2.HTTPDigestAuthHandler(mgr))\n\n", "As for what I tried in my tests (http://devel.almad.net/trac/django-http-digest/browser/djangohttpdigest/tests/test_simple_digest.py), error is prabably in your url - To make it working, I've included http:// part, not only host.\n" ]
[ 8, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001014570_python.txt
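The accepted answer assembled into a runnable sketch; the URL and credentials are placeholders. Registering both handlers against one password manager lets urllib2 satisfy whichever scheme (Basic or Digest) the server actually challenges with:

    import urllib2

    url = 'http://vsearch.cisco.com'
    username = 'xxxxxxxx'
    password = 'xxxxxxxx'

    mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, username, password)
    opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(mgr),
                                  urllib2.HTTPDigestAuthHandler(mgr))

    try:
        f = opener.open(url)
        print f.read()
    except urllib2.HTTPError, e:
        print e.code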
Q: Running Python code contained in a string I'm writing a game engine using pygame and box2d, and in the character builder, I want to be able to write the code that will be executed on keydown events. My plan was to have a text editor in the character builder that let you write code similar to: if key == K_a: ## Move left pass elif key == K_d: ## Move right pass I will retrieve the contents of the text editor as a string, and I want the code to be run in a method in this method of Character: def keydown(self, key): ## Run code from text editor What's the best way to do that? A: You can use the eval(string) method to do this. Definition eval(code, globals=None, locals=None) The code is just standard Python code - this means that it still needs to be properly indented. The globals can have a custom __builtins__ defined, which could be useful for security purposes. Example eval("print('Hello')") Would print hello to the console. You can also specify local and global variables for the code to use: eval("print('Hello, %s'%name)", {}, {'name':'person-b'}) Security Concerns Be careful, though. Any user input will be executed. Consider: eval("import os;os.system('sudo rm -rf /')") There are a number of ways around that. The easiest is to do something like: eval("import os;...", {'os':None}) Which will throw an exception, rather than erasing your hard drive. While your program is desktop, this could be a problem if people redistributed scripts, which I imagine is intended. Strange Example Here's an example of using eval rather strangely: def hello() : print('Hello') def world() : print('world') CURRENT_MOOD = 'happy' eval(get_code(), {'contrivedExample':__main__}, {'hi':hello}.update(locals())) What this does on the eval line is: Gives the current module another name (it becomes contrivedExample to the script). The consumer can call contrivedExample.hello() now.) It defines hi as pointing to hello It combined that dictionary with the list of current globals in the executing module. FAIL It turns out (thanks commenters!) that you actually need to use the exec statement. Big oops. The revised examples are as follows: exec Definition (This looks familiar!) Exec is a statement: exec "code" [in scope] Where scope is a dictionary of both local and global variables. If this is not specified, it executes in the current scope. The code is just standard Python code - this means that it still needs to be properly indented. exec Example exec "print('hello')" Would print hello to the console. You can also specify local and global variables for the code to use: exec "print('hello, '+name)" in {'name':'person-b'} exec Security Concerns Be careful, though. Any user input will be executed. Consider: exec "import os;os.system('sudo rm -rf /')" Print Statement As also noted by commenters, print is a statement in all versions of Python prior to 3.0. In 2.6, the behaviour can be changed by typing from __future__ import print_function. Otherwise, use: print "hello" Instead of : print("hello") A: As others have pointed out, you can load the text into a string and use exec "codestring". If contained in a file already, using execfile will avoid having to load it. One performance note: You should avoid execing the code multiple times, as parsing and compiling the python source is a slow process, i.e. don't have: def keydown(self, key): exec user_code You can improve this a little by compiling the source into a code object (with compile()) and exec that, or better, by constructing a function that you keep around, and only build once. Either require the user to write "def my_handler(args...)", or prepend it yourself, and do something like: user_source = "def user_func(args):\n" + '\n'.join(" "+line for line in user_source.splitlines()) d={} exec user_source in d user_func = d['user_func'] Then later: if key == K_a: user_func(args) A: You can use eval() A: eval or exec. You should definitely read the Python library reference before programming.
Running Python code contained in a string
I'm writing a game engine using pygame and box2d, and in the character builder, I want to be able to write the code that will be executed on keydown events. My plan was to have a text editor in the character builder that let you write code similar to: if key == K_a: ## Move left pass elif key == K_d: ## Move right pass I will retrieve the contents of the text editor as a string, and I want the code to be run in a method in this method of Character: def keydown(self, key): ## Run code from text editor What's the best way to do that?
[ "You can use the eval(string) method to do this. \nDefinition\neval(code, globals=None, locals=None)\nThe code is just standard Python code - this means that it still needs to be properly indented. \nThe globals can have a custom __builtins__ defined, which could be useful for security purposes.\nExample\neval(\"print('Hello')\")\n\nWould print hello to the console. You can also specify local and global variables for the code to use:\neval(\"print('Hello, %s'%name)\", {}, {'name':'person-b'})\n\nSecurity Concerns\nBe careful, though. Any user input will be executed. Consider:\neval(\"import os;os.system('sudo rm -rf /')\")\n\nThere are a number of ways around that. The easiest is to do something like:\neval(\"import os;...\", {'os':None})\n\nWhich will throw an exception, rather than erasing your hard drive. While your program is desktop, this could be a problem if people redistributed scripts, which I imagine is intended. \nStrange Example\nHere's an example of using eval rather strangely:\ndef hello() : print('Hello')\ndef world() : print('world')\nCURRENT_MOOD = 'happy'\n\neval(get_code(), {'contrivedExample':__main__}, {'hi':hello}.update(locals()))\n\nWhat this does on the eval line is:\n\nGives the current module another name (it becomes contrivedExample to the script). The consumer can call contrivedExample.hello() now.)\nIt defines hi as pointing to hello\nIt combined that dictionary with the list of current globals in the executing module.\n\nFAIL\nIt turns out (thanks commenters!) that you actually need to use the exec statement. Big oops. The revised examples are as follows: \n\nexec Definition\n(This looks familiar!)\nExec is a statement:\nexec \"code\" [in scope]\nWhere scope is a dictionary of both local and global variables. If this is not specified, it executes in the current scope.\nThe code is just standard Python code - this means that it still needs to be properly indented. \nexec Example\nexec \"print('hello')\"\n\nWould print hello to the console. You can also specify local and global variables for the code to use:\neval \"print('hello, '+name)\" in {'name':'person-b'}\n\nexec Security Concerns\nBe careful, though. Any user input will be executed. Consider:\nexec \"import os;os.system('sudo rm -rf /')\"\n\n\nPrint Statement\nAs also noted by commenters, print is a statement in all versions of Python prior to 3.0. In 2.6, the behaviour can be changed by typing from __future__ import print_statement. Otherwise, use:\nprint \"hello\"\n\nInstead of :\nprint(\"hello\")\n\n", "As others have pointed out, you can load the text into a string and use exec \"codestring\". If contained in a file already, using execfile will avoid having to load it.\nOne performance note: You should avoid execing the code multiple times, as parsing and compiling the python source is a slow process. ie. don't have:\ndef keydown(self, key):\n exec user_code\n\nYou can improve this a little by compiling the source into a code object (with compile() and exec that, or better, by constructing a function that you keep around, and only build once. Either require the user to write \"def my_handler(args...)\", or prepend it yourself, and do something like:\nuser_source = \"def user_func(args):\\n\" + '\\n'.join(\" \"+line for line in user_source.splitlines())\n\nd={}\nexec user_source in d\nuser_func = d['user_func']\n\nThen later:\nif key == K_a:\n user_func(args)\n\n", "You can use eval()\n", "eval or exec. You should definitely read Python library reference before programming.\n" ]
[ 25, 2, 0, 0 ]
[]
[]
[ "eval", "exec", "pygame", "python" ]
stackoverflow_0001015142_eval_exec_pygame_python.txt
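A runnable Python 2 sketch of the compile-once approach from the second answer: the editor text is wrapped in a function definition, executed a single time, and the resulting function object is reused on every keydown. The sample user code and key value are placeholders:

    user_source = "if key == 97:\n    print 'move left'\n"

    # Indent the user's code and wrap it in a function definition (done once)
    body = '\n'.join('    ' + line for line in user_source.splitlines())
    wrapped = 'def user_func(key):\n' + body

    namespace = {}
    exec wrapped in namespace
    user_func = namespace['user_func']

    # Later, inside keydown(), just call the prebuilt function
    user_func(97)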
Q: Logging All Exceptions in a pyqt4 app What's the best way to log all of the exceptions in a pyqt4 application using the standard python logging api? I've tried wrapping exec_() in a try, except block, and logging the exceptions from that, but it only logs exceptions from the initialization of the app. As a temporary solution, I wrapped the most important methods in try, except blocks, but that can't be the only way to do it. A: You need to override sys.excepthook def my_excepthook(type, value, tback): # log the exception here # then call the default handler sys.__excepthook__(type, value, tback) sys.excepthook = my_excepthook
Logging All Exceptions in a pyqt4 app
What's the best way to log all of the exceptions in a pyqt4 application using the standard python logging api? I've tried wrapping exec_() in a try, except block, and logging the exceptions from that, but it only logs exceptions from the initialization of the app. As a temporary solution, I wrapped the most important methods in try, except blocks, but that can't be the only way to do it.
[ "You need to override sys.excepthook\ndef my_excepthook(type, value, tback):\n # log the exception here\n\n # then call the default handler\n sys.__excepthook__(type, value, tback) \n\nsys.excepthook = my_excepthook\n\n" ]
[ 16 ]
[]
[]
[ "logging", "pyqt", "python" ]
stackoverflow_0001015047_logging_pyqt_python.txt
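A runnable sketch wiring the accepted answer to the standard logging module; the logger name and log file are placeholders. Passing the exception triple as exc_info makes logging format the full traceback:

    import logging
    import sys

    logging.basicConfig(filename='app_errors.log', level=logging.ERROR)
    logger = logging.getLogger('myapp')

    def my_excepthook(exc_type, value, tback):
        # logging formats the traceback when exc_info is supplied
        logger.error('Uncaught exception', exc_info=(exc_type, value, tback))
        sys.__excepthook__(exc_type, value, tback)

    sys.excepthook = my_excepthook

    raise RuntimeError('demo: this ends up in app_errors.log')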
Q: How do I compile Python C extensions using MinGW inside a virtualenv? When using virtualenv in combination with the MinGW compiler on Windows, compiling a C extension results in the following error: C:\MinGW\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe: cannot find -lpython25 collect2: ld returned 1 exit status error: Setup script exited with error: command 'gcc' failed with exit status 1 What should one do to successfully compile C extensions? A: Set the LIBRARY_PATH environment variable so MinGW knows where to find the system-wide Python libpython25.a. Place a line in your virtualenv's activate.bat: set LIBRARY_PATH=c:\python25\libs Or set a global environment variable in Windows. Be sure to change 25 to correspond to your version of Python if you're not using version 2.5.
How do I compile Python C extensions using MinGW inside a virtualenv?
When using virtualenv in combination with the MinGW compiler on Windows, compiling a C extension results in the following error: C:\MinGW\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe: cannot find -lpython25 collect2: ld returned 1 exit status error: Setup script exited with error: command 'gcc' failed with exit status 1 What should one do to successfully compile C extensions?
[ "Set the LIBRARY_PATH environment variable so MinGW knows where to find the system-wide Python libpython25.a.\nPlace a line in your virtualenv's activate.bat:\nset LIBRARY_PATH=c:\\python25\\libs\n\nOr set a global environment variable in Windows.\nBe sure to change 25 to correspond to your version of Python if you're not using version 2.5.\n" ]
[ 6 ]
[]
[]
[ "mingw", "python", "virtualenv" ]
stackoverflow_0001015605_mingw_python_virtualenv.txt
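A hedged companion to the answer: besides LIBRARY_PATH, distutils can be told to default to MinGW so that build_ext picks the right compiler without extra flags. This assumes the standard distutils option names; the file can live as distutils.cfg next to distutils' __init__.py, or as a setup.cfg beside the extension being built:

    [build]
    compiler = mingw32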
Q: Why is `self` in Python objects immutable? Why can't I perform an action like the following: class Test(object): def __init__(self): self = 5 t = Test() print t I would expect it to print 5 since we're overwriting the instance with it, but instead it doesn't do anything at all. Doesn't even throw an error. Just ignores the assignment. I understand that there would be hardly any situations where one would want to do that, but it still seems odd that you can't. Update: I now understand why it doesn't work, but I'd still like to know if there is any way of replacing an instance from within the instance. A: Any simple assignment to any argument of any function behaves exactly the same way in Python: binds that name to a different value, and does nothing else whatsoever. "No special case is special enough to break the rules", as the Zen of Python says!-) So, far from it being odd (that simply=assigning to a specific argument in a specific function has no externally visible effect whatsoever), it would be utterly astonishing if this specific case worked in any other way, just because of the names of the function and argument in question. Should you ever want to make a class that constructs an object of a different type than itself, such behavior is of course quite possible -- but it's obtained by overriding the special method __new__, not __init__: class Test(object): def __new__(cls): return 5 t = Test() print t This does emit 5. The __new__ / __init__ behavior in Python is an example of the "two-step construction" design pattern: the "constructor" proper is __new__ (it builds and returns a (normally uninitialized) object (normally a new one of the type/class in question); __init__ is the "initializer" which properly initializes the new object. This allows, for example, the construction of objects that are immutable once constructed: in this case everything must be done in __new__, before the immutable object is constructed, since, given that the object is immutable, __init__ cannot mutate it in order to initialize it. A: It doesnt "ignore" the assignment. The assignment works just fine, you created a local name that points to the data 5. If you really want to do what you are doing... class Test(object): def __new__(*args): return 5 A: I just ran a quick test, and you can assign to self. Inside your __init__() method, print out the value of self. You'll see that it's 5. What you're missing here is that parameters are passed by value in Python. So, changing the value of a variable in a function or method won't change it for the outside world. All that being said, I would strongly advise against ever changing self. A: Dr. Egon Spengler: It would be bad. Dr. Peter Venkman: I'm fuzzy on the whole good/bad thing. What do you mean, "bad"? Dr. Egon Spengler: Try to imagine all life as you know it stopping instantaneously and every molecule in your body exploding at the speed of light. A: Sometimes you want to do this, though not with immutable types like int: >>> class Test(list): ... def __init__(self): ... list.__init__(self, [1,2,3]) # self = [1,2,3] seems right, but isn't >> t = Test() >> print t [1, 2, 3]
Why is `self` in Python objects immutable?
Why can't I perform an action like the following: class Test(object): def __init__(self): self = 5 t = Test() print t I would expect it to print 5 since we're overwriting the instance with it, but instead it doesn't do anything at all. Doesn't even throw an error. Just ignores the assignment. I understand that there would be hardly any situations where one would want to do that, but it still seems odd that you can't. Update: I now understand why it doesn't work, but I'd still like to know if there is any way of replacing an instance from within the instance.
[ "Any simple assignment to any argument of any function behaves exactly the same way in Python: binds that name to a different value, and does nothing else whatsoever. \"No special case is special enough to break the rules\", as the Zen of Python says!-)\nSo, far from it being odd (that simply=assigning to a specific argument in a specific function has no externally visible effect whatsoever), it would be utterly astonishing if this specific case worked in any other way, just because of the names of the function and argument in question.\nShould you ever want to make a class that constructs an object of a different type than itself, such behavior is of course quite possible -- but it's obtained by overriding the special method __new__, not __init__:\nclass Test(object):\n def __new__(cls):\n return 5\n\nt = Test()\nprint t\n\nThis does emit 5. The __new__ / __init__ behavior in Python is an example of the \"two-step construction\" design pattern: the \"constructor\" proper is __new__ (it builds and returns a (normally uninitialized) object (normally a new one of the type/class in question); __init__ is the \"initializer\" which properly initializes the new object.\nThis allows, for example, the construction of objects that are immutable once constructed: in this case everything must be done in __new__, before the immutable object is constructed, since, given that the object is immutable, __init__ cannot mutate it in order to initialize it.\n", "It doesnt \"ignore\" the assignment. The assignment works just fine, you created a local name that points to the data 5.\nIf you really want to do what you are doing...\nclass Test(object):\n def __new__(*args):\n return 5\n\n", "I just ran a quick test, and you can assign to self. Inside your __init__() method, print out the value of self. You'll see that it's 5.\nWhat you're missing here is that parameters are passed by value in Python. So, changing the value of a variable in a function or method won't change it for the outside world.\nAll that being said, I would strongly advise against ever changing self.\n", "Dr. Egon Spengler: It would be bad.\nDr. Peter Venkman: I'm fuzzy on the whole good/bad thing. What do you mean, \"bad\"?\nDr. Egon Spengler: Try to imagine all life as you know it stopping instantaneously and every molecule in your body exploding at the speed of light.\n", "Sometimes you want to do this, though not with immutable types like int:\n>>> class Test(list):\n ... def __init__(self):\n ... list.__init__(self, [1,2,3]) # self = [1,2,3] seems right, but isn't\n\n>> t = Test()\n>> print t\n[1, 2, 3]\n\n" ]
[ 63, 10, 3, 1, 1 ]
[ "class Test(object):\n def __init__(self):\n self = 5\n\nt = Test()\nprint t\n\nis like having this PHP (only other lang i know, sorry)\nclass Test {\n function __construct() {\n $this = 5;\n }\n}\n\nI don't see how it makes sense. replacing the instance with a value?\n" ]
[ -3 ]
[ "object", "python" ]
stackoverflow_0001015592_object_python.txt
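A compact sketch contrasting the two hooks discussed above: __new__ chooses (and may replace) the object being constructed, while assignment to self inside __init__ only rebinds a local name:

    class ReplacedByInt(object):
        def __new__(cls):
            # The constructor proper: return something else entirely
            return 5

    class RebindsSelf(object):
        def __init__(self):
            # Rebinds the local name only; the instance is unaffected
            self = 5

    print ReplacedByInt()   # prints 5
    print RebindsSelf()     # prints <__main__.RebindsSelf object at 0x...>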
Q: How can I accurately program an automated "click" on Windows? I wrote a program to click on an application automatically at scheduled time using Win32, using MOUSE_DOWN and MOUSE_UP. It usually works well, except I found that I need to put in a sleep 0.1 between the MOUSE_DOWN and MOUSE_UP. (using Ruby, which allows sleeping a fraction of a second). Without the sleep, sometimes the click doesn't go through. But I noticed that sometimes, the click is "too long". The click is actually on a Flash app's Right Arrow. This right arrow will go to the next item on a list. So if you MOUSE_DOWN for a little longer, it actually will shift 2 or 3 items instead of just 1 item. So I wonder, is there a way to accurately simulate 1 click in this case. Probably there is no MOUSE_CLICK event? It has to be simulated using MOUSE_DOWN and MOUSE_UP? (it is actually called MOUSEEVENTF_LEFTDOWN and MOUSEEVENTF_LEFTUP on Win32, just for simplicity it is stated as MOUSE_DOWN instead.) A: If you're not bound to a specific language you could have a look at AutoIt which is made especially for things like this. I had good experiences with it for automating things like mouseclicks or keystrokes. A: You do not decide what delay setting between mouse down and mouse up results in a valid single click, the operating system does. No sleep function can guarantee the timing between the mouse down and mouse up events you want. Perl's Win32::GuiTest module allows you to send an actual click event rather than messing with the timing of down and up events. Later: Looking at the source code, Win32::GuiTest seems to just fire mouse down and up events without any delay between them see GuiTest.pm: elsif ( $item =~ /leftclick/i ) { SendLButtonDown (); SendLButtonUp (); } In addition, http://msdn.microsoft.com/en-us/library/ms646260(VS.85).aspx states that mouse_event has been superseded by SendInput http://msdn.microsoft.com/en-us/library/ms646310(VS.85).aspx which allows you to send a MOUSEINPUT event with timestamps etc. A: What function(s) are you using to accomplish this? I'd expect SendInput with a chain of 2 passed in MOUSEINPUT structures (one button down, the other button up) would pull this off for you. You can even play games with the timestamps on the faked events to more deterministic behavior than you'll get with sleep(). There's no CLICK to send, as its the responsibility of an application to collapse low level events (mouse move, button down, button up, key down, key up, etc.) into "user level" events. Some code, to clarify. This is straight-up untested/compiled C (read: the bad kind of C), but I hope it illustrates my point. LPINPUT events = (LPINPUT)malloc(sizeof(INPUT) * 2); events[0].type = INPUT_MOUSE; events[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN; events[0].mi.time = 0; //Here, you can play with timestamps (milliseconds) events[1].type = INPUT_MOUSE; events[1].mi.dwFlags = MOUSEEVENTF_LEFTUP; events[1].mi.time = 0; //Likewise //Push both events into the event queue SendInput(2, events, sizeof(INPUT)); I have zero Ruby experience, so I don't know how you'd interop this for your case. A: I guess 2 things: I bet ruby doesn't and can't guarantee that sleep 0.1 really sleeps only 0.1 seconds. As this is probably out of rubys control and OS controlled The call to the Win32 API via Win32 itself may take an arbitrary time and cause the delay A: Since the Windows API supports sleeping down to 1 millisecond, why not just sleep a shorter time? 0.01 seconds should be enough? A: I'd use AutoHotKey. 
It's open source and free, been around a long time and actively maintained, to the point that others have built an IDE for it. I use it for generating unique random test data for form testing but it does mouse stuff too. http://www.autohotkey.com/
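For anyone who wants the SendInput route without leaving a scripting language, here is a rough, untested ctypes sketch of the same idea in Python; the structure layout below assumes 32-bit Windows and deliberately omits the keyboard/hardware members of the INPUT union, so treat it as a starting point rather than the canonical Win32 definition:
import ctypes

MOUSEEVENTF_LEFTDOWN = 0x0002
MOUSEEVENTF_LEFTUP = 0x0004

class MOUSEINPUT(ctypes.Structure):
    _fields_ = [("dx", ctypes.c_long), ("dy", ctypes.c_long),
                ("mouseData", ctypes.c_ulong), ("dwFlags", ctypes.c_ulong),
                ("time", ctypes.c_ulong), ("dwExtraInfo", ctypes.c_void_p)]

class INPUT(ctypes.Structure):
    _fields_ = [("type", ctypes.c_ulong), ("mi", MOUSEINPUT)]

def single_click():
    # queue button-down and button-up as one atomic pair -- no sleep in between
    events = (INPUT * 2)(
        INPUT(0, MOUSEINPUT(0, 0, 0, MOUSEEVENTF_LEFTDOWN, 0, None)),
        INPUT(0, MOUSEINPUT(0, 0, 0, MOUSEEVENTF_LEFTUP, 0, None)))
    ctypes.windll.user32.SendInput(2, events, ctypes.sizeof(INPUT))
Because both events enter the input queue in one SendInput call, the gap between down and up no longer depends on how long the interpreter sleeps.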
How can I accurately program an automated "click" on Windows?
I wrote a program to click on an application automatically at scheduled time using Win32, using MOUSE_DOWN and MOUSE_UP. It usually works well, except I found that I need to put in a sleep 0.1 between the MOUSE_DOWN and MOUSE_UP. (using Ruby, which allows sleeping a fraction of a second). Without the sleep, sometimes the click doesn't go through. But I noticed that sometimes, the click is "too long". The click is actually on a Flash app's Right Arrow. This right arrow will go to the next item on a list. So if you MOUSE_DOWN for a little longer, it actually will shift 2 or 3 items instead of just 1 item. So I wonder, is there a way to accurately simulate 1 click in this case. Probably there is no MOUSE_CLICK event? It has to be simulated using MOUSE_DOWN and MOUSE_UP? (it is actually called MOUSEEVENTF_LEFTDOWN and MOUSEEVENTF_LEFTUP on Win32, just for simplicity it is stated as MOUSE_DOWN instead.)
[ "If you're not bound to a specific language you could have a look at AutoIt which is made especially for things like this.\nI had good experiences with it for automating things like mouseclicks or keystrokes.\n", "You do not decide what delay setting between mouse down and mouse up results in a valid single click, the operating system does.\nNo sleep function can guarantee the timing between the mouse down and mouse up events you want. Perl's Win32::GuiTest module allows you to send an actual click event rather than messing with the timing of down and up events. Later: Looking at the source code, Win32::GuiTest seems to just fire mouse down and up events without any delay between them see GuiTest.pm:\nelsif ( $item =~ /leftclick/i ) {\n SendLButtonDown ();\n SendLButtonUp ();\n}\n\nIn addition, http://msdn.microsoft.com/en-us/library/ms646260(VS.85).aspx states that mouse_event has been superseded by SendInput http://msdn.microsoft.com/en-us/library/ms646310(VS.85).aspx which allows you to send a MOUSEINPUT event with timestamps etc.\n", "What function(s) are you using to accomplish this?\nI'd expect SendInput with a chain of 2 passed in MOUSEINPUT structures (one button down, the other button up) would pull this off for you. You can even play games with the timestamps on the faked events to more deterministic behavior than you'll get with sleep().\nThere's no CLICK to send, as its the responsibility of an application to collapse low level events (mouse move, button down, button up, key down, key up, etc.) into \"user level\" events.\n\nSome code, to clarify. This is straight-up untested/compiled C (read: the bad kind of C), but I hope it illustrates my point.\nLPINPUT events = (LPINPUT)malloc(sizeof(INPUT) * 2);\n\nevents[0].type = INPUT_MOUSE;\nevents[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;\nevents[0].mi.time = 0; //Here, you can play with timestamps (milliseconds)\nevents[1].type = INPUT_MOUSE;\nevents[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;\nevents[1].mi.time = 0; //Likewise\n\n//Push both events into the event queue\nSendInput(2, events, sizeof(INPUT));\n\nI have zero Ruby experience, so I don't know how you'd interop this for your case.\n", "I guess 2 things:\n\nI bet ruby doesn't and can't guarantee that sleep 0.1 really sleeps only 0.1 seconds. As this is probably out of rubys control and OS controlled\nThe call to the Win32 API via Win32 itself may take an arbitrary time and cause the delay\n\n", "Since the Windows API supports sleeping down to 1 millisecond, why not just sleep a shorter time? 0.01 seconds should be enough?\n", "I'd use AutoHotKey. It's open source and free, been around a long time and actively maintained, to the point that others have built an IDE for it. I use it for generating unique random test data for form testing but it does mouse stuff too. \nhttp://www.autohotkey.com/\n" ]
[ 3, 3, 1, 1, 1, 1 ]
[]
[]
[ "perl", "python", "ruby", "winapi", "windows" ]
stackoverflow_0001011799_perl_python_ruby_winapi_windows.txt
Q: What does : TypeError: cannot concatenate 'str' and 'list' objects mean? What does this error mean? TypeError: cannot concatenate 'str' and 'list' objects Here's part of the code: for j in ('90.','52.62263.','26.5651.','10.8123.'): if j == '90.': z = ('0.') elif j == '52.62263.': z = ('0.', '72.', '144.', '216.', '288.') for k in z: exepath = os.path.join(exe file location here) exepath = '"' + os.path.normpath(exepath) + '"' cmd = [exepath + '-j' + str(j) + '-n' + str(z)] process=Popen('echo ' + cmd, shell=True, stderr=STDOUT ) print process A: I'm not sure you're aware that cmd is a one-element list, and not a string. Changing that line to the below would construct a string, and the rest of your code will work: # Just removing the square brackets cmd = exepath + '-j' + str(j) + '-n' + str(z) I assume you used brackets just to group the operations. That's not necessary if everything is on one line. If you wanted to break it up over two lines, you should use parentheses, not brackets: # This returns a one-element list cmd = [exepath + '-j' + str(j) + '-n' + str(z)] # This returns a string cmd = (exepath + '-j' + str(j) + '-n' + str(z)) Anything between square brackets in python is always a list. Expressions between parentheses are evaluated as normal, unless there is a comma in the expression, in which case the parentheses act as a tuple constructor: # This is a string str = ("I'm a string") # This is a tuple tup = ("I'm a string","me too") # This is also a (one-element) tuple tup = ("I'm a string",) A: string objects can only be concatenated with other strings. Python is a strongly-typed language. It will not coerce types for you. you can do: 'a' + '1' but not: 'a' + 1 in your case, you are trying to concat a string and a list. this won't work. you can append the item to the list though, if that is your desired result: my_list.append('a') A: There is ANOTHER problem in the OP's code: z = ('0.') then later for k in z: The parentheses in the first statement will be ignored, leading to the second statement binding k first to '0' and then '.' ... looks like z = ('0.', ) was intended.
What does : TypeError: cannot concatenate 'str' and 'list' objects mean?
What does this error mean? TypeError: cannot concatenate 'str' and 'list' objects Here's part of the code: for j in ('90.','52.62263.','26.5651.','10.8123.'): if j == '90.': z = ('0.') elif j == '52.62263.': z = ('0.', '72.', '144.', '216.', '288.') for k in z: exepath = os.path.join(exe file location here) exepath = '"' + os.path.normpath(exepath) + '"' cmd = [exepath + '-j' + str(j) + '-n' + str(z)] process=Popen('echo ' + cmd, shell=True, stderr=STDOUT ) print process
[ "I'm not sure you're aware that cmd is a one-element list, and not a string.\nChanging that line to the below would construct a string, and the rest of your code will work:\n# Just removing the square brackets\ncmd = exepath + '-j' + str(j) + '-n' + str(z)\n\nI assume you used brackets just to group the operations. That's not necessary if everything is on one line. If you wanted to break it up over two lines, you should use parentheses, not brackets:\n# This returns a one-element list\ncmd = [exepath + '-j' + str(j) + \n '-n' + str(z)]\n\n# This returns a string\ncmd = (exepath + '-j' + str(j) + \n '-n' + str(z))\n\nAnything between square brackets in python is always a list. Expressions between parentheses are evaluated as normal, unless there is a comma in the expression, in which case the parentheses act as a tuple constructor:\n# This is a string\nstr = (\"I'm a string\")\n\n# This is a tuple\ntup = (\"I'm a string\",\"me too\")\n\n# This is also a (one-element) tuple\ntup = (\"I'm a string\",)\n\n", "string objects can only be concatenated with other strings. Python is a strongly-typed language. It will not coerce types for you.\nyou can do: \n'a' + '1'\n\nbut not: \n'a' + 1\n\nin your case, you are trying to concat a string and a list. this won't work. you can append the item to the list though, if that is your desired result:\nmy_list.append('a')\n\n", "There is ANOTHER problem in the OP's code:\nz = ('0.') then later for k in z:\nThe parentheses in the first statement will be ignored, leading to the second statement binding k first to '0' and then '.' ... looks like z = ('0.', ) was intended.\n" ]
[ 11, 4, 2 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001014503_python_string.txt
Q: Combining C and Python functions in a module I have a C extension module, to which I would like to add some Python utility functions. Is there a recommended way of doing this? For example: import my_module my_module.super_fast_written_in_C() my_module.written_in_Python__easy_to_maintain() I'm primarily interested in Python 2.x. A: The usual way of doing this is: mymod.py contains the utility functions written in Python, and imports the goodies in the _mymod module which is written in C and is imported from _mymod.so or _mymod.pyd. For example, look at .../Lib/csv.py in your Python distribution. A: Prefix your native extension with an underscore. Then, in Python, create a wrapper module that imports that native extension and adds some other non-native routines on top of that. A: The existing answers describe the method most often used: it has the potential advantage of allowing pure-Python (or other-language) implementations on platforms in which the compiled C extension is not available (including Jython and IronPython). In a few cases, however, it may not be worth splitting the module into a C layer and a Python layer just to provide a few extras that are more sensibly written in Python than in C. For example, gmpy (lines 7113 ff at this time), in order to enable pickling of instances of gmpy's type, uses: copy_reg_module = PyImport_ImportModule("copy_reg"); if (copy_reg_module) { char* enable_pickle = "def mpz_reducer(an_mpz): return (gmpy.mpz, (an_mpz.binary(), 256))\n" "def mpq_reducer(an_mpq): return (gmpy.mpq, (an_mpq.binary(), 256))\n" "def mpf_reducer(an_mpf): return (gmpy.mpf, (an_mpf.binary(), 0, 256))\n" "copy_reg.pickle(type(gmpy.mpz(0)), mpz_reducer)\n" "copy_reg.pickle(type(gmpy.mpq(0)), mpq_reducer)\n" "copy_reg.pickle(type(gmpy.mpf(0)), mpf_reducer)\n" ; PyObject* namespace = PyDict_New(); PyObject* result = NULL; if (options.debug) fprintf(stderr, "gmpy_module imported copy_reg OK\n"); PyDict_SetItemString(namespace, "copy_reg", copy_reg_module); PyDict_SetItemString(namespace, "gmpy", gmpy_module); PyDict_SetItemString(namespace, "type", (PyObject*)&PyType_Type); result = PyRun_String(enable_pickle, Py_file_input, namespace, namespace); If you want those few extra functions to "stick around" in your module (not necessary in this example case), you would of course use your module object as built by Py_InitModule3 (or whatever other method) and its PyModule_GetDict rather than a transient dictionary as the namespace in which to PyRun_String. And of course there are more sophisticated approaches than to PyRun_String the def and class statements you need, but, for simple enough cases, this simple approach may in fact be sufficient.
Combining C and Python functions in a module
I have a C extension module, to which I would like to add some Python utility functions. Is there a recommended way of doing this? For example: import my_module my_module.super_fast_written_in_C() my_module.written_in_Python__easy_to_maintain() I'm primarily interested in Python 2.x.
[ "The usual way of doing this is: mymod.py contains the utility functions written in Python, and imports the goodies in the _mymod module which is written in C and is imported from _mymod.so or _mymod.pyd. For example, look at .../Lib/csv.py in your Python distribution.\n", "Prefix your native extension with an underscore.\nThen, in Python, create a wrapper module that imports that native extension and adds some other non-native routines on top of that.\n", "The existing answers describe the method most often used: it has the potential advantage of allowing pure-Python (or other-language) implementations on platforms in which the compiled C extension is not available (including Jython and IronPython).\nIn a few cases, however, it may not be worth splitting the module into a C layer and a Python layer just to provide a few extras that are more sensibly written in Python than in C. For example, gmpy (lines 7113 ff at this time), in order to enable pickling of instances of gmpy's type, uses:\ncopy_reg_module = PyImport_ImportModule(\"copy_reg\");\nif (copy_reg_module) {\n char* enable_pickle =\n \"def mpz_reducer(an_mpz): return (gmpy.mpz, (an_mpz.binary(), 256))\\n\"\n \"def mpq_reducer(an_mpq): return (gmpy.mpq, (an_mpq.binary(), 256))\\n\"\n \"def mpf_reducer(an_mpf): return (gmpy.mpf, (an_mpf.binary(), 0, 256))\\n\"\n \"copy_reg.pickle(type(gmpy.mpz(0)), mpz_reducer)\\n\"\n \"copy_reg.pickle(type(gmpy.mpq(0)), mpq_reducer)\\n\"\n \"copy_reg.pickle(type(gmpy.mpf(0)), mpf_reducer)\\n\"\n ;\n PyObject* namespace = PyDict_New();\n PyObject* result = NULL;\n if (options.debug)\n fprintf(stderr, \"gmpy_module imported copy_reg OK\\n\");\n PyDict_SetItemString(namespace, \"copy_reg\", copy_reg_module);\n PyDict_SetItemString(namespace, \"gmpy\", gmpy_module);\n PyDict_SetItemString(namespace, \"type\", (PyObject*)&PyType_Type);\n result = PyRun_String(enable_pickle, Py_file_input,\n namespace, namespace);\n\nIf you want those few extra functions to \"stick around\" in your module (not necessary in this example case), you would of course use your module object as built by Py_InitModule3 (or whatever other method) and its PyModule_GetDict rather than a transient dictionary as the namespace in which to PyRun_String. And of course there are more sophisticated approaches than to PyRun_String the def and class statements you need, but, for simple enough cases, this simple approach may in fact be sufficient.\n" ]
[ 8, 5, 1 ]
[]
[]
[ "cpython", "python" ]
stackoverflow_0001013449_cpython_python.txt
Q: python ORM allowing for table creation and bulk inserting? I'm looking for an ORM that allows me to do bulk inserts, as well as create code based on python classes. I tried sqlobject, it worked fine for creating the tables but inserting was unacceptably slow for the amount of data I wanted to insert. If such an ORM doesn't exist, any pointers on classes that can help with things like sanitizing input and building SQL strings would be appreciated. A: You might want to try SQLAlchemy. A: I believe sqlalchemy has bulk inserts, but I haven't ever used it. However, it stacks up favorably in benchmark tests according to this reviewer. EDIT: It doesn't seem clear how he's using SQLAlchemy...whether it's the actual ORM or just query code. Reading the blog entry, I assumed the point was to play with the ORM, but a few commenters seem to assume that he's using query-code and that if it were the ORM it would be much slower. A: I'm not familiar with sqlobject, but for bulk inserts typically you want to make sure this is done in a transaction, so you're not committing for each manipulation. In sqlobject it looks like you can do this by using the transactions object to control commits. You probably need to turn off the default AutoCommit flag as well for this to function properly. http://www.sqlobject.org/SQLObject.html#id45
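To make the SQLAlchemy suggestion concrete, here is a minimal, untested sketch of table creation plus a bulk insert through its expression layer; the table and column names are made up for illustration:
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine('sqlite:///test.db')
meta = MetaData()
words = Table('words', meta,
              Column('id', Integer, primary_key=True),
              Column('word', String(64)))
meta.create_all(engine)   # creates the table from the Python definition

conn = engine.connect()
# a list of dicts makes SQLAlchemy issue a single executemany,
# which is far faster than one INSERT statement per row
conn.execute(words.insert(), [{'word': w} for w in ('foo', 'bar', 'baz')])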
python ORM allowing for table creation and bulk inserting?
I'm looking for an ORM that allows me to do bulk inserts, as well as create code based on python classes. I tried sqlobject, it worked fine for creating the tables but inserting was unacceptably slow for the amount of data I wanted to insert. If such an ORM doesn't exist, any pointers on classes that can help with things like sanitizing input and building SQL strings would be appreciated.
[ "You might want to try SQLAlchemy.\n", "I believe sqlalchemy has bulk inserts, but I haven't ever used it. However, it stacks up favorably in benchmark tests according to this this reviewer.\nEDIT: It doesn't seem clear how he's using SQLAlchemy...whether it's the actual ORM or just query code. Reading the blog entry, I assumed the point was to play with the ORM, but a few commentors seem to assume that he's using query-code and that if it were the ORM it would be much slower.\n", "I'm not familiar with sqlobject, but for bulk inserts typically you want to make sure this is done in a transaction, so your not commiting for each manipulation.\nIn sqlobject it looks like you can do this by using the transactions object to control commits. You probably need to turn of the default AutoCommit flag as well for this to function properly.\nhttp://www.sqlobject.org/SQLObject.html#id45\n" ]
[ 5, 0, 0 ]
[]
[]
[ "database", "orm", "python" ]
stackoverflow_0001013282_database_orm_python.txt
Q: Processing pairs of values from two sequences in Clojure I'm trying to get into the Clojure community. I've been working a lot with Python, and one of the features I make extensive use of is the zip() method, for iterating over pairs of values. Is there a (clever and short) way of achieving the same in Clojure? A: Another way is to simply use map together with some function that collects its arguments in a sequence, like this: user=> (map vector '(1 2 3) "abc") ([1 \a] [2 \b] [3 \c]) A: (zipmap [:a :b :c] (range 3)) -> {:c 2, :b 1, :a 0} Iterating over maps happens pairwise, e.g. like this: (doseq [[k v] (zipmap [:a :b :c] (range 3))] (printf "key: %s, value: %s\n" k v)) prints: key: :c, value: 2 key: :b, value: 1 key: :a, value: 0 A: The question has been answered, but there's still interleave, which also handles an arbitrary number of sequences, but does not group the resulting sequence into tuples (but you can use partition for that).
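For reference, the Python idiom the question has in mind -- in Python 2, zip returns the list of pairs directly:
>>> zip([1, 2, 3], "abc")
[(1, 'a'), (2, 'b'), (3, 'c')]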
Processing pairs of values from two sequences in Clojure
I'm trying to get into the Clojure community. I've been working a lot with Python, and one of the features I make extensive use of is the zip() method, for iterating over pairs of values. Is there a (clever and short) way of achieving the same in Clojure?
[ "Another way is to simply use map together with some function that collects its arguments in a sequence, like this:\nuser=> (map vector '(1 2 3) \"abc\")\n([1 \\a] [2 \\b] [3 \\c])\n\n", "(zipmap [:a :b :c] (range 3))\n-> {:c 2, :b 1, :a 0}\n\nIterating over maps happens pairwise, e.g. like this:\n(doseq [[k v] (zipmap [:a :b :c] (range 3))]\n (printf \"key: %s, value: %s\\n\" k v))\n\nprints:\nkey: :c, value: 2\nkey: :b, value: 1\nkey: :a, value: 0\n\n", "The question has been answered, but there's still interleave, which also handles an arbitrary number of sequences, but does not group the resulting sequence into tuples (but you can use partition for that).\n" ]
[ 12, 4, 3 ]
[]
[]
[ "clojure", "python", "zip" ]
stackoverflow_0001009037_clojure_python_zip.txt
Q: Passing Formatted Text Through XSLT I have formatted text (with newlines, tabs, etc.) coming in from a Telnet connection. I have a python script that manages the Telnet connection and embeds the Telnet response in XML that then gets passed through an XSLT transform. How do I pass that XML through the transform without losing the original formatting? I have access to the transformation script and the python script but not the transform invocation itself. A: You could embed the text you want to be untouched in a CDATA section. A: Data stored in XML comes out the same way it goes in. So if you store the text in an element, no whitespace and newlines are lost unless you tamper with the data in the XSLT. Enclosing the text in CDATA is unnecessary unless there is some formatting that is invalid in XML (pointy brackets, ampersands, quotes) and you don't want to XML-escape the text under any circumstances. This is up to you, but in any case XML-escaping is completely transparent when the XML is handled with an XML-aware tool chain. To answer your question more specifically, you need to show some input, the essential part of the transformation, and some output.
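On the Python side, the CDATA suggestion might look like this untested sketch, assuming lxml is available (the element name and the telnet_output variable are placeholders for your own code):
from lxml import etree

root = etree.Element("response")
root.text = etree.CDATA(telnet_output)   # raw text, newlines and tabs intact
xml_bytes = etree.tostring(root)
# hand xml_bytes to the existing XSLT invocation unchanged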
Passing Formatted Text Through XSLT
I have formatted text (with newlines, tabs, etc.) coming in from a Telnet connection. I have a python script that manages the Telnet connection and embeds the Telnet response in XML that then gets passed through an XSLT transform. How do I pass that XML through the transform without losing the original formatting? I have access to the transformation script and the python script but not the transform invocation itself.
[ "You could embed the text you want to be untouched in a CDATA section.\n", "Data stored in XML comes out the same way it goes in. So if you store the text in an element, no whitespace and newlines are lost unless you tamper with the data in the XSLT. \nEnclosing the text in CDATA is unnecessary unless there is some formatting that is invalid in XML (pointy brackets, ampersands, quotes) and you don't want to XML-escape the text under any circumstances. This is up to you, but in any case XML-escaping is completely transparent when the XML is handled with an XML-aware tool chain.\nTo answer your question more specifically, you need to show some input, the essential part of the transformation, and some output.\n" ]
[ 0, 0 ]
[]
[]
[ "python", "xslt" ]
stackoverflow_0001015816_python_xslt.txt
Q: Generate from generators I have a generator that takes a number as an argument and yields other numbers. I want to use the numbers yielded by this generator and pass them as arguments to the same generator, creating a chain of some length. For example, mygenerator(2) yields 4 and 5. Apply mygenerator to each of these numbers, over and over again to the numbers yielded. The generator always yields bigger numbers than the one passed as argument, and for 2 different numbers will never yield the same number. mygenerator(2): 4 5 mygenerator(4): 10 11 12 mygenerator(5): 9 300 500 So the set (9,10,11,12,300,500) has "distance" 2 from the original number, 2. If I apply it to the number 9, I will get a set of numbers with distance "3" from the original 2. Essentially what I want is to create a set that has a specified distance from a given number and I have problems figuring out how to do that in Python. Help much appreciated :) A: Suppose our generator yields square and cube of given number that way it will output unique so if we want to get numbers at dist D in simplest case we can recursively get numbers at dist D-1 and then apply generator to them def mygen(N): yield N**2 yield N**3 def getSet(N, dist): if dist == 0: return [N] numbers = [] for n in getSet(N, dist-1): numbers += list(mygen(n)) return numbers print getSet(2,0) print getSet(2,1) print getSet(2,2) print getSet(2,3) output is [2] [4, 8] [16, 64, 64, 512] [256, 4096, 4096, 262144, 4096, 262144, 262144, 134217728] A: This solution does not require to keep all results in memory: (in case it doesn't fit in memory etc) def grandKids(generation, kidsFunc, val): layer = [val] for i in xrange(generation): layer = itertools.chain.from_iterable(itertools.imap(kidsFunc, layer)) return layer Example: def kids(x): # children indices in a 1-based binary heap yield x*2 yield x*2+1 >>> list(grandKids(3, kids, 2)) [16, 17, 18, 19, 20, 21, 22, 23] Btw, solution in Haskell: grandKids generation kidsFunc val = iterate (concatMap kidsFunc) [val] !! generation A: I have just started learning Python so bear with me if my answer seems a tad amateurish. What you could do is use a list of lists to populate the values returned from the myGenerator function. So for eg. with 2 as the starting argument your data-structure would resemble something like resDataSet = [[2], [4, 5], [9, 10, 11, 12, 300 , 500] ... ] The row index should give you the distance and you can use methods like extend to add on more data to your list.
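A sketch closer to the question's own wording -- build each successive "layer" as a set, assuming mygenerator is the generator the question describes (untested):
def at_distance(start, dist, mygenerator):
    current = set([start])
    for _ in xrange(dist):
        step = set()
        for n in current:
            step.update(mygenerator(n))   # expand every number one level
        current = step
    return current   # the set at the requested distance from start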
Generate from generators
I have a generator that takes a number as an argument and yields other numbers. I want to use the numbers yielded by this generator and pass them as arguments to the same generator, creating a chain of some length. For example, mygenerator(2) yields 4 and 5. Apply mygenerator to each of these numbers, over and over again to the numbers yielded. The generator always yields bigger numbers than the one passed as argument, and for 2 different numbers will never yield the same number. mygenerator(2): 4 5 mygenerator(4): 10 11 12 mygenerator(5): 9 300 500 So the set (9,10,11,12,300,500) has "distance" 2 from the original number, 2. If I apply it to the number 9, I will get a set of numbers with distance "3" from the original 2. Essentially what I want is to create a set that has a specified distance from a given number and I have problems figuring out how to do that in Python. Help much appreciated :)
[ "Suppose our generator yields square and cube of given number that way it will output unique\nso if we want to get numbers at dist D in simplest case we can recursively get numbers at dist D-1 and then apply generator to them\ndef mygen(N):\n yield N**2\n yield N**3\n\ndef getSet(N, dist):\n if dist == 0:\n return [N]\n\n numbers = []\n for n in getSet(N, dist-1):\n numbers += list(mygen(n))\n\n return numbers\n\nprint getSet(2,0)\nprint getSet(2,1)\nprint getSet(2,2)\nprint getSet(2,3)\n\noutput is\n[2]\n[4, 8]\n[16, 64, 64, 512]\n[256, 4096, 4096, 262144, 4096, 262144, 262144, 134217728]\n\n", "This solution does not require to keep all results in memory: (in case it doesn't fit in memory etc)\ndef grandKids(generation, kidsFunc, val):\n layer = [val]\n for i in xrange(generation):\n layer = itertools.chain.from_iterable(itertools.imap(kidsFunc, layer))\n return layer\n\nExample:\ndef kids(x): # children indices in a 1-based binary heap\n yield x*2\n yield x*2+1\n\n>>> list(grandKids(3, kids, 2))\n[16, 17, 18, 19, 20, 21, 22, 23]\n\nBtw, solution in Haskell:\ngrandKids generation kidsFunc val =\n iterate (concatMap kidsFunc) [val] !! generation\n\n", "I have just started learning Python so bear with me if my answer seems a tad amateurish. What you could do is use a list of lists to populate the values returned from the myGenerator function.\nSo for eg. with 2 as the starting argument your data-structure would resemble something like \nresDataSet = [[2], \n [4, 5],\n [9, 10, 11, 12, 300 , 500]\n ...\n ]\n\nThe row index should give you the distance and you can use methods like extend to add on more data to your list.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "generator", "python" ]
stackoverflow_0001016997_generator_python.txt
Q: How to link C lib against python for embedding under Windows? I am working on an application written in C. One part of the application should embed python and there is my current problem. I try to link my source to the Python library but it does not work. As I use MinGW I have created the python26.a file from python26.lib with dlltool and put the *.a file in C:/Program Files (x86)/python/2.6/libs. Therefore, I compile the file with this command: gcc -shared -o mod_python.dll mod_python.o "-LC:\Program Files (x86)\python\2.6\libs" -lpython26 -Wl,--out-implib,libmod_python.a -Wl,--output-def,mod_python.def and I get those errors: Creating library file: libmod_python.a mod_python.o: In function `module_init': mod_python.c:34: undefined reference to `__imp__Py_Initialize' mod_python.c:35: undefined reference to `__imp__PyEval_InitThreads' ... and so on ... My Python "root" folder is C:\Program Files (x86)\python\2.6 The Devsystem is a Windows Server 2008 GCC Information: Reading specs from C:/Program Files (x86)/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug Thread model: win32 gcc version 3.4.5 (mingw-vista special r3) What am I doing wrong? How do I get it compiled and linked :-)? Cheers, gregor Edit: I forgot to write information about my Python installation: It's the official python.org installation 2.6.1 ... and how I created the python.a file: dlltool -z python.def --export-all-symbols -v c:\windows\system32\python26.dll dlltool --dllname c:\Windows\system32\python26.dll --def python.def -v --output-lib python26.a A: Well on Windows the python distribution comes already with a libpython26.a in the libs subdir so there is no need to generate .a files using dll tools. I did try a little example with a single C file toto.c: gcc -shared -o ./toto.dll ./toto.c -I/Python26/include/ -L/Python26/libs -lpython26 And it works like a charm. Hope it will help :-) A: Python (at least my distribution) comes with a "python-config" program that automatically creates the correct compiler and linker options for various situations. However, I have never used it on Windows. Perhaps this tool can help you though? A: IIRC, dlltool does not always work. Having python 2.6 + Wow makes things even less likely to work. For numpy, here is how I did it. Basically, I use objdump.exe to build the table from the dll, which I parse to generate the .def. You should check whether your missing symbols are in the .def, or otherwise it won't work.
How to link C lib against python for embedding under Windows?
I am working on an application written in C. One part of the application should embed python and there is my current problem. I try to link my source to the Python library but it does not work. As I use MinGW I have created the python26.a file from python26.lib with dlltool and put the *.a file in C:/Program Files (x86)/python/2.6/libs. Therefore, I compile the file with this command: gcc -shared -o mod_python.dll mod_python.o "-LC:\Program Files (x86)\python\2.6\libs" -lpython26 -Wl,--out-implib,libmod_python.a -Wl,--output-def,mod_python.def and I get those errors: Creating library file: libmod_python.a mod_python.o: In function `module_init': mod_python.c:34: undefined reference to `__imp__Py_Initialize' mod_python.c:35: undefined reference to `__imp__PyEval_InitThreads' ... and so on ... My Python "root" folder is C:\Program Files (x86)\python\2.6 The Devsystem is a Windows Server 2008 GCC Information: Reading specs from C:/Program Files (x86)/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug Thread model: win32 gcc version 3.4.5 (mingw-vista special r3) What am I doing wrong? How do I get it compiled and linked :-)? Cheers, gregor Edit: I forgot to write information about my Python installation: It's the official python.org installation 2.6.1 ... and how I created the python.a file: dlltool -z python.def --export-all-symbols -v c:\windows\system32\python26.dll dlltool --dllname c:\Windows\system32\python26.dll --def python.def -v --output-lib python26.a
[ "Well on Windows the python distribution comes already with a libpython26.a in the libs subdir so there is no need to generate .a files using dll tools.\nI did try a little example with a single C file toto.c:\ngcc -shared -o ./toto.dll ./toto.c -I/Python26/include/ -L/Python26/libs -lpython26\n\nAnd it works like a charm. Hope it will help :-)\n", "Python (at least my distribution) comes with a \"python-config\" program that automatically creates the correct compiler and linker options for various situations. However, I have never used it on Windows. Perhaps this tool can help you though?\n", "IIRC, dlltool does not always work. Having python 2.6 + Wow makes things even more less likely to work. For numpy, here is how I did it. Basically, I use obdump.exe to build the table from the dll, which I parse to generate the .def. You should check whether your missing symbols are in the .def, or otherwise it won't work.\n" ]
[ 3, 1, 1 ]
[]
[]
[ "c", "gcc", "linker", "python", "windows" ]
stackoverflow_0001013441_c_gcc_linker_python_windows.txt
Q: wxProgressDialog like behaviour for a wxDialog I want to create a modal dialog which shouldn't behave in a modal way, i.e. control flow should continue if I do dlg = wx.Dialog(parent) dlg.ShowModal() print "xxx" dlg.Destroy() "xxx" will not get printed, but in case of progress dialog dlg = wx.ProgressDialog.__init__(self,title, title, parent=parent, style=wx.PD_APP_MODAL) print "xxx" dlg.Destroy() "xxx" will get printed so basically I want to achieve wx.PD_APP_MODAL for a normal dialog? A: Just use Show instead of ShowModal. If your function (the print "xxx" part) runs for a long time you will either have to manually call wx.SafeYield every so often or move your work to a separate thread and send custom events to your dialog from it. One more tip. As I understand, you want to execute some code after the modal dialog is shown, here is a little trick for a special bind to EVT_INIT_DIALOG that accomplishes just that. import wx class TestFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) btn = wx.Button(self, label="Show Dialog") btn.Bind(wx.EVT_BUTTON, self.ShowDialog) def ShowDialog(self, event): dlg = wx.Dialog(self) dlg.Bind(wx.EVT_INIT_DIALOG, lambda e: wx.CallAfter(self.OnModal, e)) dlg.ShowModal() dlg.Destroy() def OnModal(self, event): wx.MessageBox("Executed after ShowModal") app = wx.PySimpleApp() app.TopWindow = TestFrame() app.TopWindow.Show() app.MainLoop() A: It was very trivial: just using the wx.PD_APP_MODAL style in wx.Dialog allows it to be modal without stopping the program flow; only user input to the app is blocked. I thought PD_APP_MODAL was only for progress dialogs
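An untested sketch of the non-blocking route from the first answer, with the parent disabled by hand to imitate PD_APP_MODAL-style input blocking:
dlg = wx.Dialog(parent)
parent.Disable()   # block input to the rest of the app manually
dlg.Show()         # returns immediately, unlike ShowModal()
print "xxx"        # control flow continues here
# ... later, once the dialog has served its purpose:
dlg.Destroy()
parent.Enable()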
wxProgressDialog like behaviour for a wxDialog
I want to create a modal dialog which shouldn't behave in a modal way, i.e. control flow should continue if I do dlg = wx.Dialog(parent) dlg.ShowModal() print "xxx" dlg.Destroy() "xxx" will not get printed, but in case of progress dialog dlg = wx.ProgressDialog.__init__(self,title, title, parent=parent, style=wx.PD_APP_MODAL) print "xxx" dlg.Destroy() "xxx" will get printed so basically I want to achieve wx.PD_APP_MODAL for a normal dialog?
[ "Just use Show instead of ShowModal.\nIf your function (the print \"xxx\" part) runs for a long time you will either have to manually call wx.SafeYield every so often or move your work to a separate thread and send custom events to your dialog from it.\nOne more tip. As I understand, you want to execute some code after the modal dialog is shown, here is a little trick for a special bind to EVT_INIT_DIALOG that accomplishes just that.\nimport wx\n\nclass TestFrame(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None)\n btn = wx.Button(self, label=\"Show Dialog\")\n btn.Bind(wx.EVT_BUTTON, self.ShowDialog)\n\n def ShowDialog(self, event):\n dlg = wx.Dialog(self)\n dlg.Bind(wx.EVT_INIT_DIALOG, lambda e: wx.CallAfter(self.OnModal, e))\n dlg.ShowModal()\n dlg.Destroy()\n\n def OnModal(self, event):\n wx.MessageBox(\"Executed after ShowModal\")\n\napp = wx.PySimpleApp()\napp.TopWindow = TestFrame()\napp.TopWindow.Show()\napp.MainLoop()\n\n", "It was very trivial, just using wx.PD_APP_MODAL style in wx.Dialog allows it to be modal without stopping the program flow, only user input to app is blocked, i thought PD_APP_MODAL is only for progress dialog\n" ]
[ 1, 0 ]
[]
[]
[ "modal_dialog", "python", "wxpython" ]
stackoverflow_0001006598_modal_dialog_python_wxpython.txt
Q: Bash or Python to go backwards? I have a text file which a lot of random occurrences of the string @STRING_A, and I would be interested in writing a short script which removes only some of them. Particularly one that scans the file and once it finds a line which starts with this string like @STRING_A then checks if 3 lines backwards there is another occurrence of a line starting with the same string, like @STRING_A @STRING_A and if it happens, to delete the occurrence 3 lines backward. I was thinking about bash, but I do not know how to "go backwards" with it. So I am sure that this is not possible with bash. I also thought about python, but then I should store all information in memory in order to go backwards and then, for long files it would be unfeasible. What do you think? Is it possible to do it in bash or python? Thanks A: Funny that after all these hours nobody's yet given a solution to the problem as actually phrased (as @John Machin points out in a comment) -- remove just the leading marker (if followed by another such marker 3 lines down), not the whole line containing it. It's not hard, of course -- here's a tiny mod as needed of @truppo's fun solution, for example: from itertools import izip, chain f = "foo.txt" for third, line in izip(chain(" ", open(f)), open(f)): if third.startswith("@STRING_A") and line.startswith("@STRING_A"): line = line[len("@STRING_A"):] print line, Of course, in real life, one would use an iterator.tee instead of reading the file twice, have this code in a function, not repeat the marker constant endlessly, &c-). A: Of course Python will work as well. Simply store the last three lines in an array and check if the first element in the array is the same as the value you are currently reading. Then delete the value and print out the current array. You would then move over your elements to make room for the new value and repeat. Of course when the array is filled you'd have to make sure to continue to move values out of the array and put in the newly read values, stopping to check each time to see if the first value in the array matches the value you are currently reading. A: Here is a more fun solution, using two iterators with a three element offset :) from itertools import izip, chain, tee f1, f2 = tee(open("foo.txt")) for third, line in izip(chain(" ", f1), f2): if not (third.startswith("@STRING_A") and line.startswith("@STRING_A")): print line, A: Why shouldn't it possible in bash? You don't need to keep the whole file in memory, just the last three lines (if I understood correctly), and write what's appropriate to standard-out. Redirect that into a temporary file, check that everything worked as expected, and overwrite the source file with the temporary one. Same goes for Python. I'd provide a script of my own, but that wouldn't be tested. ;-) A: This code will scan through the file, and remove lines starting with the marker. It only keeps only three lines in memory by default: from collections import deque def delete(fp, marker, gap=3): """Delete lines from *fp* if they with *marker* and are followed by another line starting with *marker* *gap* lines after. """ buf = deque() for line in fp: if len(buf) < gap: buf.append(line) else: old = buf.popleft() if not (line.startswith(marker) and old.startswith(marker)): yield old buf.append(line) for line in buf: yield line I've tested it with: >>> from StringIO import StringIO >>> fp = StringIO('''a ... b ... xxx 1 ... c ... xxx 2 ... d ... e ... xxx 3 ... f ... g ... h ... xxx 4 ... 
i''') >>> print ''.join(delete(fp, 'xxx')) a b xxx 1 c d e xxx 3 f g h xxx 4 i A: As AlbertoPL said, store lines in a fifo for later use--don't "go backwards". For this I would definitely use python over bash+sed/awk/whatever. I took a few moments to code this snippet up: from collections import deque line_fifo = deque() for line in open("test"): line_fifo.append(line) if len(line_fifo) == 4: # "look 3 lines backward" if line_fifo[0] == line_fifo[-1] == "@STRING_A\n": # get rid of that match line_fifo.popleft() else: # print out the top of the fifo print line_fifo.popleft(), # don't forget to print out the fifo when the file ends for line in line_fifo: print line, A: My awk-fu has never been that good... but the following may provide you what you're looking for in a bash-shell/shell-utility form: sed `awk 'BEGIN{ORS=";"} /@STRING_A/ { if(LAST!="" && LAST+3 >= NR) print LAST "d" LAST = NR }' test_file` test_file Basically... awk is producing a command for sed to strip certain lines. I'm sure there's a relatively easy way to make awk do all of the processing, but this does seem to work. The bad part? It does read the test_file twice. The good part? It is a bash/shell-utility implementation. Edit: Alex Martelli points out that the sample file above might have confused me. (my above code deletes the whole line, rather than the @STRING_A flag only) This is easily remedied by adjusting the command to sed: sed `awk 'BEGIN{ORS=";"} /@STRING_A/ { if(LAST!="" && LAST+3 >= NR) print LAST "s/@STRING_A//" LAST = NR }' test_file` test_file A: This "answer" is for lyrae ... I'll amend my previous comment: if the needle is in the first 3 lines of the file, your script will either cause an IndexError or access a line that it shouldn't be accessing, sometimes with interesting side-effects. Example of your script causing IndexError: >>> lines = "@string line 0\nblah blah\n".splitlines(True) >>> needle = "@string " >>> for i,line in enumerate(lines): ... if line.startswith(needle) and lines[i-3].startswith(needle): ... lines[i-3] = lines[i-3].replace(needle, "") ... Traceback (most recent call last): File "<stdin>", line 2, in <module> IndexError: list index out of range and this example shows not only that the Earth is round but also why your "fix" to the "don't delete the whole line" problem should have used .replace(needle, "", 1) or [len(needle):] instead of .replace(needle, "") >>> lines = "NEEDLE x NEEDLE y\nnoddle\nnuddle\n".splitlines(True) >>> needle = "NEEDLE" >>> # Expected result: no change to the file ... for i,line in enumerate(lines): ... if line.startswith(needle) and lines[i-3].startswith(needle): ... lines[i-3] = lines[i-3].replace(needle, "") ... >>> print ''.join(lines) x y <<<=== whoops! noddle nuddle <<<=== still got unwanted newline in here >>>
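One practical detail the answers leave implicit is getting the filtered output back into the original file. A small untested sketch, reusing the delete() generator from the answer above (the filenames are placeholders):
import os
src = open('the_file')
dst = open('the_file.tmp', 'w')
dst.writelines(delete(src, '@STRING_A'))
src.close()
dst.close()
os.rename('the_file.tmp', 'the_file')   # replace the original (on Windows, remove the target first)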
Bash or Python to go backwards?
I have a text file which has a lot of random occurrences of the string @STRING_A, and I would be interested in writing a short script which removes only some of them. Particularly one that scans the file and once it finds a line which starts with this string like @STRING_A then checks if 3 lines backwards there is another occurrence of a line starting with the same string, like @STRING_A @STRING_A and if it happens, to delete the occurrence 3 lines backward. I was thinking about bash, but I do not know how to "go backwards" with it. So I am sure that this is not possible with bash. I also thought about python, but then I should store all information in memory in order to go backwards and then, for long files it would be unfeasible. What do you think? Is it possible to do it in bash or python? Thanks
[ "Funny that after all these hours nobody's yet given a solution to the problem as actually phrased (as @John Machin points out in a comment) -- remove just the leading marker (if followed by another such marker 3 lines down), not the whole line containing it. It's not hard, of course -- here's a tiny mod as needed of @truppo's fun solution, for example:\nfrom itertools import izip, chain\nf = \"foo.txt\"\nfor third, line in izip(chain(\" \", open(f)), open(f)):\n if third.startswith(\"@STRING_A\") and line.startswith(\"@STRING_A\"):\n line = line[len(\"@STRING_A\"):]\n print line,\n\nOf course, in real life, one would use an iterator.tee instead of reading the file twice, have this code in a function, not repeat the marker constant endlessly, &c-).\n", "Of course Python will work as well. Simply store the last three lines in an array and check if the first element in the array is the same as the value you are currently reading. Then delete the value and print out the current array. You would then move over your elements to make room for the new value and repeat. Of course when the array is filled you'd have to make sure to continue to move values out of the array and put in the newly read values, stopping to check each time to see if the first value in the array matches the value you are currently reading.\n", "Here is a more fun solution, using two iterators with a three element offset :)\nfrom itertools import izip, chain, tee\nf1, f2 = tee(open(\"foo.txt\"))\nfor third, line in izip(chain(\" \", f1), f2):\n if not (third.startswith(\"@STRING_A\") and line.startswith(\"@STRING_A\")):\n print line,\n\n", "Why shouldn't it possible in bash? You don't need to keep the whole file in memory, just the last three lines (if I understood correctly), and write what's appropriate to standard-out. Redirect that into a temporary file, check that everything worked as expected, and overwrite the source file with the temporary one.\nSame goes for Python.\nI'd provide a script of my own, but that wouldn't be tested. ;-)\n", "This code will scan through the file, and remove lines starting with the marker. It only keeps only three lines in memory by default:\nfrom collections import deque\n\ndef delete(fp, marker, gap=3):\n \"\"\"Delete lines from *fp* if they with *marker* and are followed\n by another line starting with *marker* *gap* lines after.\n \"\"\"\n buf = deque()\n for line in fp:\n if len(buf) < gap:\n buf.append(line)\n else:\n old = buf.popleft()\n if not (line.startswith(marker) and old.startswith(marker)):\n yield old\n buf.append(line)\n for line in buf:\n yield line\n\nI've tested it with:\n>>> from StringIO import StringIO\n>>> fp = StringIO('''a\n... b\n... xxx 1\n... c\n... xxx 2\n... d\n... e\n... xxx 3\n... f\n... g\n... h\n... xxx 4\n... i''')\n>>> print ''.join(delete(fp, 'xxx'))\na\nb\nxxx 1\nc\nd\ne\nxxx 3\nf\ng\nh\nxxx 4\ni\n\n", "As AlbertoPL said, store lines in a fifo for later use--don't \"go backwards\". For this I would definitely use python over bash+sed/awk/whatever. 
\nI took a few moments to code this snippet up:\nfrom collections import deque\nline_fifo = deque()\nfor line in open(\"test\"):\n line_fifo.append(line)\n if len(line_fifo) == 4:\n # \"look 3 lines backward\" \n if line_fifo[0] == line_fifo[-1] == \"@STRING_A\\n\":\n # get rid of that match\n line_fifo.popleft()\n else:\n # print out the top of the fifo\n print line_fifo.popleft(),\n# don't forget to print out the fifo when the file ends\nfor line in line_fifo: print line,\n\n", "My awk-fu has never been that good... but the following may provide you what you're looking for in a bash-shell/shell-utility form:\nsed `awk 'BEGIN{ORS=\";\"}\n/@STRING_A/ {\n if(LAST!=\"\" && LAST+3 >= NR) print LAST \"d\"\n LAST = NR\n}' test_file` test_file\n\nBasically... awk is producing a command for sed to strip certain lines. I'm sure there's a relatively easy way to make awk do all of the processing, but this does seem to work.\nThe bad part? It does read the test_file twice.\nThe good part? It is a bash/shell-utility implementation.\nEdit: Alex Martelli points out that the sample file above might have confused me. (my above code deletes the whole line, rather than the @STRING_A flag only)\nThis is easily remedied by adjusting the command to sed:\nsed `awk 'BEGIN{ORS=\";\"}\n/@STRING_A/ {\n if(LAST!=\"\" && LAST+3 >= NR) print LAST \"s/@STRING_A//\"\n LAST = NR\n}' test_file` test_file\n\n", "This \"answer\" is for lyrae ... I'll amend my previous comment: if the needle is in the first 3 lines of the file, your script will either cause an IndexError or access a line that it shouldn't be accessing, sometimes with interesting side-effects.\nExample of your script causing IndexError:\n>>> lines = \"@string line 0\\nblah blah\\n\".splitlines(True)\n>>> needle = \"@string \"\n>>> for i,line in enumerate(lines):\n... if line.startswith(needle) and lines[i-3].startswith(needle):\n... lines[i-3] = lines[i-3].replace(needle, \"\")\n...\nTraceback (most recent call last):\n File \"<stdin>\", line 2, in <module>\nIndexError: list index out of range\n\nand this example shows not only that the Earth is round but also why your \"fix\" to the \"don't delete the whole line\" problem should have used .replace(needle, \"\", 1) or [len(needle):] instead of .replace(needle, \"\") \n>>> lines = \"NEEDLE x NEEDLE y\\nnoddle\\nnuddle\\n\".splitlines(True)\n>>> needle = \"NEEDLE\"\n>>> # Expected result: no change to the file\n... for i,line in enumerate(lines):\n... if line.startswith(needle) and lines[i-3].startswith(needle):\n... lines[i-3] = lines[i-3].replace(needle, \"\")\n...\n>>> print ''.join(lines)\n x y <<<=== whoops!\nnoddle\nnuddle\n <<<=== still got unwanted newline in here\n>>>\n\n" ]
[ 4, 2, 2, 1, 1, 1, 0, 0 ]
[ "In bash you can use sort -r filename and tail -n filename to read the file backwards.\n$LINES=`tail -n filename | sort -r`\n# now iterate through the lines and do your checking\n\n", "This may be what you're looking for?\nlines = open('sample.txt').readlines()\n\nneedle = \"@string \"\n\nfor i,line in enumerate(lines):\n if line.startswith(needle) and lines[i-3].startswith(needle):\n lines[i-3] = lines[i-3].replace(needle, \"\")\nprint ''.join(lines)\n\nthis outputs:\nstring 0 extra text\nstring 1 extra text\nstring 2 extra text\nstring 3 extra text\n--replaced -- 4 extra text\nstring 5 extra text\nstring 6 extra text\n@string 7 extra text\nstring 8 extra text\nstring 9 extra text\nstring 10 extra text\n\n", "I would consider using sed. gnu sed supports definition of line ranges. if sed would fail, then there is another beast - awk and I'm sure you can do it with awk.\nO.K. I feel I should put my awk POC. I could not figure out to use sed addresses. I have not tried combination of awk+sed, but it seems to me it's overkill.\nmy awk script works as follows:\n\nIt reads lines and stores them into 3 line buffer\nonce desired pattern is found (/^data.*/ in my case), the 3-line buffer is looked up to check, whether desired pattern has been seen three lines ago\nif pattern has been seen, then 3 lines are scratched\n\nto be honest, I would probably go with python also, given that awk is really awkward.\nthe AWK code follows:\n\nfunction max(a, b)\n{\n if (a > b)\n return a;\n else\n return b;\n}\n\nBEGIN {\n w = 0; #write index\n r = 0; #read index\n buf[0, 1, 2]; #buffer\n\n}\n\nEND {\n # flush buffer\n # start at read index and print out up to w index\n for (k = r % 3; k r - max(r - 3, 0); k--) {\n #search in 3 line history buf\n if (match(buf[k % 3], /^data.*/) != 0) {\n # found -> remove lines from history\n # by rewriting them -> adjust write index\n w -= max(r, 3);\n }\n }\n buf[w % 3] = $0;\n w++;\n}\n\n/^.*/ {\n # store line into buffer, if the history\n # is full, print out the oldest one.\n if (w > 2) {\n print buf[r % 3];\n r++;\n buf[w % 3] = $0;\n }\n else {\n buf[w] = $0;\n }\n w++;\n}\n\n" ]
[ -1, -1, -2 ]
[ "bash", "python" ]
stackoverflow_0001012490_bash_python.txt
Q: Python cgi performance I own a legacy python application written as CGI. Until now this works OK, but the number of concurrent users will increase greatly in the very near future. Here on SO I read: "CGI is great for low-traffic websites, but it has some performance problems for anything else". I know it would have been better to start in another way, but CGI is what it is now. Could someone point me in a direction on how to keep the CGI performing, without having to rewrite all code? A: CGI doesn't scale because each request forks a brand new server process. It's a lot of overhead. mod_wsgi avoids the overhead by forking one process and handing requests to that one running process. Let's assume the application is the worst kind of cgi. The worst case is that it has files like this. my_cgi.py import cgi print "status: 200 OK" print "content-type: text/html" print print "<!doctype...>" print "<html>" etc. You can try to "wrap" the original CGI files to make it wsgi. wsgi.py import cStringIO import os, sys def my_cgi( environ, start_response ): page = cStringIO.StringIO() sys.stdout= page os.environ.update( environ ) # you may have to do something like execfile( "my_cgi.py", globals=environ ) execfile( "my_cgi.py" ) status = '200 OK' # HTTP Status headers = [('Content-type', 'text/html')] # HTTP Headers start_response(status, headers) return page.getvalue() This is a first step to rewriting your CGI application into a proper framework. This requires very little work, and will make your CGI's much more scalable, since you won't be starting a fresh CGI process for each request. The second step is to create a mod_wsgi server that Apache uses instead of all the CGI scripts. This server must (1) parse the URL's, (2) call various functions like the my_cgi example function. Each function will execfile the old CGI script without forking a new process. Look at werkzeug for helpful libraries. If your application CGI scripts have some structure (functions, classes, etc.) you can probably import those and do something much, much smarter than the above. A better way is this. wsgi.py from my_cgi import this_func, that_func def my_cgi( environ, start_response ): result= this_func( some_args ) page_text= that_func( result, some_other_args ) status = '200 OK' # HTTP Status headers = [('Content-type', 'text/html')] # HTTP Headers start_response(status, headers) return page_text This requires more work because you have to understand the legacy application. However, this has two advantages. It makes your CGI's more scalable because you're not starting a fresh process for each request. It allows you to rethink your application, possibly changing it to a proper framework. Once you've done this, it's not very hard to take the next step and move to TurboGears or Pylons or web.py for a very simple framework. A: Use FastCGI. If I understand FastCGI correctly, you can do what you want by writing a very simple Python program that sits between the web server and your legacy code.
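To make the FastCGI suggestion concrete, a minimal untested sketch using flup, assuming flup is installed and that application is any WSGI callable such as the wrappers above:
from flup.server.fcgi import WSGIServer

def application(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    return ['hello from one long-lived process\n']

WSGIServer(application).run()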
Python cgi performance
I own a legacy python application written as CGI. Until now this works OK, but the number of concurrent users will increase greatly in the very near future. Here on SO I read: "CGI is great for low-traffic websites, but it has some performance problems for anything else". I know it would have been better to start in another way, but CGI is what it is now. Could someone point me in a direction on how to keep the CGI performing, without having to rewrite all code?
[ "CGI doesn't scale because each request forks a brand new server process. It's a lot of overhead. mod_wsgi avoid the overhead by forking one process and handing requests to that one running process.\nLet's assume the application is the worst kind of cgi.\nThe worst case is that it has files like this.\nmy_cgi.py\nimport cgi\nprint \"status: 200 OK\"\nprint \"content-type: text/html\"\nprint\nprint \"<!doctype...>\"\nprint \"<html>\"\netc.\n\nYou can try to \"wrap\" the original CGI files to make it wsgi.\nwsgi.py\nimport cStringIO\ndef my_cgi( environ, start_response ):\n page = cStringIO.StringIO()\n sys.stdout= page\n os.environ.update( environ ) \n # you may have to do something like execfile( \"my_cgi.py\", globals=environ ) \n execfile( \"my_cgi.py\" )\n status = '200 OK' # HTTP Status\n headers = [('Content-type', 'text/html')] # HTTP Headers\n start_response(status, headers)\n return page.getvalue()\n\nThis a first step to rewriting your CGI application into a proper framework. This requires very little work, and will make your CGI's much more scalable, since you won't be starting a fresh CGI process for each request.\nThe second step is to create a mod_wsgi server that Apache uses instead of all the CGI scripts. This server must (1) parse the URL's, (2) call various function like the my_cgi example function. Each function will execfile the old CGI script without forking a new process.\nLook at werkzeug for helpful libraries.\nIf your application CGI scripts have some structure (functions, classes, etc.) you can probably import those and do something much, much smarter than the above. A better way is this. \nwsgi.py\nfrom my_cgi import this_func, that_func\ndef my_cgi( environ, start_response ):\n\n result= this_func( some_args )\n page_text= that_func( result, some_other_args )\n\n status = '200 OK' # HTTP Status\n headers = [('Content-type', 'text/html')] # HTTP Headers\n start_response(status, headers)\n return page_text\n\nThis requires more work because you have to understand the legacy application. However, this has two advantages.\n\nIt makes your CGI's more scalable because you're not starting a fresh process for each request.\nIt allows you to rethink your application, possibly changing it to a proper framework. Once you've done this, it's not very hard to take the next step and move to TurboGears or Pylons or web.py for a very simple framework.\n\n", "Use FastCGI. If I understand FastCGI correctly, you can do what you want by writing a very simple Python program that sits between the web server and your legacy code.\n" ]
[ 6, 3 ]
[]
[]
[ "cgi", "performance", "python" ]
stackoverflow_0001017087_cgi_performance_python.txt
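To make the FastCGI suggestion in the second answer concrete, here is a minimal hedged sketch. It assumes the third-party flup package is installed and that wsgi.py exposes the my_cgi wrapper from the first answer; neither detail is confirmed by the posters.

# fcgi_server.py -- one long-running FastCGI process; the web server
# (mod_fastcgi, lighttpd, nginx) forwards requests here instead of
# forking a new CGI process per request.
from flup.server.fcgi import WSGIServer
from wsgi import my_cgi

if __name__ == '__main__':
    WSGIServer(my_cgi).run()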
Q: UnicodeDecodeError when reading dictionary words file with simple Python script First time doing Python in a while, and I'm having trouble doing a simple scan of a file when I run the following script with Python 3.0.1, with open("/usr/share/dict/words", 'r') as f: for line in f: pass I get this exception: Traceback (most recent call last): File "/home/matt/install/test.py", line 2, in <module> for line in f: File "/home/matt/install/root/lib/python3.0/io.py", line 1744, in __next__ line = self.readline() File "/home/matt/install/root/lib/python3.0/io.py", line 1817, in readline while self._read_chunk(): File "/home/matt/install/root/lib/python3.0/io.py", line 1565, in _read_chunk self._set_decoded_chars(self._decoder.decode(input_chunk, eof)) File "/home/matt/install/root/lib/python3.0/io.py", line 1299, in decode output = self.decoder.decode(input, final=final) File "/home/matt/install/root/lib/python3.0/codecs.py", line 300, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf8' codec can't decode bytes in position 1689-1692: invalid data The line in the file it blows up on is "Argentinian", which doesn't seem to be unusual in any way. Update: I added encoding="iso-8859-1" to the open() call, and it fixed the problem. A: Can you check to make sure it is valid UTF-8? A way to do that is given at this SO question: iconv -f UTF-8 /usr/share/dict/words -o /dev/null There are other ways to do the same thing. A: How have you determined from "position 1689-1692" what line in the file it has blown up on? Those numbers would be offsets in the chunk that it's trying to decode. You would have had to determine what chunk it was -- how? Try this at the interactive prompt: buf = open('the_file', 'rb').read() len(buf) ubuf = buf.decode('utf8') # splat ... but it will give you the byte offset into the file buf[offset-50:offset+60] # should show you where/what the problem is # By the way, from the error message, looks like a bad # FOUR-byte UTF-8 character ... interesting
UnicodeDecodeError when reading dictionary words file with simple Python script
First time doing Python in a while, and I'm having trouble doing a simple scan of a file when I run the following script with Python 3.0.1, with open("/usr/share/dict/words", 'r') as f: for line in f: pass I get this exception: Traceback (most recent call last): File "/home/matt/install/test.py", line 2, in <module> for line in f: File "/home/matt/install/root/lib/python3.0/io.py", line 1744, in __next__ line = self.readline() File "/home/matt/install/root/lib/python3.0/io.py", line 1817, in readline while self._read_chunk(): File "/home/matt/install/root/lib/python3.0/io.py", line 1565, in _read_chunk self._set_decoded_chars(self._decoder.decode(input_chunk, eof)) File "/home/matt/install/root/lib/python3.0/io.py", line 1299, in decode output = self.decoder.decode(input, final=final) File "/home/matt/install/root/lib/python3.0/codecs.py", line 300, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf8' codec can't decode bytes in position 1689-1692: invalid data The line in the file it blows up on is "Argentinian", which doesn't seem to be unusual in any way. Update: I added encoding="iso-8859-1" to the open() call, and it fixed the problem.
[ "Can you check to make sure it is valid UTF-8? A way to do that is given at this SO question:\niconv -f UTF-8 /usr/share/dict/words -o /dev/null\n\nThere are other ways to do the same thing.\n", "How have you determined from \"position 1689-1692\" what line in the file it has blown up on? Those numbers would be offsets in the chunk that it's trying to decode. You would have had to determine what chunk it was -- how?\nTry this at the interactive prompt:\nbuf = open('the_file', 'rb').read()\nlen(buf)\nubuf = buf.decode('utf8')\n# splat ... but it will give you the byte offset into the file\nbuf[offset-50:60] # should show you where/what the problem is\n# By the way, from the error message, looks like a bad\n# FOUR-byte UTF-8 character ... interesting\n\n" ]
[ 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001017334_python.txt
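A small sketch tying the two answers together, using Python 3 as in the question. It assumes the same /usr/share/dict/words file; the exception's .start attribute gives the byte offset of the first undecodable byte, which avoids the manual slicing in the second answer.

# Option 1: open with the encoding the file actually uses
with open("/usr/share/dict/words", "r", encoding="iso-8859-1") as f:
    for line in f:
        pass  # no UnicodeDecodeError

# Option 2: locate the offending bytes before choosing an encoding
raw = open("/usr/share/dict/words", "rb").read()
try:
    raw.decode("utf-8")
except UnicodeDecodeError as exc:
    print(raw[exc.start - 30 : exc.start + 30])  # context around the bad byte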
Q: simple update in sqlalchemy UserTable is: id (INT) name (STR) last_login (DATETIME) Serving a web page request I have a user id in hand and I only wish to update the last_login field to 'now'. It seems to me that there are 2 ways: issue a direct SQL statement using db_engine (losing the mapper) OR query the user first and then update the object Both work fine but look quite disgusting in code. Is anyone aware of a more elegant way of doing an update-with-no-query using sqlalchemy? Is there another ORM that has got this right? Thanks A: Assuming you have a mapper UserTable in place: DBSession.query(UserTable).filter_by(id = user_id).\ update({"last_login":datetime.datetime.now()}, synchronize_session=False) Additional parameters in the docs.
simple update in sqlalchemy
UserTable is: id (INT) name (STR) last_login (DATETIME) Serving a web page request I have a user id in hand and I only wish to update the last_login field to 'now'. It seems to me that there are 2 ways: issue a direct SQL statement using db_engine (losing the mapper) OR query the user first and then update the object Both work fine but look quite disgusting in code. Is anyone aware of a more elegant way of doing an update-with-no-query using sqlalchemy? Is there another ORM that has got this right? Thanks
[ "Assuming you have a mapper UserTable in place:\nDBSession.query(UserTable).filter_by(id = user_id).\\\n update({\"last_login\":datetime.datetime.now()}, synchronize_session=False)\n\nAdditional parameters in the docs.\n" ]
[ 23 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001017388_python_sqlalchemy.txt
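For completeness, the same no-SELECT update can also be written against the Table object with the SQL expression language (the generative .where()/.values() form). user_table, engine and user_id are assumed to exist elsewhere; this is a sketch, not the poster's actual schema.

import datetime

stmt = user_table.update().\
    where(user_table.c.id == user_id).\
    values(last_login=datetime.datetime.now())
conn = engine.connect()
conn.execute(stmt)  # emits a single UPDATE, no prior SELECT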
Q: NameError: global name 'has_no_changeset' is not defined OK - Python newbie here - I assume I am doing something really stupid, could you please tell me what it is so we can all get on with our lives? I get the error NameError: global name 'has_no_changeset' is not defined in the line 55 (where I try calling the function has_no_changeset). from genshi.builder import tag from trac.core import implements,Component from trac.ticket.api import ITicketManipulator from trac.ticket.default_workflow import ConfigurableTicketWorkflow from trac.perm import IPermissionRequestor from trac.config import Option, ListOption import re revision = "$Rev$" url = "$URL$" class CloseActionController(Component): """Support for close checking. If a ticket is closed, it is NOT allowed if ALL the following conditions apply: a) ticket is 'bug' ticket b) resolution status is 'fixed' c) none of the ticket's changes include a comment containing a changeset, i.e. regex "\[\d+\]" d) the ticket does not have the keyword 'web' """ implements(ITicketManipulator) # ITicketManipulator methods def prepare_ticket(req, ticket, fields, actions): """Not currently called, but should be provided for future compatibility.""" return def has_no_changeset(ticket): db = self.env.get_db_cnx() cursor = db.cursor() cursor.execute("SELECT newvalue FROM ticket_change WHERE ticket=%s AND field='comment'", (str(ticket.id).encode('ascii','replace'),)) for newvalue, in cursor: if re.search("\[\d{5,}\]", newvalue): return False return True def validate_ticket(me, req, ticket): """Validate a ticket after it's been populated from user input. Must return a list of `(field, message)` tuples, one for each problem detected. `field` can be `None` to indicate an overall problem with the ticket. Therefore, a return value of `[]` means everything is OK.""" if ticket['type'] == 'bug' and ticket['resolution'] == 'fixed': if ticket['keywords'] == None or ticket['keywords'].find('web') == -1: if has_no_changeset(ticket): return [(None, 'You can only close a bug ticket as "fixed" if you refer to a changeset somewhere within the ticket, e.g. with [12345]!')] return[] A: You need to explicitly specify self (or in your case, me) when referring to a method of the current class: if me.has_no_changeset(ticket): You're using me instead of self - that's legal but strongly discouraged. The first parameter of member functions should be called self: def validate_ticket(self, req, ticket): # [...] if self.has_no_changeset(ticket):
NameError: global name 'has_no_changeset' is not defined
OK - Python newbie here - I assume I am doing something really stupid, could you please tell me what it is so we can all get on with our lives? I get the error NameError: global name 'has_no_changeset' is not defined in the line 55 (where I try calling the function has_no_changeset). from genshi.builder import tag from trac.core import implements,Component from trac.ticket.api import ITicketManipulator from trac.ticket.default_workflow import ConfigurableTicketWorkflow from trac.perm import IPermissionRequestor from trac.config import Option, ListOption import re revision = "$Rev$" url = "$URL$" class CloseActionController(Component): """Support for close checking. If a ticket is closed, it is NOT allowed if ALL the following conditions apply: a) ticket is 'bug' ticket b) resolution status is 'fixed' c) none of the ticket's changes include a comment containing a changeset, i.e. regex "\[\d+\]" d) the ticket does not have the keyword 'web' """ implements(ITicketManipulator) # ITicketManipulator methods def prepare_ticket(req, ticket, fields, actions): """Not currently called, but should be provided for future compatibility.""" return def has_no_changeset(ticket): db = self.env.get_db_cnx() cursor = db.cursor() cursor.execute("SELECT newvalue FROM ticket_change WHERE ticket=%s AND field='comment'", (str(ticket.id).encode('ascii','replace'),)) for newvalue, in cursor: if re.search("\[\d{5,}\]", newvalue): return False return True def validate_ticket(me, req, ticket): """Validate a ticket after it's been populated from user input. Must return a list of `(field, message)` tuples, one for each problem detected. `field` can be `None` to indicate an overall problem with the ticket. Therefore, a return value of `[]` means everything is OK.""" if ticket['type'] == 'bug' and ticket['resolution'] == 'fixed': if ticket['keywords'] == None or ticket['keywords'].find('web') == -1: if has_no_changeset(ticket): return [(None, 'You can only close a bug ticket as "fixed" if you refer to a changeset somewhere within the ticket, e.g. with [12345]!')] return[]
[ "You need to explicitly specify self (or in your case, me) when referring to a method of the current class:\nif me.has_no_changeset(ticket):\n\nYou're using me instead of self - that's legal but strongly discouraged. The first parameter of member functions should be called self:\ndef validate_ticket(self, req, ticket):\n # [...]\n if self.has_no_changeset(ticket):\n\n" ]
[ 4 ]
[]
[]
[ "python", "syntax" ]
stackoverflow_0001017467_python_syntax.txt
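A minimal standalone demo of the answer's point -- methods are looked up on the first parameter (conventionally self), not as module globals. The names here are made up for illustration.

class Demo(object):
    def helper(self):
        return 42
    def broken(self):
        return helper()        # NameError: global name 'helper' is not defined
    def fixed(self):
        return self.helper()   # attribute lookup on the instance

print Demo().fixed()           # 42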
Q: backend for python Which is the best back end for Python applications? What is the advantage of using SQLite, and how can it be connected to Python applications? A: What do you mean by back end? Python apps connect to SQLite just like any other database, you just have to import the correct module and check how to use it. The advantages of using SQLite are: You don't need to set up a database server, it's just a file No configurations needed Cross platform Mainly, desktop applications are the ones that take real advantage of this. For web apps, SQLite is not recommended, since the file containing the data is easily readable (lacks any kind of encryption), and when the web server lacks special configuration, the file is downloadable by anyone. A: Django, Twisted, and CherryPy are popular Python "Back-Ends" as far as web applications go, with Twisted likely being the most flexible as far as networking is concerned. SQLite can, as has been previously posted, be directly interfaced with using SQL commands as it has native bindings for Python, or it can be accessed with an Object Relational Mapper such as SQLObject (another Python library). As far as performance is concerned, SQLite is fairly scalable and should be able to handle most use cases that don't require a separate database server (nothing enterprise level). An additional benefit of SQLite is that the database is self-contained in a single file allowing for easy backup while remaining a common enough format that multiple applications can access the data. A word of advice on using SQLite with Python, however, is that you may run into issues with threading (in the past most of the bindings for SQLite were not thread-safe, although this may have changed over time). A: The language you are using at the application layer has little to do with your database choice underneath. You need to examine the advantages of other DB packages to get an idea of what you want. Here are some popular database packages for cheap or free: ms sql server express, pg/sql, mysql A: If you mean "what is the best database?" then there's simply no way to answer this question. If you just want a small database that won't be used by more than a handful of people at a time, SQLite is what you're looking for. If you're running a database for a giant corporation serving thousands, you're probably looking for Oracle. In between those, you have MySQL, PostgreSQL, SQL Server, db2, and probably more. If you're familiar with one of those, that may be the best to go with from a practical standpoint. If you're doing a typical webapp, my advice would be to go with MySQL or PostgreSQL as they're free and well supported by just about any ORM you could think of (my personal preference is towards PostgreSQL, but I'm not experienced enough with either of these to make a good argument one way or another). If you do go with one of those two, my recommendation is to use storm as the ORM. (And yes, there are free versions of SQL Server and Oracle. You won't have as many choices as far as ORMs go though)
backend for python
Which is the best back end for Python applications? What is the advantage of using SQLite, and how can it be connected to Python applications?
[ "What do you mean with back end? Python apps connect to SQLite just like any other database, you just have to import the correct module and check how to use it.\nThe advantages of using SQLite are:\n\nYou don't need to setup a database server, it's just a file\nNo configurations needed\nCross platform\n\nMainly, desktops applications are the ones that take real advantage of this. For web apps, SQLite is not recommended, since the file containing the data, is easily readable (lacks any kind of encryption), and when the web server lacks special configuration, the file is downloadable by anyone.\n", "Django, Twisted, and CherryPy are popular Python \"Back-Ends\" as far as web applications go, with Twisted likely being the most flexible as far as networking is concerned.\nSQLite can, as has been previously posted, be directly interfaced with using SQL commands as it has native bindings for Python, or it can be accessed with an Object Relational Manager such as SQLObject (another Python library).\nAs far as performance is concered, SQLite is fairly scalable and should be able to handle most use cases that don't require a seperate database server (nothing enterprise level). An additional benefit of SQLite is that the database is self-contained in a single file allowing for easy backup while remained a common enough format that multiple applications can access the data. A word of advice on using SQLite with Python, however, is that you may run into issues with threading (in the past most of the bindings for SQLite were not thread-safe, although this may have changed over time).\n", "The language you are using at the application layer has little to do with your database choice underneath. You need to examine the advantages of other DB packages to get an idea of what you want.\nHere are some popular database packages for cheap or free:\nms sql server express, pg/sql, mysql\n", "If you mean \"what is the best database?\" then there's simply no way to answer this question. If you just want a small database that won't be used by more than a handful of people at a time, SQLite is what you're looking for. If you're running a database for a giant corporation serving thousands, you're probably looking for Oracle. In between those, you have MySQL, PostgreSQL, SQL Server, db2, and probably more.\nIf you're familiar with one of those, that may be the best to go with from a practical standpoint. If you're doing a typical webapp, my advice would be to go with MySQL or PostgreSQL as they're free and well supported by just about any ORM you could think of (my personal preference is towards PostgreSQL, but I'm not experienced enough with either of these to make a good argument one way or another). If you do go with one of those two, my recommendation is to use storm as the ORM.\n(And yes, there are free versions of SQL Server and Oracle. You won't have as many choices as far as ORMs go though)\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001017399_python.txt
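None of the answers show the connection step the question asks about, so here is a minimal sketch using the sqlite3 module from the standard library (Python 2.5+). The file and table names are invented for illustration.

import sqlite3

conn = sqlite3.connect('app.db')  # creates the file if it does not exist
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()
for row in cur.execute("SELECT id, name FROM users"):
    print row
conn.close()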
Q: Is there an API to access Google Group data? I'm trying to build some statistics for an email group I participate in. Is there any Python API to access the email data on a Google Group? Also, I know some statistics are available on the group's main page. I'm looking for something more complex than what is shown there. A: There isn't an API that I know of; however, you can access the XML feed and manipulate it as required.
Is there an API to access Google Group data?
I'm trying to build some statistics for an email group I participate in. Is there any Python API to access the email data on a Google Group? Also, I know some statistics are available on the group's main page. I'm looking for something more complex than what is shown there.
[ "There isn't an API that I know of, however you can access the XML feed and manipulate it as required.\n" ]
[ 3 ]
[]
[]
[ "google_groups", "python" ]
stackoverflow_0001017794_google_groups_python.txt
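A hedged sketch of the XML-feed route the answer suggests, using the third-party feedparser package. The feed URL pattern is an assumption -- Google has changed these URLs over time, so check the group page for the real feed link.

import feedparser

FEED_URL = 'http://groups.google.com/group/YOUR-GROUP/feed/rss_v2_0_msgs.xml'  # assumed pattern
feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    print entry.title, entry.get('author', '?')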
Q: Help me understand the difference between CLOBs and BLOBs in Oracle This is mainly just a "check my understanding" type of question. Here's my understanding of CLOBs and BLOBs as they work in Oracle: CLOBs are for text like XML, JSON, etc. You should not assume what encoding the database will store it as (at least in an application) as it will be converted to whatever encoding the database was configured to use. BLOBs are for binary data. You can be reasonably assured that they will be stored how you send them and that you will get them back with exactly the same data as was sent. So in other words, say I have some binary data (in this case a pickled python object). I need to be assured that when I send it, it will be stored exactly how I sent it and that when I get it back it will be exactly the same. A BLOB is what I want, correct? Is it really feasible to use a CLOB for this? Or will character encoding cause enough problems that it's not worth it? A: CLOB is encoding and collation sensitive, BLOB is not. When you write into a CLOB using, say, CL8WIN1251, you write a 0xC0 (which is Cyrillic letter А). When you read data back using AL16UTF16, you get back 0x0410, which is a UTF16 representation of this letter. If you were reading from a BLOB, you would get the same 0xC0 back. A: Your understanding is correct. Since you mention Python, think of the Python 3 distinction between strings and bytes: CLOBs and BLOBs are quite analogous, with the extra issue that the encoding of CLOBs is not under your app's control.
Help me understand the difference between CLOBs and BLOBs in Oracle
This is mainly just a "check my understanding" type of question. Here's my understanding of CLOBs and BLOBs as they work in Oracle: CLOBs are for text like XML, JSON, etc. You should not assume what encoding the database will store it as (at least in an application) as it will be converted to whatever encoding the database was configured to use. BLOBs are for binary data. You can be reasonably assured that they will be stored how you send them and that you will get them back with exactly the same data as was sent. So in other words, say I have some binary data (in this case a pickled python object). I need to be assured that when I send it, it will be stored exactly how I sent it and that when I get it back it will be exactly the same. A BLOB is what I want, correct? Is it really feasible to use a CLOB for this? Or will character encoding cause enough problems that it's not worth it?
[ "CLOB is encoding and collation sensitive, BLOB is not.\nWhen you write into a CLOB using, say, CL8WIN1251, you write a 0xC0 (which is Cyrillic letter А).\nWhen you read data back using AL16UTF16, you get back 0x0410, which is a UTF16 represenation of this letter.\nIf you were reading from a BLOB, you would get same 0xC0 back.\n", "Your understanding is correct. Since you mention Python, think of the Python 3 distinction between strings and bytes: CLOBs and BLOBs are quite analogous, with the extra issue that the encoding of CLOBs is not under your app's control.\n" ]
[ 56, 10 ]
[]
[]
[ "oracle", "python" ]
stackoverflow_0001018073_oracle_python.txt
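The str/bytes analogy in the second answer can be shown with plain Python 2: the same character maps to different bytes under different encodings (the CLOB translation at work), while raw bytes round-trip untouched (the BLOB guarantee).

text = u'\u0410'                      # Cyrillic letter A, as in the first answer
print repr(text.encode('cp1251'))     # '\xc0'       -- one byte
print repr(text.encode('utf-16-be'))  # '\x04\x10'   -- same letter, different bytes
raw = '\xc0'
print repr(raw)                       # a BLOB stores and returns exactly these bytes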
Q: One django installation different users per site How can I have different users for different sites with Django? My application should look like this: a.mydomain.com b.otherdomain.com Users should be bound to the domain, so that a.mydomain.com and b.otherdomain.com have different users. A: In the auth setup, you could create separate custom permissions, one per domain, and check if the current user has the permission for the current domain -- see the "custom permissions" section in the auth doc in question.
One django installation different users per site
How can I have different users for different sites with Django? My application should look like this: a.mydomain.com b.otherdomain.com Users should be bound to the domain, so that a.mydomain.com and b.otherdomain.com have different users.
[ "In the auth setup, you could create separate custom permissions, one per domain, and check if the current user has the permission for the current domain -- see the \"custom permissions\" section in the auth doc in question.\n" ]
[ 1 ]
[]
[]
[ "authentication", "django", "django_models", "python" ]
stackoverflow_0001018111_authentication_django_django_models_python.txt
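A hedged sketch of the custom-permission idea from the answer. The model, app label and permission names are invented; this only illustrates the shape, it is not a drop-in solution.

from django.db import models

class SiteAccess(models.Model):
    class Meta:
        permissions = (
            ("access_a_mydomain_com", "Can use a.mydomain.com"),
            ("access_b_otherdomain_com", "Can use b.otherdomain.com"),
        )

def user_allowed(request):
    host = request.get_host().split(':')[0]
    perm = 'myapp.access_' + host.replace('.', '_')  # assumed naming scheme
    return request.user.has_perm(perm)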
Q: Formatting csv file data with html template I have a csv file, the data, and an HTML file, the template. I want a script that will create an individual html file per record from the csv file, using the html file as a template. What is the best way to do this in Ruby? Python? Is there a tool/library I can use for this in either language? A: Python with Jinja2. import jinja2 import csv env= jinja2.Environment() env.loader= jinja2.FileSystemLoader("some/directory") template= env.get_template( "name" ) rdr= csv.reader( open("some.csv", "r" ) ) csv_data = [ row for row in rdr ] print template.render( data=csv_data ) It turns out that you might be able to get away with simply passing the rdr directly to Jinja for rendering. If the template looks like this, it will work with a wide variety of Python structures, including an iterator. <table> {% for row in data %} <tr> <td>{{ row.0 }}</td><td>{{ row.1 }}</td> </tr> {% endfor %} </table> A: Ruby has built-in CSV handling which should make it fairly trivial to output static HTML files. See: http://www.rubytips.org/2008/01/06/csv-processing-in-ruby/ http://www.ruby-doc.org/stdlib/libdoc/csv/rdoc/index.html Actually, so does Python, so it's really a matter of personal preference (or of whichever you already have configured).
Formatting csv file data with html template
I have a csv file, the data, and an HTML file, the template. I want a script that will create an individual html file per record from the csv file, using the html file as a template. What is the best way to do this in Ruby? Python? Is there a tool/library I can use for this in either language?
[ "Python with Jinja2.\nimport jinja\nimport csv\n\nenv= jinja.Environment()\nenv.loader= jinja.FileSystemLoader(\"some/directory\")\ntemplate= env.get_template( \"name\" )\n\nrdr= csv.reader( open(\"some.csv\", \"r\" ) )\ncsv_data = [ row for row in rdr ]\n\nprint template.render( data=csv_data )\n\nIt turns out that you might be able to get away with simply passing the rdr directly to Jinja for rending. \nIf the template looks like this, it will work with a wide variety of Python structures, including an iterator.\n<table>\n{% for row in data %}\n<tr>\n <td>{{ row.0 }}</td><td>{{ row.1 }}</td>\n</tr>\n{% endfor %}\n</table>\n\n", "Ruby has built in CSV handling which should make it fairly trivial to output static HTML files.\nSee:\n\nhttp://www.rubytips.org/2008/01/06/csv-processing-in-ruby/\nhttp://www.ruby-doc.org/stdlib/libdoc/csv/rdoc/index.html\n\nActually, so does Python, so it's really a matter of personal preference (or of whichever you already have configured).\n" ]
[ 6, 5 ]
[]
[]
[ "csv", "formatting", "html", "python", "ruby" ]
stackoverflow_0001017898_csv_formatting_html_python_ruby.txt
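The question asks for one HTML file per record, while the accepted answer renders a single page; here is a hedged standard-library sketch that emits a file per row. The template placeholders and file names are assumptions.

import csv
from string import Template

template = Template(open('template.html').read())  # contains $name, $email, ... placeholders
reader = csv.DictReader(open('data.csv', 'rb'))    # header row supplies the keys
for i, row in enumerate(reader):
    out = open('record_%04d.html' % i, 'w')
    out.write(template.safe_substitute(row))
    out.close()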
Q: Pass-through keyword arguments I've got a class function that needs to "pass through" a particular keyword argument: def createOrOpenTable(self, tableName, schema, asType=Table): if self.tableExists(tableName): return self.openTable(tableName, asType=asType) else: return self.createTable(self, tableName, schema, asType=asType) When I call it, I get an error like this: TypeError: createTable() got multiple values for keyword argument 'asType' Is there any way to "pass through" such a keyword argument? I've thought of several answers, but none of them are optimal. From worst to best: I could change the keyword name on one or more of the functions, but I want to use the same keyword for all three functions, since the parameter carries the same meaning. I could pass the asType parameter by position instead of by keyword, but if I add other keyword parameters to openTable or createTable, I'd have to remember to change the calls. I'd rather it automatically adapt, as it would if I could use the keyword form. I could use the **args form here instead, to get a dictionary of keyword parameters rather than using a default parameter, but that seems like using a sledgehammer to swat a fly (because of the extra lines of code needed to properly parse it). Is there a better solution? A: You're doing it right... Just take out the self in the second function call :) return self.createTable(self, tableName, schema, asType=asType) should be: return self.createTable(tableName, schema, asType=asType) A: I have to say, that I first thought of a more complicated problem. But the answer of David Wolever is absolutely correct. It is just the duplicate self here, that creates the problem. This way, the positional parameters get out of line and asType is given a value as possitional parameter (once) and as keyword-parameter (second time!). A much more interesting problem is, what to do, when you want to enhance the called routine (createTable in the example) without everytime enhancing the intermediate function. Here, the **args solution makes the trick: For example: def createOrOpenTable(self, tableName, schema, **args): if self.tableExists(tableName): return self.openTable(tableName, **args) else: return self.createTable(tableName, schema, **args) By this way, it is possible to enhance the signature of createTable and openTable without having to change createOrOpenTable any more. When create and openTable can have different keyword-parameters, then of course both routines must be defined as follows: def createTable(self, tableName, schema, asType=None, **others): ... The others parameter eats up any keyword parameters unknown to the method -- it is also not needed to evaluate it. A: I would have posted a comment to Juergen's post, but I need to write a code example. Here's a little bit more generic version: def createOrOpenTable(self, tableName, schema, *args, **argd): if self.tableExists(tableName): return self.openTable(tableName, *args, **argd) else: return self.createTable(tableName, schema, *args, **argd) This will allow positional arguments to also be effective (which is important if you truly want this to be a "pass-through."
Pass-through keyword arguments
I've got a class function that needs to "pass through" a particular keyword argument: def createOrOpenTable(self, tableName, schema, asType=Table): if self.tableExists(tableName): return self.openTable(tableName, asType=asType) else: return self.createTable(self, tableName, schema, asType=asType) When I call it, I get an error like this: TypeError: createTable() got multiple values for keyword argument 'asType' Is there any way to "pass through" such a keyword argument? I've thought of several answers, but none of them are optimal. From worst to best: I could change the keyword name on one or more of the functions, but I want to use the same keyword for all three functions, since the parameter carries the same meaning. I could pass the asType parameter by position instead of by keyword, but if I add other keyword parameters to openTable or createTable, I'd have to remember to change the calls. I'd rather it automatically adapt, as it would if I could use the keyword form. I could use the **args form here instead, to get a dictionary of keyword parameters rather than using a default parameter, but that seems like using a sledgehammer to swat a fly (because of the extra lines of code needed to properly parse it). Is there a better solution?
[ "You're doing it right... Just take out the self in the second function call :)\n return self.createTable(self, tableName, schema, asType=asType)\n\nshould be:\n return self.createTable(tableName, schema, asType=asType)\n\n", "I have to say, that I first thought of a more complicated problem. But the answer of David Wolever is absolutely correct. It is just the duplicate self here, that creates the problem. This way, the positional parameters get out of line and asType is given a value as possitional parameter (once) and as keyword-parameter (second time!).\nA much more interesting problem is, what to do, when you want to enhance the called routine (createTable in the example) without everytime enhancing the intermediate function. Here, the **args solution makes the trick:\nFor example:\ndef createOrOpenTable(self, tableName, schema, **args):\n if self.tableExists(tableName):\n return self.openTable(tableName, **args)\n else:\n return self.createTable(tableName, schema, **args)\n\nBy this way, it is possible to enhance the signature of createTable and openTable without having to change createOrOpenTable any more.\nWhen create and openTable can have different keyword-parameters, then of course both routines must be defined as follows:\ndef createTable(self, tableName, schema, asType=None, **others):\n ...\n\nThe others parameter eats up any keyword parameters unknown to the method -- it is also not needed to evaluate it.\n", "I would have posted a comment to Juergen's post, but I need to write a code example. Here's a little bit more generic version:\ndef createOrOpenTable(self, tableName, schema, *args, **argd):\n if self.tableExists(tableName):\n return self.openTable(tableName, *args, **argd)\n else:\n return self.createTable(tableName, schema, *args, **argd)\n\nThis will allow positional arguments to also be effective (which is important if you truly want this to be a \"pass-through.\"\n" ]
[ 9, 5, 5 ]
[]
[]
[ "python" ]
stackoverflow_0001018359_python.txt
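The mechanics of the original TypeError, reproduced at the prompt: the explicit self shifts every positional argument one slot to the right, so schema lands in asType's slot and the keyword then supplies asType a second time.

>>> class T(object):
...     def createTable(self, tableName, schema, asType=None):
...         return asType
...
>>> t = T()
>>> t.createTable(t, 'tbl', 'sch', asType='X')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: createTable() got multiple values for keyword argument 'asType'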
Q: Python and if statement I'm running a script to feed an exe file a statement like below: for j in ('90.','52.62263.','26.5651.','10.8123.'): if j == '90.': z = ('0.') elif j == '52.62263.': z = ('0.', '72.', '144.', '216.', '288.') elif j == '26.5651': z = ('324.', '36.', '108.', '180.', '252.') else: z = ('288.', '0.', '72.', '144.', '216.') for k in z: exepath = os.path.join('\Program Files' , 'BRL-CAD' , 'bin' , 'rtarea.exe') exepath = '"' + os.path.normpath(exepath) + '"' cmd = exepath + '-j' + str(el) + '-k' + str(z) process=Popen('echo ' + cmd, shell=True, stderr=STDOUT ) print process I'm using the command prompt and when I run the exe with these numbers there are times when It doesn't seem to be in order. Like sometimes it will print out 3 statements of the 52.62263 but then before they all are printed it will print out a single 26.5651 and then go back to 52.62263. It's not just those numbers that act like this. Different runs it may be different numbers (A 52.62263 between "two" 90 statements) . All in all, I want it to print it in order top to bottom. Any suggestions and using my code any helpful solutions? thanks! A: z = ('0.') is not a tuple, therefore your for k in z loop will iterate over the characters "0" and ".". Add a comma to tell python you want it to be a tuple: z = ('0.',) A: I think what's happening right now is that you are not waiting for those processes to finish before they're printed. Try something like this in your last 2 lines: from subprocess import Popen, STDOUT stdout, stderr = Popen('echo ' + cmd, shell=True, stderr=STDOUT).communicate() print stdout A: What eduffy said. And this is a little cleaner; just prints, but you get the idea: import os data = { '90.': ('0.',), '52.62263.': ('0.', '72.', '144.', '216.', '288.'), '26.5651.': ('324.', '36.', '108.', '180.', '252.'), '10.8123.': ('288.', '0.', '72.', '144.', '216.'), } for tag in data: for k in data[tag]: exepath = os.path.join('\Program Files', 'BRL-CAD', 'bin', 'rtarea.exe') exepath = '"' + os.path.normpath(exepath) + '"' cmd = exepath + ' -el ' + str(tag) + ' -az ' + str(data[tag]) process = 'echo ' + cmd print process A: Since you've made a few posts about this bit of code, allow me to just correct/pythonify/beautify the whole thing: for j,z in { '90.' : ('0.',) , '52.62263.' : ('0.', '72.', '144.', '216.', '288.') , '26.5651.' : ('324.', '36.', '108.', '180.', '252.') , '10.8123.' : ('288.', '0.', '72.', '144.', '216.') }.iteritems(): for k in z: exepath = os.path.join('\Program Files' , 'BRL-CAD', 'bin' , 'rtarea.exe') exepath = '"%s"' % os.path.normpath(exepath) cmd = exepath + '-j' + str(el) + '-k' + z process = Popen('echo ' + cmd, shell=True, stderr=STDOUT ) print process
Python and if statement
I'm running a script to feed an exe file a statement like below: for j in ('90.','52.62263.','26.5651.','10.8123.'): if j == '90.': z = ('0.') elif j == '52.62263.': z = ('0.', '72.', '144.', '216.', '288.') elif j == '26.5651': z = ('324.', '36.', '108.', '180.', '252.') else: z = ('288.', '0.', '72.', '144.', '216.') for k in z: exepath = os.path.join('\Program Files' , 'BRL-CAD' , 'bin' , 'rtarea.exe') exepath = '"' + os.path.normpath(exepath) + '"' cmd = exepath + '-j' + str(el) + '-k' + str(z) process=Popen('echo ' + cmd, shell=True, stderr=STDOUT ) print process I'm using the command prompt and when I run the exe with these numbers there are times when It doesn't seem to be in order. Like sometimes it will print out 3 statements of the 52.62263 but then before they all are printed it will print out a single 26.5651 and then go back to 52.62263. It's not just those numbers that act like this. Different runs it may be different numbers (A 52.62263 between "two" 90 statements) . All in all, I want it to print it in order top to bottom. Any suggestions and using my code any helpful solutions? thanks!
[ "z = ('0.') is not a tuple, therefore your for k in z loop will iterate over the characters \"0\" and \".\". Add a comma to tell python you want it to be a tuple:\nz = ('0.',)\n\n", "I think what's happening right now is that you are not waiting for those processes to finish before they're printed. Try something like this in your last 2 lines:\nfrom subprocess import Popen, STDOUT\nstdout, stderr = Popen('echo ' + cmd, shell=True, stderr=STDOUT).communicate()\nprint stdout\n\n", "What eduffy said. And this is a little cleaner; just prints, but you get the idea:\nimport os\n\ndata = {\n '90.': ('0.',),\n '52.62263.': ('0.', '72.', '144.', '216.', '288.'),\n '26.5651.': ('324.', '36.', '108.', '180.', '252.'),\n '10.8123.': ('288.', '0.', '72.', '144.', '216.'),\n}\n\nfor tag in data:\n for k in data[tag]:\n exepath = os.path.join('\\Program Files', 'BRL-CAD', 'bin', 'rtarea.exe')\n exepath = '\"' + os.path.normpath(exepath) + '\"'\n cmd = exepath + ' -el ' + str(tag) + ' -az ' + str(data[tag])\n process = 'echo ' + cmd\n print process\n\n", "Since you've made a few posts about this bit of code, allow me to just correct/pythonify/beautify the whole thing:\nfor j,z in {\n '90.' : ('0.',) ,\n '52.62263.' : ('0.', '72.', '144.', '216.', '288.') ,\n '26.5651.' : ('324.', '36.', '108.', '180.', '252.') ,\n '10.8123.' : ('288.', '0.', '72.', '144.', '216.')\n }.iteritems():\n\n for k in z:\n exepath = os.path.join('\\Program Files' , 'BRL-CAD', 'bin' , 'rtarea.exe')\n exepath = '\"%s\"' % os.path.normpath(exepath)\n cmd = exepath + '-j' + str(el) + '-k' + z\n\n process = Popen('echo ' + cmd, shell=True, stderr=STDOUT )\n print process\n\n" ]
[ 8, 6, 5, 2 ]
[]
[]
[ "if_statement", "python" ]
stackoverflow_0001018415_if_statement_python.txt
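A quick interactive check of the first answer's point -- the comma, not the parentheses, is what makes a one-element tuple:

>>> type(('0.'))
<type 'str'>
>>> type(('0.',))
<type 'tuple'>
>>> for k in ('0.'): print k
...
0
.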
Q: Difference between using __init__ and setting a class variable I'm trying to learn descriptors, and I'm confused by objects behaviour - in the two examples below, as I understood __init__ they should work the same. Can someone unconfuse me, or point me to a resource that explains this? import math class poweroftwo(object): """any time this is set with an int, turns it's value to a tuple of the int and the int^2""" def __init__(self, value=None, name="var"): self.val = (value, math.pow(value, 2)) self.name = name def __set__(self, obj, val): print "SET" self.val = (val, math.pow(val, 2)) def __get__(self, obj, objecttype): print "GET" return self.val class powoftwotest(object): def __init__(self, value): self.x = poweroftwo(value) class powoftwotest_two(object): x = poweroftwo(10) >>> a = powoftwotest_two() >>> b = powoftwotest(10) >>> a.x == b.x >>> GET >>> False #Why not true? shouldn't both a.x and b.x be instances of poweroftwo with the same values? A: First, please name all classes with LeadingUpperCaseNames. >>> a.x GET (10, 100.0) >>> b.x <__main__.poweroftwo object at 0x00C57D10> >>> type(a.x) GET <type 'tuple'> >>> type(b.x) <class '__main__.poweroftwo'> a.x is instance-level access, which supports descriptors. This is what is meant in section 3.4.2.2 by "(a so-called descriptor class) appears in the class dictionary of another new-style class". The class dictionary must be accessed by an instance to use the __get__ and __set__ methods. b.x is class-level access, which does not support descriptors.
Difference between using __init__ and setting a class variable
I'm trying to learn descriptors, and I'm confused by objects behaviour - in the two examples below, as I understood __init__ they should work the same. Can someone unconfuse me, or point me to a resource that explains this? import math class poweroftwo(object): """any time this is set with an int, turns it's value to a tuple of the int and the int^2""" def __init__(self, value=None, name="var"): self.val = (value, math.pow(value, 2)) self.name = name def __set__(self, obj, val): print "SET" self.val = (val, math.pow(val, 2)) def __get__(self, obj, objecttype): print "GET" return self.val class powoftwotest(object): def __init__(self, value): self.x = poweroftwo(value) class powoftwotest_two(object): x = poweroftwo(10) >>> a = powoftwotest_two() >>> b = powoftwotest(10) >>> a.x == b.x >>> GET >>> False #Why not true? shouldn't both a.x and b.x be instances of poweroftwo with the same values?
[ "First, please name all classes with LeadingUpperCaseNames.\n>>> a.x\nGET\n(10, 100.0)\n>>> b.x\n<__main__.poweroftwo object at 0x00C57D10>\n>>> type(a.x)\nGET\n<type 'tuple'>\n>>> type(b.x)\n<class '__main__.poweroftwo'>\n\na.x is instance-level access, which supports descriptors. This is what is meant in section 3.4.2.2 by \"(a so-called descriptor class) appears in the class dictionary of another new-style class\". The class dictionary must be accessed by an instance to use the __get__ and __set__ methods.\nb.x is class-level access, which does not support descriptors.\n" ]
[ 3 ]
[]
[]
[ "descriptor", "python" ]
stackoverflow_0001018977_descriptor_python.txt
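A minimal sketch making the answer concrete: the same descriptor triggers __get__ only when it lives in the class dictionary, not in an instance dictionary.

class PowerOfTwo(object):
    def __get__(self, obj, objtype):
        return 'GET via descriptor protocol'

class InClassDict(object):
    x = PowerOfTwo()           # class attribute: protocol applies

class InInstanceDict(object):
    def __init__(self):
        self.x = PowerOfTwo()  # instance attribute: protocol does NOT apply

print InClassDict().x          # GET via descriptor protocol
print InInstanceDict().x       # <__main__.PowerOfTwo object at 0x...>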
Q: Python logging incompatibilty between 2.5 and 2.6 Could you help me solve the following incompatibility issue between Python 2.5 and 2.6? logger.conf: [loggers] keys=root,aLogger,bLogger [handlers] keys=consoleHandler [formatters] keys= [logger_root] level=NOTSET handlers=consoleHandler [logger_aLogger] level=DEBUG handlers=consoleHandler propagate=0 qualname=a [logger_bLogger] level=INFO handlers=consoleHandler propagate=0 qualname=b [handler_consoleHandler] class=StreamHandler args=(sys.stderr,) module_one.py: import logging import logging.config logging.config.fileConfig('logger.conf') a_log = logging.getLogger('a.submod') b_log = logging.getLogger('b.submod') def function_one(): b_log.info("function_one() called.") module_two.py: import logging import logging.config logging.config.fileConfig('logger.conf') a_log = logging.getLogger('a.submod') b_log = logging.getLogger('b.submod') def function_two(): a_log.info("function_two() called.") logger.py: from module_one import function_one from module_two import function_two function_one() function_two() Output of calling logger.py under Ubuntu 9.04: $ python2.5 logger.py $ $ python2.6 logger.py function_one() called. function_two() called. $ A: This is a bug which was fixed between 2.5 and 2.6. The fileConfig() function is intended for one-off configuration and so should not be called more than once - however you choose to arrange this. The intended behaviour of fileConfig is to disable any loggers which are not explicitly mentioned in the configuration, and leave enabled the mentioned loggers and their children; the bug was causing the children to be disabled when they shouldn't have been. The example logger configuration mentions loggers 'a' and 'b'; after calling getLogger('a.submod') a child logger is created. The second fileConfig call wrongly disables this in Python 2.5 - in Python 2.6 the logger is not disabled as it is a child of a logger explicitly mentioned in the configuration. A: I don't understand the reasons of this behavior myself but as you well stated in 2.6 it works differently. We can assume this is a bug affecting 2.5 As a workaround I suggest the following: extra_module.py: import logging import logging.config logging.config.fileConfig('logger.conf') a_log = logging.getLogger('a.submod') b_log = logging.getLogger('b.submod') module_one.py: from extra_module import a_log def function_one(): a_log.info("function_one() called.") module_two.py: from extra_module import b_log def function_two(): b_log.info("function_two() called.") by using this scheme I was able to run logger.py on python2.5.4 with the same behavior as of 2.6 A: Interesting... I played a little in the console and it looks like the second call to logging.config.fileConfig is mucking things up. Not sure why this is though... Here's a transcript that shows the problem: lorien$ python2.5 Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12) [GCC 4.0.1 (Apple Inc. build 5465)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import logging >>> import logging.config >>> logging.config.fileConfig('logger.conf') >>> alog = logging.getLogger('a.submod') >>> alog.info('foo') foo >>> import logging >>> import logging.config >>> alog.info('foo') foo >>> logging.config.fileConfig('logger.conf') >>> alog.info('foo') >>> alog = logging.getLogger('a.submod') >>> alog.info('foo') >>> >>> blog = logging.getLogger('b.submod') >>> blog.info('foo') foo >>> As soon as I call logging.config.fileConfig the second time, my logger instance stops logging. Grabbing a new logging instance doesn't help since it's the same object. If I wait until after configuring both times to fetch the logger instances, then things work - this is why the blog instance works. My suggestion is to delay grabbing the logger instances until you are in the functions. If you move the calls to logging.getLogger() into function_one and function_two, then everything works well. A: I was able to fix this by changing the names of the loggers like so, in both files: logging.config.fileConfig('logger.conf') a_log = logging.getLogger('a') b_log = logging.getLogger('b') I'm not sure of the exact error, but the v2.5 logger module seems to have trouble matching names passed to getLogger() with names in the config file.
Python logging incompatibility between 2.5 and 2.6
Could you help me solve the following incompatibility issue between Python 2.5 and 2.6? logger.conf: [loggers] keys=root,aLogger,bLogger [handlers] keys=consoleHandler [formatters] keys= [logger_root] level=NOTSET handlers=consoleHandler [logger_aLogger] level=DEBUG handlers=consoleHandler propagate=0 qualname=a [logger_bLogger] level=INFO handlers=consoleHandler propagate=0 qualname=b [handler_consoleHandler] class=StreamHandler args=(sys.stderr,) module_one.py: import logging import logging.config logging.config.fileConfig('logger.conf') a_log = logging.getLogger('a.submod') b_log = logging.getLogger('b.submod') def function_one(): b_log.info("function_one() called.") module_two.py: import logging import logging.config logging.config.fileConfig('logger.conf') a_log = logging.getLogger('a.submod') b_log = logging.getLogger('b.submod') def function_two(): a_log.info("function_two() called.") logger.py: from module_one import function_one from module_two import function_two function_one() function_two() Output of calling logger.py under Ubuntu 9.04: $ python2.5 logger.py $ $ python2.6 logger.py function_one() called. function_two() called. $
[ "This is a bug which was fixed between 2.5 and 2.6. The fileConfig() function is intended for one-off configuration and so should not be called more than once - however you choose to arrange this. The intended behaviour of fileConfig is to disable any loggers which are not explicitly mentioned in the configuration, and leave enabled the mentioned loggers and their children; the bug was causing the children to be disabled when they shouldn't have been. The example logger configuration mentions loggers 'a' and 'b'; after calling getLogger('a.submod') a child logger is created. The second fileConfig call wrongly disables this in Python 2.5 - in Python 2.6 the logger is not disabled as it is a child of a logger explicitly mentioned in the configuration.\n", "I don't understand the reasons of this behavior myself but as you well stated in 2.6 it works differently. We can assume this is a bug affecting 2.5\nAs a workaround I suggest the following:\nextra_module.py:\nimport logging\nimport logging.config\n\nlogging.config.fileConfig('logger.conf')\na_log = logging.getLogger('a.submod')\nb_log = logging.getLogger('b.submod')\n\nmodule_one.py:\nfrom extra_module import a_log\n\ndef function_one():\n a_log.info(\"function_one() called.\")\n\nmodule_two.py:\nfrom extra_module import b_log\n\ndef function_two():\n b_log.info(\"function_two() called.\")\n\nby using this scheme I was able to run logger.py on python2.5.4 with the same behavior as of 2.6\n", "Interesting... I played a little in the console and it looks like the second call to logging.config.fileConfig is mucking things up. Not sure why this is though... Here's a transcript that shows the problem:\nlorien$ python2.5\nPython 2.5.1 (r251:54863, Feb 6 2009, 19:02:12) \n[GCC 4.0.1 (Apple Inc. build 5465)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import logging\n>>> import logging.config\n>>> logging.config.fileConfig('logger.conf')\n>>> alog = logging.getLogger('a.submod')\n>>> alog.info('foo')\nfoo\n>>> import logging\n>>> import logging.config\n>>> alog.info('foo')\nfoo\n>>> logging.config.fileConfig('logger.conf')\n>>> alog.info('foo')\n>>> alog = logging.getLogger('a.submod')\n>>> alog.info('foo')\n>>> \n>>> blog = logging.getLogger('b.submod')\n>>> blog.info('foo')\nfoo\n>>>\n\nAs soon as I call logging.config.fileConfig the second time, my logger instance stops logging. Grabbing a new logging instance doesn't help since it's the same object. If I wait until after configuring both times to fetch the logger instances, then things work - this is why the blog instance works.\nMy suggestion is to delay grabbing the logger instances until you are in the functions. If you move the calls to logging.getLogger() into function_one and function_two, then everything works well.\n", "I was able to fix this by changing the names of the loggers like so, in both files:\nlogging.config.fileConfig('logger.conf')\na_log = logging.getLogger('a')\nb_log = logging.getLogger('b')\n\nI'm not sure of the exact error, but the v2.5 logger module seems to have trouble matching names passed to getLogger() with names in the config file.\n" ]
[ 8, 1, 0, 0 ]
[]
[]
[ "incompatibility", "logging", "python" ]
stackoverflow_0001018527_incompatibility_logging_python.txt
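The configure-once arrangement the accepted answer implies, sketched with the question's own module names: only the entry point calls fileConfig, and library modules just call getLogger.

# module_one.py -- no fileConfig here
import logging
b_log = logging.getLogger('b.submod')

def function_one():
    b_log.info("function_one() called.")

# logger.py (entry point) -- configure exactly once, before any logging
import logging.config
logging.config.fileConfig('logger.conf')

from module_one import function_one
function_one()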
Q: Algorithm for updating a list from a list I've got a data source that provides a list of objects and their properties (a CSV file, but that doesn't matter). Each time my program runs, it needs to pull a new copy of the list of objects, compare it to the list of objects (and their properties) stored in the database, and update the database as needed. Dealing with new objects is easy - the data source gives each object a sequential ID number, check the top ID number in the new information against the database, and you're done. I'm looking for suggestions for the other cases - when some of an object's properties have changed, or when an object has been deleted. A naive solution would be to pull all the objects from the database and get the complement of the intersection of the two sets (old and new) and then examine those results, but that seems like it wouldn't be very efficient if the sets get large. Any ideas? A: Is there no way to maintain a "last time modified" field? That's what it sounds like you're really looking for: an incremental backup, based on last time backup was run, compared to last time an object was changed/deleted(/added). A: You need to have timestamps in both your database and your CSV file. Timestamp should show the data when the record was updated and you should compare timestamps of the record with same IDs to decide if you need updating it or not As to your idea about intersection... It should be done vise versa! You have to import all data from CSV to the temporary table and do intersection between 2 SQL database tables. If you use Oracle or MS SQL 2008 (not sure for 2005) you will found a very usefull MERGE keyword, so you can write SQL with less efforts then you will spend for merging data in other programming language. A: The standard approach for huge piles of data amounts to this. We'll assume that list_1 is the "master" (without duplicates) and list_2 is the "updates" which may have duplicates. iter_1 = iter( sorted(list_1) ) # Essentially SELECT...ORDER BY iter_2 = iter( sorted(list_2) ) eof_1 = False eof_2 = False try: item_1 = iter_1.next() except StopIteration: eof_1= True try: item_2 = iter_2.next() except StopIteration: eof_2= True while not eof_1 and not eof_2: if item_1 == item_2: # do your update to create the new master list. try: item_2 = iter_2.next() except StopIteration: eof_2= True elif item_1 < item_2: try: item_1 = iter_1.next() except StopIteration: eof_1= True elif item_2 < item_1: # Do your insert to create the new master list. try: item_2 = iter_2.next() except StopIteration: eof_2= True assert eof_1 or eof_2 if eof_1: # item_2 and the rest of list_2 are inserts. elif eof_2: pass else: raise Error("What!?!?") Yes, it involves a potential sort. If list_1 is kept in sorted order when you write it back to the file system, that saves considerable time. If list_2 can be accumulated in a structure that keeps it sorted, then that saves considerable time. Sorry about the wordiness, but you need to know which iterator raised the StopIteration, so you can't (trivially) wrap the whole while loop in a big-old-try block. A: When you pull the list into your program, iterate over the list doing a query based on a column property in the database table that maps to the same property of the object from list like ObjectName. Or you could load the whole table into a list and compare the list that way. I assuming that you have something unique about the object that exists besides the ID the database assigns. 
If that object is not found in the table via the query, create a new entry. If it is found, then as FogleBird mentioned, have a computed hash or CRC stored for that object in the table that you can compare with the object in the list (run the computation on the object). If the hashes don't match, update that object with the one from the list.
Algorithm for updating a list from a list
I've got a data source that provides a list of objects and their properties (a CSV file, but that doesn't matter). Each time my program runs, it needs to pull a new copy of the list of objects, compare it to the list of objects (and their properties) stored in the database, and update the database as needed. Dealing with new objects is easy - the data source gives each object a sequential ID number, check the top ID number in the new information against the database, and you're done. I'm looking for suggestions for the other cases - when some of an object's properties have changed, or when an object has been deleted. A naive solution would be to pull all the objects from the database and get the complement of the intersection of the two sets (old and new) and then examine those results, but that seems like it wouldn't be very efficient if the sets get large. Any ideas?
[ "Is there no way to maintain a \"last time modified\" field? That's what it sounds like you're really looking for: an incremental backup, based on last time backup was run, compared to last time an object was changed/deleted(/added).\n", "You need to have timestamps in both your database and your CSV file. Timestamp should show the data when the record was updated and you should compare timestamps of the record with same IDs to decide if you need updating it or not\nAs to your idea about intersection...\nIt should be done vise versa!\nYou have to import all data from CSV to the temporary table and do intersection between 2 SQL database tables. If you use Oracle or MS SQL 2008 (not sure for 2005) you will found a very usefull MERGE keyword, so you can write SQL with less efforts then you will spend for merging data in other programming language.\n", "The standard approach for huge piles of data amounts to this. \nWe'll assume that list_1 is the \"master\" (without duplicates) and list_2 is the \"updates\" which may have duplicates.\niter_1 = iter( sorted(list_1) ) # Essentially SELECT...ORDER BY\niter_2 = iter( sorted(list_2) )\neof_1 = False\neof_2 = False\ntry:\n item_1 = iter_1.next()\nexcept StopIteration:\n eof_1= True\ntry:\n item_2 = iter_2.next()\nexcept StopIteration:\n eof_2= True\nwhile not eof_1 and not eof_2:\n if item_1 == item_2:\n # do your update to create the new master list.\n try:\n item_2 = iter_2.next()\n except StopIteration:\n eof_2= True\n elif item_1 < item_2:\n try:\n item_1 = iter_1.next()\n except StopIteration:\n eof_1= True\n elif item_2 < item_1:\n # Do your insert to create the new master list.\n try:\n item_2 = iter_2.next()\n except StopIteration:\n eof_2= True\nassert eof_1 or eof_2\nif eof_1:\n # item_2 and the rest of list_2 are inserts.\nelif eof_2:\n pass\nelse:\n raise Error(\"What!?!?\") \n\nYes, it involves a potential sort. If list_1 is kept in sorted order when you write it back to the file system, that saves considerable time. If list_2 can be accumulated in a structure that keeps it sorted, then that saves considerable time.\nSorry about the wordiness, but you need to know which iterator raised the StopIteration, so you can't (trivially) wrap the whole while loop in a big-old-try block.\n", "When you pull the list into your program, iterate over the list doing a query based on a column property in the database table that maps to the same property of the object from list like ObjectName. Or you could load the whole table into a list and compare the list that way. I assuming that you have something unique about the object that exists besides the ID the database assigns.\nIf that object is not found in the table via the query, create a new entry. If it is found like FogleBird mentioned, have a computed hash or CRC stored for that object in the table that you can compare with the object in the list(run computation on the object). If the hashes don't match, update that object with the one on the list.\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "google_app_engine", "python", "set" ]
stackoverflow_0001019302_google_app_engine_python_set.txt
Q: Using data from django queries in the same view I might have missed something while searching through the documentation - I can't seem to find a way to use data from one query to form another query. My query is: sites_list = Site.objects.filter(worker=worker) I'm trying to do something like this: for site in sites_list: [Insert Query Here] Edit: I saw the answer and I'm not sure how I didn't get that; maybe that's the sign I'm up too late coding :S A: You could easily do something like this: sites_list = Site.objects.filter(worker=worker) for site in sites_list: new_sites_list = Site.objects.filter(name=site.name).filter(something else) A: You can also use the __in lookup type. For example, if you had an Entry model with a relation to Site, you could write: Entry.objects.filter(site__in=Site.objects.filter(...some conditions...)) This will end up doing one query in the DB (the filter condition on sites would be turned into a subquery in the WHERE clause).
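A hedged sketch of the __in approach from the second answer — the Entry model is an assumption borrowed from the answer, not real app code:

# One DB query instead of one query per site; Django turns the inner
# queryset into a subquery in the WHERE clause.
sites_list = Site.objects.filter(worker=worker)
entries = Entry.objects.filter(site__in=sites_list)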
Using data from django queries in the same view
I might have missed something while searching through the documentation - I can't seem to find a way to use data from one query to form another query. My query is: sites_list = Site.objects.filter(worker=worker) I'm trying to do something like this: for site in sites_list: [Insert Query Here] Edit: I saw the answer and I'm not sure how I didn't get that; maybe that's the sign I'm up too late coding :S
[ "You could easily do something like this:\nsites_list = Site.objects.filter(worker=worker)\n\nfor site in sites_list:\n new_sites_list = Site.objects.filter(name=site.name).filter(something else)\n\n", "You can also use the __in lookup type. For example, if you had an Entry model with a relation to Site, you could write:\nEntry.objects.filter(site__in=Site.objects.filter(...some conditions...))\n\nThis will end up doing one query in the DB (the filter condition on sites would be turned into a subquery in the WHERE clause).\n" ]
[ 2, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0000995970_django_python.txt
Q: Python - configuration options, how to input/handle? When your application takes a few (~ 5) configuration parameters, and the application is going to be used by non-technology users (i.e. KISS), how do you usually handle reading configuration options, and then passing around the parameters between objects/functions (multiple modules)? Options examples: input and output directories/file names, verbosity level. I generally use optparse (Python) and pass around the options/parameters as arguments; but I'm wondering if it's more common to use a configuration text file that is read directly by all modules' objects (but then, isn't this like having 'global' variables?, and without anyone 'owning' the state?). Another typical issue is unit testing; if I want to unit test each single module independently, a particular module may only require 1 out of the 5 configuration options; how do you usually decouple individual modules/objects from the rest of the application, and yet still allow it to accept 1 or 2 required parameters (does the unit test framework somehow invoke or take over the configuration functionality)? My guess is that there is not a unique correct way to do this, but it'd be interesting to read about various approaches, or well-known patterns. A: Do you usually read config options via: - command-line/gui options - a config text file Both. We use Django's settings.py and logging.ini. We also use command-line options and arguments for the options that change most frequently. How do multiple modules/objects have access to these options? settings.py; logging.ini -- can't say. Our options are private to the main program, and used to build arguments to functions or object initializers. [Sharing the optparse options is a big pain in the neck and needlessly binds a lot of things into an untestable mess.] When doing unit-testing of a single module (NOT the "main" module): (e.g. read option specifying input filename) [I can't parse the question. I assume this is "how do you test when there are options?"] The answer is -- we don't. Since only the main method parses command-line options, no other module, function or class has any idea of command-line options. There's no module that "requires 1 out of the 5 config options". The module's classes (or functions) have ordinary arguments and that's that. We only use optparse. A: "Counts answer" Please update these counts and feel free to add/modify. Do you usually read config options via: - command-line/gui options : 1 - a config text file : 0 How do multiple modules/objects have access to these options? - they receive them from the caller as an argument: 1 - read them directly from the config text file: 0 When doing unit-testing of a single module (NOT the "main" module) and the module uses one option, e.g. input filename: - unit-test framework provides own "simplified" config functionality: 0 - unit-test framework invokes main app's config functionality: 1 Do you use: - optparse: 1 - getopt: 0 - others? Please list any config management "design pattern" (usable in Python) and add a count if you use it - thanks. - -
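A minimal sketch of the optparse-then-pass-arguments style the first answer describes (the option names here are just examples):

from optparse import OptionParser

def process(input_path, output_path, verbose=False):
    # Ordinary arguments only; this function knows nothing about optparse.
    if verbose:
        print "reading", input_path

def main():
    parser = OptionParser()
    parser.add_option("-i", "--input", dest="input")
    parser.add_option("-o", "--output", dest="output")
    parser.add_option("-v", "--verbose", action="store_true", default=False)
    options, args = parser.parse_args()
    process(options.input, options.output, options.verbose)

if __name__ == "__main__":
    main()

Only main() ever sees the option object, which keeps the individual functions trivially unit-testable.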
Python - configuration options, how to input/handle?
When your application takes a few (~ 5) configuration parameters, and the application is going to be used by non-technology users (i.e. KISS), how do you usually handle reading configuration options, and then passing around the parameters between objects/functions (multiple modules)? Options examples: input and output directories/file names, verbosity level. I generally use optparse (Python) and pass around the options/parameters as arguments; but I'm wondering if it's more common to use a configuration text file that is read directly by all modules' objects (but then, isn't this like having 'global' variables?, and without anyone 'owning' the state?). Another typical issue is unit testing; if I want to unit test each single module independently, a particular module may only require 1 out of the 5 configuration options; how do you usually decouple individual modules/objects from the rest of the application, and yet still allow it to accept 1 or 2 required parameters (does the unit test framework somehow invoke or take over the configuration functionality)? My guess is that there is not a unique correct way to do this, but it'd be interesting to read about various approaches, or well-known patterns.
[ "Do you usually read config options via:\n- command-line/gui options\n- a config text file \nBoth. We use Django's settings.py and logging.ini. We also use command-line options and arguments for the options that change most frequently.\nHow do multiple modules/objects have access to these options?\n\nsettings.py; logging.ini -- can't say.\nOur options are private to the main program, and used to build\narguments to functions or object initializers. \n\n[Sharing the optparse options is a big pain in the neck and needless binds a lot of things into an untestable mess.]\nWhen doing unit-testing of a single module (NOT the \"main\" module):\n(e.g. read option specifying input filename)\n[I can't parse the question. I assume this is \"how do you test when there are options?\"]\nThe answer is -- we don't. Since only the main method parses command-line options, no other module, function or class has any idea of command-line options. There's no this module \"require 1 out of the 5 config options\" The module's classes (or functions) have ordinary arguments and that's that.\nWe only use optparse.\n", "\"Counts answer\"\nPlease update these counts and feel free to add/modify.\n\nDo you usually read config options via:\n- command-line/gui options : 1\n- a config text file : 0\n\n\nHow do multiple modules/objects have access to these options?\n- they receive them from the caller as an argument: 1\n- read them directly from the config text file: 0\n\n\nWhen doing unit-testing of a single module (NOT the \"main\" module)\nand the module uses one option, e.g. input filename:\n- unit-test framework provides own \"simplified\" config functionality: 0\n- unit-test framework invokes main app's config functionality: 1\n\n\nDo you use:\n- optparse: 1\n- getopt: 0\n- others?\n\n\nPlease list any config management \"design pattern\" \n(usable in Python) and add a count if you use it - thanks.\n- \n-\n\n" ]
[ 2, 0 ]
[]
[]
[ "command_line_arguments", "configuration_files", "python" ]
stackoverflow_0001019850_command_line_arguments_configuration_files_python.txt
Q: Subclassing in Python Is it possible to subclass dynamically? I know there's __bases__ but I don't want to affect all instances of the class. I want the object cf to polymorph into a mixin of the DrvCrystalfontz class. Further into the hierarchy is a subclass of gobject that needs to be available at this level for connecting signals, and the solution below isn't sufficient. class DrvCrystalfontz: def __init__(self, model, visitor, obj=None, config=None): if model not in Models.keys(): error("Unknown Crystalfontz model %s" % model) return self.model = Models[model] if self.model.protocol == 1: cf = Protocol1(self, visitor, obj, config) elif self.model.protocol == 2: cf = Protocol2(self, visitor, obj, config) elif self.model.protocol == 3: cf = Protocol3(self, visitor, obj, config) for key in cf.__dict__.keys(): self.__dict__[key] = cf.__dict__[key] A: I'm not sure I'm clear on your desired use here, but it is possible to subclass dynamically. You can use the type object to dynamically construct a class given a name, a tuple of base classes and a dict of methods / class attributes, eg: >>> MySub = type("MySub", (DrvCrystalfontz, some_other_class), {'some_extra_method' : lambda self: do_something() }) MySub is now a subclass of DrvCrystalfontz and some_other_class, inherits their methods, and adds a new one ("some_extra_method").
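A runnable toy version of the type() call from the answer, with stand-in base classes so it can be tried outside the Crystalfontz code:

class Base(object):
    def greet(self):
        return "hello"

class Mixin(object):
    def shout(self):
        return "HELLO"

# Build the subclass at runtime; the dict supplies extra methods/attributes.
MySub = type("MySub", (Base, Mixin), {"some_extra_method": lambda self: 42})

obj = MySub()
print obj.greet(), obj.shout(), obj.some_extra_method()  # hello HELLO 42
print issubclass(MySub, Base), issubclass(MySub, Mixin)  # True True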
Subclassing in Python
Is it possible to subclass dynamically? I know there's __bases__ but I don't want to affect all instances of the class. I want the object cf to polymorph into a mixin of the DrvCrystalfontz class. Further into the hierarchy is a subclass of gobject that needs to be available at this level for connecting signals, and the solution below isn't sufficient. class DrvCrystalfontz: def __init__(self, model, visitor, obj=None, config=None): if model not in Models.keys(): error("Unknown Crystalfontz model %s" % model) return self.model = Models[model] if self.model.protocol == 1: cf = Protocol1(self, visitor, obj, config) elif self.model.protocol == 2: cf = Protocol2(self, visitor, obj, config) elif self.model.protocol == 3: cf = Protocol3(self, visitor, obj, config) for key in cf.__dict__.keys(): self.__dict__[key] = cf.__dict__[key]
[ "I'm not sure I'm clear on your desired use here, but it is possible to subclass dynamically. You can use the type object to dynamically construct a class given a name, tuple of base classes and dict of methods / class attributes, eg:\n>>> MySub = type(\"MySub\", (DrvCrystalfontz, some_other_class), \n {'some_extra method' : lamba self: do_something() })\n\nMySub is now a subclass of DrvCrystalfontz andsome_other_class, inherits their methods, and adds a new one (\"some_extra_method\").\n" ]
[ 2 ]
[]
[]
[ "python", "subclassing" ]
stackoverflow_0001019834_python_subclassing.txt
Q: Optimizing Jinja2 Environment creation My application is running on Google App Engine and most requests constantly get a yellow flag due to high CPU usage. Using a profiler I tracked the issue down to the routine of creating the jinja2.Environment instance. I'm creating the instance at module level: from jinja2 import Environment, FileSystemLoader jinja_env = Environment(loader=FileSystemLoader(TEMPLATE_DIRS)) Due to the Google AppEngine operation mode (CGI), this code can be run upon each and every request (their module import cache seems to cache modules for seconds rather than for minutes). I was thinking about storing the environment instance in memcache, but it seems to be not picklable. The FileSystemLoader instance seems to be picklable and can be cached, but I did not observe any substantial improvement in CPU usage with this approach. Can anybody suggest a way to decrease the overhead of creating the jinja2.Environment instance? Edit: below is the (relevant) part of the profiler output. 222172 function calls (215262 primitive calls) in 8.695 CPU seconds ncalls tottime percall cumtime percall filename:lineno(function) 33 1.073 0.033 1.083 0.033 {google3.apphosting.runtime._apphosting_runtime___python__apiproxy.Wait} 438/111 0.944 0.002 2.009 0.018 /base/python_dist/lib/python2.5/sre_parse.py:385(_parse) 4218 0.655 0.000 1.002 0.000 /base/python_dist/lib/python2.5/pickle.py:1166(load_long_binput) 1 0.611 0.611 0.679 0.679 /base/data/home/apps/with-the-flow/1.331879498764931274/jinja2/environment.py:10() One call, but as far as I can see (and this is consistent across all my GAE-based apps), the most expensive in the whole request processing cycle. A: Armin suggested pre-compiling Jinja2 templates to Python code, and using the compiled templates in production. So I've made a compiler/loader for that, and it now renders some complex templates 13 times faster, throwing away all the parsing overhead. The related discussion with a link to the repository is here. A: OK, people, this is what I got today on #pocoo: [20:59] zgoda: hello, i'd like to know if i could optimize my jinja2 environment creation process, the problem -> Optimizing Jinja2 Environment creation [21:00] zgoda: i have profiler output from "cold" app -> http://paste.pocoo.org/show/107009/ [21:01] zgoda: and for "hot" -> http://paste.pocoo.org/show/107014/ [21:02] zgoda: i'm wondering if i could somewhat lower the CPU cost of creating environment for "cold" requests [21:05] mitsuhiko: zgoda: put the env creation into a module that you import [21:05] mitsuhiko: like [21:05] mitsuhiko: from yourapplication.utils import env [21:05] zgoda: it's already there [21:06] mitsuhiko: hmm [21:06] mitsuhiko: i think the problem is that the template are re-compiled each access [21:06] mitsuhiko: unfortunately gae is incredible limited, i don't know if there is much i can do currently [21:07] zgoda: i tried with jinja bytecache but it does not work on prod (its on on dev server) [21:08] mitsuhiko: i know [21:08] mitsuhiko: appengine does not have marshal [21:12] zgoda: mitsuhiko: thank you [21:13] zgoda: i was hoping i'm doing something wrong and this can be optimized... [21:13] mitsuhiko: zgoda: next release will come with improved appengine support, but i'm not sure yet how to implement improved caching for ae It looks like Armin is aware of the problems with bytecode caching on AppEngine and has some plans to improve Jinja2 to allow caching on GAE. I hope things will get better over time. A: According to this google recipe you can use memcache to cache bytecodes. You can also cache the template file content itself. All in the same recipe.
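The recipe mentioned above generally follows the jinja2.BytecodeCache interface; a rough sketch of that shape (untested here — and note the IRC log above: marshal restrictions reportedly defeated bytecode caching on production GAE at the time):

from google.appengine.api import memcache
from jinja2 import BytecodeCache, Environment, FileSystemLoader

class MemcacheBytecodeCache(BytecodeCache):
    # Sketch only: stores compiled-template bytecode under the bucket key.
    def load_bytecode(self, bucket):
        data = memcache.get("jinja2:" + bucket.key)
        if data is not None:
            bucket.bytecode_from_string(data)

    def dump_bytecode(self, bucket):
        memcache.set("jinja2:" + bucket.key, bucket.bytecode_to_string())

# TEMPLATE_DIRS as in the question above.
jinja_env = Environment(loader=FileSystemLoader(TEMPLATE_DIRS),
                        bytecode_cache=MemcacheBytecodeCache())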
Optimizing Jinja2 Environment creation
My application is running on Google App Engine and most requests constantly get a yellow flag due to high CPU usage. Using a profiler I tracked the issue down to the routine of creating the jinja2.Environment instance. I'm creating the instance at module level: from jinja2 import Environment, FileSystemLoader jinja_env = Environment(loader=FileSystemLoader(TEMPLATE_DIRS)) Due to the Google AppEngine operation mode (CGI), this code can be run upon each and every request (their module import cache seems to cache modules for seconds rather than for minutes). I was thinking about storing the environment instance in memcache, but it seems to be not picklable. The FileSystemLoader instance seems to be picklable and can be cached, but I did not observe any substantial improvement in CPU usage with this approach. Can anybody suggest a way to decrease the overhead of creating the jinja2.Environment instance? Edit: below is the (relevant) part of the profiler output. 222172 function calls (215262 primitive calls) in 8.695 CPU seconds ncalls tottime percall cumtime percall filename:lineno(function) 33 1.073 0.033 1.083 0.033 {google3.apphosting.runtime._apphosting_runtime___python__apiproxy.Wait} 438/111 0.944 0.002 2.009 0.018 /base/python_dist/lib/python2.5/sre_parse.py:385(_parse) 4218 0.655 0.000 1.002 0.000 /base/python_dist/lib/python2.5/pickle.py:1166(load_long_binput) 1 0.611 0.611 0.679 0.679 /base/data/home/apps/with-the-flow/1.331879498764931274/jinja2/environment.py:10() One call, but as far as I can see (and this is consistent across all my GAE-based apps), the most expensive in the whole request processing cycle.
[ "Armin suggested to pre-compile Jinja2 templates to python code, and use the compiled templates in production. So I've made a compiler/loader for that, and it now renders some complex templates 13 times faster, throwing away all the parsing overhead. The related discussion with link to the repository is here.\n", "OK, people, this is what I got today on #pocoo:\n[20:59] zgoda: hello, i'd like to know if i could optimize my jinja2 environment creation process, the problem -> Optimizing Jinja2 Environment creation\n[21:00] zgoda: i have profiler output from \"cold\" app -> http://paste.pocoo.org/show/107009/\n[21:01] zgoda: and for \"hot\" -> http://paste.pocoo.org/show/107014/\n[21:02] zgoda: i'm wondering if i could somewhat lower the CPU cost of creating environment for \"cold\" requests\n[21:05] mitsuhiko: zgoda: put the env creation into a module that you import\n[21:05] mitsuhiko: like\n[21:05] mitsuhiko: from yourapplication.utils import env\n[21:05] zgoda: it's already there\n[21:06] mitsuhiko: hmm\n[21:06] mitsuhiko: i think the problem is that the template are re-compiled each access\n[21:06] mitsuhiko: unfortunately gae is incredible limited, i don't know if there is much i can do currently\n[21:07] zgoda: i tried with jinja bytecache but it does not work on prod (its on on dev server)\n[21:08] mitsuhiko: i know\n[21:08] mitsuhiko: appengine does not have marshal\n[21:12] zgoda: mitsuhiko: thank you\n[21:13] zgoda: i was hoping i'm doing something wrong and this can be optimized...\n[21:13] mitsuhiko: zgoda: next release will come with improved appengine support, but i'm not sure yet how to implement improved caching for ae\nIt looks Armin is aware of problems with bytecode caching on AppEngine and has some plans to improve Jinja2 to allow caching on GAE. I hope things will get better over time.\n", "According to this google recipe you can use memcache to cache bytecodes. You can also cache the template file content itself. All in the same recipe\n" ]
[ 10, 4, 1 ]
[]
[]
[ "google_app_engine", "jinja2", "python" ]
stackoverflow_0000618827_google_app_engine_jinja2_python.txt
Q: Is there an easy way to convert an std::list to a Python list? I'm writing a little Python extension in C/C++, and I've got a function like this: void set_parameters(int first_param, std::list<double> param_list) { //do stuff } I'd like to be able to call it from Python like this: set_parameters(f_param, [1.0, 0.5, 2.1]) Is there a reasonably easy way to make that conversion? Ideally, I'd like a way that doesn't need a whole lot of extra dependencies, but some things just aren't possible without extra stuff, so that's not as big a deal. A: Take a look at Boost.Python. Question you've asked is covered in Iterators chapter of the tutorial The point is, Boost.Python provides stl_input_iterator template that converts Python's iterable to stl's input_iterator, which can be used to fill your std::list. A: It turned out to be less pain than I thought, once I found the docs that I probably should have read before I asked the question. I was able to get a PyList object in my wrapper function, then just iterate over it and push the values onto the vector I needed. The code looks like this: static PyObject* py_set_perlin_parameters(PyObject* self, PyObject* args) { int octaves; double persistence; PyObject* zoom_list; int zoom_count = 0; std::vector<double> zoom_vector; if(!PyArg_ParseTuple(args, "idO!:set_perlin_parameters", &octaves, &persistence, &PyList_Type, &zoom_list)) { return NULL; } if(!PyList_Check(zoom_list)) { PyErr_SetString(PyExc_TypeError, "set_perlin_parameters: third parameter must be a list"); return NULL; } zoom_count = PyList_Size(zoom_list); for(int i = 0; i < zoom_count; i++) { PyObject* list_val; double val; list_val = PyList_GetItem(zoom_list, i); if(list_val == NULL) { return NULL; } val = PyFloat_AsDouble(list_val); zoom_vector.push_back(val); } set_perlin_parameters(octaves, persistence, zoom_vector); return Py_None; }
Is there an easy way to convert an std::list to a Python list?
I'm writing a little Python extension in C/C++, and I've got a function like this: void set_parameters(int first_param, std::list<double> param_list) { //do stuff } I'd like to be able to call it from Python like this: set_parameters(f_param, [1.0, 0.5, 2.1]) Is there a reasonably easy way to make that conversion? Ideally, I'd like a way that doesn't need a whole lot of extra dependencies, but some things just aren't possible without extra stuff, so that's not as big a deal.
[ "Take a look at Boost.Python. Question you've asked is covered in Iterators chapter of the tutorial\nThe point is, Boost.Python provides stl_input_iterator template that converts Python's iterable to stl's input_iterator, which can be used to fill your std::list.\n", "It turned out to be less pain than I thought, once I found the docs that I probably should have read before I asked the question. I was able to get a PyList object in my wrapper function, then just iterate over it and push the values onto the vector I needed. The code looks like this:\nstatic PyObject* py_set_perlin_parameters(PyObject* self, PyObject* args)\n{\n int octaves;\n double persistence;\n PyObject* zoom_list;\n int zoom_count = 0;\n std::vector<double> zoom_vector;\n\n if(!PyArg_ParseTuple(args, \"idO!:set_perlin_parameters\", &octaves, &persistence, &PyList_Type, &zoom_list))\n {\n return NULL;\n }\n\n if(!PyList_Check(zoom_list)) \n {\n PyErr_SetString(PyExc_TypeError, \"set_perlin_parameters: third parameter must be a list\");\n return NULL;\n }\n\n zoom_count = PyList_Size(zoom_list);\n\n for(int i = 0; i < zoom_count; i++)\n {\n PyObject* list_val;\n double val;\n\n list_val = PyList_GetItem(zoom_list, i);\n\n if(list_val == NULL)\n {\n return NULL;\n }\n\n val = PyFloat_AsDouble(list_val);\n\n zoom_vector.push_back(val);\n }\n\n set_perlin_parameters(octaves, persistence, zoom_vector);\n\n return Py_None;\n}\n\n" ]
[ 0, 0 ]
[]
[]
[ "c", "python", "stl" ]
stackoverflow_0001019457_c_python_stl.txt
Q: How to walk up a linked-list using a list comprehension? I've been trying to think of a way to traverse a hierarchical structure, like a linked list, using a list expression, but haven't come up with anything that seems to work. Basically, I want to convert this code: p = self.parent names = [] while p: names.append(p.name) p = p.parent print ".".join(names) into a one-liner like: print ".".join( [o.name for o in <???>] ) I'm not sure how to do the traversal in the ??? part, though, in a generic way (if it's even possible). I have several structures with similar .parent type attributes, and don't want to have to write a yielding function for each. Edit: I can't use the __iter__ methods of the object itself because it's already used for iterating over the values contained within the object itself. Most other answers, except for liori's, hardcode the attribute name, which is what I want to avoid. Here's my adaptation based upon liori's answer: import operator def walk(attr, start): if callable(attr): getter = attr else: getter = operator.attrgetter(attr) o = getter(start) while o: yield o o = getter(o) A: The closest thing I can think of is to create a parent generator: # Generate a node's parents, heading towards ancestors def gen_parents(node): node = node.parent while node: yield node node = node.parent # Now you can do this parents = [x.name for x in gen_parents(node)] print '.'.join(parents) A: If you want your solution to be general, use a general technique. This is a fixed-point like generator: def fixedpoint(f, start, stop): while start != stop: yield start start = f(start) It will return a generator yielding start, f(start), f(f(start)), f(f(f(start))), ..., as long as neither of these values is equal to stop. Usage: print ".".join(x.name for x in fixedpoint(lambda p:p.parent, self, None)) My personal helpers library has had a similar fixedpoint-like function for years... it is pretty useful for quick hacks. A: List comprehension works with objects that are iterators (have the next() method). You need to define an iterator for your structure in order to be able to iterate it this way. A: Your LinkedList needs to be iterable for it to work properly. Here's a good resource on it. (PDF warning) It is very in depth on both iterators and generators. Once you do that, you'll be able to just do this: print ".".join( [o.name for o in self] )
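A quick self-contained check of the walk() helper above, with a throwaway Node class (illustrative names only):

import operator

def walk(attr, start):  # same helper as in the edit above
    getter = attr if callable(attr) else operator.attrgetter(attr)
    o = getter(start)
    while o:
        yield o
        o = getter(o)

class Node(object):
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

leaf = Node("c", Node("b", Node("a")))
print ".".join(n.name for n in walk("parent", leaf))           # b.a
print ".".join(n.name for n in walk(lambda n: n.parent, leaf)) # b.a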
How to walk up a linked-list using a list comprehension?
I've been trying to think of a way to traverse a hierarchical structure, like a linked list, using a list expression, but haven't come up with anything that seems to work. Basically, I want to convert this code: p = self.parent names = [] while p: names.append(p.name) p = p.parent print ".".join(names) into a one-liner like: print ".".join( [o.name for o in <???>] ) I'm not sure how to do the traversal in the ??? part, though, in a generic way (if it's even possible). I have several structures with similar .parent type attributes, and don't want to have to write a yielding function for each. Edit: I can't use the __iter__ methods of the object itself because it's already used for iterating over the values contained within the object itself. Most other answers, except for liori's, hardcode the attribute name, which is what I want to avoid. Here's my adaptation based upon liori's answer: import operator def walk(attr, start): if callable(attr): getter = attr else: getter = operator.attrgetter(attr) o = getter(start) while o: yield o o = getter(o)
[ "The closest thing I can think of is to create a parent generator:\n# Generate a node's parents, heading towards ancestors\ndef gen_parents(node):\n node = node.parent\n while node:\n yield node\n node = node.parent\n\n# Now you can do this\nparents = [x.name for x in gen_parents(node)]\nprint '.'.join(parents)\n\n", "If you want your solution to be general, use a general techique. This is a fixed-point like generator:\ndef fixedpoint(f, start, stop):\n while start != stop:\n yield start\n start = f(start)\n\nIt will return a generator yielding start, f(start), f(f(start)), f(f(f(start))), ..., as long as neither of these values are equal to stop.\nUsage:\nprint \".\".join(x.name for x in fixedpoint(lambda p:p.parent, self, None))\n\nMy personal helpers library has similar fixedpoint-like function for years... it is pretty useful for quick hacks.\n", "List comprehension works with objects that are iterators (have the next() method). You need to define an iterator for your structure in order to be able to iterate it this way.\n", "Your LinkedList needs to be iterable for it to work properly. \nHere's a good resource on it. (PDF warning) It is very in depth on both iterators and generators.\nOnce you do that, you'll be able to just do this:\nprint \".\".join( [o.name for o in self] )\n\n" ]
[ 6, 2, 1, 1 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0001020037_list_comprehension_python.txt
Q: Can client side python use threads? I have never programmed in Python before, so excuse my code. I have this script that will run in a terminal but I can't get it to run client side. I am running this in Appcelerator's Titanium application. Anyway, I have been troubleshooting it and it seems that it isn't running the threads at all. Is this a limitation? Does anyone know? <script type="text/python"> import os import sys import Queue import threading class FindThread ( threading.Thread ): def run ( self ): running = True while running: if jobPool.empty(): #print '<< CLOSING THREAD' running = False continue job = jobPool.get() window.document.getElementById('output').innerHTML += os.path.join(top, name) if job != None: dirSearch(job) jobPool = Queue.Queue ( 0 ) def findPython(): #output = window.document.getElementById('output') window.document.getElementById('output').innerHTML += "Starting" dirSearch("/") # Start 10 threads: for x in xrange ( 10 ): #print '>> OPENING THREAD' FindThread().start() def dirSearch(top = "."): import os, stat, types names = os.listdir(top) for name in names: try: st = os.lstat(os.path.join(top, name)) except os.error: continue if stat.S_ISDIR(st.st_mode): jobPool.put( os.path.join(top, name) ) else: window.document.getElementById('output').innerHTML += os.path.join(top, name) window.findPython = findPython </script> A: The answer, currently (Friday, June 19th, 2009) is yes, it can run threads, but nothing but the main thread can access JavaScript objects; this includes the DOM. So if you are planning on updating the UI with a threading app, this is not possible... YET. Until the Appcelerator team creates some sort of queue to the main thread, possibly via a binding system. Please see the discussion at the appcelerator forums.
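For what it's worth, the generic work-around the answer hints at — worker threads never touch the UI, they only post results to a Queue that the main thread drains — looks roughly like this in plain Python (any Titanium-specific scheduling hook is an assumption left out here):

import Queue
import threading

results = Queue.Queue()

def worker(path):
    # Slow work happens off the main thread; no DOM access here.
    results.put("scanned: " + path)

threading.Thread(target=worker, args=("/tmp",)).start()

# Only the main thread may touch JavaScript/DOM objects, so it drains
# the queue and applies the UI updates itself.
while True:
    try:
        update = results.get(timeout=1.0)
    except Queue.Empty:
        break
    print update  # stand-in for the DOM update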
Can client side python use threads?
I have never programmed in Python before, so excuse my code. I have this script that will run in a terminal but I can't get it to run client side. I am running this in Appcelerator's Titanium application. Anyway, I have been troubleshooting it and it seems that it isn't running the threads at all. Is this a limitation? Does anyone know? <script type="text/python"> import os import sys import Queue import threading class FindThread ( threading.Thread ): def run ( self ): running = True while running: if jobPool.empty(): #print '<< CLOSING THREAD' running = False continue job = jobPool.get() window.document.getElementById('output').innerHTML += os.path.join(top, name) if job != None: dirSearch(job) jobPool = Queue.Queue ( 0 ) def findPython(): #output = window.document.getElementById('output') window.document.getElementById('output').innerHTML += "Starting" dirSearch("/") # Start 10 threads: for x in xrange ( 10 ): #print '>> OPENING THREAD' FindThread().start() def dirSearch(top = "."): import os, stat, types names = os.listdir(top) for name in names: try: st = os.lstat(os.path.join(top, name)) except os.error: continue if stat.S_ISDIR(st.st_mode): jobPool.put( os.path.join(top, name) ) else: window.document.getElementById('output').innerHTML += os.path.join(top, name) window.findPython = findPython </script>
[ "The answer, currently (Friday, June 19th, 2009) is yes, it can run threads, but the nothing but the main thread can access JavaScript objects, this includes the DOM. so if you are planning on updating the UI with a threading app, this is not possible... YET. Until the Appcelerator team creates some sort of queue to the main thread, possible via a binding system. \nPlease see discussion at the appcelerator forums.\n" ]
[ 2 ]
[]
[]
[ "appcelerator", "client_side", "multithreading", "python", "titanium" ]
stackoverflow_0000992008_appcelerator_client_side_multithreading_python_titanium.txt
Q: Interacting with another command line program in Python I need to write a Python script that can run another command line program and interact with its stdin and stdout streams. Essentially, the Python script will read from the target command line program, intelligently respond by writing to its stdin, and then read the results from the program again. (It would do this repeatedly.) I've looked through the subprocess module, and I can't seem to get it to do this read/write/read/write thing that I'm looking for. Is there something else I should be trying? A: To perform such detailed interaction (when, outside of your control, the other program may be buffering its output unless it thinks it's talking to a terminal) needs something like pexpect -- which in turn requires pty, a Python standard library module that (on operating systems that allow it, such as Linux and Mac OS X) implements "pseudo-terminals". Life is harder on Windows, but maybe this zipfile can help -- it's supposed to be a port of pexpect to Windows (sorry, I have no Windows machine to check it on). The project in question, called wexpect, lives here. A: see the question wxPython: how to create a bash shell window? There I have given a full-fledged interaction with a bash shell, reading stdout and stderr and communicating via stdin. The main part is an extension of this code: bp = Popen('bash', shell=False, stdout=PIPE, stdin=PIPE, stderr=PIPE) bp.stdin.write("ls\n") bp.stdout.readline() If we read all the data it will get blocked, so the script I have linked does it in a thread. That is a complete wxPython app partially mimicking a bash shell.
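A minimal pexpect session showing the read/write/read loop the first answer recommends (Unix-only; the Python REPL stands in here for the target program):

import pexpect

child = pexpect.spawn("python")
child.expect(">>> ")     # read until the prompt appears
child.sendline("2 + 2")  # write to the program's stdin
child.expect(">>> ")     # read the result back
print child.before      # output printed before the new prompt
child.sendline("exit()")
child.close()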
Interacting with another command line program in Python
I need to write a Python script that can run another command line program and interact with its stdin and stdout streams. Essentially, the Python script will read from the target command line program, intelligently respond by writing to its stdin, and then read the results from the program again. (It would do this repeatedly.) I've looked through the subprocess module, and I can't seem to get it to do this read/write/read/write thing that I'm looking for. Is there something else I should be trying?
[ "To perform such detailed interaction (when, outside of your control, the other program may be buffering its output unless it thinks it's talking to a terminal) needs something like pexpect -- which in turns requires pty, a Python standard library module that (on operating systems that allow it, such as Linux and Mac OS x) implements \"pseudo-terminals\".\nLife is harder on Windows, but maybe this zipfile can help -- it's supposed to be a port of pexpect to Windows (sorry, I have no Windows machine to check it on). The project in question, called wexpect, lives here.\n", "see the question\nwxPython: how to create a bash shell window?\nthere I have given a full fledged interaction with bash shell\nreading stdout and stderr and communicating via stdin\nmain part is extension of this code\nbp = Popen('bash', shell=False, stdout=PIPE, stdin=PIPE, stderr=PIPE)\nbp.stdin.write(\"ls\\n\")\nbp.stdout.readline()\n\nif we read all data it will get blocked so the link to script I have given does it in a thread. That is a complete wxpython app mimicking bash shell partially.\n" ]
[ 7, 4 ]
[]
[]
[ "command_line", "python", "subprocess" ]
stackoverflow_0001020980_command_line_python_subprocess.txt
Q: bug in "django-admin.py makemessages" or xgettext call? -> "warning: unterminated string" django-admin.py makemessages dies with errors "warning: unterminated string" on cases where really long strings are wrapped: string = "some text \ more text\ and even more" These strings don't even need to be translated - e.g. sql query strings. The problem goes away when I concatenate the string, but the result looks ugly and it takes time to join them... Does anyone have a problem like this? Have you found a way to fix it? I have the following versions of the tools involved: xgettext-0.17, gettext-0.17, django-1.0.2, python-2.6.2 There was a ticket on this issue, but it was closed probably because the error appears only in some combination of component versions. EDIT: found the source of problem - xgettext prints warning messages to sterr and django takes them as fatal errors and quits. return status of xgettext call is 0 - "success". I guess that django should recognize it as success and not quit because of warnings. Interestinly xgettext still extracts backslash-wrapped strings if they need to be translated, but gives warnings in stderr ("unterminated string") and .po file ("internationalized messages should not contain the `\r' escape sequence") xgettext call is the following: xgettext -d django -L Python --keyword=gettext_noop \ --keyword=gettext_lazy --keyword=ngettext_lazy:1,2 \ --keyword=ugettext_noop --keyword=ugettext_lazy \ --keyword=ungettext_lazy:1,2 --from-code UTF-8 -o - source_file.py called from django/core/management/commands/makemessages.py A: I can think of two possibilities: you might have an extra space after your backslash at the end of the line; or you might be somehow ending up with the wrong line-ending characters in your source (e.g. Windows-style when your Python is expecting Unix-style, thus disabling the backslashes). Either way, I would take advantage of C-style automatic string concatenation: >>> string = ("some text " ... "more text " ... "and even more") >>> string 'some text more text and even more' Alternatively, if you don't mind newlines ending up in there, use multi-line strings: >>> string = """some text ... more text ... and even more""" IMO these look much nicer, and are much less fragile when refactoring. Does this help?
bug in "django-admin.py makemessages" or xgettext call? -> "warning: unterminated string"
django-admin.py makemessages dies with errors "warning: unterminated string" on cases where really long strings are wrapped: string = "some text \ more text\ and even more" These strings don't even need to be translated - e.g. sql query strings. The problem goes away when I concatenate the string, but the result looks ugly and it takes time to join them... Does anyone have a problem like this? Have you found a way to fix it? I have the following versions of the tools involved: xgettext-0.17, gettext-0.17, django-1.0.2, python-2.6.2 There was a ticket on this issue, but it was closed, probably because the error appears only in some combinations of component versions. EDIT: found the source of the problem - xgettext prints warning messages to stderr and django takes them as fatal errors and quits. The return status of the xgettext call is 0 - "success". I guess that django should recognize it as success and not quit because of warnings. Interestingly, xgettext still extracts backslash-wrapped strings if they need to be translated, but gives warnings in stderr ("unterminated string") and the .po file ("internationalized messages should not contain the `\r' escape sequence") The xgettext call is the following: xgettext -d django -L Python --keyword=gettext_noop \ --keyword=gettext_lazy --keyword=ngettext_lazy:1,2 \ --keyword=ugettext_noop --keyword=ugettext_lazy \ --keyword=ungettext_lazy:1,2 --from-code UTF-8 -o - source_file.py called from django/core/management/commands/makemessages.py
[ "I can think of two possibilities: you might have an extra space after your backslash at the end of the line; or you might be somehow ending up with the wrong line-ending characters in your source (e.g. Windows-style when your Python is expecting Unix-style, thus disabling the backslashes).\nEither way, I would take advantage of C-style automatic string concatenation:\n>>> string = (\"some text \"\n... \"more text \"\n... \"and even more\")\n>>> string\n'some text more text and even more'\n\nAlternatively, if you don't mind newlines ending up in there, use multi-line strings:\n>>> string = \"\"\"some text\n... more text\n... and even more\"\"\"\n\nIMO these look much nicer, and are much less fragile when refactoring.\nDoes this help?\n" ]
[ 2 ]
[]
[]
[ "django", "internationalization", "python", "xgettext" ]
stackoverflow_0001020432_django_internationalization_python_xgettext.txt
Q: Is there a better way to convert a list to a dictionary in Python with keys but no values? I was sure that there would be a one liner to convert a list to a dictionary where the items in the list were keys and the dictionary had no values. The only way I could find to do it was argued against. "Using list comprehensions when the result is ignored is misleading and inefficient. A for loop is better" myList = ['a','b','c','d'] myDict = {} x=[myDict.update({item:None}) for item in myList] >>> myDict {'a': None, 'c': None, 'b': None, 'd': None} It works, but is there a better way to do this? A: Use dict.fromkeys: >>> my_list = [1, 2, 3] >>> dict.fromkeys(my_list) {1: None, 2: None, 3: None} Values default to None, but you can specify them as an optional argument: >>> my_list = [1, 2, 3] >>> dict.fromkeys(my_list, 0) {1: 0, 2: 0, 3: 0} From the docs: a.fromkeys(seq[, value]) Creates a new dictionary with keys from seq and values set to value. dict.fromkeys is a class method that returns a new dictionary. value defaults to None. New in version 2.3. A: You could use a set instead of a dict: >>> myList=['a','b','c','d'] >>> set(myList) set(['a', 'c', 'b', 'd']) This is better if you never need to store values, and are just storing an unordered collection of unique items. A: To answer the original questioner's performance worries (for lookups in dict vs set), somewhat surprisingly, dict lookups can be minutely faster (in Python 2.5.1 on my rather slow laptop) assuming for example that half the lookups fail and half succeed. Here's how one goes about finding out: $ python -mtimeit -s'k=dict.fromkeys(range(99))' '5 in k and 112 in k' 1000000 loops, best of 3: 0.236 usec per loop $ python -mtimeit -s'k=set(range(99))' '5 in k and 112 in k' 1000000 loops, best of 3: 0.265 usec per loop doing each check several times to verify they're repeatable. So, if those 30 nanoseconds or less on a slow laptop are in an absolutely crucial bottleneck, it may be worth going for the obscure dict.fromkeys solution rather than the simple, obvious, readable, and clearly correct set (unusual -- almost invariably in Python the simple and direct solution has performance advantages too). Of course, one needs to check with one's own Python version, machine, data, and ratio of successful vs failing tests, and confirm with extremely accurate profiling that shaving 30 nanoseconds (or whatever) off this lookup will make an important difference. Fortunately, in the vast majority of cases, this will prove totally unnecessary... but since programmers will obsess about meaningless micro-optimizations anyway, no matter how many times they're told about their irrelevance, the timeit module is right there in the standard library to make those mostly-meaningless micro-benchmarks easy as pie anyway!-) A: And here's a fairly wrong and inefficient way to do it using map: >>> d = dict() >>> map (lambda x: d.__setitem__(x, None), [1,2,3]) [None, None, None] >>> d {1: None, 2: None, 3: None} A: You can use a list comprehension: my_list = ['a','b','c','d'] my_dict = dict([(ele, None) for ele in my_list]) A: Maybe you can use itertools: >>>import itertools >>>my_list = ['a','b','c','d'] >>>d = {} >>>for x in itertools.imap(d.setdefault, my_list): pass >>>print d {'a': None, 'c': None, 'b': None, 'd': None} For huge lists, maybe this is very good :P
Is there a better way to convert a list to a dictionary in Python with keys but no values?
I was sure that there would be a one liner to convert a list to a dictionary where the items in the list were keys and the dictionary had no values. The only way I could find to do it was argued against. "Using list comprehensions when the result is ignored is misleading and inefficient. A for loop is better" myList = ['a','b','c','d'] myDict = {} x=[myDict.update({item:None}) for item in myList] >>> myDict {'a': None, 'c': None, 'b': None, 'd': None} It works, but is there a better way to do this?
[ "Use dict.fromkeys:\n>>> my_list = [1, 2, 3]\n>>> dict.fromkeys(my_list)\n{1: None, 2: None, 3: None}\n\nValues default to None, but you can specify them as an optional argument:\n>>> my_list = [1, 2, 3]\n>>> dict.fromkeys(my_list, 0)\n{1: 0, 2: 0, 3: 0}\n\nFrom the docs:\n\na.fromkeys(seq[, value]) Creates a new\n dictionary with keys from seq and\n values set to value.\ndict.fromkeys is a class method that\n returns a new dictionary. value\n defaults to None. New in version 2.3.\n\n", "You could use a set instead of a dict:\n>>> myList=['a','b','c','d']\n>>> set(myList)\nset(['a', 'c', 'b', 'd'])\n\nThis is better if you never need to store values, and are just storing an unordered collection of unique items.\n", "To answer the original questioner's performance worries (for lookups in dict vs set), somewhat surprisingly, dict lookups can be minutely faster (in Python 2.5.1 on my rather slow laptop) assuming for example that half the lookups fail and half succeed. Here's how one goes about finding out:\n$ python -mtimeit -s'k=dict.fromkeys(range(99))' '5 in k and 112 in k'\n1000000 loops, best of 3: 0.236 usec per loop\n$ python -mtimeit -s'k=set(range(99))' '5 in k and 112 in k'\n1000000 loops, best of 3: 0.265 usec per loop\n\ndoing each check several times to verify they're repeatable. So, if those 30 nanoseconds or less on a slow laptop are in an absolutely crucial bottleneck, it may be worth going for the obscure dict.fromkeys solution rather than the simple, obvious, readable, and clearly correct set (unusual -- almost invariably in Python the simple and direct solution has performance advantages too).\nOf course, one needs to check with one's own Python version, machine, data, and ratio of successful vs failing tests, and confirm with extremely accurate profiling that shaving 30 nanoseconds (or whatever) off this lookup will make an important difference.\nFortunately, in the vast majority of cases, this will prove totally unnecessary... but since programmers will obsess about meaningless micro-optimizations anyway, no matter how many times they're told about their irrelevance, the timeit module is right there in the standard library to make those mostly-meaningless micro-benchmarks easy as pie anyway!-)\n", "And here's a fairly wrong and inefficient way to do it using map:\n>>> d = dict()\n>>> map (lambda x: d.__setitem__(x, None), [1,2,3])\n[None, None, None]\n>>> d\n{1: None, 2: None, 3: None}\n\n", "You can use a list comprehension:\nmy_list = ['a','b','c','d']\nmy_dict = dict([(ele, None) for ele in my_list])\n\n", "Maybe you can use itertools:\n>>>import itertools\n>>>my_list = ['a','b','c','d']\n>>>d = {}\n>>>for x in itertools.imap(d.setdefault, my_list): pass\n>>>print d\n{'a': None, 'c': None, 'b': None, 'd': None}\n\nFor huge lists, maybe this is very good :P\n" ]
[ 23, 15, 5, 1, 1, 1 ]
[]
[]
[ "dictionary", "list", "list_comprehension", "python" ]
stackoverflow_0001020722_dictionary_list_list_comprehension_python.txt
Q: Package for creating and validating HTML forms in Python? - to be used in Google Appengine Is there a well-maintained package available in Python for creating and validating HTML forms? I will eventually be deploying it on Google App Engine. A: For client-side validation, check http://plugins.jquery.com/search/node/form+validate; for server-side, actually ALMOST every web framework (web.py, django, etc.) has its own form generation as well as validation lib for you to use. A: You can use Django form validation on GAE storage via db.djangoforms.ModelForm. To smoothly integrate client-side Dojo functionality with Django server-side web apps, I'd look at dojango, which does work fine with GAE (as well as without). However, dojango (currently at release 0.3.1) does not yet automatically provide client-side validation of Django forms -- that's on the roadmap for the forthcoming release 0.4 of dojango, but I have no idea about the timeframe in which you could expect it. A: AppEngine includes Django's form framework (or a variation thereof), which I find very nice. It also plays well with your ORM (i.e. getting forms for models is very DRY). The only potential problem is the lack of client-side validation.
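A minimal sketch of the db.djangoforms route from the second answer (the model and the handler context are made up for illustration):

from google.appengine.ext import db
from google.appengine.ext.db import djangoforms

class Item(db.Model):
    name = db.StringProperty(required=True)

class ItemForm(djangoforms.ModelForm):
    class Meta:
        model = Item

# Inside a webapp request handler:
form = ItemForm(data=self.request.POST)
if form.is_valid():
    entity = form.save(commit=False)  # returns the model instance
    entity.put()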
Package for creating and validating HTML forms in Python? - to be used in Google Appengine
Is there a well-maintained package available in Python for creating and validating HTML forms? I will eventually be deploying it on Google App Engine.
[ "For client-side validation, check http://plugins.jquery.com/search/node/form+validate;\nfor server-side, actually ALMOST every web framework (web.py, django, etc.) has its own form generation as well as validation lib for you to use.\n", "You can use Django form validation on GAE storage via db.djangoforms.ModelForm.\nTo smoothly integrate client-side Dojo functionality with Django server-side web apps, I'd look at dojango, which does work fine with GAE (as well as without). However, dojango (currently at release 0.3.1) does not yet automatically provide client-side validation of Django forms -- that's on the roadmap for the forthcoming release 0.4 of dojango, but I have no idea about the timeframe in which you could expect it.\n", "AppEngine includes Django's form framework (or a variation thereof), which I find very nice. It also plays well with your ORM (i.e. getting forms for models is very DRY). The only potential problem is the lack of client-side validation.\n" ]
[ 2, 2, 0 ]
[]
[]
[ "forms", "google_app_engine", "html", "python" ]
stackoverflow_0001021411_forms_google_app_engine_html_python.txt
Q: split a string by a delimiter in a context sensitive way For example, I want to split str = '"a,b,c",d,e,f' into ["a,b,c",'d','e','f'] (i.e. don't split the quoted part) In this case, this can be done with re.findall('".*?"|[^,]+',str) However, if str = '"a,,b,c",d,,f' I want ["a,,b,c",'d','','f'] i.e. I want a behavior that is like python's split function. Is there any way I can do this in one (small) line, possibly using Python's re library? Actually, I just realized (on this site) that the csv module is perfect for what I want to do, but I am curious whether there is a regular expression that re can use to do it as well. A: Use the csv module as it is a real parser. Regular expressions are nonoptimal (or completely unsuited) for most things involving matching delimiters in which the rules change (I'm unsure as to whether this particular grammar is regular or not). You might be able to create a regex that would work in this case, but it would be rather complex (especially dealing with cases like "He said, \"How are you\""). A: re.split(',(?=(?:[^"]*"[^"]*")*[^"]*$)', str) After matching a comma, if there's an odd number of quotation marks up ahead, the comma must be inside a pair of quotation marks, so it doesn't count as a delimiter. Obviously this doesn't take the possibility of escaped quotation marks into account, but that can be handled if need be--it just makes the regex about twice as ugly as it already is. :D A: Writing a state machine for this would, on the other hand, seem to be quite straightforward. DFAs and regexes have the same power, but usually one of them is better suited to the problem at hand, and is usually very dependent on the additional logic you might need to implement. A: Page 271 of Friedl's Mastering Regular Expressions has a regular expression for extracting possibly quoted CSV fields, but it requires a bit of postprocessing: >>> re.findall('(?:^|,)(?:"((?:[^"]|"")*)"|([^",]*))',str) [('a,b,c', ''), ('', 'd'), ('', 'e'), ('', 'f')] >>> re.findall('(?:^|,)(?:"((?:[^"]|"")*)"|([^",]*))','"a,b,c",d,,f') [('a,b,c', ''), ('', 'd'), ('', ''), ('', 'f')] Same pattern with the verbose flag: csv = re.compile(r""" (?:^|,) (?: # now match either a double-quoted field # (inside, paired double quotes are allowed)... " # (double-quoted field's opening quote) ( (?: [^"] | "" )* ) " # (double-quoted field's closing quote) | # ...or some non-quote/non-comma text... ( [^",]* ) )""", re.X) A: You can get close using non-greedy specifiers. The closest I've got is: >>> re.findall('(".*?"|.*?)(?:,|$)', '"a,b,c",d,e,f') ['"a,b,c"', 'd', 'e', 'f', ''] But as you see, you end up with a redundant empty string at the end, which is indistinguishable from the result you get when the string ends with a comma: >>> re.findall('(".*?"|.*?)(?:,|$)', '"a,b,c",d,e,f,') ['"a,b,c"', 'd', 'e', 'f', ''] so you'd need to do some manual tweaking at the end - something like: matches = regex.findall(s) if not s.endswith(","): matches.pop() or matches = regex.findall(s+",")[:-1] There's probably a better way. A: Here's a function that'll accomplish the task: def smart_split(data, delimiter=","): """ Performs splitting with string preservation. This reads both single and double quoted strings. """ result = [] quote_type = None buffer = "" position = 0 while position < len(data): if data[position] in ["\"", "'"]: quote_type = data[position] while quote_type is not None: position += 1 if data[position] == quote_type: quote_type = None position += 1 else: buffer += data[position] if data[position] == delimiter: result.append(buffer) buffer = "" else: buffer += data[position] position += 1 result.append(buffer) return result Example of use: str = '"a,b,c",d,e,f' print smart_split(str) # Prints: ['a,b,c', 'd', 'e', 'f'] A: Here's a really short function that will do the same thing: def split (aString): splitByQuotes = (",%s,"%aString).split('"') splitByQuotes[0::2] = [x.split(",")[1:-1] for x in splitByQuotes[0::2]] return [a.strip() \ for b in splitByQuotes \ for a in (b if type(b)==list else [b])] It splits the string where the quotes are, creating a list where every even element is the stuff outside the quotes and every odd element is the stuff that was encapsulated within quotes. The stuff in quotes it leaves alone, the stuff outside it splits where the commas are. Now we have a list of alternating lists and strings, which we then unwrap with the last line. The reason for wrapping the string in commas at the beginning and removing commas in the middle is to prevent spare empty elements in the list. It should be able to handle whitespace - I added a strip() function at the end to make it produce clean output, but that's not necessary. usage: >>> print split('c, , "a,,b,c",d,"moo","f"') ['c', '', 'a,,b,c', 'd', 'moo', 'f']
split a string by a delimiter in a context sensitive way
For example, I want to split str = '"a,b,c",d,e,f' into ["a,b,c",'d','e','f'] (i.e. don't split the quoted part) In this case, this can be done with re.findall('".*?"|[^,]+',str) However, if str = '"a,,b,c",d,,f' I want ["a,,b,c",'d','','f'] i.e. I want a behavior that is like python's split function. Is there any way I can do this in one (small) line, possibly using Python's re library? Actually, I just realized (on this site) that the csv module is perfect for what I want to do, but I am curious whether there is a regular expression that re can use to do it as well.
[ "Use the csv module as it is a real parser. Regular expressions are nonoptimal (or completely unsuited) for most things involving matching delimiters in which the rules change (I'm unsure as to whether this particular grammar is regular or not). You might be able to create a regex that would work in this case, but it would be rather complex (especially dealing with cases like \"He said, \\\"How are you\\\"\").\n", "re.split(',(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)', str)\n\nAfter matching a comma, if there's an odd number of quotation marks up ahead ahead, the comma must be inside a pair of quotation marks, so it doesn't count as a delimiter. Obviously this doesn't take the possibility of escaped quotation marks into account, but that can handled if need be--it just makes the regex about twice as ugly as it already is. :D\n", "Writing a state machine for this would, on the other hand, seem to be quite straightforward. DFAs and regexes have the same power, but usually one of them is better suited to the problem at hand, and is usually very dependent on the additional logic you might need to implement.\n", "Page 271 of Friedl's Mastering Regular Expressions has a regular expression for extracting possibly quoted CSV fields, but it requires a bit of postprocessing:\n>>> re.findall('(?:^|,)(?:\"((?:[^\"]|\"\")*)\"|([^\",]*))',str)\n[('a,b,c', ''), ('', 'd'), ('', 'e'), ('', 'f')]\n>>> re.findall('(?:^|,)(?:\"((?:[^\"]|\"\")*)\"|([^\",]*))','\"a,b,c\",d,,f')\n[('a,b,c', ''), ('', 'd'), ('', ''), ('', 'f')]\n\nSame pattern with the verbose flag:\ncsv = re.compile(r\"\"\"\n (?:^|,)\n (?: # now match either a double-quoted field\n # (inside, paired double quotes are allowed)...\n \" # (double-quoted field's opening quote)\n ( (?: [^\"] | \"\" )* )\n \" # (double-quoted field's closing quote)\n |\n # ...or some non-quote/non-comma text...\n ( [^\",]* )\n )\"\"\", re.X)\n\n", "You can get close using non-greedy specifiers. The closest I've got is:\n>>> re.findall('(\".*?\"|.*?)(?:,|$)', '\"a,b,c\",d,e,f')\n['\"a,,b,c\"', 'd', '', 'f', '']\n\nBut as you see, you end up with a redundant empty string at the end, which is indistinguishable from the result you get when the string ends with a comma:\n>>> re.findall('(\".*?\"|.*?)(?:,|$)', '\"a,b,c\",d,e,f,')\n['\"a,,b,c\"', 'd', '', 'f', '']\n\nso you'd need to do some manual tweaking at the end - something like:\nmatches = regex,findall(s)\nif not s.endswith(\",\"): matches.pop()\n\nor\nmatches = regex.findall(s+\",\")[:-1]\n\nThere's probably a better way.\n", "Here's a function that'll accomplish the task:\ndef smart_split(data, delimiter=\",\"):\n \"\"\" Performs splitting with string preservation. 
This reads both single and\n double quoted strings.\n \"\"\"\n result = []\n quote_type = None\n buffer = \"\"\n position = 0\n while position < len(data):\n if data[position] in [\"\\\"\", \"'\"]:\n quote_type = data[position]\n while quote_type is not None:\n position += 1\n if data[position] == quote_type:\n quote_type = None\n position += 1\n else:\n buffer += data[position]\n if data[position] == delimiter:\n result.append(buffer)\n buffer = \"\"\n else:\n buffer += data[position]\n position += 1\n result.append(buffer)\n return result\n\nExample of use:\nstr = '\"a,b,c\",d,e,f'\nprint smart_split(str)\n# Prints: ['a,b,c', 'd', 'e', 'f']\n\n", "Here's a really short function that will do the same thing:\ndef split (aString):\n splitByQuotes = (\",%s,\"%aString).split('\"')\n splitByQuotes[0::2] = [x.split(\",\")[1:-1] for x in splitByQuotes[0::2]]\n return [a.strip() \\\n for b in splitByQuotes \\\n for a in (b if type(b)==list else [b])]\n\nIt splits the string where the quotes are, creating a list where every even element is the stuff outside the quotes and every odd element is the stuff that was encapsulated within quotes. The stuff in quotes it leaves alone, the stuff outside it splits where the commas are. Now we have a list of alternating lists and strings, which we then unwrap with the last line. The reason for wrapping the string in commas at the beginning and removing commas in the middle is to prevent spare empty elements in the list. It should be able to handle whitespace - I added a strip() function at the end to make it produce clean output, but that's not necessary.\nusage:\n>>> print split('c, , \"a,,b,c\",d,\"moo\",\"f\"')\n['c', '', 'a,,b,c', 'd', 'moo', 'f']\n\n" ]
[ 2, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "regex", "split" ]
stackoverflow_0001019756_python_regex_split.txt
Q: Exporting a zope folder with python
We have two zope servers running our company's internal site. One is the live site and one is the dev site. I'm working on writing a python script that moves everything from the dev server to the live server. Right now the process involves a bunch of steps that are done in the zope management interface. I need to make all that automatic so that running one script handles it all. One thing I need to do is export one folder from the live server so that I can reimport it back into the live site after the update. How can I do this from a python script? We're using Zope 2.8 and python 2.3.4
A: You can try to use the functions manage_exportObject and manage_importObject located in the file $ZOPE_HOME/lib/python/OFS/ObjectManager.py
Let's say we install two Zope 2.8 instances located at:

/tmp/instance/dev for the development server (port 8080)
/tmp/instance/prod for the production server (port 9090)

In the ZMI of the development server, I have created two folders /MyFolder1 and /MyFolder2 containing some page templates. The following Python script exports each folder in .zexp files, and imports them in the ZMI of the production instance:
#!/usr/bin/python
import urllib
import shutil

ids_to_transfer = ['MyFolder1', 'MyFolder2']

for id in ids_to_transfer:
    urllib.urlopen('http://admin:password_dev@localhost:8080/manage_exportObject?id=' + id)

    shutil.move('/tmp/instance/dev/var/' + id + '.zexp', '/tmp/instance/prod/import/' + id + '.zexp')

    urllib.urlopen('http://admin:password_prod@localhost:9090/manage_delObjects?ids=' + id)
    urllib.urlopen('http://admin:password_prod@localhost:9090/manage_importObject?file=' + id + '.zexp')

A: To make this more general and allow copying folders not in the root directory I would do something like this:
#!/usr/bin/python
import urllib
import shutil

username_dev = 'admin'
username_prod = 'admin'
password_dev = 'password_dev'
password_prod = 'password_prod'
url_dev = 'localhost:8080'
url_prod = 'localhost:9090'

paths_and_ids_to_transfer = [('level1/level2/','MyFolder1'), ('level1/','MyFolder2')]

for path, id in paths_and_ids_to_transfer:
    urllib.urlopen('http://%s:%s@%s/%smanage_exportObject?id=%s' % (username_dev, password_dev, url_dev, path, id))

    shutil.move('/tmp/instance/dev/var/' + id + '.zexp', '/tmp/instance/prod/import/' + id + '.zexp')

    urllib.urlopen('http://%s:%s@%s/%smanage_delObjects?ids=%s' % (username_prod, password_prod, url_prod, path, id))
    urllib.urlopen('http://%s:%s@%s/%smanage_importObject?file=%s.zexp' % (username_prod, password_prod, url_prod, path, id))

If I had the rep I would add this to the other answer but alas...
If someone wants to merge them, please go ahead.
A: If you really move everything you could probably just move the Data.fs instead. But otherwise the import/export above is a good way.
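For completeness, a minimal sketch of the Data.fs route from the last answer (assuming the instance paths used above, and that both Zope instances are stopped while the file is copied):
import shutil

# Copy the entire ZODB from dev to prod; only safe while both instances are down.
shutil.copy2('/tmp/instance/dev/var/Data.fs', '/tmp/instance/prod/var/Data.fs')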
Exporting a zope folder with python
We have two zope servers running our company's internal site. One is the live site and one is the dev site. I'm working on writing a python script that moves everything from the dev server to the live server. Right now the process involves a bunch of steps that are done in the zope management interface. I need to make all that automatic so that running one script handles it all. One thing I need to do is export one folder from the live server so that I can reimport it back into the live site after the update. How can I do this from a python script? We're using Zope 2.8 and python 2.3.4
[ "You can try to use the functions manage_exportObject and manage_importObject located in the file $ZOPE_HOME/lib/python/OFS/ObjectManager.py\nLet say we install two Zope 2.8 instances located at:\n\n/tmp/instance/dev for the development server (port 8080)\n/tmp/instance/prod for the production server (port 9090)\n\nIn the ZMI of the development server, I have created two folders /MyFolder1 and /MyFolder2 containing some page templates. The following Python script exports each folder in .zexp files, and imports them in the ZMI of the production instance:\n#!/usr/bin/python\nimport urllib\nimport shutil\n\nids_to_transfer = ['MyFolder1', 'MyFolder2']\n\nfor id in ids_to_transfer:\n urllib.urlopen('http://admin:password_dev@localhost:8080/manage_exportObject?id=' + id)\n\n shutil.move('/tmp/instance/dev/var/' + id + '.zexp', '/tmp/instance/prod/import/' + id + '.zexp')\n\n urllib.urlopen('http://admin:password_prod@localhost:9090/manage_delObjects?ids=' + id)\n urllib.urlopen('http://admin:password_prod@localhost:9090/manage_importObject?file=' + id + '.zexp')\n\n", "To make this more general and allow copying folders not in the root directory I would do something like this:\n#!/usr/bin/python\nimport urllib\nimport shutil\n\nusername_dev = 'admin'\nusername_prod = 'admin'\npassword_dev = 'password_dev'\npassword_prod = 'password_prod'\nurl_dev = 'localhost:8080'\nurl_prod = 'localhost:9090'\n\npaths_and_ids_to_transfer = [('level1/level2/','MyFolder1'), ('level1/','MyFolder2')]\n\nfor path, id in ids_to_transfer:\n urllib.urlopen('http://%s:%s@%s/%smanage_exportObject?id=%s' % (username_dev, password_dev, url_dev, path, id))\n\n shutil.move('/tmp/instance/dev/var/' + id + '.zexp', '/tmp/instance/prod/import/' + id + '.zexp')\n\n urllib.urlopen('http://%s:%s@%s/%smanage_delObjects?ids=%s' % (username_prod, password_prod, url_prod, path, id))\n urllib.urlopen('http://%s:%s@%s/%smanage_importObject?file=%s.zexp' % (username_prod, password_prod, url_prod, path, id))\n\nIf I had the rep I would add this to the other answer but alas...\nIf someone wants to merge them, please go ahead.\n", "If you really move everything you could probably just move the Data.fs instead. But otherwise the import/export above is a good way.\n" ]
[ 4, 3, 0 ]
[]
[]
[ "python", "zope" ]
stackoverflow_0000922319_python_zope.txt
Q: Generating a 3D CAPTCHA [pic]
I would like to write a Python script that would generate a 3D CAPTCHA like this one: Which graphics libraries can I use? Source: ocr-research.org.ua
A: There are many approaches. I would personally create the image in Python Imaging Library using ImageDraw's draw.text, convert to a NumPy array (using NumPy's asarray) then render with Matplotlib. (Requires Matplotlib maintenance package).
Full code (in 2.5):
import numpy, pylab
from PIL import Image, ImageDraw, ImageFont
import matplotlib.axes3d as axes3d

sz = (50,30)

img = Image.new('L', sz, 255)
drw = ImageDraw.Draw(img)
font = ImageFont.truetype("arial.ttf", 20)

drw.text((5,3), 'text', font=font)
img.save('c:/test.png')

X , Y = numpy.meshgrid(range(sz[0]),range(sz[1]))
Z = 1-numpy.asarray(img)/255

fig = pylab.figure()
ax = axes3d.Axes3D(fig)
ax.plot_wireframe(X, -Y, Z, rstride=1, cstride=1)
ax.set_zlim((0,50))
fig.savefig('c:/test2.png')

Obviously there's a little work to be done, eliminating axes, changing view angle, etc..
A: Another binding to consider for rendering with opengl is pyglet. Its best feature is that it is just one download. I think it contains everything you need to implement what Anurag spells out.
I will caution you that what you're trying to do is not exactly a simple first project in 3d graphics. If this is your first exposure to OpenGL, consider a series of tutorials like NeHe Tutorials and other help from the OpenGL website.
A: I'm not sure I would bother with a full 3D library for what you have above. Just generate a matrix of 3D points, generate the text with something like PIL, scan over it to find which points on the grid are raised, pick a random camera angle and then project the points into a 2D image and draw them with PIL to the final image.
That being said... you may be able to use VPython if you don't want to do the 3D math yourself.
A: Use Python bindings for OpenGL, http://pyopengl.sourceforge.net/.
Create a 2D image of white color text over a black surface using PIL.
Make a 3D grid from this, increase z of point where color is white,
maybe set z=color value, so by blurring the image you can get real curves in the z direction.
Create an OpenGL triangle from these points, use wireframe mode while rendering.
Grab the OpenGL buffer into an image, for example,
http://python-opengl-examples.blogspot.com/2009/04/render-to-texture.html.
Generating a 3D CAPTCHA [pic]
I would like to write a Python script that would generate a 3D CAPTCHA like this one: Which graphics libraries can I use? Source: ocr-research.org.ua
[ "There are many approaches. I would personally create the image in Python Imaging Library using ImageDraw's draw.text, convert to a NumPy array (usint NumPy's asarray) then render with Matplotlib. (Requires Matplotlib maintenance package).\nFull code (in 2.5):\nimport numpy, pylab\nfrom PIL import Image, ImageDraw, ImageFont\nimport matplotlib.axes3d as axes3d\n\nsz = (50,30)\n\nimg = Image.new('L', sz, 255)\ndrw = ImageDraw.Draw(img)\nfont = ImageFont.truetype(\"arial.ttf\", 20)\n\ndrw.text((5,3), 'text', font=font)\nimg.save('c:/test.png')\n\nX , Y = numpy.meshgrid(range(sz[0]),range(sz[1]))\nZ = 1-numpy.asarray(img)/255\n\nfig = pylab.figure()\nax = axes3d.Axes3D(fig)\nax.plot_wireframe(X, -Y, Z, rstride=1, cstride=1)\nax.set_zlim((0,50))\nfig.savefig('c:/test2.png')\n\n\nObviously there's a little work to be done, eliminating axes, changing view angle, etc..\n", "Another binding to consider for rendering with opengl is pyglet. Its best feature is that it is just one download. I think it contains everything you need to implement what Anurag spells out.\nI will caution you that what you're trying to do is not exactly a simple first project in 3d graphics. If this is your first exposure to OpenGL, consider a series of tutorials like NeHe Tutorials and other help from the OpenGL website.\n", "I'm not sure I would bother with a full 3D library for what you have above. Just generate a matrix of 3D points, generate the text with something like PIL, scan over it to find which points on the grid are raised, pick a random camera angle and then project the points into a 2D image and draw them with PIL to the final image.\nThat being said... you may be able to use VPython if you don't want to do the 3D math yourself.\n", "Use Python bindings for OpenGL, http://pyopengl.sourceforge.net/.\nCreate a 2D image of white color text over a black surface using PIL.\nMake a 3D grid from this, increase z of point where color is white,\nmaybe set z=color value, so by blurring the image you can get real curves in the z direction.\nCreate an OpenGL triangle from these points, use wireframe mode while rendering.\nGrab the OpenGL buffer into an image, for example,\nhttp://python-opengl-examples.blogspot.com/2009/04/render-to-texture.html.\n" ]
[ 33, 4, 2, 1 ]
[]
[]
[ "captcha", "graphics", "python" ]
stackoverflow_0001021721_captcha_graphics_python.txt
Q: Probability time series, observed data probabilities (deja vu)
okay folks...thanks for looking at this question. I remember doing the following below in college; however, I have forgotten the exact solution. Any takers to steer in the right direction. I have a time series of data (we'll use three) of N. The data series is sequential in order of time (e.g. obsOne[1] occurred along with obsTwo[1] and obsThree[1]) obsOne[47, 136, -108, -15, 22, ...], obsTwo[448, 321, 122, -207, 269, ...], obsThree[381, 283, 429, -393, 242, ...]
Step 2. from the data series I create a series of X range bins with width Z for each data series. (e.g. of observation obsOne: bin1 = [<-108, -108] bin2 = [-108, -26] bin3 = [-26, 55] ... binX = [136, > 136])
Step 3. Now create a table with all possible combinations on the data series. Thus if I had 4 bins and 3 data series all combinations would total 4x4x4 = 64 possible outcomes. (e.g. row1 = obsOne bin1 + obsTwo bin1 + obsThree bin1, row2 = obsOne bin1 + obsTwo bin1 + obsThree bin2, ... row5 = obsOne bin1 + obsTwo bin1 + obsThree binX, row6 = obsOne bin1 + obsTwo bin2 + obsThree bin1, row7 = obsOne bin1 + obsTwo bin1 + obsThree bin2, row9 = obsOne bin1 + obsTwo bin2 + obsThree binX, ...)
Step 4. I now go back to the data series and find where each row in the data series falls on the table and count how many times an observation does so. (e.g. obsOne[2] obsTwo[2] obsThree[2] = row 30 on table, obsOne[X] obsTwo[X] obsThree[X] = row 52 on table.)
Step 5. I then only take the rows on the table with positive matches, count how many observations fell on that row, divided by the total number of observations in the data series, and that gives me my probability for that range on the observed data.
I apologize for this basic question, not a math expert. I have done this before many years ago. I forgot which method I used, it was much faster than this long (ancient "by hand") method. I wasn't using python at the time, it was some other proprietary package in c++. I'd like to see if something is out there that can solve this problem with python (now a python shop), could always extend, so it is soft constraint.
A: Are you talking about something like this?
from __future__ import division
from collections import defaultdict

obsOne= [47, 136, -108, -15, 22, ]
obsTwo= [448, 321, 122, -207, 269, ]
obsThree= [381, 283, 429, -393, 242, ]

class BinParams( object ):
    def __init__( self, timeSeries, X ):
        self.mx= max(timeSeries )
        self.mn= min(timeSeries )
        self.Z=(self.mx-self.mn)/X
    def index( self, sample ):
        return (sample-self.mn)//self.Z

binsOne= BinParams( obsOne, 4 )
binsTwo= BinParams( obsTwo, 4 )
binsThree= BinParams( obsThree, 4 )

counts= defaultdict(int)
for s1, s2, s3 in zip( obsOne, obsTwo, obsThree ):
    posn= binsOne.index(s1), binsTwo.index(s2), binsThree.index(s3)
    counts[posn] += 1

for k in counts:
    print k, counts[k], counts[k]/len(obsOne)
Probability time series, observed data probabilities (deja vu)
okay folks...thanks for looking at this question. I remember doing the following below in college; however, I have forgotten the exact solution. Any takers to steer in the right direction. I have a time series of data (we'll use three) of N. The data series is sequential in order of time (e.g. obsOne[1] occurred along with obsTwo[1] and obsThree[1]) obsOne[47, 136, -108, -15, 22, ...], obsTwo[448, 321, 122, -207, 269, ...], obsThree[381, 283, 429, -393, 242, ...]
Step 2. from the data series I create a series of X range bins with width Z for each data series. (e.g. of observation obsOne: bin1 = [<-108, -108] bin2 = [-108, -26] bin3 = [-26, 55] ... binX = [136, > 136])
Step 3. Now create a table with all possible combinations on the data series. Thus if I had 4 bins and 3 data series all combinations would total 4x4x4 = 64 possible outcomes. (e.g. row1 = obsOne bin1 + obsTwo bin1 + obsThree bin1, row2 = obsOne bin1 + obsTwo bin1 + obsThree bin2, ... row5 = obsOne bin1 + obsTwo bin1 + obsThree binX, row6 = obsOne bin1 + obsTwo bin2 + obsThree bin1, row7 = obsOne bin1 + obsTwo bin1 + obsThree bin2, row9 = obsOne bin1 + obsTwo bin2 + obsThree binX, ...)
Step 4. I now go back to the data series and find where each row in the data series falls on the table and count how many times an observation does so. (e.g. obsOne[2] obsTwo[2] obsThree[2] = row 30 on table, obsOne[X] obsTwo[X] obsThree[X] = row 52 on table.)
Step 5. I then only take the rows on the table with positive matches, count how many observations fell on that row, divided by the total number of observations in the data series, and that gives me my probability for that range on the observed data.
I apologize for this basic question, not a math expert. I have done this before many years ago. I forgot which method I used, it was much faster than this long (ancient "by hand") method. I wasn't using python at the time, it was some other proprietary package in c++. I'd like to see if something is out there that can solve this problem with python (now a python shop), could always extend, so it is soft constraint.
[ "Are you talking about something like this?\nfrom __future__ import division\nfrom collections import defaultdict\n\nobsOne= [47, 136, -108, -15, 22, ]\nobsTwo= [448, 321, 122, -207, 269, ]\nobsThree= [381, 283, 429, -393, 242, ]\n\nclass BinParams( object ):\n def __init__( self, timeSeries, X ):\n self.mx= max(timeSeries )\n self.mn= min(timeSeries )\n self.Z=(self.mx-self.mn)/X\n def index( self, sample ):\n return (sample-self.mn)//self.Z\n\nbinsOne= BinParams( obsOne, 4 )\nbinsTwo= BinParams( obsTwo, 4 )\nbinsThree= BinParams( obsThree, 4 )\n\ncounts= defaultdict(int)\nfor s1, s2, s3 in zip( obsOne, obsTwo, obsThree ):\n posn= binsOne.index(s1), binsTwo.index(s2), binsThree.index(s3)\n counts[posn] += 1\n\nfor k in counts:\n print k, counts[k], counts[k]/len(counts)\n\n" ]
[ 1 ]
[]
[]
[ "data_analysis", "probability", "python", "time_series" ]
stackoverflow_0001021704_data_analysis_probability_python_time_series.txt
Q: Python behavior of string in loop
In trying to capitalize a string at separators I encountered behavior I do not understand. Can someone please explain why the string s is reverted during the loop? Thanks.
s = 'these-three_words'
separators = ('-','_')
for sep in separators:
    s = sep.join([i.capitalize() for i in s.split(sep)])
    print s
print s

stdout:
These-Three_words
These-three_Words
These-three_Words
A: capitalize turns the first character uppercase and the rest of the string lowercase.
In the first iteration, it looks like this:
>>> [i.capitalize() for i in s.split('-')]
['These', 'Three_words']

In the second iteration, the string is then split into:
>>> [i for i in s.split('_')]
['These-Three', 'words']

So running capitalize on both will then turn the T in Three lowercase.
A: You could use title():
>>> s = 'these-three_words'
>>> print s.title()
These-Three_Words

A: str.capitalize capitalizes the first character and lowercases the remaining characters.
A: Capitalize() will return a copy of the string with only its first character capitalized. You could use this:

def cap(s):
    return s[0].upper() + s[1:]
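Putting the pieces together, a minimal sketch of the separator-aware capitalization the question is after, built on the cap helper from the last answer (Python 2; the empty-piece guard is an added assumption):
def cap(s):
    # Uppercase only the first character, leaving the rest untouched.
    return s[0].upper() + s[1:] if s else s

s = 'these-three_words'
for sep in ('-', '_'):
    s = sep.join(cap(part) for part in s.split(sep))
print s   # These-Three_Words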
Python behavior of string in loop
In trying to capitalize a string at separators I encountered behavior I do not understand. Can someone please explain why the string s is reverted during the loop? Thanks.
s = 'these-three_words'
separators = ('-','_')
for sep in separators:
    s = sep.join([i.capitalize() for i in s.split(sep)])
    print s
print s

stdout:
These-Three_words
These-three_Words
These-three_Words
[ "capitalize turns the first character uppercase and the rest of the string lowercase.\nIn the first iteration, it looks like this:\n>>> [i.capitalize() for i in s.split('-')]\n['These', 'Three_words']\n\nIn the second iteration, the strings are the separated into:\n>>> [i for i in s.split('_')]\n['These-Three', 'words']\n\nSo running capitalize on both will then turn the T in Three lowercase.\n", "You could use title():\n>>> s = 'these-three_words'\n>>> print s.title()\nThese-Three_Words\n\n", "str.capitalize capitalizes the first character and lowercases the remaining characters.\n", "Capitalize() will return a copy of the string with only its first character capitalized. You could use this:\n\ndef cap(s):\n return s[0].upper() + s[1:]\n\n" ]
[ 6, 5, 2, 2 ]
[]
[]
[ "loops", "python", "string" ]
stackoverflow_0001022264_loops_python_string.txt
Q: StringListProperty in GAE
Is there any way to edit StringListProperty fields via Google's Data Viewer, or some other clever approach? The last thing I want to do is to modify my application in such a way that it provides a special throwaway page for just that reason - I don't feel like it's the optimal solution. Cheers, MH
A: I would recommend using the Remote API; you can edit anything in your datastore with a minimum of fuss and no special pages needed.
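A rough sketch of the Remote API route (2009-era SDK; the app id, credentials and model name are placeholders, and the exact stub function is an assumption about the SDK of that period):
# app.yaml (hypothetical handler mapping):
#   - url: /remote_api
#     script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
#     login: admin

from google.appengine.ext.remote_api import remote_api_stub

def auth():
    return ('admin@example.com', 'password')   # placeholder credentials

remote_api_stub.ConfigureRemoteDatastore('your-app-id', '/remote_api', auth)

entity = MyModel.all().get()          # MyModel is a placeholder db.Model class
entity.tags = ['edited', 'remotely']  # tags stands in for the StringListProperty
entity.put()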
StringListProperty in GAE
Is there any way to edit StringListProperty fields via Google's Data Viewer, or some other clever approach? The last thing I want to do is to modify my application in such a way that it provides a special throwaway page for just that reason - I don't feel like it's the optimal solution. Cheers, MH
[ "I would recommend using the Remote API; you can edit anything in your datastore with a minimum of fuss and no special pages needed.\n" ]
[ 2 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001022382_google_app_engine_python.txt
Q: Emulating membership-test in Python: delegating __contains__ to contained-object correctly
I am used to Python allowing some neat tricks to delegate functionality to other objects. One example is delegation to contained objects. But it seems that I have no luck when I want to delegate __contains__:
class A(object):
    def __init__(self):
        self.mydict = {}
        self.__contains__ = self.mydict.__contains__

a = A()
1 in a

I get:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: argument of type 'A' is not iterable

What am I doing wrong? When I call a.__contains__(1), everything goes smoothly. I even tried to define an __iter__ method in A to make A look more like an iterable, but it did not help. What am I missing here?
A: Special methods such as __contains__ are only special when defined on the class, not on the instance (except in legacy classes in Python 2, which you should not use anyway).
So, do your delegation at class level:
class A(object):
    def __init__(self):
        self.mydict = {}

    def __contains__(self, other):
        return self.mydict.__contains__(other)

I'd actually prefer to spell the latter as return other in self.mydict, but that's a minor style issue.
Edit: if and when "totally dynamic per-instance redirecting of special methods" (like old-style classes offered) is indispensable, it's not hard to implement it with new-style classes: you just need each instance that has such peculiar need to be wrapped in its own special class. For example:
class BlackMagic(object):
    def __init__(self):
        self.mydict = {}
        self.__class__ = type(self.__class__.__name__, (self.__class__,), {})
        self.__class__.__contains__ = self.mydict.__contains__

Essentially, after the little bit of black magic reassigning self.__class__ to a new class object (which behaves just like the previous one but has an empty dict and no other instances except this one self), anywhere in an old-style class you would assign to self.__magicname__, assign to self.__class__.__magicname__ instead (and make sure it's a built-in or staticmethod, not a normal Python function, unless of course in some different case you do want it to receive the self when called on the instance).
Incidentally, the in operator on an instance of this BlackMagic class is faster, as it happens, than with any of the previously proposed solutions -- or at least so I'm measuring with my usual trusty -mtimeit (going directly to the built-in method, instead of following normal lookup routes involving inheritance and descriptors, shaves a bit of the overhead).
A metaclass to automate the self.__class__-per-instance idea would not be hard to write (it could do the dirty work in the generated class's __new__ method, and maybe also set all magic names to actually assign on the class if assigned on the instance, either via __setattr__ or many, many properties). But that would be justified only if the need for this feature was really widespread (e.g. porting a huge ancient Python 1.5.2 project that liberally use "per-instance special methods" to modern Python, including Python 3).
Do I recommend "clever" or "black magic" solutions? No, I don't: almost invariably it's better to do things in simple, straightforward ways. But "almost" is an important word here, and it's nice to have at hand such advanced "hooks" for the rare, but not non-existent, situations where their use may actually be warranted.
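A quick interactive check of the class-level delegation above:
a = A()
a.mydict[1] = 'one'
print 1 in a   # True
print 2 in a   # False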
Emulating membership-test in Python: delegating __contains__ to contained-object correctly
I am used to Python allowing some neat tricks to delegate functionality to other objects. One example is delegation to contained objects. But it seems that I have no luck when I want to delegate __contains__:
class A(object):
    def __init__(self):
        self.mydict = {}
        self.__contains__ = self.mydict.__contains__

a = A()
1 in a

I get:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: argument of type 'A' is not iterable

What am I doing wrong? When I call a.__contains__(1), everything goes smoothly. I even tried to define an __iter__ method in A to make A look more like an iterable, but it did not help. What am I missing here?
[ "Special methods such as __contains__ are only special when defined on the class, not on the instance (except in legacy classes in Python 2, which you should not use anyway).\nSo, do your delegation at class level:\nclass A(object):\n def __init__(self):\n self.mydict = {}\n\n def __contains__(self, other):\n return self.mydict.__contains__(other)\n\nI'd actually prefer to spell the latter as return other in self.mydict, but that's a minor style issue.\nEdit: if and when \"totally dynamic per-instance redirecting of special methods\" (like old-style classes offered) is indispensable, it's not hard to implement it with new-style classes: you just need each instance that has such peculiar need to be wrapped in its own special class. For example:\nclass BlackMagic(object):\n def __init__(self):\n self.mydict = {}\n self.__class__ = type(self.__class__.__name__, (self.__class__,), {})\n self.__class__.__contains__ = self.mydict.__contains__\n\nEssentially, after the little bit of black magic reassigning self.__class__ to a new class object (which behaves just like the previous one but has an empty dict and no other instances except this one self), anywhere in an old-style class you would assign to self.__magicname__, assign to self.__class__.__magicname__ instead (and make sure it's a built-in or staticmethod, not a normal Python function, unless of course in some different case you do want it to receive the self when called on the instance).\nIncidentally, the in operator on an instance of this BlackMagic class is faster, as it happens, than with any of the previously proposed solutions -- or at least so I'm measuring with my usual trusty -mtimeit (going directly to the built-in method, instead of following normal lookup routes involving inheritance and descriptors, shaves a bit of the overhead).\nA metaclass to automate the self.__class__-per-instance idea would not be hard to write (it could do the dirty work in the generated class's __new__ method, and maybe also set all magic names to actually assign on the class if assigned on the instance, either via __setattr__ or many, many properties). But that would be justified only if the need for this feature was really widespread (e.g. porting a huge ancient Python 1.5.2 project that liberally use \"per-instance special methods\" to modern Python, including Python 3).\nDo I recommend \"clever\" or \"black magic\" solutions? No, I don't: almost invariably it's better to do things in simple, straightforward ways. But \"almost\" is an important word here, and it's nice to have at hand such advanced \"hooks\" for the rare, but not non-existent, situations where their use may actually be warranted.\n" ]
[ 18 ]
[]
[]
[ "containers", "delegation", "emulation", "iterable", "python" ]
stackoverflow_0001022499_containers_delegation_emulation_iterable_python.txt
Q: Problem getting date with Universal Feed Parser It looks like http://portland.beerandblog.com/feed/atom/ is messed up (as are the 0.92 and 2.0 RSS feeds). Universal Feed Parser (latest version from http://code.google.com/p/feedparser/source/browse/trunk/feedparser/feedparser.py?spec=svn295&r=295 ) doesn't see any dates. <title>Beer and Blog Portland</title> <atom:link href="http://portland.beerandblog.com/feed/" rel="self" type="application/rss+xml" /> <link>http://portland.beerandblog.com</link> <description>Bloggers helping bloggers over beers in Portland, Oregon</description> <pubDate>Fri, 19 Jun 2009 22:54:57 +0000</pubDate> <generator>http://wordpress.org/?v=2.7.1</generator> <language>en</language> <sy:updatePeriod>hourly</sy:updatePeriod> <sy:updateFrequency>1</sy:updateFrequency> <item> <title>Widmer is sponsoring our beer for the After Party!!</title> <link>http://portland.beerandblog.com/2009/06/19/widmer-is-sponsoring-our-beer-for-the-after-party/</link> <comments>http://portland.beerandblog.com/2009/06/19/widmer-is-sponsoring-our-beer-for-the-after-party/#comments</comments> <pubDate>Fri, 19 Jun 2009 22:30:35 +0000</pubDate> <dc:creator>Justin Kistner</dc:creator> <category><![CDATA[beer]]></category> I'm trying try: published = e.published_parsed except: try: published = e.updated_parsed except: published = e.created_parsed and it's failing because I can't get a date. Any thoughts on how to extract the date in a reasonable manner? Thanks! A: Works for me: >>> e = feedparser.parse('http://portland.beerandblog.com/feed/atom/') >>> e.feed.date u'2009-06-19T22:54:57Z' >>> e.feed.date_parsed (2009, 6, 19, 22, 54, 57, 4, 170, 0) >>> e.feed.updated_parsed (2009, 6, 19, 22, 54, 57, 4, 170, 0) Maybe you're looking for e.updated_parsed where you should be looking for e.feed.updated_parsed instead? A: Using a naked except may be masking a problem in your code. Assuming (I don't use feed parsers) that AttributeError is the specific exception that you should be checking for, try (accidental pun) this: try: published = e.published_parsed except AttributeError: try: published = e.updated_parsed except AttributeError: published = e.created_parsed In any case, instead of "it's failing", please show the error message and traceback. Edit I've download the latest release (i.e. not from svn) and followed the example in the docs with this result: C:\feedparser>\python26\python Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import feedparser >>> d = feedparser.parse('http://portland.beerandblog.com/feed/atom/') >>> d.entries[0].updated u'2009-06-19T22:54:57Z' >>> d.entries[0].updated_parsed time.struct_time(tm_year=2009, tm_mon=6, tm_mday=19, tm_hour=22, tm_min=54, tm_sec=57, tm_wday=4, tm_yday=170, tm_isdst=0) >>> d.entries[0].title u'Widmer is sponsoring our beer for the After Party!!' >>> d.entries[0].published u'2009-06-19T22:30:35Z' >>> d.entries[0].published_parsed time.struct_time(tm_year=2009, tm_mon=6, tm_mday=19, tm_hour=22, tm_min=30, tm_sec=35, tm_wday=4, tm_yday=170, tm_isdst=0) >>> Like I said, I'm not into RSS and Atoms and suchlike but it seems to be quite straightforward to me. 
Except that I don't understand where you are getting the <pubDate> tag and arpanet-style timestamps from; AFAICT that is not present in the raw source -- it has <published> and ISO timestamps: >>> import urllib >>> guff = urllib.urlopen('http://portland.beerandblog.com/feed/atom/').read() >>> guff.find('pubDate') -1 >>> guff.find('published') 1171 >>> guff[1160:1200] 'pdated>\n\t\t<published>2009-06-19T22:30:35' >>> What is your "e" in "e.published_parsed"? Consider showing the full story with accessing feedparser, as I did above.
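A compact way to express the question's fallback chain without bare excepts, as a sketch (it works on whichever object -- d.feed or an entry -- you mean to query):
def first_date(obj):
    # Try each date attribute in turn; getattr with a default avoids bare excepts.
    for attr in ('published_parsed', 'updated_parsed', 'created_parsed'):
        value = getattr(obj, attr, None)
        if value is not None:
            return value
    return None

published = first_date(d.entries[0])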
Problem getting date with Universal Feed Parser
It looks like http://portland.beerandblog.com/feed/atom/ is messed up (as are the 0.92 and 2.0 RSS feeds). Universal Feed Parser (latest version from http://code.google.com/p/feedparser/source/browse/trunk/feedparser/feedparser.py?spec=svn295&r=295 ) doesn't see any dates. <title>Beer and Blog Portland</title> <atom:link href="http://portland.beerandblog.com/feed/" rel="self" type="application/rss+xml" /> <link>http://portland.beerandblog.com</link> <description>Bloggers helping bloggers over beers in Portland, Oregon</description> <pubDate>Fri, 19 Jun 2009 22:54:57 +0000</pubDate> <generator>http://wordpress.org/?v=2.7.1</generator> <language>en</language> <sy:updatePeriod>hourly</sy:updatePeriod> <sy:updateFrequency>1</sy:updateFrequency> <item> <title>Widmer is sponsoring our beer for the After Party!!</title> <link>http://portland.beerandblog.com/2009/06/19/widmer-is-sponsoring-our-beer-for-the-after-party/</link> <comments>http://portland.beerandblog.com/2009/06/19/widmer-is-sponsoring-our-beer-for-the-after-party/#comments</comments> <pubDate>Fri, 19 Jun 2009 22:30:35 +0000</pubDate> <dc:creator>Justin Kistner</dc:creator> <category><![CDATA[beer]]></category> I'm trying try: published = e.published_parsed except: try: published = e.updated_parsed except: published = e.created_parsed and it's failing because I can't get a date. Any thoughts on how to extract the date in a reasonable manner? Thanks!
[ "Works for me:\n>>> e = feedparser.parse('http://portland.beerandblog.com/feed/atom/')\n>>> e.feed.date\nu'2009-06-19T22:54:57Z'\n>>> e.feed.date_parsed\n(2009, 6, 19, 22, 54, 57, 4, 170, 0)\n>>> e.feed.updated_parsed\n(2009, 6, 19, 22, 54, 57, 4, 170, 0)\n\nMaybe you're looking for e.updated_parsed where you should be looking for e.feed.updated_parsed instead?\n", "Using a naked except may be masking a problem in your code. Assuming (I don't use feed parsers) that AttributeError is the specific exception that you should be checking for, try (accidental pun) this:\ntry:\n published = e.published_parsed\nexcept AttributeError:\n try:\n published = e.updated_parsed\n except AttributeError:\n published = e.created_parsed\n\nIn any case, instead of \"it's failing\", please show the error message and traceback.\nEdit\nI've download the latest release (i.e. not from svn) and followed the example in the docs with this result:\nC:\\feedparser>\\python26\\python\nPython 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on\nwin32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import feedparser\n>>> d = feedparser.parse('http://portland.beerandblog.com/feed/atom/')\n>>> d.entries[0].updated\nu'2009-06-19T22:54:57Z'\n>>> d.entries[0].updated_parsed\ntime.struct_time(tm_year=2009, tm_mon=6, tm_mday=19, tm_hour=22, tm_min=54, tm_sec=57, tm_wday=4, tm_yday=170, tm_isdst=0)\n>>> d.entries[0].title\nu'Widmer is sponsoring our beer for the After Party!!'\n>>> d.entries[0].published\nu'2009-06-19T22:30:35Z'\n>>> d.entries[0].published_parsed\ntime.struct_time(tm_year=2009, tm_mon=6, tm_mday=19, tm_hour=22, tm_min=30, tm_sec=35, tm_wday=4, tm_yday=170, tm_isdst=0)\n>>>\n\nLike I said, I'm not into RSS and Atoms and suchlike but it seems to be quite straightforward to me. Except that I don't understand where you are getting the <pubDate> tag and arpanet-style timestamps from; AFAICT that is not present in the raw source -- it has <published> and ISO timestamps: \n>>> import urllib\n>>> guff = urllib.urlopen('http://portland.beerandblog.com/feed/atom/').read()\n>>> guff.find('pubDate')\n-1\n>>> guff.find('published')\n1171\n>>> guff[1160:1200]\n'pdated>\\n\\t\\t<published>2009-06-19T22:30:35'\n>>>\n\nWhat is your \"e\" in \"e.published_parsed\"? Consider showing the full story with accessing feedparser, as I did above.\n" ]
[ 3, 1 ]
[]
[]
[ "feed", "parsing", "python" ]
stackoverflow_0001022504_feed_parsing_python.txt
Q: How can I pass the environment from my Python web application to a Perl program?
How do I set Perl's %ENV to introduce a Perl script into the context of my web application? I have a website, written in a language different from Perl (Python). However I need to use a Perl application, which consists of a .pl file:
#!/usr/bin/env perl

"$ENV{DOCUMENT_ROOT}/foo/bar.pm" =~ /^(.+)$/;
require $1;

my $bar = new BAR(
    user => 'foo',
);

print $bar->get_content;

... and a module bar.pm, which relies on "$ENV{HTTP_HOST}", "$ENV{REQUEST_URI}", "$ENV{REMOTE_ADDR}" and "$ENV{DOCUMENT_ROOT}". How should I set this hash? This is my very first experience with Perl, so I may be missing something really obvious here :)
A: If you're spawning that Perl process from your Python code (as opposed to "directly from the webserver"), there are several ways to set the child process environment from the Python parent process environment, depending on what you're using for the "spawning".
For example, if you're using subprocess.Popen, you can pass an env= argument set to the dictionary you desire, as the docs explain:

If env is not None, it must be a mapping that defines the environment variables for the new process; these are used instead of inheriting the current process’ environment, which is the default behavior.

A: Perl's special %ENV hash is the interface to the environment. (Under the hood, it calls getenv and putenv as appropriate.)
For example:
$ cat run.sh 
#! /bin/bash

export REMOTE_ADDR=127.0.0.1

perl -le 'print $ENV{REMOTE_ADDR}'

$ ./run.sh 
127.0.0.1

Your web server ought to be setting these environment variables.
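To make the first answer concrete, a minimal sketch of spawning the Perl script with the CGI-style variables set (the values and the script path are placeholders):
import os
import subprocess

env = os.environ.copy()
env.update({
    'HTTP_HOST': 'example.com',     # placeholder values for the four
    'REQUEST_URI': '/foo',          # variables bar.pm relies on
    'REMOTE_ADDR': '127.0.0.1',
    'DOCUMENT_ROOT': '/var/www',
})
p = subprocess.Popen(['perl', 'script.pl'], env=env, stdout=subprocess.PIPE)
output = p.communicate()[0]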
How can I pass the environment from my Python web application to a Perl program?
How do I set Perl's %ENV to introduce a Perl script into the context of my web application? I have a website, written in a language different from Perl (Python). However I need to use a Perl application, which consists of a .pl file:
#!/usr/bin/env perl

"$ENV{DOCUMENT_ROOT}/foo/bar.pm" =~ /^(.+)$/;
require $1;

my $bar = new BAR(
    user => 'foo',
);

print $bar->get_content;

... and a module bar.pm, which relies on "$ENV{HTTP_HOST}", "$ENV{REQUEST_URI}", "$ENV{REMOTE_ADDR}" and "$ENV{DOCUMENT_ROOT}". How should I set this hash? This is my very first experience with Perl, so I may be missing something really obvious here :)
[ "If you're spawning that Perl process from your Python code (as opposed to \"directly from the webserver\"), there are several ways to set the child process environment from the Python parent process environment, depending on what you're using for the \"spawning\".\nFor example, if you're using subprocess.Popen, you can pass an env= argument set to the dictionary you desire, as the docs explain:\n\nIf env is not None, it must be a\n mapping that defines the environment\n variables for the new process; these\n are used instead of inheriting the\n current process’ environment, which is\n the default behavior.\n\n", "Perl's special %ENV hash is the interface to the environment. (Under the hood, it calls getenv and putenv as appropriate.)\nFor example:\n$ cat run.sh \n#! /bin/bash\n\nexport REMOTE_ADDR=127.0.0.1\n\nperl -le 'print $ENV{REMOTE_ADDR}'\n\n$ ./run.sh \n127.0.0.1\n\nYour web server ought to be setting these environment variables.\n" ]
[ 4, 2 ]
[]
[]
[ "environment", "perl", "python" ]
stackoverflow_0001022694_environment_perl_python.txt
Q: How to make a simple command-line chat in Python?
I study network programming and would like to write a simple command-line chat in Python. I'm wondering how to make receiving constant while keeping input available for sending at any time. As you see, this client can do only one job at a time:
from socket import *

HOST = 'localhost'
PORT = 21567
BUFSIZE = 1024
ADDR = (HOST, PORT)

tcpCliSock = socket(AF_INET, SOCK_STREAM)
tcpCliSock.connect(ADDR)

while 1:
    data = raw_input('> ')
    if not data: break
    tcpCliSock.send(data)
    data = tcpCliSock.recv(BUFSIZE)
    if not data: break
    print data

tcpCliSock.close()

So if another client sends a message, this client will only receive it after sending a message too. I bet you understand me. I have googled for the matter and found out many interesting things such as asynchronous I/O, threading, non-blocking synchronization, concurrent programming and so on. I have also installed the twisted package. In brief, I've been learning all these things but haven't yet found what I was looking for. (Of course, I will keep trying and trying until I get to the point.) So, my question is: how do I make that? =)
A: Your question was not very coherent. However, your program does not need to be asynchronous at all to attain what you are asking for.
This is the working chat script you originally wanted, with minimal changes. It uses 1 thread for receiving and 1 for sending, both using blocking sockets. It is far simpler than using asynchronous methods.
from socket import *
from threading import Thread
import sys

HOST = 'localhost'
PORT = 21567
BUFSIZE = 1024
ADDR = (HOST, PORT)

tcpCliSock = socket(AF_INET, SOCK_STREAM)
tcpCliSock.connect(ADDR)

def recv():
    while True:
        data = tcpCliSock.recv(BUFSIZE)
        if not data: sys.exit(0)
        print data

Thread(target=recv).start()
while True:
    data = raw_input('> ')
    if not data: break
    tcpCliSock.send(data)

tcpCliSock.close()

A: If you want to code it from scratch select is the way to go (and you can read on Google Book Search most of the chapter of Python in a Nutshell that covers such matters); if you want to leverage more abstraction, asyncore is usable, but Twisted is much richer and more powerful.
A: Chat programs do two things concurrently.

Watching the local user's keyboard and sending to the remote user (via a socket of some kind)
Watching the remote socket and displaying what they type on the local console.

You have several ways to do this.

A program that opens socket and keyboard and uses the select module to see which one has input ready.
A program that creates two threads. One thread reads the remote socket and prints. The other thread reads the keyboard and sends to the remote socket.
A program that forks two subprocesses. One subprocess reads the remote socket and prints. The other subprocess reads the keyboard and sends to the remote socket.

A: Well, well, here's what I am having at this very moment.
Server goes like this:
import asyncore
import socket

clients = {}

class MainServerSocket(asyncore.dispatcher):
    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.bind(('',port))
        self.listen(5)
    def handle_accept(self):
        newSocket, address = self.accept( )
        clients[address] = newSocket
        print "Connected from", address
        SecondaryServerSocket(newSocket)

class SecondaryServerSocket(asyncore.dispatcher_with_send):
    def handle_read(self):
        receivedData = self.recv(8192)
        if receivedData:
            every = clients.values()
            for one in every:
                one.send(receivedData+'\n')
        else: self.close( )
    def handle_close(self):
        print "Disconnected from", self.getpeername( )
        one = self.getpeername( )
        del clients[one]

MainServerSocket(21567)
asyncore.loop( )

And client goes just like this:
from Tkinter import *
from socket import *
import thread

HOST = 'localhost'
PORT = 21567
BUFSIZE = 1024
ADDR = (HOST, PORT)

tcpCliSock = socket(AF_INET, SOCK_STREAM)
tcpCliSock.connect(ADDR)

class Application(Frame):
    def __init__(self, master):
        Frame.__init__(self, master)
        self.grid()
        self.create_widgets()
        self.socket()

    def callback(self, event):
        message = self.entry_field.get()
        tcpCliSock.send(message)

    def create_widgets(self):
        self.messaging_field = Text(self, width = 110, height = 20, wrap = WORD)
        self.messaging_field.grid(row = 0, column = 0, columnspan = 2, sticky = W)

        self.entry_field = Entry(self, width = 92)
        self.entry_field.grid(row = 1, column = 0, sticky = W)
        self.entry_field.bind('<Return>', self.callback)

    def add(self, data):
        self.messaging_field.insert(END, data)

    def socket(self):
        def loop0():
            while 1:
                data = tcpCliSock.recv(BUFSIZE)
                if data: self.add(data)

        thread.start_new_thread(loop0, ())



root = Tk()
root.title("Chat client")
root.geometry("550x260")

app = Application(root)

root.mainloop()

Now it's time to make the code look better and add some functionality.
Thanks for your help, folks!
A: You should use select.
Check:

Another select link
howto

A: I wrote one in async I/O... it's a lot easier to wrap your head around than a full threading model.
if you can get your hands ahold of "talk"'s source code, you can learn a lot about it. see a demo http://dsl.org/cookbook/cookbook_40.html#SEC559 , or try it yourself if you are on a linux box...
it sends characters in real-time.
also, ytalk is interactive and multiple users.... kinda like huddlechat or campfire.
How to make a simple command-line chat in Python?
I study network programming and would like to write a simple command-line chat in Python. I'm wondering how to make receiving constant while keeping input available for sending at any time. As you see, this client can do only one job at a time:
from socket import *

HOST = 'localhost'
PORT = 21567
BUFSIZE = 1024
ADDR = (HOST, PORT)

tcpCliSock = socket(AF_INET, SOCK_STREAM)
tcpCliSock.connect(ADDR)

while 1:
    data = raw_input('> ')
    if not data: break
    tcpCliSock.send(data)
    data = tcpCliSock.recv(BUFSIZE)
    if not data: break
    print data

tcpCliSock.close()

So if another client sends a message, this client will only receive it after sending a message too. I bet you understand me. I have googled for the matter and found out many interesting things such as asynchronous I/O, threading, non-blocking synchronization, concurrent programming and so on. I have also installed the twisted package. In brief, I've been learning all these things but haven't yet found what I was looking for. (Of course, I will keep trying and trying until I get to the point.) So, my question is: how do I make that? =)
[ "Your question was not very coherent. However, your program does not need to be asynchronous at all to attain what you are asking for.\nThis is a working chat script you originally wanted with minimal changes. It uses 1 thread for receiving and 1 for sending, both using blocking sockets. It is far simpler than using asynchronous methods.\nfrom socket import *\nfrom threading import Thread\nimport sys\n\nHOST = 'localhost'\nPORT = 21567\nBUFSIZE = 1024\nADDR = (HOST, PORT)\n\ntcpCliSock = socket(AF_INET, SOCK_STREAM)\ntcpCliSock.connect(ADDR)\n\ndef recv():\n while True:\n data = tcpCliSock.recv(BUFSIZE)\n if not data: sys.exit(0)\n print data\n\nThread(target=recv).start()\nwhile True:\n data = raw_input('> ')\n if not data: break\n tcpCliSock.send(data)\n\ntcpCliSock.close()\n\n", "If you want to code it from scratch select is the way to go (and you can read on Google Book Search most of the chapter of Python in a Nutshell that covers such matters); if you want to leverage more abstraction, asyncore is usable, but Twisted is much richer and more powerful.\n", "Chat programs are doing two things concurrently.\n\nWatching the local user's keyboard and sending to the remote user (via a socket of some kind)\nWatching the remote socket and displaying what they type on the local console.\n\nYou have several ways to do this.\n\nA program that opens socket and keyboard and uses the select module to see which one has input ready.\nA program that creates two threads. One threads reads the remote socket and prints. The other thread reads the keyboard and sends to the remote socket.\nA program that forks two subprocesses. One subprocess reads the remote socket and prints. The other subprocess reads the keyboard and sends to the remote socket.\n\n", "Well, well, here's what I am having at this very moment.\nServer goes like this:\nimport asyncore\nimport socket\n\nclients = {}\n\nclass MainServerSocket(asyncore.dispatcher):\n def __init__(self, port):\n asyncore.dispatcher.__init__(self)\n self.create_socket(socket.AF_INET, socket.SOCK_STREAM)\n self.bind(('',port))\n self.listen(5)\n def handle_accept(self):\n newSocket, address = self.accept( )\n clients[address] = newSocket\n print \"Connected from\", address\n SecondaryServerSocket(newSocket)\n\nclass SecondaryServerSocket(asyncore.dispatcher_with_send):\n def handle_read(self):\n receivedData = self.recv(8192)\n if receivedData:\n every = clients.values()\n for one in every:\n one.send(receivedData+'\\n')\n else: self.close( )\n def handle_close(self):\n print \"Disconnected from\", self.getpeername( )\n one = self.getpeername( )\n del clients[one]\n\nMainServerSocket(21567)\nasyncore.loop( )\n\nAnd client goes just like this:\nfrom Tkinter import *\nfrom socket import *\nimport thread\n\nHOST = 'localhost'\nPORT = 21567\nBUFSIZE = 1024\nADDR = (HOST, PORT)\n\ntcpCliSock = socket(AF_INET, SOCK_STREAM)\ntcpCliSock.connect(ADDR)\n\nclass Application(Frame):\n def __init__(self, master):\n Frame.__init__(self, master)\n self.grid()\n self.create_widgets()\n self.socket()\n\n def callback(self, event):\n message = self.entry_field.get()\n tcpCliSock.send(message)\n\n def create_widgets(self):\n self.messaging_field = Text(self, width = 110, height = 20, wrap = WORD)\n self.messaging_field.grid(row = 0, column = 0, columnspan = 2, sticky = W)\n\n self.entry_field = Entry(self, width = 92)\n self.entry_field.grid(row = 1, column = 0, sticky = W)\n self.entry_field.bind('<Return>', self.callback)\n\n def add(self, data):\n 
self.messaging_field.insert(END, data)\n\n def socket(self):\n def loop0():\n while 1:\n data = tcpCliSock.recv(BUFSIZE)\n if data: self.add(data)\n\n thread.start_new_thread(loop0, ())\n\n\n\nroot = Tk()\nroot.title(\"Chat client\")\nroot.geometry(\"550x260\")\n\napp = Application(root)\n\nroot.mainloop()\n\nNow it's time to make the code look better and add some functionality.\nThanks for your help, folks!\n", "You should use select.\nCheck:\n\nAnother select link\nhowto\n\n", "I wrote one in async I/O... its a lot easier to wrap your head around than a full threading model.\nif you can get your hands ahold of \"talk\"'s source code, you can learn a lot about it. see a demo http://dsl.org/cookbook/cookbook_40.html#SEC559 , or try it your self if you are on a linux box...\nit sends characters in real-time.\nalso, ytalk is interactive and multiple users.... kinda like hudddlechat or campfire.\n" ]
[ 8, 5, 2, 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001020839_python.txt
Q: How to pickle a CookieJar?
I have an object with a CookieJar that I want to pickle. However as you all probably know, pickle chokes on objects that contain lock objects. And for some horrible reason, a CookieJar has a lock object. 
from cPickle import dumps
from cookielib import CookieJar

class Person(object):
    def __init__(self, name):
        self.name = name
        self.cookies = CookieJar()

bob = Person("bob")
dumps(bob)
# Traceback (most recent call last):
#  File "<stdin>", line 1, in <module>
# cPickle.UnpickleableError: Cannot pickle <type 'thread.lock'> objects

How do I persist this? The only solution I can think of is to use FileCookieJar.save and FileCookieJar.load to a stringIO object. But is there a better way?
A: Here is an attempt, deriving a class from CookieJar that overrides the __getstate__/__setstate__ hooks used by pickle. I haven't used CookieJar, so I don't know if it is usable, but you can dump the derived class:
from cPickle import dumps
from cookielib import CookieJar
import threading

class MyCookieJar(CookieJar):
    def __getstate__(self):
        state = self.__dict__.copy()
        del state['_cookies_lock']
        return state

    def __setstate__(self, state):
        self.__dict__ = state
        self._cookies_lock = threading.RLock()

class Person(object):
    def __init__(self, name):
        self.name = name
        self.cookies = MyCookieJar()

bob = Person("bob")
print dumps(bob)

A: CookieJar is not particularly well-designed for persisting (that's what the FileCookieJar subclasses are mostly about!-), but you can iterate over a CookieJar instance to get all cookies (and persist a list of those, for example), and, to rebuild the jar given the cookies, use set_cookie on each. That's how I would set about persisting and unpersisting cookie jars, using the copy_reg method to register the appropriate functions if I needed to use them often.
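A sketch of the second answer's idea -- registering pickle support for CookieJar itself via copy_reg so that a plain dumps(bob) works (Python 2; it assumes the individual Cookie objects pickle cleanly, which they should, being plain attribute holders):
import copy_reg
from cookielib import CookieJar

def rebuild_cookiejar(cookies):
    # Constructor used at unpickling time: refill a fresh jar.
    jar = CookieJar()
    for cookie in cookies:
        jar.set_cookie(cookie)
    return jar

def reduce_cookiejar(jar):
    # Pickle a jar as the list of its cookies plus the rebuild function.
    return rebuild_cookiejar, (list(jar),)

copy_reg.pickle(CookieJar, reduce_cookiejar)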
How to pickle a CookieJar?
I have an object with a CookieJar that I want to pickle. However as you all probably know, pickle chokes on objects that contain lock objects. And for some horrible reason, a CookieJar has a lock object. from cPickle import dumps from cookielib import CookieJar class Person(object): def __init__(self, name): self.name = name self.cookies = CookieJar() bob = Person("bob") dumps(bob) # Traceback (most recent call last): # File "<stdin>", line 1, in <module> # cPickle.UnpickleableError: Cannot pickle <type 'thread.lock'> objects How do I persist this? The only solution I can think of is to use FileCookieJar.save and FileCookieJar.load to a stringIO object. But is there a better way?
[ "Here is an attempt, by deriving a class from CookieJar, which override getstate/setstate used by pickle. I haven't used cookieJar, so don't know if it is usable but you can dump derived class\nfrom cPickle import dumps\nfrom cookielib import CookieJar\nimport threading\n\nclass MyCookieJar(CookieJar):\n def __getstate__(self):\n state = self.__dict__.copy()\n del state['_cookies_lock']\n return state\n\n def __setstate__(self, state):\n self.__dict__ = state\n self._cookies_lock = threading.RLock()\n\nclass Person(object):\n def __init__(self, name):\n self.name = name\n self.cookies = MyCookieJar()\n\nbob = Person(\"bob\")\nprint dumps(bob)\n\n", "CookieJar is not particularly well-designed for persisting (that's what the FileCookieJar subclasses are mostly about!-), but you can iterate over a CookieJar instance to get all cookies (and persist a list of those, for example), and, to rebuild the jar given the cookies, use set_cookie on each. That's how I would set about persisting and unpersisting cookie jars, using the copy_reg method to register the appropriate functions if I needed to use them often.\n" ]
[ 9, 7 ]
[]
[]
[ "cookiejar", "cookielib", "persistence", "pickle", "python" ]
stackoverflow_0001023224_cookiejar_cookielib_persistence_pickle_python.txt
Q: Is there any pywin32 odbc connector documentation available?
What is a good pywin32 odbc connector documentation and tutorial on the web?
A: Alternatives:

mxODBC by egenix.com (if you need ODBC)
pyODBC
sqlalchemy and DB-API 2.0 modules (which isn't ODBC), but maybe a better alternative 

A: The answer is: 'there isn't one'. However, here is an example that shows how to open a connection and issue a query, and how to get column metadata from the result set. The DB API 2.0 specification can be found in PEP 249.
import dbi, odbc

SQL2005_CS_TEMPLATE="""\
Driver={SQL Native Client};
Server=%(sql_server)s;
Database=%(sql_db)s;
Trusted_Connection=yes;
"""

CONN_PARAMS = {'sql_server': 'foo',
               'sql_db': 'bar'}

query = "select foo from bar"

db = odbc.odbc(SQL2005_CS_TEMPLATE % CONN_PARAMS)
c = db.cursor()
c.execute (query)
rs = c.fetchall() # see also fetchone() and fetchmany()
# looping over the results
for r in rs:
    print r

#print the name of column 0 of the result set
print c.description[0][0]

#print the type, length, precision etc of column 1.
print c.description[1][1:5]

db.close()

A: The only 'documentation' that I found was a unit test that was installed with the pywin32 package. It seems to give an overview of the general functionality. I found it here:
python dir\Lib\site-packages\win32\test\test_odbc.py
I should also point out that I believe it implements the Python Database API Specification v1.0, which is documented here:
http://www.python.org/dev/peps/pep-0248/
Note that there is also V2.0 of this specification (see PEP 249)
On a side note, I've been trying to use pywin32 odbc, but I've had problems with intermittent crashing with the ODBC driver I'm using. I've recently moved to pyodbc and my issues were resolved.
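Since the last answer mentions moving to pyodbc, a sketch of the same query there (connection-string values as in the example above):
import pyodbc

conn = pyodbc.connect('Driver={SQL Native Client};'
                      'Server=foo;Database=bar;Trusted_Connection=yes;')
cur = conn.cursor()
cur.execute('select foo from bar')
for row in cur.fetchall():
    print row
conn.close()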
Is there any pywin32 odbc connector documentation available?
What is a good pywin32 odbc connector documentation and tutorial on the web?
[ "Alternatives:\n\nmxODBC by egenix.com (if you need ODBC)\npyODBC\nsqlalchemy and DB-API 2.0 modules (which isn't ODBC) but it's maybe better alternative \n\n", "The answer is: 'there isn't one'. However, here is an example that shows how to open a connection and issue a query, and how to get column metadata from the result set. The DB API 2.0 specification can be found in PEP 249.\nimport dbi, odbc\n\nSQL2005_CS=TEMPLATE=\"\"\"\\\nDriver={SQL Native Client};\nServer=%(sql_server)s;\nDatabase=%(sql_db)s;\nTrusted_Connection=yes;\n\"\"\"\n\nCONN_PARAMS = {'sql_server': 'foo',\n 'sql_db': 'bar'}\n\nquery = \"select foo from bar\"\n\ndb = odbc.odbc(SQL2005_CS_TEMPLATE % CONN_PARAMS)\nc = db.cursor()\nc.execute (query)\nrs = c.fetchall() # see also fetchone() and fetchmany()\n# looping over the results\nfor r in rs:\n print r\n\n#print the name of column 0 of the result set\nprint c.description[0][0]\n\n#print the type, length, precision etc of column 1.\nprint c.description[1][1:5]\n\ndb.close()\n\n", "The only 'documentation' that I found was a unit test that was installed with the pywin32 package. It seems to give an overview of the general functionality. I found it here:\npython dir\\Lib\\site-packages\\win32\\test\\test_odbc.py\nI should also point out that I believe it is implements the Python Database API Specification v1.0, which is documented here:\nhttp://www.python.org/dev/peps/pep-0248/\nNote that there is also V2.0 of this specification (see PEP-2049)\nOn a side note, I've been trying to use pywin32 odbc, but I've had problems with intermittent crashing with the ODBC driver I'm using. I've recently moved to pyodbc and my issues were resolved.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "odbc", "pyodbc", "python", "windows" ]
stackoverflow_0000768250_odbc_pyodbc_python_windows.txt
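Since the last answer above ends by recommending pyodbc without showing it, here is a minimal hedged sketch of the same query through pyodbc; the connection string and table name are placeholders carried over from the example, not verified settings:

import pyodbc

conn = pyodbc.connect('DRIVER={SQL Native Client};SERVER=foo;DATABASE=bar;Trusted_Connection=yes;')
cursor = conn.cursor()
cursor.execute("select foo from bar")
for row in cursor.fetchall():
    print row
# pyodbc exposes the same DB-API 2.0 column metadata:
print cursor.description[0][0]
conn.close()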
Q: Working with django and sqlalchemy but backend mysql I am working with Python's Django framework. My models use SQLAlchemy and my back-end database is MySQL. How do I configure them to work together?
Working with django and sqlalchemy but backend mysql
I am working with Python's Django framework. My models use SQLAlchemy and my back-end database is MySQL. How do I configure them to work together?
[ "Some links that might help you:\n\nhttp://lethain.com/entry/2008/jul/23/replacing-django-s-orm-with-sqlalchemy/\nhttp://code.google.com/p/django-sqlalchemy/\nhttp://adam.gomaa.us/blog/2007/aug/26/the-django-orm-problem/\nhttp://gitorious.org/django-sqlalchemy\n\n", "See Django database installation,\n\nIf you’re using MySQL, you’ll need MySQLdb, version 1.2.1p2 or higher. You will also want to read the database-specific notes for the MySQL backend.\n\nAnd the MySQL notes,\n\nDjango expects the database to support transactions, referential integrity, and Unicode (UTF-8 encoding). Fortunately, MySQL has all these features as available as far back as 3.23. While it may be possible to use 3.23 or 4.0, you'll probably have less trouble if you use 4.1 or 5.0.\n\n" ]
[ 1, 0 ]
[]
[]
[ "django", "mysql", "python", "sqlalchemy" ]
stackoverflow_0001023417_django_mysql_python_sqlalchemy.txt
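For the SQLAlchemy half of the question, a minimal engine setup against MySQL looks roughly like this; the credentials and database name are placeholders, and it assumes the MySQLdb driver mentioned in the Django notes above is installed:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('mysql://user:password@localhost/mydb')
Session = sessionmaker(bind=engine)
session = Session()  # use this session for SQLAlchemy-mapped models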
Q: Why does mass importing not work but importing definitions individually works? So I just ran into a strange bug: this works in my other .py files, but in this file it suddenly stopped working. from tuttobelo.management.models import * The above used to work, but it stopped working all of a sudden, and I had to replace it with the bottom. from tuttobelo.management.models import Preferences, ProductVariant, UserSeller, ProductOwner, ProductModel, ProductVariant from tuttobelo.management.models import ProductMeta, ShippingMethods I know the following is the better way of coding, however ALL of the models mentioned in models are used, so my question is: what could make the wildcard import stop working? The error I got was that the model I was trying to import does not exist; only by removing the wildcard and importing the model by name could I get it imported properly. Thanks! A: Maybe the models module has an __all__ which does not include what you're looking for. Anyway, from ... import * is never a good idea in production code -- we always meant the import * feature for interactive exploratory use, not production use. Specifically import the module you need -- use that name to qualify names that belong there -- and you'll be vastly happier in the long run!-) A: There are some cases in Python where importing with * will not yield anything. In your example, if tuttobelo.management.models is a package (i.e. a directory with an __init__.py) with the files Preferences.py, ProductVariant.py, etc in it, importing with star will not work, unless the submodules have already been imported explicitly somewhere else. This can be solved by putting in the __init__.py: __all__ = ['Preferences', 'ProductVariant', 'UserSeller', <etc...> ] This will make it possible to do import * again, but as noted, that's a horrible coding style for several reasons. One, tools like pyflakes and pylint, and code introspection in your editor, stop working. Two, you end up putting a lot of names in the local namespace without knowing in your code where they come from, and you can get name clashes this way. A better way is to do from tuttobelo.management import models And then refer to the other things by models.Preferences, models.ProductVariant etc. This however will not work with the __all__ variable. Instead you need to import the modules from the __init__.py: import Preferences, ProductVariant, UserSeller, ProductOwner, <etc...> The drawback of this is that all modules get imported even if you don't use them, which means it will take more memory.
Why does mass importing not work but importing definitions individually works?
So I just ran into a strange bug: this works in my other .py files, but in this file it suddenly stopped working. from tuttobelo.management.models import * The above used to work, but it stopped working all of a sudden, and I had to replace it with the bottom. from tuttobelo.management.models import Preferences, ProductVariant, UserSeller, ProductOwner, ProductModel, ProductVariant from tuttobelo.management.models import ProductMeta, ShippingMethods I know the following is the better way of coding, however ALL of the models mentioned in models are used, so my question is: what could make the wildcard import stop working? The error I got was that the model I was trying to import does not exist; only by removing the wildcard and importing the model by name could I get it imported properly. Thanks!
[ "Maybe the models module has an __all__ which does not include what you're looking for. Anyway, from ... import * is never a good idea in production code -- we always meant the import * feature for interactive exploratory use, not production use. Specifically import the module you need -- use that name to qualify names that belong there -- and you'll be vastly happier in the long run!-)\n", "There are some cases in Python where importing with * will not yield anything. In your example, if tuttobelo.management.models is a package (i.e. a directory with an __init__.py) with the files Preferences.py, ProductVariant.py, etc in it, importing with star will not work, unless you already have imported it explicitly somewhere else.\nThis can be solved by putting in the __init__.py:\n__all__ = ['Preferences', 'ProductVariant', 'UserSeller', <etc...> ]\n\nThis will make it possible to do import * again, but as noted, that's a horrible coding style for several reasons. One, tools like pyflakes and pylint, and code introspection in your editor, stops working. Secondly, you end up putting a lot of names in the local namespace, which in your code you don't know where they come from, and secondly you can get clashes in names like this.\nA better way is to do\nfrom tuttobelo.management import models\n\nAnd then refer to the other things by models.Preferences, models.ProductVariant etc. This however will not work with the __all__ variable. Instead you need to import the modules from the __init__.py:\nimport Preferences, ProductVariant, UserSeller, ProductOwner, <etc...>\n\nThe drawback of this is that all modules get imported even if you don't use them, which means it will take more memory.\n" ]
[ 4, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001023326_django_python.txt
Q: Is it possible to match a character repetition with regex? How? Question: Is it possible, with regex, to match a word that contains the same character in different positions? Condition: All words have the same length, you know the character positions (for example the 1st, the 2nd and the 4th) of the repeated char, but you don't know what it is. Examples: using lowercase 6char words I'd like to match words where the 3rd and the 4th chars are the same. parrot <- match for double r follia <- match for double l carrot <- match for double r mattia <- match for double t rettoo <- match for double t melone <- doesn't match I can't use the quantifier [\d]{2} because it matches any succession of two chars, and what if I say the 2nd and the 4th position instead of 3rd and 4th? Is it possible to do what I want with regex? If yes, how can I do that? EDIT: As asked in the comments, I'm using Python A: You can use a backreference to do this: (.)\1 This will match consecutive occurrences of any character. Edit   Here’s a Python example: import re regexp = re.compile(r"(.)\1") data = ["parrot","follia","carrot","mattia","rettoo","melone"] for str in data: match = re.search(regexp, str) if match: print str, "<- match for double", match.group(1) else: print str, "<- doesn't match" A: You need to use back references for such cases. I am not sure which language you are using, I tried the following example in my VI editor to search for any repeated letter. Pattern Regex: \([a-z]\)\1 If you see the example, [a-z] is the pattern you are searching for, and enclose that inside the parenthesis (the parentheses should be escaped in some languages). Once you have a parenthesis, it is a group and can be referred again anywhere in the regex by using \1. If there is more than one group, you can use \1, \2 etc. \1 will be replaced by whatever was matched in the first group. Thanks Arvind A: /(\b\w*?(\w)\2.*?\b)/ will match any word with at least one character repetition, $1 being the word and $2 the repeated character. A: Yes, you can use a backreference construct to match the double letters. The regular expression (?<char>\w)\k<char>, using named groups and backreferencing, searches for adjacent paired characters. When applied to the string "I'll have a small coffee," it finds matches in the words "I'll", "small", and "coffee". The metacharacter \w finds any single-word character. The grouping construct (?<char>) encloses the metacharacter to force the regular expression engine to remember a subexpression match (which, in this case, will be any single character) and save it under the name "char". The backreference construct \k<char> causes the engine to compare the current character to the previously matched character stored under "char". The entire regular expression successfully finds a match wherever a single character is the same as the preceding character.
Is it possible to match a character repetition with regex? How?
Question: Is it possible, with regex, to match a word that contains the same character in different positions? Condition: All words have the same length, you know the character positions (for example the 1st, the 2nd and the 4th) of the repeated char, but you don't know what it is. Examples: using lowercase 6char words I'd like to match words where the 3rd and the 4th chars are the same. parrot <- match for double r follia <- match for double l carrot <- match for double r mattia <- match for double t rettoo <- match for double t melone <- doesn't match I can't use the quantifier [\d]{2} because it matches any succession of two chars, and what if I say the 2nd and the 4th position instead of 3rd and 4th? Is it possible to do what I want with regex? If yes, how can I do that? EDIT: As asked in the comments, I'm using Python
[ "You can use a backreference to do this:\n(.)\\1\n\nThis will match consecutive occurrences of any character.\n\nEdit   Here’s some Python example:\nimport re\n\nregexp = re.compile(r\"(.)\\1\")\ndata = [\"parrot\",\"follia\",\"carrot\",\"mattia\",\"rettoo\",\"melone\"]\n\nfor str in data:\n match = re.search(regexp, str)\n if match:\n print str, \"<- match for double\", match.group(1)\n else:\n print str, \"<- doesn't match\"\n\n", "You need to use back references for such cases. I am not sure which language you are using, I tried the following example in my VI editor to search for any alphabet repeating.\n Pattern Regex: \\([a-z]\\)\\1 \nIf you see the example, [a-z] is the pattern you are searching for, and enclose that inside the paranthesis (the parantheses should be escaped in some languages). Once you have a paranthesis, it is a group and can be referred again anywhere in the regex by using \\1. If there is more than one group, you can use \\1, \\2 etc. \\1 will be replaced by whatever was matched in the first group.\nThanks\nArvind\n", "/(\\b\\w*?(\\w)\\2.*?\\b)/\nwill match any word with atleast on character repetition\n$1 being the word \n$2 the first repetition.\n", "Yes, you can use backreference construct to match the double letters.\nThe regular expression (?<char>\\w)\\k<char>, using named groups and backreferencing, searches for adjacent paired characters. When applied to the string \"I'll have a small coffee,\" it finds matches in the words \"I'll\", \"small\", and \"coffee\". The metacharacter \\w finds any single-word character. The grouping construct (?<char>) encloses the metacharacter to force the regular expression engine to remember a subexpression match (which, in this case, will be any single character) and save it under the name \"char\". The backreference construct \\k<char> causes the engine to compare the current character to the previously matched character stored under \"char\". The entire regular expression successfully finds a match wherever a single character is the same as the preceding character.\n" ]
[ 50, 8, 2, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001023902_python_regex.txt
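None of the answers above handles the asker's actual case of repeats at known, not necessarily adjacent, positions, so here is a small illustrative sketch; it assumes 6-character lowercase words as in the examples, and the patterns are built from the stated positions:

import re

# 3rd and 4th characters equal
pos_3_4 = re.compile(r'^[a-z]{2}([a-z])\1[a-z]{2}$')
# 2nd and 4th characters equal
pos_2_4 = re.compile(r'^[a-z]([a-z])[a-z]\1[a-z]{2}$')

for word in ["parrot", "follia", "melone"]:
    print word, bool(pos_3_4.match(word))  # True, True, False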
Q: What's a good beginner setup for C++/Python on OSX? I'm looking for a good setup for learning C++ and eventually Python on Mac OSX. As I'm going to use C++ I don't want to use XCode, as (I understand) this is primarily used with Objective-C. I have a small bit of experience in Java and MATLAB programming, and math is probably not going to be my main problem. I was thinking of an approach looking something like this: Work through Accelerated C++. Write a couple of small math-programs; something like the Mandelbrot set, a PDE-solver, or a graphing-app. This would be done using a widget toolkit. Write a small game with really crappy graphics. This is probably going to be a rip-off of Jetmen Revival or Space Invaders ;-) (When I'm fed up with the game not working), work my way through Core Python. Repeat steps 2 and 3 in Python. I'm thinking about going with Eclipse and GTK+ / X11. Any thoughts on IDE's and GUI toolkits? Preferably open source, and definitely free. And what do you think about the 5 steps? Any help would be much appreciated - thanks in advance! A: As I'm going to use C++ I don't want to use XCode, as (I understand) this is primarily used with Objective-C. XCode is a fine choice, even for pure C++ solutions. Work through Accelerated C++. That's the book that got me started! It's an excellent choice, but not a walk in the park. It took me a month or two, at a rate of about 1 to 2 hours a day. But after this you'll have made a MAJOR jump towards becoming a really good C++ programmer. Write a couple of small math-programs; something like the Mandelbrot set, a PDE-solver, or a graphing-app. This would be done using a widget toolkit. Write a small game with really crappy graphics. This is probably going to be a rip-off of Jetmen Revival or Space Invaders ;-) (When I'm fed up with the game not working), work my way through Core Python (this is a book; max. one link/question for new users...). Fine, I did Tetris. Repeat steps 2 and 3 in Python. I have no experience using Python, but I know it's a much easier language to master than C++. So if you can master C++, Python won't be any problem. For GUI you could use Qt, especially now it has been made LGPL. However, Cocoa is interesting as well, if you feel courageous enough to also learn Objective-C :) (Btw, there is a Python port for Cocoa as well.) A: XCode is a mature IDE well suited to almost any language. C++ is particularly well supported. Apparently GTK+ has native OSX widget support, though I've never used it, so you could skip the X11 stack altogether if you desired. Other cross platform widget sets include wxWidgets, fltk and Tk. For games, though, they are less than optimal. For this I strongly recommend LibSDL or its python binding, PyGame. These can provide a convenient, standard interface to OpenGL if you want to use that, or you can use hardware accelerated 2d primitives if that's all you need. A: When choosing an IDE, it's very much a matter of taste, so the best choice is probably to try out several for a day or two each. Eclipse and XCode are both popular choices that surely are excellent in their own ways. I can't help you with the widgets, as I know very little about that. GTK+ is a popular framework, but the native OS X support wasn't ready last time I checked, but development is ongoing so this could have changed. Qt is less popular, but is nowadays completely open source, so the licensing issues it used to have are solved now, so you might want to look into that as well. wxWidgets is popular in Python and I found it easy to use, though I don't know if it's as good as the other ones; it may very well be. As for the five steps, it makes much more sense to do them in Python first. Python is easy to learn and master, especially if you are NOT tainted by C/C++. C/C++ programmers often have to unlearn things, as there are so many things you must do and think of that you don't have to bother with in Python. With Python you can concentrate on learning the libraries and tools, instead of having to learn how to not shoot yourself in the foot with C++. Learn C++ afterwards, and you'll have a nicer smoother learning curve, and enjoy yourself more. A: I'd definitely go GTK+. It is very easy. I'm not sure about graphics libraries on OS X. I know OS X primarily uses Objective-C, but if the native graphics library can be used from C++, use that for game graphics. As far as IDEs, I don't know. I use GNU Emacs, but I wouldn't recommend that to a beginner. Learning how to use Emacs is like learning a new programming language all by itself. I would start with a basic text editor (look up one with syntax highlighting) and compiling from the terminal for now, so you don't have to learn an IDE too. They make huge projects easier, but can be a PITA for little things. A: You can use VIM with cscope and ctags plugins for C++, I personally find that to be the fastest. Eclipse for C++ is also good if you need a gui, but it is not as feature rich as it is for Java, but it is a good open source IDE. In terms of books, Effective C++ and More Effective C++ are good. A: Work through Accelerated C++. Write a couple of small math-programs; something like the Mandelbrot set, a PDE-solver, or a graphing-app. This would be done using a widget toolkit. Write a small game with really crappy graphics. This is probably going to be a rip-off of Jetmen Revival or Space Invaders ;-) (When I'm fed up with the game not working), work my way through Core Python (this is a book; max. one link/question for new users...). Repeat steps 2 and 3 in Python. Might I recommend doing this in reverse order with respect to languages? Bear in mind that GTK+ isn't trivial to learn, neither is C++. In fact, I'd really recommend starting out with Cocoa and PyObjC first. Cocoa is a little bit more to wrap your head around, but once you get it down, it's very easy to see its benefit. A development setup of GTK and PyGTK can be a PITA to set up on OS X (at least it was for me). A: NetBeans is another choice. Although both Python and C++ support are rather new for it. The Python support works fine, but I haven't tried the C++ support.
What's a good beginner setup for C++/Python on OSX?
I'm looking for a good setup for learning C++ and eventually Python on Mac OSX. As I'm going to use C++ I don't want to use XCode, as (I understand) this is primarily used with Objective-C. I have a small bit of experience in Java and MATLAB programming, and math is probably not going to be my main problem. I was thinking of an approach looking something like this: Work through Accelerated C++. Write a couple of small math-programs; something like the Mandelbrot set, a PDE-solver, or a graphing-app. This would be done using a widget toolkit. Write a small game with really crappy graphics. This is probably going to be a rip-off of Jetmen Revival or Space Invaders ;-) (When I'm fed up with the game not working), work my way through Core Python. Repeat steps 2 and 3 in Python. I'm thinking about going with Eclipse and GTK+ / X11. Any thoughts on IDE's and GUI toolkits? Preferably open source, and definitely free. And what do you think about the 5 steps? Any help would be much appreciated - thanks in advance!
[ "\nAs I'm going use C++ I don't\n want to use XCode, as (I understand)\n this is primarily used with\n Objective-C.\n\nXCode is a fine choice, even for pure C++ solutions.\n\nWork through Accelerated C++.\n\nThat's the book that got me started! It's an excellent choice, but not a walk in the park. It took me a month or two, at a rate of about 1 to 2 hours a day. But after this you'll have made a MAJOR jump towards becoming a really good C++ programmer.\n\nWrite a couple of small math-programs;\n something like the Mandelbrot set, a\n PDE-solver, or a graphing-app. This\n would be done using a widget toolkit.\n Write a small game with really crappy\n graphics. This is probably going to be\n a rip-off of Jetmen Revival or Space\n Invaders ;-) (When I'm fed up with the\n game not working), work my way through\n Core Python (this is a book; max. one\n link/question for new users...).\n\nFine, I did Tetris.\n\nRepeat steps 2 and 3 in Python.\n\nI have no experience using Python, but I know it's a much easier language to master than C++. So if you can master C++, Python's won't be any problem.\nFor GUI you could use Qt, especially now it has been made LGPL. However, Cocoa is interesting as well, if feel courageous enough to also learn Objective-C :) (Btw, there is a Python port for Cocoa as well.)\n", "XCode is a mature IDE well suited to almost any language. C++ is particularly well supported. \nApparently GTK+ has native OSX widget support, though I've never used it, so you could skip the X11 stack altogether if you desired. Other cross platform widget sets include wxWidgets, fltk and Tk.\nFor games, though, they are less than optimal. for this I strongly recommend LibSDL or its python binding, PyGame. These can provide a convenient, standard interface to OpenGL if you want to use that, or you can use hardware accelerated 2d primitives if that's all you need. \n", "When choosing an IDE, it's very much a matter of taste, so the best choice is probably to try out several for a day or two each. Eclipse and XCode are both popular choices that surely are excellent in their own ways. I can't help you with the widgets, as I know very little about that. GTK+ is a popular framework, but the native OS X support wasn't ready last time I checked, but development is ongoing so this could have changed. Qt is less popular, but is nowadays completely open source, so the licensing issues it used to have are solved now, so you might want to look into that as well. wxWidgets are popular in Python and I found it easy to use, but I don't know if it's as good as the other ones, but it may very well be.\nAs for the five steps, it makes much more sense to do them in Python first. Python is easy to learn and master, especially if you are NOT tainted by C/C++. C/C++ programmers often has to unlearn things, as there are so many things you must do and think of that you don't have to bother with in Python. \nWith Python you can concentrate on learning the libraries and tools, instead of having to learn how to not shoot yourself in the foot with C++. Learn C++ afterwards, and you'll have a nicer smoother learning curve, and enjoy yourself more.\n", "I'd definitely go GTK+. It is very easy. I'm not sure about graphics libraries on OS X. I know OS X primarily uses Objective-C, but if the native graphics library can be used from C++, use that for game graphics.\nAs far as IDEs, I don't know. I use GNU Emacs, but I wouldn't recommend that to a beginner. 
Learning how to use Emacs is like learning a new programming language all by itself. I would start with a basic text editor (look up one with syntax highlighting) and compiling from the terminal for now, so you don't have to learn an IDE too. They make huge projects easier, but can be a PITA for little things.\n", "You can use VIM with cscope and ctags plugins for C++, I personally find that to be the fastest. Eclipse for C++ is also good if you need a gui, but it is not as feature rich as it is for Java but it is a good open source IDE. \nIn terms of books, Effective C++ and More Effective C++ are good.\n", "\n\nWork through Accelerated C++.\nWrite a couple of small math-programs; something like the\n Mandelbrot set, a PDE-solver, or a\n graphing-app. This would be done using\n a widget toolkit.\nWrite a small game with really crappy graphics. This is probably\n going to be a rip-off of Jetmen\n Revival or Space Invaders ;-)\n(When I'm fed up with the game not working), work my way through Core\n Python (this is a book; max. one\n link/question for new users...).\nRepeat steps 2 and 3 in Python.\n\n\nMight I recommend doing this in reverse order with respect to languages? Bear in mind that GTK+ isn't trivial to learn, neither is C++. In fact, I'd really recommend starting out with Cocoa and PyObjC first. Cocoa is a little bit more to wrap your head around, but once you get it down, it's very easy to see its benefit. A development setup of GTK and PyGTK can be a PITA to set up on OS X (at least it was for me).\n", "NetBeans is another choice. Although both Python and C++ support are rather new for it. The Python works find, but I haven't tried the C support.\n" ]
[ 3, 2, 2, 0, 0, 0, 0 ]
[]
[]
[ "c++", "ide", "macos", "python" ]
stackoverflow_0001024062_c++_ide_macos_python.txt
Q: What causes this Genshi Template Syntax Error? A Genshi template raises the following error: TemplateSyntaxError: invalid syntax in expression "${item.error}" of "choose" directive The part of the template code that the error specifies is the following ('feed' is a list of dictionaries which is passed to the template): <item py:for="item in feed"> <py:choose error="${item.error}"> <py:when error="0"> <title>${item.something}</title> </py:when> <py:otherwise> <title>${item.something}</title> </py:otherwise> </py:choose> </item> Basically, item.error holds either a '0' or a '1' and I want the output based on that. I am not sure where the error is - any help is appreciated. Thanks. A: The docs perhaps don't make this clear, but the attribute needs to be called test (as it is in their examples) instead of error. <item py:for="item in feed"> <py:choose test="item.error"> <py:when test="0"> <title>${item.something}</title> </py:when> <py:otherwise> <title>${item.something}</title> </py:otherwise> </py:choose> </item> A: I've never used Genshi, but based on the documentation I found, it looks like you're trying to use the inline Python expression syntax inside a template directive's argument, which seems to be unnecessary. Try this instead: <item py:for="item in feed"> <py:choose error="item.error"> <py:when error="0"> <title>${item.something}</title> </py:when> <py:otherwise> <title>${item.something}</title> </py:otherwise> </py:choose> </item>
What causes this Genshi Template Syntax Error?
A Genshi template raises the following error: TemplateSyntaxError: invalid syntax in expression "${item.error}" of "choose" directive The part of the template code that the error specifies is the following ('feed' is a list of dictionaries which is passed to the template): <item py:for="item in feed"> <py:choose error="${item.error}"> <py:when error="0"> <title>${item.something}</title> </py:when> <py:otherwise> <title>${item.something}</title> </py:otherwise> </py:choose> </item> Basically, item.error holds either a '0' or a '1' and I want the output based on that. I am not sure where the error is - any help is appreciated. Thanks.
[ "The docs perhaps don't make this clear, but the attribute needs to be called test (as it is in their examples) instead of error.\n<item py:for=\"item in feed\">\n<py:choose test=\"item.error\">\n <py:when test=\"0\">\n <title>${item.something}</title>\n </py:when>\n <py:otherwise>\n <title>${item.something}</title>\n </py:otherwise>\n</py:choose>\n</item>\n\n", "I've never used Genshi, but based on the documentation I found, it looks like you're trying to use the inline Python expression syntax inside a templates directives argument, which seems to be unneccesary. Try this instead:\n<item py:for=\"item in feed\">\n<py:choose error=\"item.error\">\n <py:when error=\"0\">\n <title>${item.something}</title>\n </py:when>\n <py:otherwise>\n <title>${item.something}</title>\n </py:otherwise>\n</py:choose>\n</item>\n\n" ]
[ 4, 0 ]
[]
[]
[ "genshi", "python", "syntax_error" ]
stackoverflow_0000811737_genshi_python_syntax_error.txt
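To see the corrected test attribute from the accepted answer in action, here is a minimal self-contained sketch; the Item class and integer error codes are illustrative assumptions, since the question leaves the exact type of item.error ambiguous:

from genshi.template import MarkupTemplate

class Item(object):
    def __init__(self, error, something):
        self.error = error
        self.something = something

tmpl = MarkupTemplate("""<channel xmlns:py="http://genshi.edgewall.org/">
<item py:for="item in feed">
<py:choose test="item.error">
<py:when test="0"><title>${item.something}</title></py:when>
<py:otherwise><title>error: ${item.something}</title></py:otherwise>
</py:choose>
</item>
</channel>""")

print tmpl.generate(feed=[Item(0, 'ok'), Item(1, 'broken')]).render('xml')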
Q: Python newbie: What does this code do? This is a snippet from the Google AppEngine tutorial. application = webapp.WSGIApplication([('/', MainPage)], debug=True) I'm not quite sure what debug=True does inside the constructor call. Does it create a local variable with name debug, assign True to it, and pass it to the constructor, or is this a way to set a class instance member variable's value in the constructor? A: Python functions accept keyword arguments. If you define a function like so: def my_func(a, b='abc', c='def'): print a, b, c You can call it like this: my_func('hello', c='world') And the result will be: hello abc world You can also support dynamic keyword arguments, using special syntax: def my_other_func(a, *b, **c): print a, b, c *b means that the b variable will take all non-named arguments after a, as a tuple object. **c means that the c variable will take all named arguments, as a dict object. If you call the function like this: my_other_func('hello', 'world', 'what a', state='fine', what='day') You will get: hello ('world', 'what a') {'state': 'fine', 'what': 'day'} A: Neither -- rather, webapp.WSGIApplication takes an optional argument named debug, and this code is passing the value True for that parameter. The reference page for WSGIApplication is here and it clearly shows the optional debug argument and the fact that it defaults to False unless explicitly passed in. As the page further makes clear, passing debug as True means that helpful debugging information is shown to the browser if and when an exception occurs while handling the request. How exactly that effect is obtained (in particular, whether it implies the existence of an attribute on the instance of WSGIApplication, or how that hypothetical attribute might be named) is an internal, undocumented implementation detail, which we're not supposed to worry about (of course, you can study the sources of WSGIApplication in the SDK if you do worry, or just want to learn more about one possible implementation of these specs!-). A: It's using named arguments. See Using Optional and Named Arguments.
Python newbie: What does this code do?
This is a snippet from the Google AppEngine tutorial. application = webapp.WSGIApplication([('/', MainPage)], debug=True) I'm not quite sure what debug=True does inside the constructor call. Does it create a local variable with name debug, assign True to it, and pass it to the constructor, or is this a way to set a class instance member variable's value in the constructor?
[ "Python functions accept keyword arguments. If you define a function like so:\ndef my_func(a, b='abc', c='def'):\n print a, b, c\n\nYou can call it like this:\nmy_func('hello', c='world')\n\nAnd the result will be:\nhello abc world\n\nYou can also support dynamic keyword arguments, using special syntax:\ndef my_other_func(a, *b, **c):\n print a, b, c\n\n\n*b means that the b variable will take all non-named arguments after a, as a tuple object.\n**c means that the c variable will take all named arguments, as a dict object.\n\nIf you call the function like this:\nmy_other_func('hello', 'world', 'what a', state='fine', what='day')\n\nYou will get:\nhello ('world', 'what a') {'state': 'fine', 'what': 'day'}\n\n", "Neither -- rather, webapp.WSGIApplication takes an optional argument named debug, and this code is passing the value True for that parameter.\nThe reference page for WSGIApplication is here and it clearly shows the optional debug argument and the fact that it defaults to False unless explicitly passed in.\nAs the page further makes clear, passing debug as True means that helpful debugging information is shown to the browser if and when an exception occurs while handling the request.\nHow exactly that effect is obtained (in particular, whether it implies the existence of an attribute on the instance of WSGIApplication, or how that hypothetical attribute might be named) is an internal, undocumented implementation detail, which we're not supposed to worry about (of course, you can study the sources of WSGIApplication in the SDK if you do worry, or just want to learn more about one possible implementation of these specs!-).\n", "It's using named arguments. See Using Optional and Named Arguments.\n" ]
[ 11, 4, 3 ]
[]
[]
[ "python" ]
stackoverflow_0001024437_python.txt
Q: Looping Fget with fsockopen in PHP 5.x I have a Python server finally working and responding to multiple commands with their outputs; however, I'm now having problems with PHP receiving the full output. I have tried commands such as fgets and fread; the only command that seems to work is "fgets". However, this only receives one line of data, so I then created a while statement shown below: while (!feof($handle)) { $buffer = fgets($handle, 4096); echo $buffer; } However, it seems the Python server is not sending an EOF at the end of the output, so the PHP page times out and does not display anything. Like I said above, just running echo fgets($handle) works fine and outputs one line; running the command again underneath will display the next line, etc. I have attached the important part of my Python script below: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(("", port)) s.listen(5) print "OK." print " Listening on port:", port import subprocess while 1: con, addr = s.accept() while True: datagram = con.recv(1024) if not datagram: break print "Rx Cmd:", datagram print "Launch:", datagram process = subprocess.Popen(datagram+" &", shell=True, stdout=subprocess.PIPE) stdout, stderr = process.communicate() con.send(stdout) con.close() s.close() I have also attached the full PHP script: <?php $handle = fsockopen("tcp://xxx.xxx.xxx.xxx",12345); fwrite($handle,"ls"); echo fgets($handle); fclose($handle); ?> Thanks, Ashley A: I believe you need to fix your server code a bit. I have removed the inner while loop. The problem with your code was that the server never closed the connection, so feof never returned true. I also removed the + " &" bit. To get the output, you need to wait until the process ends anyway. And I am not sure how the shell would handle the & in this case. s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(("", port)) s.listen(5) print "OK." print " Listening on port:", port import subprocess try: while 1: con, addr = s.accept() try: datagram = con.recv(1024) if not datagram: continue print "Rx Cmd:", datagram print "Launch:", datagram process = subprocess.Popen(datagram, shell=True, stdout=subprocess.PIPE) stdout, stderr = process.communicate() con.send(stdout) finally: print "closing connection" con.close() except KeyboardInterrupt: pass finally: print "closing socket" s.close() BTW, you need to use the while-loop in your php script. fgets returns only a single line at a time.
Looping Fget with fsockopen in PHP 5.x
I have a Python server finally working and responding to multiple commands with their outputs; however, I'm now having problems with PHP receiving the full output. I have tried commands such as fgets and fread; the only command that seems to work is "fgets". However, this only receives one line of data, so I then created a while statement shown below: while (!feof($handle)) { $buffer = fgets($handle, 4096); echo $buffer; } However, it seems the Python server is not sending an EOF at the end of the output, so the PHP page times out and does not display anything. Like I said above, just running echo fgets($handle) works fine and outputs one line; running the command again underneath will display the next line, etc. I have attached the important part of my Python script below: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(("", port)) s.listen(5) print "OK." print " Listening on port:", port import subprocess while 1: con, addr = s.accept() while True: datagram = con.recv(1024) if not datagram: break print "Rx Cmd:", datagram print "Launch:", datagram process = subprocess.Popen(datagram+" &", shell=True, stdout=subprocess.PIPE) stdout, stderr = process.communicate() con.send(stdout) con.close() s.close() I have also attached the full PHP script: <?php $handle = fsockopen("tcp://xxx.xxx.xxx.xxx",12345); fwrite($handle,"ls"); echo fgets($handle); fclose($handle); ?> Thanks, Ashley
[ "I believe you need to fix your server code a bit. I have removed the inner while loop. The problem with your code was that the server never closed the connection, so feof never returned true.\nI also removed the + \" &\" bit. To get the output, you need to wait until the process ends anyway. And I am not sure how the shell would handle the & in this case.\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.bind((\"\", port))\ns.listen(5)\nprint \"OK.\"\nprint \" Listening on port:\", port\nimport subprocess\ntry:\n while 1:\n con, addr = s.accept()\n try:\n datagram = con.recv(1024)\n if not datagram:\n continue\n print \"Rx Cmd:\", datagram\n print \"Launch:\", datagram\n process = subprocess.Popen(datagram, shell=True, stdout=subprocess.PIPE)\n stdout, stderr = process.communicate()\n con.send(stdout)\n finally:\n print \"closing connection\"\n con.close()\nexcept KeyboardInterrupt:\n pass\nfinally:\n print \"closing socket\"\n s.close()\n\nBTW, you need to use the while-loop in your php script. fgets returns until a single line only.\n" ]
[ 1 ]
[]
[]
[ "fgets", "php", "python", "sockets", "tcp" ]
stackoverflow_0001024370_fgets_php_python_sockets_tcp.txt
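A quick way to verify the fixed server independently of PHP is a small Python client that reads until the server closes the connection; host, port and command are placeholders taken from the question, and the PHP while (!feof(...)) loop needs the same read-until-close pattern:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1', 12345))
s.send('ls')
chunks = []
while True:
    data = s.recv(4096)
    if not data:
        break  # server closed the connection, so the output is complete
    chunks.append(data)
s.close()
print ''.join(chunks)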
Q: How to test a Python script with an input file filled with testcases? I'm participating in online judge contests and I want to test my code with a .in file full of testcases to time my algorithm. How can I get my script to take input from this .in file? A: So the script normally takes test cases from stdin, and now you want to test using test cases from a file? If that is the case, use the < redirection operation on the cmd line: my_script < testcases.in A: Read from file(s) and/or stdin: import fileinput for line in fileinput.input(): process(line) A: PyUnit "the standard unit testing framework for Python" might be what you are looking for. Doing a small script that does something like this: #!/usr/bin/env python import sys def main(): in_file = open('path_to_file') for line in in_file: sys.stdout.write(line) if __name__ == "__main__": main() And run as this_script.py | your_app.py A: You can do this in a separate file. testmyscript.py import sys someFile= open( "somefile.in", "r" ) sys.stdin= someFile execfile( "yourscript.py" )
How to test a Python script with an input file filled with testcases?
I'm participating in online judge contests and I want to test my code with a .in file full of testcases to time my algorithm. How can I get my script to take input from this .in file?
[ "So the script normally takes test cases from stdin, and now you want to test using test cases from a file?\nIf that is the case, use the < redirection operation on the cmd line:\nmy_script < testcases.in\n\n", "Read from file(s) and/or stdin:\nimport fileinput\nfor line in fileinput.input():\n process(line)\n\n", "PyUnit \"the standard unit testing framework for Python\" might be what you are looking for.\n\nDoing a small script that does something like this:\n#!/usr/bin/env python\nimport sys\n\ndef main():\n in_file = open('path_to_file')\n for line in in_file:\n sys.stdout.write(line)\n\nif __name__ == \"__main__\":\n main()\n\nAnd run as\nthis_script.py | your_app.py\n\n", "You can do this in a separate file.\ntestmyscript.py\nimport sys\nsomeFile= open( \"somefile.in\", \"r\" )\nsys.stdin= someFile\nexecfile( \"yourscript.py\" )\n\n" ]
[ 7, 2, 1, 1 ]
[]
[]
[ "input", "python" ]
stackoverflow_0001024529_input_python.txt
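Since the goal is also to time the algorithm, a small hedged harness along these lines combines with the redirection approach from the first answer; the script and file names are placeholders, and /dev/null assumes a Unix-like system:

import subprocess, time

start = time.time()
subprocess.call('python my_solution.py < testcases.in > /dev/null', shell=True)
print 'elapsed: %.3f seconds' % (time.time() - start)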
Q: Django - alternative to subclassing User? I am using the standard User model (django.contrib.auth) which comes with Django. I have made some of my own models in a Django application and created a relationship between them like this: from django.db import models from django.contrib.auth.models import User class GroupMembership(models.Model): user = models.ForeignKey(User, null = True, blank = True, related_name='memberships') #other irrelevant fields removed from example So I can now do this to get all of a user's current memberships: user.memberships.all() However, I want to be able to do a more complex query, like this: user.memberships.all().select_related('group__name') This works fine but I want to fetch this data in a template. It seems silly to try to put this sort of logic inside a template (and I can't seem to make it work anyway), so I want to create a better way of doing it. I could sub-class User, but that doesn't seem like a great solution - I may in future want to move my application into other Django sites, and presumably if there was another application that sub-classed User I wouldn't be able to get it to work. Is it best to create a method inside GroupMembership called something like get_by_user(user)? Would I be able to call this from a template? I would appreciate any advice anybody can give on structuring this - sorry if this is a bit long/vague. A: First, calling select_related and passing arguments doesn't do anything. It's a hint that cache should be populated. You would never call select_related in a template, only a view function. And only when you knew you needed all those related objects for other processing. "Is it best to create a method inside GroupMembership called something like get_by_user(user)?" You have this. I'm not sure what's wrong with it. GroupMembership.objects.filter( user="someUser" ) "Would I be able to call this from a template?" No. That's what view functions are for. groups = GroupMembership.objects.filter( user="someUser" ) Then you provide the groups object to the template for rendering. Edit This is one line of code; it doesn't seem that onerous a burden to include this in all your view functions. If you want this to appear on every page, you have lots of choices that do not involve repeating this line of code. A view function can call another function. You might want to try callable objects instead of simple functions; these can subclass a common callable object that fills in this information. You can add a template context processor to put this into the context of all templates that are rendered. You could write your own decorator to assure that this is done in every view function that has the decorator.
Django - alternative to subclassing User?
I am using the standard User model (django.contrib.auth) which comes with Django. I have made some of my own models in a Django application and created a relationship between them like this: from django.db import models from django.contrib.auth.models import User class GroupMembership(models.Model): user = models.ForeignKey(User, null = True, blank = True, related_name='memberships') #other irrelevant fields removed from example So I can now do this to get all of a user's current memberships: user.memberships.all() However, I want to be able to do a more complex query, like this: user.memberships.all().select_related('group__name') This works fine but I want to fetch this data in a template. It seems silly to try to put this sort of logic inside a template (and I can't seem to make it work anyway), so I want to create a better way of doing it. I could sub-class User, but that doesn't seem like a great solution - I may in future want to move my application into other Django sites, and presumably if there was another application that sub-classed User I wouldn't be able to get it to work. Is it best to create a method inside GroupMembership called something like get_by_user(user)? Would I be able to call this from a template? I would appreciate any advice anybody can give on structuring this - sorry if this is a bit long/vague.
[ "First, calling select_related and passing arguments, doesn't do anything. It's a hint that cache should be populated.\nYou would never call select_related in a template, only a view function. And only when you knew you needed all those related objects for other processing.\n\"Is the best to create a method inside GroupMembership called something like get_by_user(user)?\"\nYou have this. I'm not sure what's wrong with it.\n GroupMembership.objects.filter( user=\"someUser\" )\n\n\"Would I be able to call this from a template?\"\nNo. That's what view functions are for.\n groups = GroupMembership.objects.filter( user=\"someUser\" )\n\nThen you provide the groups object to the template for rendering.\n\nEdit\nThis is one line of code; it doesn't seem that onerous a burden to include this in all your view functions.\nIf you want this to appear on every page, you have lots of choices that do not involve repeating this line of code..\n\nA view function can call another function.\nYou might want to try callable objects instead of simple functions; these can subclass a common callable object that fills in this information.\nYou can add a template context processor to put this into the context of all templates that are rendered.\nYou could write your own decorator to assure that this is done in every view function that has the decorator.\n\n" ]
[ 3 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001024684_django_python.txt
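As a concrete illustration of the "template context processor" option from the answer, here is a hedged sketch; the module path is illustrative, and it assumes GroupMembership also has a group ForeignKey, as the question's select_related('group__name') implies:

# myapp/context_processors.py (illustrative location)
def memberships(request):
    if request.user.is_authenticated():
        groups = request.user.memberships.select_related()
    else:
        groups = []
    return {'memberships': groups}

# then list 'myapp.context_processors.memberships' in
# TEMPLATE_CONTEXT_PROCESSORS in settings.py, and every rendered
# template can loop over {{ memberships }} directly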
Q: finding firefox version How to find Firefox version using python? A: I tried Alan's code snippet and it didn't work for me. One problem with it is that in order for the "-v or -version" flags to work, you must have a debug version of Firefox. See here under "Miscellaneous" for details. Try the following, which uses the win32 library to read the Product Version string directly from the .exe file: import win32api def get_version(filename): info = win32api.GetFileVersionInfo(filename, "\\") ms = info['ProductVersionMS'] ls = info['ProductVersionLS'] return win32api.HIWORD(ms), win32api.LOWORD(ms), win32api.HIWORD(ls), win32api.LOWORD(ls) if __name__ == '__main__': print ".".join([str(i) for i in get_version(r"C:\Program Files\Mozilla Firefox\firefox.exe")]) A: Try the following code snippet: import os firefox_version = os.popen("firefox --version").read()
finding firefox version
How to find Firefox version using python?
[ "I tried Alan's code snippet and it didn't work for me. One problem with it is that in order for the \"-v or -version\" flags to work, you must have a debug version firefox. See here under \"Miscellaneous\" for details.\nTry the following, which uses the win32 library to read the Product Version string directly from the .exe file:\nimport win32api\n\ndef get_version(filename):\n info = win32api.GetFileVersionInfo(filename, \"\\\\\")\n ms = info['ProductVersionMS']\n ls = info['ProductVersionLS']\n return win32api.HIWORD(ms), win32api.LOWORD(ms), win32api.HIWORD(ls), win32api.LOWORD(ls)\n\nif __name__ == '__main__':\n print \".\".join([str (i) for i in get_version(r\"C:\\Program Files\\Mozilla Firefox\\firefox.exe\")])\n\n", "Try the following code snippet:\nimport os\nfirefox_version = os.popen(\"firefox --version\").read()\n\n" ]
[ 3, 2 ]
[]
[]
[ "firefox", "python" ]
stackoverflow_0001016609_firefox_python.txt
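Tying the two answers together, a hedged cross-platform fallback might look like this; it assumes firefox is on the PATH for the first branch, and the Windows branch reuses the version-resource approach with the default install path from the answer above:

import os

version = os.popen('firefox --version').read().strip()
if not version:
    # fall back to reading the Windows version resource directly
    import win32api
    path = r'C:\Program Files\Mozilla Firefox\firefox.exe'
    info = win32api.GetFileVersionInfo(path, '\\')
    ms, ls = info['ProductVersionMS'], info['ProductVersionLS']
    version = '.'.join(str(part) for part in (
        win32api.HIWORD(ms), win32api.LOWORD(ms),
        win32api.HIWORD(ls), win32api.LOWORD(ls)))
print version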
Q: Finding the parent tag of a text string with ElementTree/lxml I'm trying to take a string of text, and "extract" the rest of the text in the paragraph/document from the html. My current approach is trying to find the "parent tag" of the string in the html that has been parsed with lxml. (if you know of a better way to tackle this problem, I'm all ears!) For example, search the tree for "TEXT STRING HERE" and return the "p" tag. (note that I won't know the exact layout of the html beforehand) <html> <head> ... </head> <body> .... <div> ... <p>TEXT STRING HERE ......</p> ... </html> Thanks for your help! A: This is a simple way to do it with ElementTree. It does require that your HTML input is valid XML (so I have added the appropriate end tags to your HTML): import elementtree.ElementTree as ET html = """<html> <head> </head> <body> <div> <p>TEXT STRING HERE ......</p> </div> </body> </html>""" for e in ET.fromstring(html).getiterator(): if e.text.find('TEXT STRING HERE') != -1: print "Found string %r, element = %r" % (e.text, e)
Finding the parent tag of a text string with ElementTree/lxml
I'm trying to take a string of text, and "extract" the rest of the text in the paragraph/document from the html. My current approach is trying to find the "parent tag" of the string in the html that has been parsed with lxml. (if you know of a better way to tackle this problem, I'm all ears!) For example, search the tree for "TEXT STRING HERE" and return the "p" tag. (note that I won't know the exact layout of the html beforehand) <html> <head> ... </head> <body> .... <div> ... <p>TEXT STRING HERE ......</p> ... </html> Thanks for your help!
[ "This is a simple way to do it with ElementTree. It does require that your HTML input is valid XML (so I have added the appropriate end tags to your HTML):\nimport elementtree.ElementTree as ET\n\nhtml = \"\"\"<html>\n<head>\n</head>\n<body>\n<div>\n<p>TEXT STRING HERE ......</p> \n</div>\n</body>\n</html>\"\"\"\n\nfor e in ET.fromstring(html).getiterator():\n if e.text.find('TEXT STRING HERE') != -1:\n print \"Found string %r, element = %r\" % (e.text, e)\n\n" ]
[ 3 ]
[]
[]
[ "elementtree", "lxml", "python" ]
stackoverflow_0001025129_elementtree_lxml_python.txt
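Since the question title mentions lxml, here is an equivalent hedged sketch using lxml.html, which also tolerates real-world (non-XML) HTML and guards against elements whose text is None, a case the ElementTree answer above would trip over; the file name is a placeholder:

from lxml import html as lhtml

doc = lhtml.fromstring(open('page.html').read())
for el in doc.iter():
    if el.text and 'TEXT STRING HERE' in el.text:
        # el is the element that directly contains the string
        print 'found inside <%s>' % el.tag
        print el.text_content()  # the full text of that element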
Q: Django Context Processor Trouble So I am just starting out on learning Django, and I'm attempting to complete one of the sample applications from the book. I'm getting stuck now on creating DRY URLs. More specifically, I cannot get my context processor to work. I create my context processor like so: from django.conf import settings #from mysite.settings import ROOT_URL def root_url_processor(request): return {'ROOT_URL': settings.ROOT_URL} and I placed this file in my app, specifically, mysite/photogallery/context_processors.py. My settings.py file in the root of my project contains: TEMPLATE_CONTEXT_PROCESSORS = ('mysite.context_processors',) When I try to go to the ROOT_URL that I've also specified in my settings.py, I receive this error: TypeError at /gallery/ 'module' object is not callable /gallery/ is the ROOT_URL of this particular application. I realize that perhaps this could mean a naming conflict, but I cannot find one. Furthermore, when I comment out the TEMPLATE_CONTEXT_PROCESSORS definition from settings.py, the application actually does load, however my thumbnail images do not appear (probably because my templates do not know about ROOT_URL, right?). Anyone have any ideas as to what the problem could be? EDIT: Here's some information about my settings.py in case it is of use: ROOT_URLCONF = 'mysite.urls' ROOT_URL = '/gallery/' LOGIN_URL = ROOT_URL + 'login/' MEDIA_URL = ROOT_URL + 'media/' ADMIN_MEDIA_PREFIX = MEDIA_URL + 'admin/' TEMPLATE_DIRS = ( # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. ) TEMPLATE_CONTEXT_PROCESSORS = ('mysite.photogallery.context_processors',) EDIT2: I'm going to add some information about my url files. Essentially I have a root urls.py, a real_urls.py which is also located at the root, and a urls.py that exists in the application. Basically, root/urls.py hides ROOT_URL from real_urls.py, which then includes my app's urls.py. root/urls.py: from django.conf.urls.defaults import * #from mysite.settings import ROOT_URL from django.conf import settings # Uncomment the next two lines to enable the admin: from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', # Example: (r'^blog/', include('mysite.blog.urls')), url(r'^%s' % settings.ROOT_URL[1:], include('mysite.real_urls')), ) root/real_urls.py: from django.conf.urls.defaults import * from django.contrib import admin urlpatterns = patterns('', url(r'^admin/(.*)', admin.site.root), url(r'^', include('mysite.photogallery.urls')), ) root/photogallery/urls.py (note that this one probably is not causing any of the problems, but I'm adding it here in case anyone wants to see it.): from django.conf.urls.defaults import * from mysite.photogallery.models import Item, Photo urlpatterns = patterns('django.views.generic', url(r'^$', 'simple.direct_to_template', kwargs={'template': 'index.html', 'extra_context': {'item_list': lambda: Item.objects.all()} }, name='index'), url(r'^items/$', 'list_detail.object_list', kwargs={'queryset': Item.objects.all(), 'template_name': 'items_list.html', 'allow_empty': True }, name='item_list'), url(r'^items/(?P<object_id>\d+)/$', 'list_detail.object_detail', kwargs={'queryset': Item.objects.all(), 'template_name': 'items_detail.html' }, name='item_detail' ), url(r'^photos/(?P<object_id>\d+)/$', 'list_detail.object_detail', kwargs={'queryset': Photo.objects.all(), 'template_name': 'photos_detail.html' }, name='photo_detail'),) A: TEMPLATE_CONTEXT_PROCESSORS should contain paths to callable objects, not to modules. List the actual functions that will transform the template contexts, e.g. 'mysite.photogallery.context_processors.root_url_processor'. Link to docs.
Django Context Processor Trouble
So I am just starting out on learning Django, and I'm attempting to complete one of the sample applications from the book. I'm getting stuck now on creating DRY URLs. More specifically, I cannot get my context processor to work. I create my context processor like so: from django.conf import settings #from mysite.settings import ROOT_URL def root_url_processor(request): return {'ROOT_URL': settings.ROOT_URL} and I placed this file in my app, specifically, mysite/photogallery/context_processors.py. My settings.py file in the root of my project contains: TEMPLATE_CONTEXT_PROCESSORS = ('mysite.context_processors',) When I try to go to the ROOT_URL that I've also specified in my settings.py, I receive this error: TypeError at /gallery/ 'module' object is not callable /gallery/ is the ROOT_URL of this particular application. I realize that perhaps this could mean a naming conflict, but I cannot find one. Furthermore, when I comment out the TEMPLATE_CONTEXT_PROCESSORS definition from settings.py, the application actually does load, however my thumbnail images do not appear (probably because my templates do not know about ROOT_URL, right?). Anyone have any ideas as to what the problem could be? EDIT: Here's some information about my settings.py in case it is of use: ROOT_URLCONF = 'mysite.urls' ROOT_URL = '/gallery/' LOGIN_URL = ROOT_URL + 'login/' MEDIA_URL = ROOT_URL + 'media/' ADMIN_MEDIA_PREFIX = MEDIA_URL + 'admin/' TEMPLATE_DIRS = ( # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. ) TEMPLATE_CONTEXT_PROCESSORS = ('mysite.photogallery.context_processors',) EDIT2: I'm going to add some information about my url files. Essentially I have a root urls.py, a real_urls.py which is also located at the root, and a urls.py that exists in the application. Basically, root/urls.py hides ROOT_URL from real_urls.py, which then includes my app's urls.py. root/urls.py: from django.conf.urls.defaults import * #from mysite.settings import ROOT_URL from django.conf import settings # Uncomment the next two lines to enable the admin: from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', # Example: (r'^blog/', include('mysite.blog.urls')), url(r'^%s' % settings.ROOT_URL[1:], include('mysite.real_urls')), ) root/real_urls.py: from django.conf.urls.defaults import * from django.contrib import admin urlpatterns = patterns('', url(r'^admin/(.*)', admin.site.root), url(r'^', include('mysite.photogallery.urls')), ) root/photogallery/urls.py (note that this one probably is not causing any of the problems, but I'm adding it here in case anyone wants to see it.): from django.conf.urls.defaults import * from mysite.photogallery.models import Item, Photo urlpatterns = patterns('django.views.generic', url(r'^$', 'simple.direct_to_template', kwargs={'template': 'index.html', 'extra_context': {'item_list': lambda: Item.objects.all()} }, name='index'), url(r'^items/$', 'list_detail.object_list', kwargs={'queryset': Item.objects.all(), 'template_name': 'items_list.html', 'allow_empty': True }, name='item_list'), url(r'^items/(?P<object_id>\d+)/$', 'list_detail.object_detail', kwargs={'queryset': Item.objects.all(), 'template_name': 'items_detail.html' }, name='item_detail' ), url(r'^photos/(?P<object_id>\d+)/$', 'list_detail.object_detail', kwargs={'queryset': Photo.objects.all(), 'template_name': 'photos_detail.html' }, name='photo_detail'),)
[ "TEMPLATE_CONTEXT_PROCESSORS should contain a list of callable objects, not modules. List the actual functions that will transform the template contexts. Link to docs. \n" ]
[ 4 ]
[]
[]
[ "django", "django_urls", "python" ]
stackoverflow_0001025025_django_django_urls_python.txt
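A minimal sketch of the fix the answer above describes: point TEMPLATE_CONTEXT_PROCESSORS at the processor function itself rather than at its module. The path matches the layout in the question; note that overriding this setting also drops Django's default processors unless they are listed as well.

# settings.py
TEMPLATE_CONTEXT_PROCESSORS = (
    'mysite.photogallery.context_processors.root_url_processor',
)

Any template rendered with a RequestContext will then see {{ ROOT_URL }}.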
Q: What's the best way to implement web service for ajax autocomplete I'm implementing a "Google Suggest" like autocomplete feature for tag searching using jQuery's autocomplete. I need to provide a web service to jQuery giving it a list of suggestions based on what the user has typed. I see 2 ways of implementing the web service: 1) just store all the tags in a database and search the DB using user input as prefix. This is simple, but I'm concerned about latency. 2) Use an in-process trie to store all the tags and search it for matching results. As everything will be in-process, I expect this to have much lower latency. But there are several difficulties: -What's a good way to initialize the trie on process start up? Presumably I'll store the tag data in a DB and retrieve them and turn them into a trie when I first start up the process. But I'm not sure how. I'm using Python/Django. -When a new tag is created by a user, I need to insert the new tag into the trie. But let's say I have 5 Django processes and hence 5 tries, how do I tell the other 4 tries that they need to insert a new tag too? -How to make sure the trie is thread-safe as my Django processes will be threaded (I'm using mod_wsgi). Or do I not have to worry about thread safety because of Python's GIL? -Any way I can store the tag's frequency of use within the trie as well? How do I tell where the tag's string ends and where the frequency starts - e.g. if I store apple213 into the trie, is it "apple" with frequency 213 or is it "apple2" with frequency 13?? Any help on the issues above or any suggestions on a different approach would be really appreciated. A: Don't be concerned about latency before you measure things -- make up a bunch of pseudo-tags, stick them in the DB, and measure latencies for typical queries. Depending on your DB setup, your latency may be just fine and you're spared wasted worries. Do always worry about threading, though - the GIL doesn't make race conditions go away (control might switch among threads at any pseudocode instruction boundary, as well as when C code in an underlying extension or builtin is executing). You need first to check the threadsafety attribute of the DB API module you're using (see PEP 249), and then use locking appropriately or spawn a small pool of dedicated threads that perform DB interactions (receiving requests on a Queue.Queue and returning results on another, the normal architecture for sound and easy threading in Python). A: I would use the first option. 'KISS' - (Keep It Simple Stupid). For small amounts of data there shouldn't be much latency. We run the same kind of thing for a name search and results appear pretty quickly on a few thousand rows. Hope that helps, Josh
What's the best way to implement web service for ajax autocomplete
I'm implementing a "Google Suggest" like autocomplete feature for tag searching using jQuery's autocomplete. I need to provide a web service to jQuery giving it a list of suggestions based on what the user has typed. I see 2 ways of implementing the web service: 1) just store all the tags in a database and search the DB using user input as prefix. This is simple, but I'm concerned about latency. 2) Use an in-process trie to store all the tags and search it for matching results. As everything will be in-process, I expect this to have much lower latency. But there are several difficulties: -What's a good way to initialize the trie on process start up? Presumably I'll store the tag data in a DB and retrieve them and turn them into a trie when I first start up the process. But I'm not sure how. I'm using Python/Django. -When a new tag is created by a user, I need to insert the new tag into the trie. But let's say I have 5 Django processes and hence 5 tries, how do I tell the other 4 tries that they need to insert a new tag too? -How to make sure the trie is thread-safe as my Django processes will be threaded (I'm using mod_wsgi). Or do I not have to worry about thread safety because of Python's GIL? -Any way I can store the tag's frequency of use within the trie as well? How do I tell where the tag's string ends and where the frequency starts - e.g. if I store apple213 into the trie, is it "apple" with frequency 213 or is it "apple2" with frequency 13?? Any help on the issues above or any suggestions on a different approach would be really appreciated.
[ "Don't be concerned about latency before you measure things -- make up a bunch of pseudo-tags, stick them in the DB, and measure latencies for typical queries. Depending on your DB setup, your latency may be just fine and you're spared wasted worries.\nDo always worry about threading, though - the GIL doesn't make race conditions go away (control might switch among threads at any pseudocode instruction boundary, as well as when C code in an underlying extension or builtin is executing). You need first to check the threadsafety attribute of the DB API module you're using (see PEP 249), and then use locking appropriately or spawn a small pool of dedicated threads that perform DB interactions (receiving requests on a Queue.Queue and returning results on another, the normal architecture for sound and easy threading in Python).\n", "I would use the first option. 'KISS' - (Keep It Simple Stupid).\nFor small amounts of data there shouldn't be much latency. We run the same kind of thing for a name search and results appear pretty quickly on a few thousand rows.\nHope that helps,\nJosh\n" ]
[ 4, 1 ]
[]
[]
[ "ajax", "autocomplete", "python", "trie" ]
stackoverflow_0001025018_ajax_autocomplete_python_trie.txt
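A rough sketch of the dedicated-DB-thread architecture the first answer recommends. The table and column names are made up for illustration, make_connection stands for whatever DB-API connect call is in use, and the %s parameter style assumes a MySQLdb-like driver:

import threading
import Queue

requests = Queue.Queue()

def db_worker(make_connection):
    conn = make_connection()              # one DB connection per worker thread
    while True:
        prefix, reply_q = requests.get()  # each request carries its own reply queue
        cur = conn.cursor()
        cur.execute("SELECT name FROM tags WHERE name LIKE %s LIMIT 10",
                    (prefix + '%',))
        reply_q.put([row[0] for row in cur.fetchall()])

def suggest(prefix):
    reply_q = Queue.Queue()
    requests.put((prefix, reply_q))
    return reply_q.get()                  # blocks until a worker answers

Spawn a few db_worker threads at startup, and any view thread can then call suggest() without touching a DB connection directly.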
Q: Disable GNOME's automount with Python I need to stop GNOME/Nautilus from automagically mounting new devices and partitions as they appear to the system. How can I accomplish this in Python? A: Why would you do it in Python? You can just use the commandline, as in: gconftool-2 --type bool --set /apps/nautilus/preferences/media_automount false If you really need it to be in Python, then you can use the subprocess module: import subprocess def setAutomount(value): """ @type value: boolean """ cmd = ['gconftool-2', '--type', 'bool', '--set', '/apps/nautilus/preferences/media_automount'] cmd.append(str(value).lower()) subprocess.check_call(cmd) setAutomount(False) But I'm really not sure that it's necessary here.
Disable GNOME's automount with Python
I need to stop GNOME/Nautilus from automagically mounting new devices and partitions as they appear to the system. How can I accomplish this in Python?
[ "Why would do it in Python? You can just use the commandline, as in:\ngconftool-2 --type bool --set /apps/nautilus/preferences/media_automount false\n\nIf you really need it to be in Python, then you can use the subprocess module:\nimport subprocess\n\ndef setAutomount(value):\n \"\"\"\n @type value: boolean\n \"\"\"\n cmd = ['gconftool-2', '--type', 'bool', '--set', \n '/apps/nautilus/preferences/media_automount']\n cmd.append(str(value).lower())\n subprocess.check_call(cmd)\n\nsetAutomount(False)\n\nBut I'm really not sure that it's necessary here.\n" ]
[ 3 ]
[]
[]
[ "automount", "gnome", "hal", "python" ]
stackoverflow_0001025244_automount_gnome_hal_python.txt
Q: dual iterator in one python object In Python, I am trying to write a class that supports two different kinds of iterators. Roughly speaking, this object contains a matrix of data and I want to have two different kinds of iterators to support row iteration and column iteration. A: dict has several iterator-producing methods -- iterkeys, itervalues, iteritems -- and so should your class. If there's one "most natural" way of iterating, you should also alias it to __iter__ for convenience and readability (that's probably going to be iterrows; of course there is always going to be some doubt, as there was with dict when we designed its iteration behavior, but a reasonable pick is better than none). For example, suppose your matrix is square, held flattened up into a row-major list self.data, with a side of self.n. Then: def iterrows(self): start = 0 n = self.n data = self.data while start < n*n: stop = start + n yield data[start:stop] start = stop def itercols(self): start = 0 n = self.n data = self.data while start < n: yield data[start::n] start += 1 __iter__ = iterrows A: Is this what you're looking for? class Matrix(object): def __init__(self, rows): self._rows = rows def columns(self): return zip(*self._rows) def rows(self): return self._rows # Create a Matrix by providing rows. m = Matrix([[1,2,3], [4,5,6], [7,8,9]]) # Iterate in row-major order. for row in m.rows(): for value in row: print value # Iterate in column-major order. for column in m.columns(): for value in column: print value You can use itertools.izip instead of zip if you want to create each column on demand. You can also move the iteration of the actual values into the class. I wasn't sure if you wanted to iterate over rows/columns (as shown) or the values in the rows/columns. A: Okay, so make two separate methods where each is a generator. class Matrix(object): def iter_rows(self): for row in self.rows: yield row def iter_columns(self): for column in self.columns: yield column Your __iter__ could iterate over one or the other by default, though I'd recommend having no __iter__ at all.
dual iterator in one python object
In Python, I am trying to write a class that supports two different kinds of iterators. Roughly speaking, this object contains a matrix of data and I want to have two different kinds of iterators to support row iteration and column iteration.
[ "dict has several iterator-producing methods -- iterkeys, itervalues, iteritems -- and so should your class. If there's one \"most natural\" way of iterating, you should also alias it to __iter__ for convenience and readability (that's probably going to be iterrows; of course there is always going to be some doubt, as there was with dict when we designed its iteration behavior, but a reasonable pick is better than none).\nFor example, suppose your matrix is square, held flattened up into a row-major list self.data, with a side of self.n. Then:\ndef iterrows(self):\n start = 0\n n = self.n\n data = self.data\n while start < n*n:\n stop = start + n\n yield data[start:stop]\n start = stop\n\ndef itercols(self):\n start = 0\n n = self.n\n data = self.data\n while start < n:\n yield data[start::n]\n start += 1\n\n__iter__ = iterrows\n\n", "Is this what you're looking for?\nclass Matrix(object):\n def __init__(self, rows):\n self._rows = rows\n\n def columns(self):\n return zip(*self._rows)\n\n def rows(self):\n return self._rows\n\n# Create a Matrix by providing rows.\nm = Matrix([[1,2,3],\n [4,5,6],\n [7,8,9]])\n\n# Iterate in row-major order.\nfor row in m.rows():\n for value in row:\n print value\n\n# Iterate in column-major order.\nfor column in m.columns():\n for value in column:\n print value\n\nYou can use itertools.izip instead of zip if you want to create each column on demand.\nYou can also move the iteration of the actual values into the class. I wasn't sure if you wanted to iterate over rows/columns (as shown) or the values in the rows/columns.\n", "Okay, so make two separate methods where each is a generator.\nclass Matrix(object):\n def iter_rows(self):\n for row in self.rows:\n yield row\n\n def iter_columns(self):\n for column in self.columns:\n yield column\n\nYour __iter__ could iterate over one or the other by default, though I'd recommend having no __iter__ at all.\n" ]
[ 5, 4, 2 ]
[]
[]
[ "iterator", "python" ]
stackoverflow_0001025348_iterator_python.txt
Q: mercurial + OSX == fail? hg log abort: Is a directory $ mkdir foo $ cd foo $ hg init . $ hg log abort: Is a directory $ hg history abort: Is a directory Darwin Host.local 9.6.1 Darwin Kernel Version 9.6.1: Wed Dec 10 10:38:33 PST 2008; root:xnu-1228.9.75~3/RELEASE_I386 i386 $ hg --version Mercurial Distributed SCM (version 1.2.1) $ python --version Python 2.5.4 (all installed via macports) Thoughts? The google gives us nothing. Update: (as root): $ hg init /tmp/foo $ cd /tmp/foo; hg log (works) (as user): $ hg init /tmp/bar $ cd /tmp/bar; hg log abort: Is a directory So Travis was right (see comments) it does look like a permission problem somewhere but where? This is a stock install of Leopard barely a week old and stock installs of macport versions of python and mercurial. I hope that isn't mercurial's idea of a good error message when it has a permission problem. 2nd update (from dkbits suggestions below): $ sudo dtruss hg log [snip] ... stat("/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site- packages/mercurial/templates/gitweb\0", 0xBFFFC7DC, 0x1000) = 0 0 open_nocancel("/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/mercurial/templates/gitweb\0", 0x0, 0x1B6) = 3 0 fstat(0x3, 0xBFFFC900, 0x1B6) = 0 0 close_nocancel(0x3) = 0 0 write_nocancel(0x2, "abort: Is a directory\n\0", 0x16) = 22 0 Also, the temp directory is where you expected it to be. Permissions look okay on it. A: Not a direct answer to your question, but I've successfully been using the Mercurial pre-packaged binaries from here with the standard Python 2.5.1 install on OSX 10.5 without issue. $ mkdir foo $ cd foo $ hg init . $ hg log $ hg history $ hg --version Mercurial Distributed SCM (version 1.2.1) $ python --version Python 2.5.1 A: From https://www.mercurial-scm.org/bts/issue233, there is an interesting traceback: hg --traceback qpop Traceback (most recent call last): [...] File "/export/home/bos/lib/python/mercurial/util.py", line 747, in o rename(mktempcopy(f), f) File "/export/home/bos/lib/python/mercurial/util.py", line 690, in mktempcopy fp.write(posixfile(name, "rb").read()) IOError: [Errno 21] Is a directory abort: Is a directory Perhaps the permission error is with your temp folder? To find the temp dir, do.. $ python >>> import tempfile >>> print tempfile.gettempdir() It's should be in /var/folders/[...]/[...]/-Tmp-/ Also inspired by the above link, you could try running.. $ hg init /tmp/bar $ cd /tmp/bar $ hg --traceback log A: I found the problem: If you symlink your .hgrc somewhere it causes this. Definitely a bug in mercurial (and a stupid error message at that). $ rm -f ~/.hgrc $ hg init foo $ cd foo $ hg log (works) $ ls -l ~/Dropbox/.hgrc -rw-r--r--@ 1 schubert staff 83 Jan 9 02:15 /Users/schubert/Dropbox/.hgrc $ ln -s ~/Dropbox/.hgrc ~/.hgrc $ hg log abort: Is a directory $ rm -f ~/.hgrc $ hg log (works) A: I took another look and it is actually caused by the .hgrc line: style = gitweb Why that is I'm not sure but the error message sure sucks still.
mercurial + OSX == fail? hg log abort: Is a directory
$ mkdir foo $ cd foo $ hg init . $ hg log abort: Is a directory $ hg history abort: Is a directory Darwin Host.local 9.6.1 Darwin Kernel Version 9.6.1: Wed Dec 10 10:38:33 PST 2008; root:xnu-1228.9.75~3/RELEASE_I386 i386 $ hg --version Mercurial Distributed SCM (version 1.2.1) $ python --version Python 2.5.4 (all installed via macports) Thoughts? The google gives us nothing. Update: (as root): $ hg init /tmp/foo $ cd /tmp/foo; hg log (works) (as user): $ hg init /tmp/bar $ cd /tmp/bar; hg log abort: Is a directory So Travis was right (see comments) it does look like a permission problem somewhere but where? This is a stock install of Leopard barely a week old and stock installs of macport versions of python and mercurial. I hope that isn't mercurial's idea of a good error message when it has a permission problem. 2nd update (from dkbits suggestions below): $ sudo dtruss hg log [snip] ... stat("/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site- packages/mercurial/templates/gitweb\0", 0xBFFFC7DC, 0x1000) = 0 0 open_nocancel("/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/mercurial/templates/gitweb\0", 0x0, 0x1B6) = 3 0 fstat(0x3, 0xBFFFC900, 0x1B6) = 0 0 close_nocancel(0x3) = 0 0 write_nocancel(0x2, "abort: Is a directory\n\0", 0x16) = 22 0 Also, the temp directory is where you expected it to be. Permissions look okay on it.
[ "Not a direct answer to your question, but I've successfully been using the Mercurial pre-packaged binaries from here with the standard Python 2.5.1 install on OSX 10.5 without issue.\n$ mkdir foo\n$ cd foo\n$ hg init .\n$ hg log\n$ hg history\n\n$ hg --version\nMercurial Distributed SCM (version 1.2.1)\n\n$ python --version\nPython 2.5.1\n\n", "From https://www.mercurial-scm.org/bts/issue233, there is an interesting traceback:\nhg --traceback qpop\nTraceback (most recent call last):\n [...]\n File \"/export/home/bos/lib/python/mercurial/util.py\", line 747, in o\n rename(mktempcopy(f), f)\n File \"/export/home/bos/lib/python/mercurial/util.py\", line 690, in mktempcopy\n fp.write(posixfile(name, \"rb\").read())\nIOError: [Errno 21] Is a directory\nabort: Is a directory\n\nPerhaps the permission error is with your temp folder? To find the temp dir, do..\n$ python\n>>> import tempfile\n>>> print tempfile.gettempdir()\n\nIt's should be in /var/folders/[...]/[...]/-Tmp-/\nAlso inspired by the above link, you could try running..\n$ hg init /tmp/bar\n$ cd /tmp/bar\n$ hg --traceback log\n\n", "I found the problem: \nIf you symlink your .hgrc somewhere it causes this. Definitely a bug in mercurial (and a stupid error message at that).\n$ rm -f ~/.hgrc\n$ hg init foo\n$ cd foo\n$ hg log\n\n(works)\n$ ls -l ~/Dropbox/.hgrc\n-rw-r--r--@ 1 schubert staff 83 Jan 9 02:15 /Users/schubert/Dropbox/.hgrc\n$ ln -s ~/Dropbox/.hgrc ~/.hgrc\n$ hg log\nabort: Is a directory\n$ rm -f ~/.hgrc\n$ hg log\n\n(works)\n", "I took another look and it is actually caused by the .hgrc line:\nstyle = gitweb\nWhy that is I'm not sure but the error message sure sucks still.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "macos", "mercurial", "python" ]
stackoverflow_0000730319_macos_mercurial_python.txt
Q: Decimal alignment formatting in Python This should be easy. Here's my array (rather, a method of generating representative test arrays): >>> ri = numpy.random.randint >>> ri2 = lambda x: ''.join(ri(0,9,x).astype('S')) >>> a = array([float(ri2(x)+ '.' + ri2(y)) for x,y in ri(1,10,(10,2))]) >>> a array([ 7.99914000e+01, 2.08000000e+01, 3.94000000e+02, 4.66100000e+03, 5.00000000e+00, 1.72575100e+03, 3.91500000e+02, 1.90610000e+04, 1.16247000e+04, 3.53920000e+02]) I want a list of strings where '\n'.join(list_o_strings) would print: 79.9914 20.8 394.0 4661.0 5.0 1725.751 391.5 19061.0 11624.7 353.92 I want to space pad to the left and the right (but no more than necessary). I want a zero after the decimal if that is all that is after the decimal. I do not want scientific notation. ..and I do not want to lose any significant digits. (in 353.98000000000002 the 2 is not significant) Yeah, it's nice to want.. Python 2.5's %g, %fx.x, etc. are either befuddling me, or can't do it. I have not tried import decimal yet. I can't see that NumPy does it either (although, the array.__str__ and array.__repr__ are decimal aligned (but sometimes return scientific). Oh, and speed counts. I'm dealing with big arrays here. My current solution approaches are: to str(a) and parse off NumPy's brackets to str(e) each element in the array and split('.') then pad and reconstruct to a.astype('S'+str(i)) where i is the max(len(str(a))), then pad It seems like there should be some off-the-shelf solution out there... (but not required) Top suggestion fails with when dtype is float64: >>> a array([ 5.50056103e+02, 6.77383566e+03, 6.01001513e+05, 3.55425142e+08, 7.07254875e+05, 8.83174744e+02, 8.22320510e+01, 4.25076609e+08, 6.28662635e+07, 1.56503068e+02]) >>> ut0 = re.compile(r'(\d)0+$') >>> thelist = [ut0.sub(r'\1', "%12f" % x) for x in a] >>> print '\n'.join(thelist) 550.056103 6773.835663 601001.513 355425141.8471 707254.875038 883.174744 82.232051 425076608.7676 62866263.55 156.503068 A: Sorry, but after thorough investigation I can't find any way to perform the task you require without a minimum of post-processing (to strip off the trailing zeros you don't want to see); something like: import re ut0 = re.compile(r'(\d)0+$') thelist = [ut0.sub(r'\1', "%12f" % x) for x in a] print '\n'.join(thelist) is speedy and concise, but breaks your constraint of being "off-the-shelf" -- it is, instead, a modular combination of general formatting (which almost does what you want but leaves trailing zero you want to hide) and a RE to remove undesired trailing zeros. Practically, I think it does exactly what you require, but your conditions as stated are, I believe, over-constrained. Edit: original question was edited to specify more significant digits, require no extra leading space beyond what's required for the largest number, and provide a new example (where my previous suggestion, above, doesn't match the desired output). The work of removing leading whitespace that's common to a bunch of strings is best performed with textwrap.dedent -- but that works on a single string (with newlines) while the required output is a list of strings. 
No problem, we'll just put the lines together, dedent them, and split them up again: import re import textwrap a = [ 5.50056103e+02, 6.77383566e+03, 6.01001513e+05, 3.55425142e+08, 7.07254875e+05, 8.83174744e+02, 8.22320510e+01, 4.25076609e+08, 6.28662635e+07, 1.56503068e+02] thelist = textwrap.dedent( '\n'.join(ut0.sub(r'\1', "%20f" % x) for x in a)).splitlines() print '\n'.join(thelist) emits: 550.056103 6773.83566 601001.513 355425142.0 707254.875 883.174744 82.232051 425076609.0 62866263.5 156.503068 A: Pythons string formatting can both print out only the necessary decimals (with %g) or use a fixed set of decimals (with %f). However, you want to print out only the necessary decimals, except if the number is a whole number, then you want one decimal, and that makes it complex. This means you would end up with something like: def printarr(arr): for x in array: if math.floor(x) == x: res = '%.1f' % x else: res = '%.10g' % x print "%*s" % (15-res.find('.')+len(res), res) This will first create a string either with 1 decimal, if the value is a whole number, or it will print with automatic decimals (but only up to 10 numbers) if it is not a fractional number. Lastly it will print it, adjusted so that the decimal point will be aligned. Probably, though, numpy actually does what you want, because you typically do want it to be in exponential mode if it's too long.
Decimal alignment formatting in Python
This should be easy. Here's my array (rather, a method of generating representative test arrays): >>> ri = numpy.random.randint >>> ri2 = lambda x: ''.join(ri(0,9,x).astype('S')) >>> a = array([float(ri2(x)+ '.' + ri2(y)) for x,y in ri(1,10,(10,2))]) >>> a array([ 7.99914000e+01, 2.08000000e+01, 3.94000000e+02, 4.66100000e+03, 5.00000000e+00, 1.72575100e+03, 3.91500000e+02, 1.90610000e+04, 1.16247000e+04, 3.53920000e+02]) I want a list of strings where '\n'.join(list_o_strings) would print: 79.9914 20.8 394.0 4661.0 5.0 1725.751 391.5 19061.0 11624.7 353.92 I want to space pad to the left and the right (but no more than necessary). I want a zero after the decimal if that is all that is after the decimal. I do not want scientific notation. ..and I do not want to lose any significant digits. (in 353.98000000000002 the 2 is not significant) Yeah, it's nice to want.. Python 2.5's %g, %fx.x, etc. are either befuddling me, or can't do it. I have not tried import decimal yet. I can't see that NumPy does it either (although, the array.__str__ and array.__repr__ are decimal aligned (but sometimes return scientific). Oh, and speed counts. I'm dealing with big arrays here. My current solution approaches are: to str(a) and parse off NumPy's brackets to str(e) each element in the array and split('.') then pad and reconstruct to a.astype('S'+str(i)) where i is the max(len(str(a))), then pad It seems like there should be some off-the-shelf solution out there... (but not required) Top suggestion fails with when dtype is float64: >>> a array([ 5.50056103e+02, 6.77383566e+03, 6.01001513e+05, 3.55425142e+08, 7.07254875e+05, 8.83174744e+02, 8.22320510e+01, 4.25076609e+08, 6.28662635e+07, 1.56503068e+02]) >>> ut0 = re.compile(r'(\d)0+$') >>> thelist = [ut0.sub(r'\1', "%12f" % x) for x in a] >>> print '\n'.join(thelist) 550.056103 6773.835663 601001.513 355425141.8471 707254.875038 883.174744 82.232051 425076608.7676 62866263.55 156.503068
[ "Sorry, but after thorough investigation I can't find any way to perform the task you require without a minimum of post-processing (to strip off the trailing zeros you don't want to see); something like:\nimport re\nut0 = re.compile(r'(\\d)0+$')\n\nthelist = [ut0.sub(r'\\1', \"%12f\" % x) for x in a]\n\nprint '\\n'.join(thelist)\n\nis speedy and concise, but breaks your constraint of being \"off-the-shelf\" -- it is, instead, a modular combination of general formatting (which almost does what you want but leaves trailing zero you want to hide) and a RE to remove undesired trailing zeros. Practically, I think it does exactly what you require, but your conditions as stated are, I believe, over-constrained.\nEdit: original question was edited to specify more significant digits, require no extra leading space beyond what's required for the largest number, and provide a new example (where my previous suggestion, above, doesn't match the desired output). The work of removing leading whitespace that's common to a bunch of strings is best performed with textwrap.dedent -- but that works on a single string (with newlines) while the required output is a list of strings. No problem, we'll just put the lines together, dedent them, and split them up again:\nimport re\nimport textwrap\n\na = [ 5.50056103e+02, 6.77383566e+03, 6.01001513e+05,\n 3.55425142e+08, 7.07254875e+05, 8.83174744e+02,\n 8.22320510e+01, 4.25076609e+08, 6.28662635e+07,\n 1.56503068e+02]\n\nthelist = textwrap.dedent(\n '\\n'.join(ut0.sub(r'\\1', \"%20f\" % x) for x in a)).splitlines()\n\nprint '\\n'.join(thelist)\n\nemits:\n 550.056103\n 6773.83566\n 601001.513\n355425142.0\n 707254.875\n 883.174744\n 82.232051\n425076609.0\n 62866263.5\n 156.503068\n\n", "Pythons string formatting can both print out only the necessary decimals (with %g) or use a fixed set of decimals (with %f). However, you want to print out only the necessary decimals, except if the number is a whole number, then you want one decimal, and that makes it complex.\nThis means you would end up with something like:\ndef printarr(arr):\n for x in array:\n if math.floor(x) == x:\n res = '%.1f' % x\n else:\n res = '%.10g' % x\n print \"%*s\" % (15-res.find('.')+len(res), res)\n\nThis will first create a string either with 1 decimal, if the value is a whole number, or it will print with automatic decimals (but only up to 10 numbers) if it is not a fractional number. Lastly it will print it, adjusted so that the decimal point will be aligned.\nProbably, though, numpy actually does what you want, because you typically do want it to be in exponential mode if it's too long.\n" ]
[ 10, 2 ]
[]
[]
[ "code_golf", "formatting", "numpy", "python" ]
stackoverflow_0001025379_code_golf_formatting_numpy_python.txt
Q: How do I build a custom "list-type" entry to request.POST Basically I have a model with a ManyToMany field, and then a modelform derived from that model where that field is rendered as a "multiple choice" selectbox. In my template I'm having that field omitted, electing instead to prepare the values for that field in the view, then pass those prepared values into request.POST (actually a copy of request.POST because request.POST is immutable), then feed request.POST to the form and carry on as normal. I can't figure out how to do this, because request.POST isn't just a simple Python dictionary, but instead a QueryDict, which behaves a little differently. The field I need to populate is called "not_bases". When I create the widget using the form, it works perfectly well internally, but just not to my liking UI-wise. When I inspect the django-form submitted POST value via django's handy debug error window, the working QueryDict looks like this: <QueryDict: {u'not_bases': [u'005', u'00AR', u'00F', u'00FD'], [...] }> It appears the value for "not_bases" is a list, but it's not simply a list. I can't just .append() to it because it won't work. I dug around the documentation and found .update(), which appears to work, but doesn't. Here is my code: newPOST = request.POST.copy() for base in bases: newPOST.update({"not_bases": base.identifier}) and here is the output: <QueryDict: {u'not_bases': [u'KMER', u'KYIP'], u'reference': [u''], [...] }> But when I feed that QueryDict to the form, I get a form validation error that says "not_bases: Enter a list of values.". It's obvious that the list-looking things coming from the str() representation of the QueryDict are not the same in the two cases above, even though they look exactly the same. So how do I do this? A: It's really not clear what you're trying to do here, but I doubt that hacking the QueryDict is the right way to achieve it. If you are trying to customise the display of the not_bases field, you can simply override the definition in your modelform declaration: class MyModelForm(forms.ModelForm): not_bases = forms.ChoiceField(choices=[(base, base) for base in bases]) class Meta: model = MyModel Or, if you simply want to avoid showing it on the form, you can exclude it from the form and set the value after validation. class MyModelForm(forms.ModelForm): class Meta: model = MyModel exclude = ['not_bases'] .... if request.POST: if form.is_valid(): instance = form.save(commit=False) instance.not_bases = bases instance.save() Does either of these do what you want?
How do I build a custom "list-type" entry to request.POST
Basically I have a model with a ManyToMany field, and then a modelform derived from that model where that field is rendered as a "multiple choice" selectbox. In my template I'm having that field omitted, electing instead to prepare the values for that field in the view, then pass those prepared values into request.POST (actually a copy of request.POST because request.POST is immutable), then feed request.POST to the form and carry on as normal. I can't figure out how to do this, because request.POST isn't just a simple Python dictionary, but instead a QueryDict, which behaves a little differently. The field I need to populate is called "not_bases". When I create the widget using the form, it works perfectly well internally, but just not to my liking UI-wise. When I inspect the django-form submitted POST value via django's handy debug error window, the working QueryDict looks like this: <QueryDict: {u'not_bases': [u'005', u'00AR', u'00F', u'00FD'], [...] }> It appears the value for "not_bases" is a list, but it's not simply a list. I can't just .append() to it because it won't work. I dug around the documentation and found .update(), which appears to work, but doesn't. Here is my code: newPOST = request.POST.copy() for base in bases: newPOST.update({"not_bases": base.identifier}) and here is the output: <QueryDict: {u'not_bases': [u'KMER', u'KYIP'], u'reference': [u''], [...] }> But when I feed that QueryDict to the form, I get a form validation error that says "not_bases: Enter a list of values.". It's obvious that the list-looking things coming from the str() representation of the QueryDict are not the same in the two cases above, even though they look exactly the same. So how do I do this?
[ "It's really not clear what you're trying to do here, but I doubt that hacking the QueryDict is the right way to achieve it.\nIf you are trying to customise the display of the not_bases field, you can simply override the definition in your modelform declaration:\nclass MyModelForm(forms.ModelForm):\n not_bases = forms.ChoiceField(choices=[(base, base) for base in bases])\n\n class Meta:\n model = MyModel\n\nOr, if you simply want to avoid showing it on the form, you can exclude it from the form and set the value after validation.\nclass MyModelForm(forms.ModelForm):\n\n class Meta:\n model = MyModel\n exclude = ['not_bases']\n\n\n....\nif request.POST:\n if form.is_valid():\n instance = form.save(commit=False)\n instance.not_bases = bases\n instance.save()\n\nDoes either of these do what you want?\n" ]
[ 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001025216_django_python.txt
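For completeness, the multi-valued assignment the asker was reaching for does exist on QueryDict: setlist() replaces the whole list for a key in one call (MyModelForm below is just a stand-in for whatever form class is in play):

newPOST = request.POST.copy()          # mutable copy of the QueryDict
newPOST.setlist('not_bases', [base.identifier for base in bases])
form = MyModelForm(newPOST)

There is also appendlist() for adding one value at a time, which is effectively what the repeated update() calls were doing.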
Q: Call program from within a browser without using a webserver Is there a way to call a program (Python script) from a local HTML page? I have a YUI-colorpicker on that page and need to send its value to a microcontroller via rs232. (There is other stuff than the picker, so I can't code an application instead of an HTML page.) Later, this will migrate to a server, but I need a fast and easy solution now. Thanks. A: I see now that Daff mentioned the simple HTTP server, but I made an example on how you'd solve your problem (using BaseHTTPServer): import BaseHTTPServer HOST_NAME = 'localhost' PORT_NUMBER = 1337 class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler): def do_GET(s): s.send_response(200) s.send_header('Content-Type', 'text/html') s.end_headers() # Get parameters in query. params = {} index = s.path.rfind('?') if index >= 0: parts = s.path[index + 1:].split('&') for p in parts: try: a, b = p.split('=', 2) params[a] = b except: params[p] = '' # !!! # Check if there is a color parameter and send to controller... if 'color' in params: print 'Send something to controller...' # !!! s.wfile.write('<pre>%s</pre>' % params) if __name__ == '__main__': server_class = BaseHTTPServer.HTTPServer httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler) try: httpd.serve_forever() except KeyboardInterrupt: pass httpd.server_close() Now, from your JavaScript, you'd call http://localhost:1337/?color=ffaabb A: Python has a small built in Web server. If you already got Python to run with the RS232 you might need to read here on how to set up a very simple and basic webserver. An even easier one can look like this: import SimpleHTTPServer import SocketServer port = 8000 Handler = SimpleHTTPServer.SimpleHTTPRequestHandler httpd = SocketServer.TCPServer(("", port), Handler) httpd.serve_forever() Try to separate your source as well as possible, so that you won't have too much trouble to move it to a production ready Python capable webserver. A: If you want an HTML page to have some sort of server-side programming then you will need a webserver of some sort to do the processing. My suggestion would be to get a web server running on your development box, or try to accomplish what you need to do with a local desktop application or script. A: another quick solution is https://addons.mozilla.org/en-US/firefox/addon/3002 POW, it's a firefox extension that adds a simple web server with Server Side JS built in. You'd be able to access a command line and call a python script from there. A: No, you need some kind of server. Why not try out the portable webservers? You can run them from your usb drive. A: Try also XML-RPC it gives you a simple way to pass remote procedure calls from YUI towards a simple XMLRPC server and from that towards your rs232 device A: I see no reason why you can't setup a handler for .py/.bat/.vbs files in your browser. This should result in your chosen application running a script when you link to it. This won't work when you migrate to the server but as a testing platform it would work. Just remember to turn it off when you're done or you expose yourself to viruses from other sites.
Call program from within a browser without using a webserver
Is there a way to call a program (Python script) from a local HTML page? I have a YUI-colorpicker on that page and need to send its value to a microcontroller via rs232. (There is other stuff than the picker, so I can't code an application instead of an HTML page.) Later, this will migrate to a server, but I need a fast and easy solution now. Thanks.
[ "I see now that Daff mentioned the simple HTTP server, but I made an example on how you'd solve your problem (using BaseHTTPServer):\nimport BaseHTTPServer\n\nHOST_NAME = 'localhost'\nPORT_NUMBER = 1337\n\nclass MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):\n def do_GET(s):\n s.send_response(200)\n s.send_header('Content-Type', 'text/html')\n s.end_headers()\n\n # Get parameters in query.\n params = {}\n index = s.path.rfind('?')\n if index >= 0:\n parts = s.path[index + 1:].split('&')\n for p in parts:\n try:\n a, b = p.split('=', 2)\n params[a] = b\n except:\n params[p] = ''\n\n # !!!\n # Check if there is a color parameter and send to controller...\n if 'color' in params:\n print 'Send something to controller...'\n # !!!\n\n s.wfile.write('<pre>%s</pre>' % params)\n\nif __name__ == '__main__':\n server_class = BaseHTTPServer.HTTPServer\n httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)\n\n try:\n httpd.serve_forever()\n except KeyboardInterrupt:\n pass\n\n httpd.server_close()\n\nNow, from your JavaScript, you'd call http://localhost:1337/?color=ffaabb\n", "Python has a small built in Web server. If you already already got Python to run with the RS232 you might need to read here on how to set up a very simple and basic webserver.\nAn even easier one can look like this:\nimport SimpleHTTPServer\nimport SocketServer\n\nport = 8000\nHandler = SimpleHTTPServer.SimpleHTTPRequestHandler\nhttpd = SocketServer.TCPServer((\"\", port), Handler)\nhttpd.serve_forever()\n\nTry so separate you source as good as possible, to that you won't have too much trouble to move it to a production ready Python capable webserver.\n", "If you want an an HTML page to have some sort of server-side programming then you will need a webserver of some sort to do the processing.\nMy suggestion would be to get a web server running on your development box, or try to accomplish what you need to do with a local desktop application or script.\n", "another quick solution is https://addons.mozilla.org/en-US/firefox/addon/3002\nPOW, it's a firefox extension that adds a simple web server with Server Side JS built in. \nYou'd be able to access a command line and call a python script from there.\n", "No you need some kind of server. Wh not try out the portable webservers? You can run them from your usb drive.\n", "Try also XML-RPC it gives you a simple way to pass remote procedure calls from YUI towards a simple XMLRPC server and from that towards your rs232 device\n", "I see no reason why you can't setup a handler for .py/.bat/.vbs files in your browser. This should result in your chosen application running a script when you link to it. This won't work when you migrate to the server but as a testing platform it would work. Just remember to turn it off when you're done or you expose yourself to viruses from other sites.\n" ]
[ 6, 3, 1, 1, 0, 0, 0 ]
[]
[]
[ "browser", "html", "python" ]
stackoverflow_0001025817_browser_html_python.txt
Q: Django model query with custom select fields I'm using the row-level permission model known as django-granular-permissions (http://code.google.com/p/django-granular-permissions/). The permission model has just two more fields, which are content-type and object id. I've used the following query: User.objects.filter(Q(row_permission_set__name='staff') | \ Q(row_permission_set__name='student'), \ row_permission_set__object_id=labsite.id) I want to add is_staff and is_student boolean fields to the result set without querying every time I fetch the result. The Django documentation shows the extra() method of querysets, but I can't figure out what I should write for the plain SQL selection query with this relation. How to do this? A: .extra(select={'is_staff': "%s.name='staff'" % Permission._meta.db_table, 'is_student': "%s.name='student'" % Permission._meta.db_table, }) A: Normally you'd use select_related() for things like this, but unfortunately it doesn't work on reverse relationships. What you could do is turn the query around: users = [permission.user for permission in Permission.objects.select_related('user').filter(...)]
Django model query with custom select fields
I'm using the row-level permission model known as django-granular-permissions (http://code.google.com/p/django-granular-permissions/). The permission model has just two more fields, which are content-type and object id. I've used the following query: User.objects.filter(Q(row_permission_set__name='staff') | \ Q(row_permission_set__name='student'), \ row_permission_set__object_id=labsite.id) I want to add is_staff and is_student boolean fields to the result set without querying every time I fetch the result. The Django documentation shows the extra() method of querysets, but I can't figure out what I should write for the plain SQL selection query with this relation. How to do this?
[ ".extra(select={'is_staff': \"%s.name='staff'\" % Permission._meta.db_table, 'is_student': \"%s.name='student'\" % Permission._meta.db_table, }) \n\n", "Normally you'd use select_related() for things like this, but unfortunately it doesn't work on reverse relationships. What you could do is turn the query around:\nusers = [permission.user for permission in Permission.objects.select_related('user').filter(...)]\n\n" ]
[ 5, 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001026204_django_django_models_python.txt
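Spelled out against the original query, the first answer's extra() call looks roughly like this; the permission table name is read off the model rather than hard-coded, since the literal name depends on the app:

table = Permission._meta.db_table
users = User.objects.filter(
    Q(row_permission_set__name='staff') |
    Q(row_permission_set__name='student'),
    row_permission_set__object_id=labsite.id,
).extra(select={
    'is_staff': "%s.name = 'staff'" % table,
    'is_student': "%s.name = 'student'" % table,
})

Each User in the queryset then carries is_staff and is_student attributes computed by the database in the same query. One caveat: if User here is django.contrib.auth's User, the is_staff alias will shadow the model's own is_staff field, so different names may be safer.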
Q: How can you add a camera to a robot in the Breve Simulator? I've created a two-wheeled robot based on the Braitenberg vehicle. Our robots have two wheels and a PolygonDisk body (much like Khepera and e-puck robots). I would like to add a camera to the front of the robot. The problem then becomes how to control the camera and how to keep pointing it in the right direction (the same direction as the robot). How can you make the camera point in the same direction as the robot? A: After much trying and failing I finally made it work. So here is how I did it: The general idea is to have a link or object linked to the vehicle and then measure its rotation and location in order to find out in which direction the camera should be aimed. 1) Add an object that is linked to the robot: def addVisualCam(self): joint = None cam = breve.createInstances(breve.Link,1) cam.setShape(breve.createInstances(breve.PolygonCone, 1).initWith(10,0.08,0.08)) joint = breve.createInstances(breve.FixedJoint,1) # So ad-hoc it hurts. oh well... joint.setRelativeRotation(breve.vector(0,1,0), -3.14/2) joint.link(breve.vector(0,1.05,0), breve.vector(0,0,0), cam, self.vehicle.bodyLink, 0) joint.setDoubleSpring(300, 1.01000, -1.01000) self.vehicle.addDependency(joint) self.vehicle.addDependency(cam) cam.setColor(breve.vector(0,0,0)) self.cam = cam 2) Add this postIterate: def postIterate(self): look_at = self.cam.getLocation() + (self.cam.getRotation() * breve.vector(0,0,1)) look_from = -(self.cam.getRotation()*breve.vector(0,0,1)) self.vision.look(look_at, look_from)
How can you add a camera to a robot in the Breve Simulator?
I've created a two-wheeled robot based on the Braitenberg vehicle. Our robots have two wheels and a PolygonDisk body (much like Khepera and e-puck robots). I would like to add a camera to the front of the robot. The problem then becomes how to control the camera and how to keep pointing it in the right direction (the same direction as the robot). How can you make the camera point in the same direction as the robot?
[ "After much trying and failing I finally made it work.\nSo here is how I did it:\nThe general idea is to have an link or object linked to the vehicle and then measuring \nits rotation and location in order to find out in which direction the camera should be aimed.\n1) Add an object that is linked to the robot:\ndef addVisualCam(self):\n joint = None\n cam = breve.createInstances(breve.Link,1)\n cam.setShape(breve.createInstances(breve.PolygonCone, 1).initWith(10,0.08,0.08))\n joint = breve.createInstances(breve.FixedJoint,1)\n # So ad-hoc it hurts. oh well...\n joint.setRelativeRotation(breve.vector(0,1,0), -3.14/2)\n joint.link(breve.vector(0,1.05,0), breve.vector(0,0,0), cam, self.vehicle.bodyLink, 0)\n joint.setDoubleSpring(300, 1.01000, -1.01000)\n self.vehicle.addDependency(joint)\n self.vehicle.addDependency(cam)\n cam.setColor(breve.vector(0,0,0))\n self.cam = cam\n\n2) Add this postIterate:\ndef postIterate(self):\n look_at = self.cam.getLocation() + (self.cam.getRotation() * breve.vector(0,0,1))\n look_from = -(self.cam.getRotation()*breve.vector(0,0,1))\n self.vision.look(look_at, look_from)\n\n" ]
[ 1 ]
[]
[]
[ "python", "robotics", "simulation" ]
stackoverflow_0001011602_python_robotics_simulation.txt
Q: MySQLdb through proxy I'm using the above-mentioned Python lib to connect to a MySQL server. So far I've worked locally and all worked fine, until I realized I'll have to use my program in a network where all access goes through a proxy. Does anyone know how I can set the connections managed by that lib to use a proxy? Alternatively: do you know of another Python lib for MySQL that can handle this? I also have no idea if the proxy server will allow access to the standard MySQL port or how I can trick it into allowing it. Help on this is also welcomed. A: I use ssh tunneling for that kind of issue. For example I am developing an application that connects to an oracle db. In my code I write to connect to localhost and then from a shell I do: ssh -L1521:localhost:1521 [email protected] If you are in windows you can use PuTTY A: there are a lot of different possibilities here. the only way you're going to get a definitive answer is to talk to the person that runs the proxy. if this is a web app and the web server and the database server are both on the other side of a proxy, then you won't need to connect to the mysql server at all since the web app will do it for you. A: Do you have to do anything special to connect through a proxy? I would guess you just supply the correct parameters to the connect function. From the documentation, it looks as if you can specify the host name and port number with arguments to connect. Like this: connection = connect(host="dbserver.somewhere.com", port=nnnn) # etc..
MySQLdb through proxy
I'm using the above-mentioned Python lib to connect to a MySQL server. So far I've worked locally and all worked fine, until I realized I'll have to use my program in a network where all access goes through a proxy. Does anyone know how I can set the connections managed by that lib to use a proxy? Alternatively: do you know of another Python lib for MySQL that can handle this? I also have no idea if the proxy server will allow access to the standard MySQL port or how I can trick it into allowing it. Help on this is also welcomed.
[ "I use ssh tunneling for that kind of issues.\nFor example I am developing an application that connects to an oracle db.\nIn my code I write to connect to localhost and then from a shell I do:\nssh -L1521:localhost:1521 [email protected]\n\nIf you are in windows you can use PuTTY\n", "there are a lot of different possibilities here. the only way you're going to get a definitive answer is to talk to the person that runs the proxy.\nif this is a web app and the web server and the database serve are both on the other side of a proxy, then you won't need to connect to the mysql server at all since the web app will do it for you.\n", "Do you have to do anything special to connect through a proxy?\nI would guess you just supply the correct parameters to the connect function. From the documentation, it looks as if you can specify the host name and port number with an arguments to connect.\nLike this:\nconnection = connect(host=\"dbserver.somewhere.com\", port=nnnn)\n# etc..\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "mysql", "proxy", "python" ]
stackoverflow_0001027751_mysql_proxy_python.txt
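Combining the two answers: with a tunnel such as ssh -L 3306:dbserver:3306 user@gateway running, the MySQLdb side simply connects to the local end of the tunnel (the credentials below are placeholders):

import MySQLdb

# the tunnel forwards local port 3306 to the real MySQL server
conn = MySQLdb.connect(host="127.0.0.1", port=3306,
                       user="dbuser", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print cur.fetchone()

Whether this works at all still depends on the proxy or firewall allowing outbound SSH, which only the network's administrator can confirm.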
Q: Detect if X11 is available (python) Firstly, what is the best/simplest way to detect if X11 is running and available for a python script. parent process? session leader? X environment variables? other? Secondly, I would like to have a utility (python script) to present a gui if available, otherwise use a command line backed tool. Off the top of my head I thought of this -main python script (detects if gui is available and launches appropriate script) -gui or command line python script starts -both use a generic module to do actual work I am very open to suggestions to simplify this. A: Check the return code of xset -q: def X_is_running(): from subprocess import Popen, PIPE p = Popen(["xset", "-q"], stdout=PIPE, stderr=PIPE) p.communicate() return p.returncode == 0 As for the second part of your question, I suggest the following main.py structure: import common_lib def gui_main(): ... def cli_main(): ... def X_is_running(): ... if __name__ == '__main__': if X_is_running(): gui_main() else: cli_main() A: I'd check to see if DISPLAY is set ( this is what C API X11 applications do after all ). import os if os.environ.get('DISPLAY'): print("X11 is available") A: You could simply launch the gui part, and catch the exception it raises when X (or any other platform dependent graphics system is not available. Make sure you really have an interactive terminal before running the text based part. Your process might have been started without a visible terminal, as is common in graphical user environments like KDE, gnome or windows.
Detect if X11 is available (python)
Firstly, what is the best/simplest way to detect if X11 is running and available for a python script. parent process? session leader? X environment variables? other? Secondly, I would like to have a utility (python script) to present a gui if available, otherwise use a command line backed tool. Off the top of my head I thought of this -main python script (detects if gui is available and launches appropriate script) -gui or command line python script starts -both use a generic module to do actual work I am very open to suggestions to simplify this.
[ "Check the return code of xset -q:\ndef X_is_running():\n from subprocess import Popen, PIPE\n p = Popen([\"xset\", \"-q\"], stdout=PIPE, stderr=PIPE)\n p.communicate()\n return p.returncode == 0\n\nAs for the second part of your question, I suggest the following main.py structure:\nimport common_lib\n\ndef gui_main():\n ...\n\ndef cli_main():\n ...\n\ndef X_is_running():\n ...\n\nif __name__ == '__main__':\n if X_is_running():\n gui_main()\n else:\n cli_main()\n\n", "I'd check to see if DISPLAY is set ( this is what C API X11 applications do after all ). \nimport os\n\nif os.environ.get('DISPLAY'):\n print(\"X11 is available\")\n\n", "You could simply launch the gui part, and catch the exception it raises when X (or any other platform dependent graphics system is not available.\nMake sure you really have an interactive terminal before running the text based part. Your process might have been started without a visible terminal, as is common in graphical user environments like KDE, gnome or windows.\n" ]
[ 14, 10, 5 ]
[]
[]
[ "python", "user_interface" ]
stackoverflow_0001027894_python_user_interface.txt
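A sketch of the try-the-GUI-and-fall-back idea from the last answer, fitted into the main() structure of the accepted answer. Tkinter is used purely as a probe because Tk raises an exception when no usable display is present; any toolkit with a similar failure mode would do:

def gui_available():
    try:
        import Tkinter
        root = Tkinter.Tk()      # raises TclError without a usable display
        root.destroy()
        return True
    except Exception:
        return False

if __name__ == '__main__':
    if gui_available():
        gui_main()
    else:
        cli_main()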
Q: Writing with Python's built-in .csv module [Please note that this is a different question from the already answered How to replace a column using Python’s built-in .csv writer module?] I need to do a find and replace (specific to one column of URLs) in a huge Excel .csv file. Since I'm in the beginning stages of trying to teach myself a scripting language, I figured I'd try to implement the solution in python. I'm having trouble when I try to write back to a .csv file after making a change to the contents of an entry. I've read the official csv module documentation about how to use the writer, but there isn't an example that covers this case. Specifically, I am trying to get the read, replace, and write operations accomplished in one loop. However, one cannot use the same 'row' reference in both the for loop's argument and as the parameter for writer.writerow(). So, once I've made the change in the for loop, how should I write back to the file? edit: I implemented the suggestions from S. Lott and Jimmy, still the same result edit #2: I added the "rb" and "wb" to the open() functions, per S. Lott's suggestion import csv #filename = 'C:/Documents and Settings/username/My Documents/PALTemplateData.xls' csvfile = open("PALTemplateData.csv","rb") csvout = open("PALTemplateDataOUT.csv","wb") reader = csv.reader(csvfile) writer = csv.writer(csvout) changed = 0; for row in reader: row[-1] = row[-1].replace('/?', '?') writer.writerow(row) #this is the line that's causing issues changed=changed+1 print('Total URLs changed:', changed) edit: For your reference, this is the new full traceback from the interpreter: Traceback (most recent call last): File "C:\Documents and Settings\g41092\My Documents\palScript.py", line 13, in <module> for row in reader: _csv.Error: iterator should return strings, not bytes (did you open the file in text mode?) A: You cannot read and write the same file. source = open("PALTemplateData.csv","rb") reader = csv.reader(source , dialect) target = open("AnotherFile.csv","wb") writer = csv.writer(target , dialect) The normal approach to ALL file manipulation is to create a modified COPY of the original file. Don't try to update files in place. It's just a bad plan. Edit In the lines source = open("PALTemplateData.csv","rb") target = open("AnotherFile.csv","wb") The "rb" and "wb" are absolutely required. Every time you ignore those, you open the file for reading in the wrong format. You must use "rb" to read a .CSV file. There is no choice with Python 2.x. With Python 3.x, you can omit this, but use "r" explicitly to make it clear. You must use "wb" to write a .CSV file. There is no choice with Python 2.x. With Python 3.x, you must use "w". Edit It appears you are using Python3. You'll need to drop the "b" from "rb" and "wb". Read this: http://docs.python.org/3.0/library/functions.html#open A: Opening csv files as binary is just wrong. CSV are normal text files so You need to open them with source = open("PALTemplateData.csv","r") target = open("AnotherFile.csv","w") The error _csv.Error: iterator should return strings, not bytes (did you open the file in text mode?) comes because You are opening them in binary mode. 
When I was opening excel csv's with python, I used something like: try: # checking if file exists f = csv.reader(open(filepath, "r", encoding="cp1250"), delimiter=";", quotechar='"') except IOError: f = [] for record in f: # do something with record and it worked rather fast (I was opening two about 10MB each csv files, though I did this with python 2.6, not the 3.0 version). There are few working modules for working with excel csv files from within python - pyExcelerator is one of them. A: the problem is you're trying to write to the same file you're reading from. write to a different file and then rename it after deleting the original.
Writing with Python's built-in .csv module
[Please note that this is a different question from the already answered How to replace a column using Python’s built-in .csv writer module?] I need to do a find and replace (specific to one column of URLs) in a huge Excel .csv file. Since I'm in the beginning stages of trying to teach myself a scripting language, I figured I'd try to implement the solution in python. I'm having trouble when I try to write back to a .csv file after making a change to the contents of an entry. I've read the official csv module documentation about how to use the writer, but there isn't an example that covers this case. Specifically, I am trying to get the read, replace, and write operations accomplished in one loop. However, one cannot use the same 'row' reference in both the for loop's argument and as the parameter for writer.writerow(). So, once I've made the change in the for loop, how should I write back to the file? edit: I implemented the suggestions from S. Lott and Jimmy, still the same result edit #2: I added the "rb" and "wb" to the open() functions, per S. Lott's suggestion import csv #filename = 'C:/Documents and Settings/username/My Documents/PALTemplateData.xls' csvfile = open("PALTemplateData.csv","rb") csvout = open("PALTemplateDataOUT.csv","wb") reader = csv.reader(csvfile) writer = csv.writer(csvout) changed = 0; for row in reader: row[-1] = row[-1].replace('/?', '?') writer.writerow(row) #this is the line that's causing issues changed=changed+1 print('Total URLs changed:', changed) edit: For your reference, this is the new full traceback from the interpreter: Traceback (most recent call last): File "C:\Documents and Settings\g41092\My Documents\palScript.py", line 13, in <module> for row in reader: _csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
[ "You cannot read and write the same file.\nsource = open(\"PALTemplateData.csv\",\"rb\")\nreader = csv.reader(source , dialect)\n\ntarget = open(\"AnotherFile.csv\",\"wb\")\nwriter = csv.writer(target , dialect)\n\nThe normal approach to ALL file manipulation is to create a modified COPY of the original file. Don't try to update files in place. It's just a bad plan.\n\nEdit\nIn the lines\nsource = open(\"PALTemplateData.csv\",\"rb\")\n\ntarget = open(\"AnotherFile.csv\",\"wb\")\n\nThe \"rb\" and \"wb\" are absolutely required. Every time you ignore those, you open the file for reading in the wrong format.\nYou must use \"rb\" to read a .CSV file. There is no choice with Python 2.x. With Python 3.x, you can omit this, but use \"r\" explicitly to make it clear.\nYou must use \"wb\" to write a .CSV file. There is no choice with Python 2.x. With Python 3.x, you must use \"w\". \n\nEdit\nIt appears you are using Python3. You'll need to drop the \"b\" from \"rb\" and \"wb\".\nRead this: http://docs.python.org/3.0/library/functions.html#open\n", "Opening csv files as binary is just wrong. CSV are normal text files so You need to open them with\nsource = open(\"PALTemplateData.csv\",\"r\")\ntarget = open(\"AnotherFile.csv\",\"w\")\n\nThe error \n_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)\n\ncomes because You are opening them in binary mode.\nWhen I was opening excel csv's with python, I used something like:\ntry: # checking if file exists\n f = csv.reader(open(filepath, \"r\", encoding=\"cp1250\"), delimiter=\";\", quotechar='\"')\nexcept IOError:\n f = []\n\nfor record in f:\n # do something with record\n\nand it worked rather fast (I was opening two about 10MB each csv files, though I did this with python 2.6, not the 3.0 version).\nThere are few working modules for working with excel csv files from within python - pyExcelerator is one of them.\n", "the problem is you're trying to write to the same file you're reading from. write to a different file and then rename it after deleting the original.\n" ]
[ 10, 4, 2 ]
[]
[]
[ "csv", "file_io", "python", "python_3.x" ]
stackoverflow_0001020053_csv_file_io_python_python_3.x.txt
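A minimal runnable sketch of the copy-and-modify pattern the answers above converge on, in Python 3 syntax (the filenames here are illustrative placeholders, not the asker's paths):

import csv

SRC, DST = "input.csv", "output.csv"  # illustrative names

# Python 3: open CSV files in text mode with newline="" (per the csv docs),
# and always write to a *different* file from the one being read.
with open(SRC, "r", newline="") as source, open(DST, "w", newline="") as target:
    reader = csv.reader(source)
    writer = csv.writer(target)
    changed = 0
    for row in reader:
        if row:  # skip blank rows defensively
            row[-1] = row[-1].replace("/?", "?")
            changed += 1
        writer.writerow(row)

print("Total rows processed:", changed)

Once the copy is verified, os.replace(DST, SRC) can swap it in if an in-place effect is wanted.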
Q: sqlite version for python26 which versions of sqlite may best suite for python 2.6.2? A: If your Python distribution already comes with a copy of sqlite (such as the Windows distribution, or Debian), this is the version you should use. If you compile sqlite yourself, you should use the version that is recommended by the sqlite authors (currently 3.6.15). A: I'm using 3.4.0 out of inertia (it's what came with the Python 2.* versions I'm using) but there's no real reason (save powerful inertia;-) to avoid upgrading to 3.4.2, which fixes a couple of bugs that could lead to DB corruption and introduces no incompatibilities that I know of. (If you stick with 3.4.0 I'm told the key thing is to avoid VACUUM as it might mangle your data). Python 3.1 comes with SQLite 3.6.11 (which is supposed to work with Python 2.* just as well) and I might one day update to that (or probably to the latest, currently 3.6.15, to pick up a slew of minor bug fixes and enhancements) just to make sure I'm using identical releases on either Python 2 or Python 3 -- I've never observed a compatibility problem, but I doubt there has been thorough testing to support reading and writing the same DB from 3.4.0 and 3.6.11 (or any two releases so far apart from each other!-).
sqlite version for python26
which versions of sqlite may best suite for python 2.6.2?
[ "If your Python distribution already comes with a copy of sqlite (such as the Windows distribution, or Debian), this is the version you should use.\nIf you compile sqlite yourself, you should use the version that is recommended by the sqlite authors (currently 3.6.15).\n", "I'm using 3.4.0 out of inertia (it's what came with the Python 2.* versions I'm using) but there's no real reason (save powerful inertia;-) to avoid upgrading to 3.4.2, which fixes a couple of bugs that could lead to DB corruption and introduces no incompatibilities that I know of. (If you stick with 3.4.0 I'm told the key thing is to avoid VACUUM as it might mangle your data).\nPython 3.1 comes with SQLite 3.6.11 (which is supposed to work with Python 2.* just as well) and I might one day update to that (or probably to the latest, currently 3.6.15, to pick up a slew of minor bug fixes and enhancements) just to make sure I'm using identical releases on either Python 2 or Python 3 -- I've never observed a compatibility problem, but I doubt there has been thorough testing to support reading and writing the same DB from 3.4.0 and 3.6.11 (or any two releases so far apart from each other!-).\n" ]
[ 9, 0 ]
[]
[]
[ "python", "python_2.6", "sqlite" ]
stackoverflow_0001025493_python_python_2.6_sqlite.txt
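A quick way to check which SQLite release a given interpreter is actually linked against (Python 3 print syntax; the attributes themselves exist in 2.6's standard sqlite3 module as well):

import sqlite3

# sqlite_version reports the bundled SQLite library; version reports the
# sqlite3 (pysqlite) wrapper module itself.
print("SQLite library:", sqlite3.sqlite_version)
print("sqlite3 module:", sqlite3.version)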
Q: Python - when is 'import' required? mod1.py import mod2 class Universe: def __init__(self): pass def answer(self): return 42 u = Universe() mod2.show_answer(u) mod2.py #import mod1 -- not necessary def show_answer(thing): print thing.answer() Coming from a C++ background I had the feeling it was necessary to import the module containing the Universe class definition before the show_answer function would work. I.e. everything had to be declared before it could be used. Am I right in thinking this isn't necessary? This is duck typing, right? So if an import isn't required to see the methods of a class, I'd at least need it for the class definition itself and the top level functions of a module? In one script I've written, I even went as far as writing a base class to declare an interface with a set of methods, and then deriving concrete classes to inherit that interface, but I think I get it now - that's just wrong in Python, and whether an object has a particular method is checked at runtime at the point where the call is made? I realise Python is so much more dynamic than C++, it's taken me a while to see how little code you actually need to write! I think I know the answer to this question, but I just wanted to get clarification and make sure I was on the right track. UPDATE: Thanks for all the answers, I think I should clarify my question now: Does mod2.show_answer() need an import (of any description) to know that thing has a method called answer(), or is that determined dynamically at runtime? A: In this case you're right: show_answer() is given an object, of which it calls the method "answer". As long as the object given to show_answer() has such a method, it doesn't matter where the object comes from. If, however, you wanted to create an instance of Universe inside mod2, you'd have to import mod1, because Universe is not in the mod2 namespace, even after mod2 has been imported by mod1. A: import is all about names -- mostly "bare names" that are bound at top level (AKA global level, AKA module-level names) in a certain module, say mod2. When you've done import mod2, you get the mod2 namespace as an available name (top-level in your own module, if you're doing the import itself as top level, as is most common; but a local import within a function would make mod2 a local variable of that function, etc); and therefore you can use mod2.foobar to access the name foobar that's bound at top level in mod2. If you have no need to access such names, then you have no need to import mod2 in your own module. A: Think of import being more like the linker. With "import mod2" you are simply telling python that it can find the function in the file mod2.py A: import in Python loads the module into the given namespace. As such, is it as if the def show_answer actually existed in the mod1.py module. Because of this, mod2.py does not need to know of the Universe class and thus you do not need to import mod1 from mod2.py. A: Actually, in this case, importing mod1 in mod2.py should not work. Would it not create a circular reference? A: In fact, according to this explanation , the circular import will not work the way you want it to work: if you uncomment import mod1, the second module will still not know about the Universe. I think this is quite reasonable. If both of your files need access to the type of some specific object, like Universe, you have several choices: if your program is small, just use one file if it's big, you need to decide if your files both need to know how Universe is implemented, perhaps passing an object of not-yet-known type to show_answer is fine if that doesn't work for you, by all means put Universe in a separate module and load it first. A: I don't know much about C++, so can't directly compare it, but.. import basically loads the other Python script (mod2.py) into the current script (the top level of mod1.py). It's not so much a link, it's closer to an eval For example, in Python'ish psuedo-code: eval("mod2.py") is the same as.. from mod2 import * ..it executes mod2.py, and makes the functions/classes defined accessible in the current script. Both above snippets would allow you to call show_answer() (well, eval doesn't quite work like that, thus I called it pseudo code!) import mod2 ..is basically the same, but instead of bringing in all the functions into the "top level", it brings them into the mod2 module, so you call show_answer by doing.. mod2.show_answer Am I right in thinking [the import in mod2.py] isn't necessary? Absolutely. In fact if you try and import mod1 from mod2 you get a circular dependancy error (since mod2 then tries to import mod1 and so on..)
Python - when is 'import' required?
mod1.py import mod2 class Universe: def __init__(self): pass def answer(self): return 42 u = Universe() mod2.show_answer(u) mod2.py #import mod1 -- not necessary def show_answer(thing): print thing.answer() Coming from a C++ background I had the feeling it was necessary to import the module containing the Universe class definition before the show_answer function would work. I.e. everything had to be declared before it could be used. Am I right in thinking this isn't necessary? This is duck typing, right? So if an import isn't required to see the methods of a class, I'd at least need it for the class definition itself and the top level functions of a module? In one script I've written, I even went as far as writing a base class to declare an interface with a set of methods, and then deriving concrete classes to inherit that interface, but I think I get it now - that's just wrong in Python, and whether an object has a particular method is checked at runtime at the point where the call is made? I realise Python is so much more dynamic than C++, it's taken me a while to see how little code you actually need to write! I think I know the answer to this question, but I just wanted to get clarification and make sure I was on the right track. UPDATE: Thanks for all the answers, I think I should clarify my question now: Does mod2.show_answer() need an import (of any description) to know that thing has a method called answer(), or is that determined dynamically at runtime?
[ "In this case you're right: show_answer() is given an object, of which it calls the method \"answer\". As long as the object given to show_answer() has such a method, it doesn't matter where the object comes from.\nIf, however, you wanted to create an instance of Universe inside mod2, you'd have to import mod1, because Universe is not in the mod2 namespace, even after mod2 has been imported by mod1.\n", "import is all about names -- mostly \"bare names\" that are bound at top level (AKA global level, AKA module-level names) in a certain module, say mod2. When you've done import mod2, you get the mod2 namespace as an available name (top-level in your own module, if you're doing the import itself as top level, as is most common; but a local import within a function would make mod2 a local variable of that function, etc); and therefore you can use mod2.foobar to access the name foobar that's bound at top level in mod2. If you have no need to access such names, then you have no need to import mod2 in your own module.\n", "Think of import being more like the linker.\nWith \"import mod2\" you are simply telling python that it can find the function in the file mod2.py \n", "import in Python loads the module into the given namespace. As such, is it as if the def show_answer actually existed in the mod1.py module. Because of this, mod2.py does not need to know of the Universe class and thus you do not need to import mod1 from mod2.py.\n", "Actually, in this case, importing mod1 in mod2.py should not work.\nWould it not create a circular reference? \n", "In fact, according to this explanation , the circular import will not work the way you want it to work: if you uncomment import mod1, the second module will still not know about the Universe. \nI think this is quite reasonable. If both of your files need access to the type of some specific object, like Universe, you have several choices: \n\nif your program is small, just use one file\nif it's big, you need to decide if your files both need to know how Universe is implemented, perhaps passing an object of not-yet-known type to show_answer is fine\nif that doesn't work for you, by all means put Universe in a separate module and load it first.\n\n", "I don't know much about C++, so can't directly compare it, but..\nimport basically loads the other Python script (mod2.py) into the current script (the top level of mod1.py). It's not so much a link, it's closer to an eval\nFor example, in Python'ish psuedo-code:\neval(\"mod2.py\")\n\nis the same as..\nfrom mod2 import *\n\n..it executes mod2.py, and makes the functions/classes defined accessible in the current script.\nBoth above snippets would allow you to call show_answer() (well, eval doesn't quite work like that, thus I called it pseudo code!)\nimport mod2\n\n..is basically the same, but instead of bringing in all the functions into the \"top level\", it brings them into the mod2 module, so you call show_answer by doing..\nmod2.show_answer\n\n\nAm I right in thinking [the import in mod2.py] isn't necessary?\n\nAbsolutely. In fact if you try and import mod1 from mod2 you get a circular dependancy error (since mod2 then tries to import mod1 and so on..)\n" ]
[ 6, 4, 1, 1, 1, 1, 1 ]
[]
[]
[ "import", "module", "python" ]
stackoverflow_0001027557_import_module_python.txt
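A one-file sketch of the duck-typing point made above: show_answer() never imports or even names the classes it is handed, and the answer() lookup happens at call time (Python 3 syntax; the second class is a hypothetical extra type added to make the point):

def show_answer(thing):
    # No import needed: only a callable .answer() attribute is required.
    print(thing.answer())

class Universe:
    def answer(self):
        return 42

class Magic8Ball:  # hypothetical second type
    def answer(self):
        return "ask again later"

for obj in (Universe(), Magic8Ball()):
    show_answer(obj)  # resolved dynamically at runtime, not at import time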
Q: Python MySQLdb update query fails Okay. I've built here a mysql query browser, like navicat. Using MySQLdb to perform queries. Here's the weird part. When i run the query through the program(using MySQLdb), it gives me success, affected rows = 1, but when i look at it in phpmyadmin, the value hasn't changed. so before i perform the query, i print it out, copy and paste into phpmyadmin's query window, hit go and it works. So long story short, update query isn't working, but when i copy and paste into phpmyadmin, it works. self.tbl.sql.use(self.tbl.database) # switches to correct database. I've printed this and it uses the corrected db if self.tbl.sql.execute(query) == True: print sql_obj.rows_affected() # returns 1 (since i only do 1 query) And here's the part of the SQL class def execute(self, query): try: self.cursor.execute(query) return True except MySQLdb.ProgrammingError as error: print "---->SQL Error: %s" % error return False except MySQLdb.IntegrityError as e: print "--->SQL Error: %s" % e return False So any ideas what could be happening? A: I believe @Jason Creighton and @S.Lott are correct. At least if the table that you're updating is on a transactional storage engine. InnoDB is transactional, ISAM is not. You either have to call commit() on your connection object before closing it, or you must set the connection to autocommit mode. I am not sure how you do that for a MySQLdb connection, I guess you either set an argument to the connection constructor, or set a property after creating the connection object. Something like: conn = mysql.connection(host, port, autocommit=True) # or conn = mysql.connection(host, port) conn.autocommit(True) A: Just a guess: Perhaps the code in Python is running within a transaction, and the transaction might need to be explicitly committed? Edit: There's an entry in the MySQLdb FAQ that might be relevant.
Python MySQLdb update query fails
Okay. I've built here a mysql query browser, like navicat. Using MySQLdb to perform queries. Here's the weird part. When i run the query through the program(using MySQLdb), it gives me success, affected rows = 1, but when i look at it in phpmyadmin, the value hasn't changed. so before i perform the query, i print it out, copy and paste into phpmyadmin's query window, hit go and it works. So long story short, update query isn't working, but when i copy and paste into phpmyadmin, it works. self.tbl.sql.use(self.tbl.database) # switches to correct database. I've printed this and it uses the corrected db if self.tbl.sql.execute(query) == True: print sql_obj.rows_affected() # returns 1 (since i only do 1 query) And here's the part of the SQL class def execute(self, query): try: self.cursor.execute(query) return True except MySQLdb.ProgrammingError as error: print "---->SQL Error: %s" % error return False except MySQLdb.IntegrityError as e: print "--->SQL Error: %s" % e return False So any ideas what could be happening?
[ "I believe @Jason Creighton and @S.Lott are correct.\nAt least if the table that you're updating is on a transactional storage engine. InnoDB is transactional, ISAM is not.\nYou either have to call commit() on your connection object before closing it, or you must set the connection to autocommit mode. I am not sure how you do that for a MySQLdb connection, I guess you either set an argument to the connection constructor, or set a property after creating the connection object.\nSomething like:\nconn = mysql.connection(host, port, autocommit=True)\n\n# or\nconn = mysql.connection(host, port)\nconn.autocommit(True)\n\n", "Just a guess: Perhaps the code in Python is running within a transaction, and the transaction might need to be explicitly committed?\nEdit: There's an entry in the MySQLdb FAQ that might be relevant.\n" ]
[ 20, 16 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0001028671_mysql_python.txt
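A self-contained illustration of the commit point, using sqlite3 as a stand-in since it follows the same DB-API transaction pattern; with MySQLdb you would call commit() on the real connection, or enable autocommit, as its FAQ describes:

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, val TEXT)")
cur.execute("INSERT INTO t VALUES (1, 'old')")
cur.execute("UPDATE t SET val = 'new' WHERE id = 1")
print("rows affected:", cur.rowcount)  # already 1 here, before any commit
conn.commit()  # without this, the update can be rolled back and lost
conn.close()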
Q: Is there a Python interface to the Apache scoreboard (for server statistics)? In short: Is there an existing open-source Python interface for the Apache scoreboard IPC facility? I need to collect statistics from a running server WITHOUT using the "mod_status" HTTP interface, and I'd like to avoid Perl if possible. Some background: As I understand it, the Apache web server uses a functionality called the "scoreboard" to handle inter-process coordination. This may be purely in-memory, or it may be file-backed shared memory. (PLEASE correct me if that's a mis-statement!) Among other uses, "mod_status" lets you query a special path on a properly-configured server, receiving back a dynamically-generated page with a human-readable breakdown of Apache's overall functioning: uptime, request count, aggregate data transfer size, and process/thread status. (VERY useful information for monitoring perforamnce, or troubleshooting running servers that can't be shut down for debugging.) But what if you need the Apache status, but you can't open an HTTP connection to the server? (My organization sees this case from time to time. For example, the Slowloris attack.) What are some different ways that we can get the scoreboard statistics, and is there a Python interface for any of those methods? Note that a Perl module, Apache::Scoreboard, appears to be able to do this. But I'm not at all sure whether it can reach a local server's stats directly (shared memory, with or without a backing file), or whether it has to make a TCP connection to the localhost interface. So I'm not even sure whether this Perl module can do what we're asking. Beyond that, I'd like to avoid Perl in the solution for independent organizational reasons (no offense, Perl guys!). Also, if people are using a completely different method to grab these statistics, I would be interested in learning about it. A: Apache::Scoreboard can both fetch the scoreboard over HTTP, or, if it is loaded into the same server, access the scoreboard memory directly. This is done via a XS extension (i.e. native C). See httpd/include/scoreboard.h for how to access the in-memory scoreboard from C. If you're running in mod_python, you should be able to use the same trick as Apache::Scoreboard: write a C extension to access the scoreboard directly.
Is there a Python interface to the Apache scoreboard (for server statistics)?
In short: Is there an existing open-source Python interface for the Apache scoreboard IPC facility? I need to collect statistics from a running server WITHOUT using the "mod_status" HTTP interface, and I'd like to avoid Perl if possible. Some background: As I understand it, the Apache web server uses a functionality called the "scoreboard" to handle inter-process coordination. This may be purely in-memory, or it may be file-backed shared memory. (PLEASE correct me if that's a mis-statement!) Among other uses, "mod_status" lets you query a special path on a properly-configured server, receiving back a dynamically-generated page with a human-readable breakdown of Apache's overall functioning: uptime, request count, aggregate data transfer size, and process/thread status. (VERY useful information for monitoring perforamnce, or troubleshooting running servers that can't be shut down for debugging.) But what if you need the Apache status, but you can't open an HTTP connection to the server? (My organization sees this case from time to time. For example, the Slowloris attack.) What are some different ways that we can get the scoreboard statistics, and is there a Python interface for any of those methods? Note that a Perl module, Apache::Scoreboard, appears to be able to do this. But I'm not at all sure whether it can reach a local server's stats directly (shared memory, with or without a backing file), or whether it has to make a TCP connection to the localhost interface. So I'm not even sure whether this Perl module can do what we're asking. Beyond that, I'd like to avoid Perl in the solution for independent organizational reasons (no offense, Perl guys!). Also, if people are using a completely different method to grab these statistics, I would be interested in learning about it.
[ "Apache::Scoreboard can both fetch the scoreboard over HTTP, or, if it is loaded into the same server, access the scoreboard memory directly. This is done via a XS extension (i.e. native C). See httpd/include/scoreboard.h for how to access the in-memory scoreboard from C.\nIf you're running in mod_python, you should be able to use the same trick as Apache::Scoreboard: write a C extension to access the scoreboard directly.\n" ]
[ 3 ]
[]
[]
[ "apache", "http", "perl", "python" ]
stackoverflow_0001028408_apache_http_perl_python.txt
Q: Stop execution of a script called with execfile Is it possible to break the execution of a Python script called with the execfile function without using an if/else statement? I've tried exit(), but it doesn't allow main.py to finish. # main.py print "Main starting" execfile("script.py") print "This should print" # script.py print "Script starting" a = False if a == False: # Sanity checks. Script should break here # <insert magic command> # I'd prefer not to put an "else" here and have to indent the rest of the code print "this should not print" # lots of lines below A: main can wrap the execfile into a try/except block: sys.exit raises a SystemExit exception which main can catch in the except clause in order to continue its execution normally, if desired. I.e., in main.py: try: execfile('whatever.py') except SystemExit: print "sys.exit was called but I'm proceeding anyway (so there!-)." print "so I'll print this, etc, etc" and whatever.py can use sys.exit(0) or whatever to terminate its own execution only. Any other exception will work as well as long as it's agreed between the source to be execfiled and the source doing the execfile call -- but SystemExit is particularly suitable as its meaning is pretty clear! A: # script.py def main(): print "Script starting" a = False if a == False: # Sanity checks. Script should break here # <insert magic command> return; # I'd prefer not to put an "else" here and have to indent the rest of the code print "this should not print" # lots of lines bellow if __name__ == "__main__": main(); I find this aspect of Python (the __name__ == "__main__", etc.) irritating. A: What's wrong with plain old exception handling? scriptexit.py class ScriptExit( Exception ): pass main.py from scriptexit import ScriptExit print "Main Starting" try: execfile( "script.py" ) except ScriptExit: pass print "This should print" script.py from scriptexit import ScriptExit print "Script starting" a = False if a == False: # Sanity checks. Script should break here raise ScriptExit( "A Good Reason" ) # I'd prefer not to put an "else" here and have to indent the rest of the code print "this should not print" # lots of lines below
Stop execution of a script called with execfile
Is it possible to break the execution of a Python script called with the execfile function without using an if/else statement? I've tried exit(), but it doesn't allow main.py to finish. # main.py print "Main starting" execfile("script.py") print "This should print" # script.py print "Script starting" a = False if a == False: # Sanity checks. Script should break here # <insert magic command> # I'd prefer not to put an "else" here and have to indent the rest of the code print "this should not print" # lots of lines below
[ "main can wrap the execfile into a try/except block: sys.exit raises a SystemExit exception which main can catch in the except clause in order to continue its execution normally, if desired. I.e., in main.py:\ntry:\n execfile('whatever.py')\nexcept SystemExit:\n print \"sys.exit was called but I'm proceeding anyway (so there!-).\"\nprint \"so I'll print this, etc, etc\"\n\nand whatever.py can use sys.exit(0) or whatever to terminate its own execution only. Any other exception will work as well as long as it's agreed between the source to be execfiled and the source doing the execfile call -- but SystemExit is particularly suitable as its meaning is pretty clear!\n", "# script.py\ndef main():\n print \"Script starting\"\n a = False\n\n if a == False:\n # Sanity checks. Script should break here\n # <insert magic command> \n return;\n # I'd prefer not to put an \"else\" here and have to indent the rest of the code\n print \"this should not print\"\n # lots of lines bellow\n\nif __name__ == \"__main__\":\n main();\n\nI find this aspect of Python (the __name__ == \"__main__\", etc.) irritating.\n", "What's wrong with plain old exception handling?\nscriptexit.py\nclass ScriptExit( Exception ): pass\n\nmain.py\nfrom scriptexit import ScriptExit\nprint \"Main Starting\"\ntry:\n execfile( \"script.py\" )\nexcept ScriptExit:\n pass\nprint \"This should print\"\n\nscript.py\nfrom scriptexit import ScriptExit\nprint \"Script starting\"\na = False\n\nif a == False:\n # Sanity checks. Script should break here\n raise ScriptExit( \"A Good Reason\" )\n\n# I'd prefer not to put an \"else\" here and have to indent the rest of the code\nprint \"this should not print\"\n# lots of lines below\n\n" ]
[ 22, 4, 1 ]
[]
[]
[ "control_flow", "execfile", "python" ]
stackoverflow_0001028609_control_flow_execfile_python.txt
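A runnable variant of the try/except SystemExit idea: since Python 3 dropped execfile(), exec() on the child's source stands in for it here, and the child code is inlined as a string so the sketch needs no extra file:

import sys

child_source = """
print("script starting")
if True:  # stand-in for the failed sanity check
    sys.exit("sanity check failed")
print("this should not print")
"""

try:
    exec(child_source)  # exec() shares this module's globals, so sys is visible
except SystemExit as exc:
    print("child script stopped early:", exc)

print("main continues normally")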
Q: Can I avoid handling a file twice if I need the number of lines and I need to append to the file? I am writing a file to disk in stages. As I write it I need to know the line numbers that I am writing to use to build an index. The file now has 12 million lines so I need to build the index on the fly. I am doing this in four steps, with four groupings of the value that I am indexing on. Based on some examples I found elsewhere on SO I decided that to keep my functions as clean as possible I would get the linesize of the file before I start writing so I can use that count to continue to build my index. So I have run across this problem, theoretically I don't know if I am adding the first chunk or the last chunk to my file so I thought to get the current size I would myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt','a') try: num_lines=sum(1 for line in myFile) except IOError: num_lines=0 When I do this the result is always 0-even if myFile exists and has a num_lines >0 If I do this instead: myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt') try: num_lines=sum(1 for line in myFile) except IOError: num_lines=0 I get the correct value iff myFile exists. byt if myFile does not exist, if I am on the first cycle, I get an error message. As I was writing out this question it occurred to me that the reason for the value num_lines=0 on every case the file exists is because the file is being opened for appending to so the file is opened at the last line and is now awaiting for lines to be delivered. So this fixes the problem try: myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt') num_lines=sum(1 for line in myFile) except IOError: num_lines=0 My question is whether or not this can be done another way. The reason I ask is because I have to now close myFile and reopen it for appending: That is to do the work I need to do now that I have the ending index number for the data that is already in the file I have to myFile.close() myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt','a') Now, here is where maybe I am learning something- given that I have to open the file twice then maybe getting the starting index (num_lines) should be moved to a function def getNumbLines(myFileRef): try: myFile=open(myFileRef) num_lines=sum(1 for line in myFile) myFile.close() except IOError: num_lines=0 return num_lines It would be cleaner if I did not have to open/handle the file twice. Based on Eric Wendelin's answer I can just do: myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt','a+') num_lines=sum(1 for line in myFile) Thanks A: You can open a file for reading AND writing: myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt','r+') Try that. UPDATE: Ah, my mistake since the file might not exist. Use 'a+' instead of 'r+'. A: Open the file for updates ('u' or 'rw', I forget). Now you can read it until EOF and then start writing to append. A: I assume you are writing the file, in that case why don't you maintain a separate track of how many lines you have already written? to me it looks very wasteful that you have to read whole file line by line just to get line number. A: A bit late to the party but for the file existing problem why not use (Psuedocode): If FileExists(C:\NEWMASTERLIST\FULLLIST.txt') then begin Open file etc Calc numlines etc end else Create new file etc NumLines := 0; end;
Can I avoid handling a file twice if I need the number of lines and I need to append to the file?
I am writing a file to disk in stages. As I write it I need to know the line numbers that I am writing to use to build an index. The file now has 12 million lines so I need to build the index on the fly. I am doing this in four steps, with four groupings of the value that I am indexing on. Based on some examples I found elsewhere on SO I decided that to keep my functions as clean as possible I would get the linesize of the file before I start writing so I can use that count to continue to build my index. So I have run across this problem, theoretically I don't know if I am adding the first chunk or the last chunk to my file so I thought to get the current size I would myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt','a') try: num_lines=sum(1 for line in myFile) except IOError: num_lines=0 When I do this the result is always 0-even if myFile exists and has a num_lines >0 If I do this instead: myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt') try: num_lines=sum(1 for line in myFile) except IOError: num_lines=0 I get the correct value iff myFile exists. byt if myFile does not exist, if I am on the first cycle, I get an error message. As I was writing out this question it occurred to me that the reason for the value num_lines=0 on every case the file exists is because the file is being opened for appending to so the file is opened at the last line and is now awaiting for lines to be delivered. So this fixes the problem try: myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt') num_lines=sum(1 for line in myFile) except IOError: num_lines=0 My question is whether or not this can be done another way. The reason I ask is because I have to now close myFile and reopen it for appending: That is to do the work I need to do now that I have the ending index number for the data that is already in the file I have to myFile.close() myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt','a') Now, here is where maybe I am learning something- given that I have to open the file twice then maybe getting the starting index (num_lines) should be moved to a function def getNumbLines(myFileRef): try: myFile=open(myFileRef) num_lines=sum(1 for line in myFile) myFile.close() except IOError: num_lines=0 return num_lines It would be cleaner if I did not have to open/handle the file twice. Based on Eric Wendelin's answer I can just do: myFile=open(r'C:\NEWMASTERLIST\FULLLIST.txt','a+') num_lines=sum(1 for line in myFile) Thanks
[ "You can open a file for reading AND writing:\nmyFile=open(r'C:\\NEWMASTERLIST\\FULLLIST.txt','r+')\n\nTry that.\nUPDATE: Ah, my mistake since the file might not exist. Use 'a+' instead of 'r+'.\n", "Open the file for updates ('u' or 'rw', I forget). Now you can read it until EOF and then start writing to append.\n", "I assume you are writing the file, in that case why don't you maintain a separate track of how many lines you have already written?\nto me it looks very wasteful that you have to read whole file line by line just to get line number.\n", "A bit late to the party but for the file existing problem why not use (Psuedocode):\nIf FileExists(C:\\NEWMASTERLIST\\FULLLIST.txt') then\nbegin\n Open file etc \n Calc numlines etc\nend\nelse\n Create new file etc\n NumLines := 0;\nend;\n\n" ]
[ 4, 0, 0, 0 ]
[]
[]
[ "file", "python" ]
stackoverflow_0001028122_file_python.txt
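A sketch of the single-open idea from the question's closing edit: "a+" creates the file if missing and always appends on write, but note that in Python 3 the initial position is at end-of-file, so an explicit seek(0) is needed before counting (the path is illustrative):

path = "FULLLIST.txt"  # illustrative

with open(path, "a+") as f:
    f.seek(0)                      # "a+" starts at EOF; rewind to read
    num_lines = sum(1 for _ in f)  # 0 for a brand-new file
    f.write("line %d\n" % (num_lines + 1))  # append mode always writes at EOF

print("lines before append:", num_lines)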
Q: Formatting cells in Excel with Python How do I format cells in Excel with python? In particular I need to change the font of several subsequent rows to be regular instead of bold. A: Using xlwt: from xlwt import * font0 = Font() font0.bold = False style0 = XFStyle() style0.font = font0 wb = Workbook() ws0 = wb.add_sheet('0') ws0.write(0, 0, 'myNormalText', style0) font1 = Font() font1.bold = True style1 = XFStyle() style1.font = font1 ws0.write(0, 1, 'myBoldText', style1) wb.save('format.xls') A: For using Python for Excel operations in general, I highly recommend checking out this site. There are three python modules that allow you to do pretty much anything you need: xlrd (reading), xlwt (writing), and xlutils (copy/modify/filter). On the site I mentioned, there is quite a bit of associated information including documentation and examples. In particular, you may be interested in this example. Good luck! A: For generic examples of Excel scripting from Python, this snippet is very handy. It doesn't specifically do the "change font to regular", but that's just range.Font.Bold = False in a function otherwise very similar to the set_border one in that snippet. A: Here is a brief introduction to using xlwt and the complementary xlrd (for reading .xls files). However, the Reddit thread where I discovered that article has a huge number of useful bits of advice, including some cautionary notes and how to use the win32com module to write Excel files better (see this comment, for example) - frankly, I think the code is easier to read/maintain. You can probably learn a lot more over at the pretty active python-excel group.
Formatting cells in Excel with Python
How do I format cells in Excel with python? In particular I need to change the font of several subsequent rows to be regular instead of bold.
[ "Using xlwt:\nfrom xlwt import *\n\nfont0 = Font()\nfont0.bold = False\n\nstyle0 = XFStyle()\nstyle0.font = font0\n\nwb = Workbook()\nws0 = wb.add_sheet('0')\n\nws0.write(0, 0, 'myNormalText', style0)\n\nfont1 = Font()\nfont1.bold = True\n\nstyle1 = XFStyle()\nstyle1.font = font1\n\nws0.write(0, 1, 'myBoldText', style1)\n\nwb.save('format.xls')\n\n", "For using Python for Excel operations in general, I highly recommend checking out this site. There are three python modules that allow you to do pretty much anything you need: xlrd (reading), xlwt (writing), and xlutils (copy/modify/filter). On the site I mentioned, there is quite a bit of associated information including documentation and examples. In particular, you may be interested in this example. Good luck!\n", "For generic examples of Excel scripting from Python, this snippet is very handy. It doesn't specifically do the \"change font to regular\", but that's just range.Font.Bold = False in a function otherwise very similar to the set_border one in that snippet.\n", "Here is a brief introduction to using xlwt and the complementary xlrd (for reading .xls files). However, the Reddit thread where I discovered that article has a huge number of useful bits of advice, including some cautionary notes and how to use the win32com module to write Excel files better (see this comment, for example) - frankly, I think the code is easier to read/maintain. You can probably learn a lot more over at the pretty active python-excel group.\n" ]
[ 3, 3, 1, 1 ]
[]
[]
[ "excel", "formatting", "python" ]
stackoverflow_0001029500_excel_formatting_python.txt
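For completeness, xlwt's easyxf() shorthand builds the same Font/XFStyle objects from a one-line spec string. This assumes the third-party xlwt package is installed, and the workbook written is a hypothetical demo:

import xlwt  # third-party; pip install xlwt

bold = xlwt.easyxf("font: bold on")
plain = xlwt.easyxf("font: bold off")

book = xlwt.Workbook()
sheet = book.add_sheet("demo")
sheet.write(0, 0, "header", bold)           # bold header cell
for r in range(1, 4):
    sheet.write(r, 0, "row %d" % r, plain)  # subsequent rows regular
book.save("format_demo.xls")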
Q: Formatting with mako Anyone know how to format the length of a string with Mako? The equivalent of print "%20s%10s" % ("string 1", "string 2")? A: you can use python's string formatting fairly easily in mako ${"%20s%10s" % ("string 1", "string 2")} giving: >>> from mako.template import Template >>> Template('${"%20s%10s" % ("string 1", "string 2")}').render() ' string 1 string 2'
Formatting with mako
Anyone know how to format the length of a string with Mako? The equivalent of print "%20s%10s" % ("string 1", "string 2")?
[ "you can use python's string formatting fairly easily in mako\n${\"%20s%10s\" % (\"string 1\", \"string 2\")}\n\ngiving:\n>>> from mako.template import Template\n>>> Template('${\"%20s%10s\" % (\"string 1\", \"string 2\")}').render()\n' string 1 string 2'\n\n" ]
[ 7 ]
[]
[]
[ "mako", "python", "template_engine" ]
stackoverflow_0001029965_mako_python_template_engine.txt
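The same %-formatting also works with template context variables rather than literals; a small sketch assuming the Mako package is installed (the variable names are illustrative):

from mako.template import Template  # third-party; pip install Mako

tpl = Template('${"%-20s%10s" % (left, right)}')
# left column padded to 20 chars, right column right-aligned in 10
print(tpl.render(left="string 1", right="string 2"))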
Q: Calling a hook function every time an Exception is raised Let's say I want to be able to log to file every time any exception is raised, anywhere in my program. I don't want to modify any existing code. Of course, this could be generalized to being able to insert a hook every time an exception is raised. Would the following code be considered safe for doing such a thing? class MyException(Exception): def my_hook(self): print('---> my_hook() was called'); def __init__(self, *args, **kwargs): global BackupException; self.my_hook(); return BackupException.__init__(self, *args, **kwargs); def main(): global BackupException; global Exception; BackupException = Exception; Exception = MyException; raise Exception('Contrived Exception'); if __name__ == '__main__': main(); A: If you want to log uncaught exceptions, just use sys.excepthook. I'm not sure I see the value of logging all raised exceptions, since lots of libraries will raise/catch exceptions internally for things you probably won't care about. A: Your code as far as I can tell would not work. __init__ has to return None and you are trying to return an instance of backup exception. In general if you would like to change what instance is returned when instantiating a class you should override __new__. Unfortunately you can't change any of the attributes on the Exception class. If that was an option you could have changed Exception.__new__ and placed your hook there. the "global Exception" trick will only work for code in the current module. Exception is a builtin and if you really want to change it globally you need to import __builtin__; __builtin__.Exception = MyException Even if you changed __builtin__.Exception it will only affect future uses of Exception, subclasses that have already been defined will use the original Exception class and will be unaffected by your changes. You could loop over Exception.__subclasses__ and change the __bases__ for each one of them to insert your Exception subclass there. There are subclasses of Exception that are also built-in types that you also cannot modify, although I'm not sure you would want to hook any of them (think StopIterration). I think that the only decent way to do what you want is to patch the Python sources. A: This code will not affect any exception classes that were created before the start of main, and most of the exceptions that happen will be of such kinds (KeyError, AttributeError, and so forth). And you can't really affect those "built-in exceptions" in the most important sense -- if anywhere in your code is e.g. a 1/0, the real ZeroDivisionError will be raised (by Python's own internals), not whatever else you may have bound to that exceptions' name. So, I don't think your code can do what you want (despite all the semicolons, it's still supposed to be Python, right?) -- it could be done by patching the C sources for the Python runtime, essentially (e.g. by providing a hook potentially caught on any exception even if it's later caught) -- such a hook currently does not exist because the use cases for it would be pretty rare (for example, a StopIteration is always raised at the normal end of every for loop -- and caught, too; why on Earth would one want to trace that, and the many other routine uses of caught exceptions in the Python internals and standard library?!).
Calling a hook function every time an Exception is raised
Let's say I want to be able to log to file every time any exception is raised, anywhere in my program. I don't want to modify any existing code. Of course, this could be generalized to being able to insert a hook every time an exception is raised. Would the following code be considered safe for doing such a thing? class MyException(Exception): def my_hook(self): print('---> my_hook() was called'); def __init__(self, *args, **kwargs): global BackupException; self.my_hook(); return BackupException.__init__(self, *args, **kwargs); def main(): global BackupException; global Exception; BackupException = Exception; Exception = MyException; raise Exception('Contrived Exception'); if __name__ == '__main__': main();
[ "If you want to log uncaught exceptions, just use sys.excepthook.\nI'm not sure I see the value of logging all raised exceptions, since lots of libraries will raise/catch exceptions internally for things you probably won't care about.\n", "Your code as far as I can tell would not work. \n\n__init__ has to return None and you are trying to return an instance of backup exception. In general if you would like to change what instance is returned when instantiating a class you should override __new__.\nUnfortunately you can't change any of the attributes on the Exception class. If that was an option you could have changed Exception.__new__ and placed your hook there.\nthe \"global Exception\" trick will only work for code in the current module. Exception is a builtin and if you really want to change it globally you need to import __builtin__; __builtin__.Exception = MyException\nEven if you changed __builtin__.Exception it will only affect future uses of Exception, subclasses that have already been defined will use the original Exception class and will be unaffected by your changes. You could loop over Exception.__subclasses__ and change the __bases__ for each one of them to insert your Exception subclass there.\nThere are subclasses of Exception that are also built-in types that you also cannot modify, although I'm not sure you would want to hook any of them (think StopIterration).\n\nI think that the only decent way to do what you want is to patch the Python sources.\n", "This code will not affect any exception classes that were created before the start of main, and most of the exceptions that happen will be of such kinds (KeyError, AttributeError, and so forth). And you can't really affect those \"built-in exceptions\" in the most important sense -- if anywhere in your code is e.g. a 1/0, the real ZeroDivisionError will be raised (by Python's own internals), not whatever else you may have bound to that exceptions' name.\nSo, I don't think your code can do what you want (despite all the semicolons, it's still supposed to be Python, right?) -- it could be done by patching the C sources for the Python runtime, essentially (e.g. by providing a hook potentially caught on any exception even if it's later caught) -- such a hook currently does not exist because the use cases for it would be pretty rare (for example, a StopIteration is always raised at the normal end of every for loop -- and caught, too; why on Earth would one want to trace that, and the many other routine uses of caught exceptions in the Python internals and standard library?!).\n" ]
[ 19, 10, 7 ]
[ "Download pypy and instrument it.\n" ]
[ -8 ]
[ "exception", "python" ]
stackoverflow_0001029318_exception_python.txt
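A short sketch of the sys.excepthook route suggested above: it logs only uncaught exceptions and leaves handled ones alone (run this as a script; the log filename is illustrative):

import sys
import traceback

def log_uncaught(exc_type, exc_value, exc_tb):
    with open("errors.log", "a") as log:
        traceback.print_exception(exc_type, exc_value, exc_tb, file=log)
    sys.__excepthook__(exc_type, exc_value, exc_tb)  # keep default stderr output

sys.excepthook = log_uncaught

raise RuntimeError("demo: recorded in errors.log and printed as usual")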
Q: smoothing irregularly sampled time data Given a table where the first column is seconds past a certain reference point and the second one is an arbitrary measurement: 6 0.738158581 21 0.801697222 39 1.797224596 49 2.77920469 54 2.839757536 79 3.832232283 91 4.676794376 97 5.18244704 100 5.521878863 118 6.316630137 131 6.778507504 147 7.020395216 157 7.331607129 176 7.637492223 202 7.848079136 223 7.989456499 251 8.76853608 278 9.092367123 ... As you see, the measurements are sampled at irregular time points. I need to smooth the data by averaging the reading up to 100 seconds prior each measurement (in Python). Since the data table is huge, an iterator-based method is really preferred. Unfortunately, after two hours of coding I can't figure out efficient and elegant solution. Can anyone help me? EDITs I want one smoothed reading for each raw reading, and the smoothed reading is to be the arithmetic mean of the raw reading and any others in the previous 100 (delta) seconds. (John, you are right) Huge ~ 1e6 - 10e6 lines + need to work with tight RAM The data is approximately random walk The data is sorted RESOLUTION I have tested solutions proposed by J Machin and yairchu. They both gave the same results, however, on my data set, J Machin's version performs exponentially, while that of yairchu is linear. Following are execution times as measured by IPython's %timeit (in microseconds): data size J Machin yairchu 10 90.2 55.6 50 930 258 100 3080 514 500 64700 2660 1000 253000 5390 2000 952000 11500 Thank you all for the help. A: I'm using a sum result to which I'm adding the new members and subtracting the old ones. However in this way one may suffer accumulating floating point inaccuracies. Therefore I implement a "Deque" with a list. And whenever my Deque reallocates to a smaller size. I recalculate the sum at the same occasion. I'm also calculating the average up to point x including point x so there's at least one sample point to average. def getAvgValues(data, avgSampleTime): lastTime = 0 prevValsBuf = [] prevValsStart = 0 tot = 0 for t, v in data: avgStart = t - avgSampleTime # remove too old values while prevValsStart < len(prevValsBuf): pt, pv = prevValsBuf[prevValsStart] if pt > avgStart: break tot -= pv prevValsStart += 1 # add new item tot += v prevValsBuf.append((t, v)) # yield result numItems = len(prevValsBuf) - prevValsStart yield (t, tot / numItems) # clean prevVals if it's time if prevValsStart * 2 > len(prevValsBuf): prevValsBuf = prevValsBuf[prevValsStart:] prevValsStart = 0 # recalculate tot for not accumulating float precision error tot = sum(v for (t, v) in prevValsBuf) A: You haven't said exactly when you want output. I'm assuming that you want one smoothed reading for each raw reading, and the smoothed reading is to be the arithmetic mean of the raw reading and any others in the previous 100 (delta) seconds. Short answer: use a collections.deque ... it will never hold more than "delta" seconds of readings. The way I've set it up you can treat the deque just like a list, and easily calculate the mean or some fancy gizmoid that gives more weight to recent readings. Long answer: >>> the_data = [tuple(map(float, x.split())) for x in """\ ... 6 0.738158581 ... 21 0.801697222 [snip] ... 251 8.76853608 ... 278 9.092367123""".splitlines()] >>> import collections >>> delta = 100.0 >>> q = collections.deque() >>> for t, v in the_data: ... while q and q[0][0] <= t - delta: ... # jettison outdated readings ... _unused = q.popleft() ... q.append((t, v)) ... count = len(q) ... print t, sum(item[1] for item in q) / count, count ... ... 6.0 0.738158581 1 21.0 0.7699279015 2 39.0 1.112360133 3 49.0 1.52907127225 4 54.0 1.791208525 5 79.0 2.13137915133 6 91.0 2.49500989771 7 97.0 2.8309395405 8 100.0 3.12993279856 9 118.0 3.74976297144 9 131.0 4.41385300278 9 147.0 4.99420529389 9 157.0 5.8325615685 8 176.0 6.033109419 9 202.0 7.15545189083 6 223.0 7.4342562845 6 251.0 7.9150342134 5 278.0 8.4246097095 4 >>> Edit One-stop shop: get your fancy gizmoid here. Here's the code: numerator = sum(item[1] * upsilon ** (t - item[0]) for item in q) denominator = sum(upsilon ** (t - item[0]) for item in q) gizmoid = numerator / denominator where upsilon should be a little less than 1.0 (<= zero is illegal, just above zero does little smoothing, one gets you the arithmetic mean plus wasted CPU time, and greater than one gives the inverse of your purpose). A: Your data seems to be roughly linear: Plot of your data http://rix0r.nl/~rix0r/share/shot-20090621.144851.gif What kind of smoothing are you looking for? A least squares fit of a line to this data set? Some sort of low-pass filter? Or something else? Please tell us the application so we can advise you a bit better. EDIT: For example, depending on the application, interpolating a line between the first and last point may be good enough for your purposes. A: This one makes it linear: def process_data(datafile): previous_n = 0 previous_t = 0 for line in datafile: t, number = line.strip().split() t = int(t) number = float(number) delta_n = number - previous_n delta_t = t - previous_t n_per_t = delta_n / delta_t for t0 in xrange(delta_t): yield previous_t + t0, previous_n + (n_per_t * t0) previous_n = n previous_t = t f = open('datafile.dat') for sample in process_data(f): print sample A: O(1) memory in case you can iterate the input more than once - you can use one iterator for the "left" and one for the "right". def getAvgValues(makeIter, avgSampleTime): leftIter = makeIter() leftT, leftV = leftIter.next() tot = 0 count = 0 for rightT, rightV in makeIter(): tot += rightV count += 1 while leftT <= rightT - avgSampleTime: tot -= leftV count -= 1 leftT, leftV = leftIter.next() yield rightT, tot / count A: While it gives an exponentially decaying average, rather than a total average, I think you may want what I called an exponential moving average with varying alpha, which is really a single-pole low-pass filter. There's now a solution to that question, and it runs in time linear to the number of data points. See if it works for you.
smoothing irregularly sampled time data
Given a table where the first column is seconds past a certain reference point and the second one is an arbitrary measurement: 6 0.738158581 21 0.801697222 39 1.797224596 49 2.77920469 54 2.839757536 79 3.832232283 91 4.676794376 97 5.18244704 100 5.521878863 118 6.316630137 131 6.778507504 147 7.020395216 157 7.331607129 176 7.637492223 202 7.848079136 223 7.989456499 251 8.76853608 278 9.092367123 ... As you see, the measurements are sampled at irregular time points. I need to smooth the data by averaging the reading up to 100 seconds prior each measurement (in Python). Since the data table is huge, an iterator-based method is really preferred. Unfortunately, after two hours of coding I can't figure out efficient and elegant solution. Can anyone help me? EDITs I want one smoothed reading for each raw reading, and the smoothed reading is to be the arithmetic mean of the raw reading and any others in the previous 100 (delta) seconds. (John, you are right) Huge ~ 1e6 - 10e6 lines + need to work with tight RAM The data is approximately random walk The data is sorted RESOLUTION I have tested solutions proposed by J Machin and yairchu. They both gave the same results, however, on my data set, J Machin's version performs exponentially, while that of yairchu is linear. Following are execution times as measured by IPython's %timeit (in microseconds): data size J Machin yairchu 10 90.2 55.6 50 930 258 100 3080 514 500 64700 2660 1000 253000 5390 2000 952000 11500 Thank you all for the help.
[ "I'm using a sum result to which I'm adding the new members and subtracting the old ones. However in this way one may suffer accumulating floating point inaccuracies.\nTherefore I implement a \"Deque\" with a list. And whenever my Deque reallocates to a smaller size. I recalculate the sum at the same occasion.\nI'm also calculating the average up to point x including point x so there's at least one sample point to average.\ndef getAvgValues(data, avgSampleTime):\n lastTime = 0\n prevValsBuf = []\n prevValsStart = 0\n tot = 0\n for t, v in data:\n avgStart = t - avgSampleTime\n # remove too old values\n while prevValsStart < len(prevValsBuf):\n pt, pv = prevValsBuf[prevValsStart]\n if pt > avgStart:\n break\n tot -= pv\n prevValsStart += 1\n # add new item\n tot += v\n prevValsBuf.append((t, v))\n # yield result\n numItems = len(prevValsBuf) - prevValsStart\n yield (t, tot / numItems)\n # clean prevVals if it's time\n if prevValsStart * 2 > len(prevValsBuf):\n prevValsBuf = prevValsBuf[prevValsStart:]\n prevValsStart = 0\n # recalculate tot for not accumulating float precision error\n tot = sum(v for (t, v) in prevValsBuf)\n\n", "You haven't said exactly when you want output. I'm assuming that you want one smoothed reading for each raw reading, and the smoothed reading is to be the arithmetic mean of the raw reading and any others in the previous 100 (delta) seconds.\nShort answer: use a collections.deque ... it will never hold more than \"delta\" seconds of readings. The way I've set it up you can treat the deque just like a list, and easily calculate the mean or some fancy gizmoid that gives more weight to recent readings.\nLong answer:\n>>> the_data = [tuple(map(float, x.split())) for x in \"\"\"\\\n... 6 0.738158581\n... 21 0.801697222\n[snip]\n... 251 8.76853608\n... 278 9.092367123\"\"\".splitlines()]\n>>> import collections\n>>> delta = 100.0\n>>> q = collections.deque()\n>>> for t, v in the_data:\n... while q and q[0][0] <= t - delta:\n... # jettison outdated readings\n... _unused = q.popleft()\n... q.append((t, v))\n... count = len(q)\n... print t, sum(item[1] for item in q) / count, count\n...\n...\n6.0 0.738158581 1\n21.0 0.7699279015 2\n39.0 1.112360133 3\n49.0 1.52907127225 4\n54.0 1.791208525 5\n79.0 2.13137915133 6\n91.0 2.49500989771 7\n97.0 2.8309395405 8\n100.0 3.12993279856 9\n118.0 3.74976297144 9\n131.0 4.41385300278 9\n147.0 4.99420529389 9\n157.0 5.8325615685 8\n176.0 6.033109419 9\n202.0 7.15545189083 6\n223.0 7.4342562845 6\n251.0 7.9150342134 5\n278.0 8.4246097095 4\n>>>\n\nEdit \nOne-stop shop: get your fancy gizmoid here. Here's the code:\nnumerator = sum(item[1] * upsilon ** (t - item[0]) for item in q)\ndenominator = sum(upsilon ** (t - item[0]) for item in q)\ngizmoid = numerator / denominator\n\nwhere upsilon should be a little less than 1.0 (<= zero is illegal, just above zero does little smoothing, one gets you the arithmetic mean plus wasted CPU time, and greater than one gives the inverse of your purpose).\n", "Your data seems to be roughly linear:\nPlot of your data http://rix0r.nl/~rix0r/share/shot-20090621.144851.gif\nWhat kind of smoothing are you looking for? A least squares fit of a line to this data set? Some sort of low-pass filter? 
Or something else?\nPlease tell us the application so we can advise you a bit better.\nEDIT: For example, depending on the application, interpolating a line between the first and last point may be good enough for your purposes.\n", "This one makes it linear:\ndef process_data(datafile):\n previous_n = 0\n previous_t = 0\n for line in datafile:\n t, number = line.strip().split()\n t = int(t)\n number = float(number)\n delta_n = number - previous_n\n delta_t = t - previous_t\n n_per_t = delta_n / delta_t\n for t0 in xrange(delta_t):\n yield previous_t + t0, previous_n + (n_per_t * t0)\n previous_n = n\n previous_t = t\n\nf = open('datafile.dat')\n\nfor sample in process_data(f):\n print sample\n\n", "O(1) memory in case you can iterate the input more than once - you can use one iterator for the \"left\" and one for the \"right\".\ndef getAvgValues(makeIter, avgSampleTime):\n leftIter = makeIter()\n leftT, leftV = leftIter.next()\n tot = 0\n count = 0\n for rightT, rightV in makeIter():\n tot += rightV\n count += 1\n while leftT <= rightT - avgSampleTime:\n tot -= leftV\n count -= 1\n leftT, leftV = leftIter.next()\n yield rightT, tot / count\n\n", "While it gives an exponentially decaying average, rather than a total average, I think you may want what I called an exponential moving average with varying alpha, which is really a single-pole low-pass filter. There's now a solution to that question, and it runs in time linear to the number of data points. See if it works for you.\n" ]
[ 3, 2, 0, 0, 0, 0 ]
[ "what about something like this, keep storing values till time difference with last time is > 100, average and yield such values\ne.g.\ndef getAvgValues(data):\n lastTime = 0\n prevValues = []\n avgSampleTime=100\n\n for t, v in data:\n if t - lastTime < avgSampleTime:\n prevValues.append(v)\n else:\n avgV = sum(prevValues)/len(prevValues)\n lastTime = t\n prevValues = [v]\n yield (t,avgV)\n\nfor v in getAvgValues(data):\n print v\n\n", "Sounds like you need a simple rounding formula. To round any number to an arbitrary interval:\nround(num/interval)*interval\nYou can substitute round with floor or ceiling for \"leading up to\" or \"since\" affects. It can work in any language, including SQL. \n" ]
[ -1, -2 ]
[ "data_mining", "datetime", "python", "smoothing" ]
stackoverflow_0001023719_data_mining_datetime_python_smoothing.txt
Q: Does python have a "causes_exception()" function? I have the following code:
def causes_exception(lamb):
    try:
        lamb()
        return False
    except:
        return True

I was wondering if it came already in any built-in library?
/YGA
Edit: Thx for all the commentary. It's actually impossible to detect whether code causes an exception without running it -- otherwise you could solve the halting problem (raise an exception if the program halts). I just wanted a syntactically clean way to filter a set of identifiers for those where the code didn't except.

A: No, as far as I know there is no such function in the standard library. How would it be useful? I mean, presumably you would use it like this:
if causes_exception(func):
    # do something
else:
    # do something else

But instead, you could just do
try:
    func()
except SomeException:
    # do something else
else:
    # do something

A: There's assertRaises(exception, callable) in the unittest module and this is probably the only place where such a check makes sense.
In regular code you can never be 100% sure that the causes_exception you suggested is not causing any side effects.
A: I'm not aware of that function, or anything similar, in the Python standard library.
It's rather misleading - if I saw it used, I might think it told you without calling the function whether the function could raise an exception.
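As an illustration of the filtering use case from the question's edit, here is a minimal sketch reusing the causes_exception helper above -- the object obj and its attribute names are hypothetical stand-ins, not from the original question:
# keep only the names of obj's zero-argument callables that don't raise
safe_names = [name for name in dir(obj)
              if callable(getattr(obj, name))
              and not causes_exception(getattr(obj, name))]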
Does python have a "causes_exception()" function?
I have the following code:
def causes_exception(lamb):
    try:
        lamb()
        return False
    except:
        return True

I was wondering if it came already in any built-in library?
/YGA
Edit: Thx for all the commentary. It's actually impossible to detect whether code causes an exception without running it -- otherwise you could solve the halting problem (raise an exception if the program halts). I just wanted a syntactically clean way to filter a set of identifiers for those where the code didn't except.
[ "No, as far as I know there is no such function in the standard library. How would it be useful? I mean, presumably you would use it like this:\nif causes_exception(func):\n # do something\nelse:\n # do something else\n\nBut instead, you could just do \ntry:\n func()\nexcept SomeException:\n # do something else\nelse:\n # do something\n\n", "There's assertRaises(exception, callable) in unittest module and this is probably the only place where such check makes sense.\nIn regular code you can never be 100% sure that causes_exception you suggested are not causing any side effects.\n", "I'm not aware of that function, or anything similar, in the Python standard library.\nIt's rather misleading - if I saw it used, I might think it told you without calling the function whether the function could raise an exception.\n" ]
[ 8, 4, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001030070_python.txt
Q: Calling an external program from python So I have this shell script:
echo "Enter text to be classified, hit return to run classification."
read text
if [ `echo "$text" | sed -r 's/ +/ /g' | bin/stupidfilter data/c_rbf` = "1.000000" ]
then
    echo "Text is not likely to be stupid."
fi
if [ `echo "$text" | sed -r 's/ +/ /g' | bin/stupidfilter data/c_rbf` = "0.000000" ]
then
    echo "Text is likely to be stupid."
fi

I would like to write it in python. How do I do this?
(As you can see it uses the library http://stupidfilter.org/stupidfilter-0.2-1.tar.gz)

A: To do it just like the shell script does:
import subprocess

text = raw_input("Enter text to be classified: ")
text = ' '.join(text.split())  # collapse runs of spaces, like the sed step
p1 = subprocess.Popen(['bin/stupidfilter', 'data/c_rbf'],
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
not_stupid = float(p1.communicate(text)[0])  # the filter prints 1.000000 for non-stupid text

if not_stupid:
    print "Text is not likely to be stupid"
else:
    print "Text is likely to be stupid"

A: You could clearly run the commands as sub-shells and read the return value, just as in the shell script, and then process the result in Python.
This is simpler than loading C functions.
If you really want to load a function from the stupidfilter library, then first look to see whether someone else has already done it. If you cannot find anyone who has, then read the manual - how to call from Python onto C is covered in there.
It is still simpler to use what others have already done.
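As a usage note on the subprocess approach above, one might wrap it in a reusable helper -- a sketch, assuming the binary and data file sit at the same relative paths as in the question:
import subprocess

def is_stupid(text, binary='bin/stupidfilter', data='data/c_rbf'):
    """Return True if stupidfilter classifies text as likely stupid."""
    text = ' '.join(text.split())  # mirror the sed whitespace collapsing
    p = subprocess.Popen([binary, data],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out = p.communicate(text)[0]
    return float(out) == 0.0  # the filter prints 0.000000 for stupid text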
Calling an external program from python
So I have this shell script:
echo "Enter text to be classified, hit return to run classification."
read text
if [ `echo "$text" | sed -r 's/ +/ /g' | bin/stupidfilter data/c_rbf` = "1.000000" ]
then
    echo "Text is not likely to be stupid."
fi
if [ `echo "$text" | sed -r 's/ +/ /g' | bin/stupidfilter data/c_rbf` = "0.000000" ]
then
    echo "Text is likely to be stupid."
fi

I would like to write it in python. How do I do this?
(As you can see it uses the library http://stupidfilter.org/stupidfilter-0.2-1.tar.gz)
[ "To do it just like the shell script does:\nimport subprocess\n\ntext = raw_input(\"Enter text to be classified: \")\np1 = subprocess.Popen('bin/stupidfilter', 'data/c_trbf')\nstupid = float(p1.communicate(text)[0])\n\nif stupid:\n print \"Text is likely to be stupid\"\nelse:\n print \"Text is not likely to be stupid\"\n\n", "You could clearly run the commands as sub-shells and read the return value, just as in the shell script, and then process the result in Python.\nThis is simpler than loading C functions.\nIf you really want to load a function from the stupidfilter library, then first look to see whether someone else has already done it. If you cannot find anyone who has, then read the manual - how to call from Python onto C is covered in there.\nIt is still simpler to use what others have already done.\n" ]
[ 8, 1 ]
[]
[]
[ "c", "python" ]
stackoverflow_0001030114_c_python.txt
Q: How to Make a PyMe (Python library) Run in Python 2.4 on Windows? I want to run this library on Python 2.4 in Windows XP. I installed the pygpgme-0.8.1.win32.exe file but got this:
>>> from pyme import core
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "C:\Python24\Lib\site-packages\pyme\core.py", line 22, in ?
    import pygpgme
  File "C:\Python24\Lib\site-packages\pyme\pygpgme.py", line 7, in ?
    import _pygpgme
ImportError: DLL load failed: The specified module could not be found.

And then this pop up comes up
---------------------------
python.exe - Unable To Locate Component
---------------------------
This application has failed to start because python25.dll was not found. Re-installing the application may fix this problem.
---------------------------
OK

Do I need to "compile" it for Python 2.4? How do I do that?

A: While the pygpgme project does not clearly document it, it's clear from the error message you got that their .win32.exe was indeed compiled for Python 2.5.
To compile their code for Python 2.4 (assuming they support that release!), download their sources, unpack them, open a command window, cd to the directory you unpacked their sources in, and run python setup.py install. This will probably not work unless you have the right Microsoft C compiler installed (MSVC 6.0 if I recall correctly).
It's no doubt going to be much less trouble to download, install and use Python 2.5 for Windows (it can perfectly well coexist with your current 2.4, no need to remove that). Is that a problem?
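A sketch of that compile-from-source route -- the directory name below is an assumption based on the installer's version number, not confirmed by the project:
cd pygpgme-0.8.1
C:\Python24\python.exe setup.py install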
How to Make a PyMe (Python library) Run in Python 2.4 on Windows?
I want to run this library on Python 2.4 in Windows XP. I installed the pygpgme-0.8.1.win32.exe file but got this:
>>> from pyme import core
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "C:\Python24\Lib\site-packages\pyme\core.py", line 22, in ?
    import pygpgme
  File "C:\Python24\Lib\site-packages\pyme\pygpgme.py", line 7, in ?
    import _pygpgme
ImportError: DLL load failed: The specified module could not be found.

And then this pop up comes up
---------------------------
python.exe - Unable To Locate Component
---------------------------
This application has failed to start because python25.dll was not found. Re-installing the application may fix this problem.
---------------------------
OK

Do I need to "compile" it for Python 2.4? How do I do that?
[ "While the pygpgme project does not clearly document it, it's clear from the error message you got that their .win32.exe was indeed compiled for Python 2.5.\nTo compile their code for Python 2.4 (assuming they support that release!), download their sources, unpack them, open a command window, cd to the directory you unpacked their sources in, and run python setup.py install. This will probably not work unless you have the right Microsoft C compiler installed (MSVC 6.0 if I recall correctly).\nIt's no doubt going to be much less trouble to download, install and use Python 2.5 for Windows (it can perfectly well coexist with your current 2.4, no need to remove that). Is that a problem?\n" ]
[ 2 ]
[]
[]
[ "c", "distutils", "installation", "python" ]
stackoverflow_0001030297_c_distutils_installation_python.txt
Q: Background process in GAE I am developing a website using Google App Engine and Django 1.0 (app-engine-patch)
A major part of my program has to run in the background and change local data and also post to a remote URL
Can someone suggest an effective way of doing this?

A: Check out The Task Queue Python API.
A: Without using a third-party system, I think currently your only option is to use the cron functionality.
You'd still be bound by the usual GAE script-execution-time limitations, but it wouldn't happen on a page load.
There are plans for background processing, see this App Engine issue #6, and this roadmap update
A: I second dbr's recommendation of http://code.google.com/appengine/docs/python/config/cron.html (and hopes for better future approaches, such as the promised "task queues").
Nevertheless I suspect that if you do indeed need major (as in CPU heavy) background processing, GAE may not be the most hospitable environment for that. You may want to consider running those heavy background tasks in other environments, and have them communicate with GAE proper e.g. via the "bulk load/download" APIs, see http://code.google.com/appengine/docs/python/tools/uploadingdata.html (and http://code.google.com/appengine/docs/python/tools/uploadingdata.html#Downloading_Data_from_App_Engine for the downloading part).
Google's documentation only describes the usage of the command-line appcfg.py for these purposes (I can't find a proper documentation of the APIs it uses!), but, if you do need more programmatic usage of these APIs, it's not hard to evince them from appcfg.py's sources.
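For the cron route mentioned above, a minimal sketch of a cron.yaml entry -- the URL and schedule here are placeholders, not taken from the question:
cron:
- description: periodic background processing job
  url: /tasks/process
  schedule: every 5 minutes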
Background process in GAE
I am developing a website using Google App Engine and Django 1.0 (app-engine-patch)
A major part of my program has to run in the background and change local data and also post to a remote URL
Can someone suggest an effective way of doing this?
[ "Check out The Task Queue Python API.\n", "Without using a third-party system, I think currently your only option is to use the cron functionality.\nYou'd still be bound by the usual GAE script-execution-time limitations, but it wouldn't happen on a page load.\nThere is plans for background processing, see this App Engine issue #6, and this roadmap update\n", "I second dbr's recommendation of http://code.google.com/appengine/docs/python/config/cron.html (and hopes for better future approaches, such as the promised \"task queues\").\nNevertheless I suspect that if you do indeed need major (as in CPU heavy) background processing, GAE may not be the most hospitable environment for that. You may want to consider running those heavy background tasks in other environments, and have them communicate with GAE proper e.g. via the \"bulk load/download\" APIs, see http://code.google.com/appengine/docs/python/tools/uploadingdata.html (and http://code.google.com/appengine/docs/python/tools/uploadingdata.html#Downloading_Data_from_App_Engine for the downloading part).\nGoogle's documentation only describes the usage of the command-line appcfg.py for these purposes (I can't find a proper documentation of the APIs it uses!), but, if you do need more programmatic usage of these APIs, it's not hard to evince them from appcfg.py's sources.\n" ]
[ 5, 2, 2 ]
[]
[]
[ "backgroundworker", "django", "google_app_engine", "python" ]
stackoverflow_0000845620_backgroundworker_django_google_app_engine_python.txt
Q: Unescape _xHHHH_ XML escape sequences using Python I'm using Python 2.x [not negotiable] to read XML documents [created by others] that allow the content of many elements to contain characters that are not valid XML characters by escaping them using the _xHHHH_ convention e.g. ASCII BEL aka U+0007 is represented by the 7-character sequence u"_x0007_". Neither the functionality that allows representation of any old character in the document nor the manner of escaping is negotiable. I'm parsing the documents using cElementTree or lxml [semi-negotiable].
Here is my best attempt at unescaping the parser output as efficiently as possible:
import re

def unescape(s,
             subber=re.compile(r'_x[0-9A-Fa-f]{4,4}_').sub,
             repl=lambda mobj: unichr(int(mobj.group(0)[2:6], 16)),
             ):
    if "_" in s:
        return subber(repl, s)
    return s

The above is biased by observing a very low frequency of "_" in typical text and a better-than-doubling of speed by avoiding the regex apparatus where possible.
The question: Any better ideas out there?

A: You might as well check for '_x' rather than just _, that won't matter much but surely the two-character sequence's even rarer than the single underscore. Apart from such details, you do seem to be making the best of a bad situation!
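Applying that suggestion is a one-character change to the guard -- a sketch of the adjusted fast path:
    if "_x" in s:  # rarer than a bare "_", so the fast path fires even more often
        return subber(repl, s)
    return s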
Unescape _xHHHH_ XML escape sequences using Python
I'm using Python 2.x [not negotiable] to read XML documents [created by others] that allow the content of many elements to contain characters that are not valid XML characters by escaping them using the _xHHHH_ convention e.g. ASCII BEL aka U+0007 is represented by the 7-character sequence u"_x0007_". Neither the functionality that allows representation of any old character in the document nor the manner of escaping is negotiable. I'm parsing the documents using cElementTree or lxml [semi-negotiable].
Here is my best attempt at unescaping the parser output as efficiently as possible:
import re

def unescape(s,
             subber=re.compile(r'_x[0-9A-Fa-f]{4,4}_').sub,
             repl=lambda mobj: unichr(int(mobj.group(0)[2:6], 16)),
             ):
    if "_" in s:
        return subber(repl, s)
    return s

The above is biased by observing a very low frequency of "_" in typical text and a better-than-doubling of speed by avoiding the regex apparatus where possible.
The question: Any better ideas out there?
[ "You might as well check for '_x' rather than just _, that won't matter much but surely the two-character sequence's even rarer than the single underscore. Apart from such details, you do seem to be making the best of a bad situation!\n" ]
[ 1 ]
[]
[]
[ "escaping", "python", "xml" ]
stackoverflow_0001030522_escaping_python_xml.txt
Q: When to use a Templating Engine in Python? As a "newbie" to Python, and mainly having a background of writing scripts for automating system administration related tasks, I don't have a lot of sense for where to use certain tools. But I am very interested in developing instincts on where to use specific tools/techniques.
I've seen a lot mentioned about templating engines, including zedshaw talking about using Jinja2 in Lamson
So I thought I would ask the community to provide me with some advice as to where/when I might know I should consider using a templating engine, such as Jinja2, Genshi, Mako, and co.

A: As @mikem says, templates help generate whatever form of output you like, in the right conditions. Essentially the first meaningful thing I ever wrote in Python was a templating system -- YAPTU, for Yet Another Python Templating Utility -- and that was 8+ years ago, before other good such systems existed... soon after I had the honor of having it enhanced by none less than Peter Norvig (see here), and now it sports almost 2,000 hits on search engines;-).
Today's templating engines are much better in many respects (though most are pretty specialized, especially to HTML output), but the fundamental idea remains -- why bother with a lot of print statements and hard-coded strings in Python code, when it's even easier to have the strings out in editable template files? If you ever want (e.g.) to have the ability to output in French, English, or Italian (that was YAPTU's original motivation during an intense weekend of hacking as I was getting acquainted with Python for the first time...!-), being able to just get your templates from the right directory (where the text is appropriately translated) will make everything SO much easier!!!
Essentially, I think a templating system is more likely than not to be a good idea, whenever you're outputting text-ish stuff. I've used YAPTU or adaptations thereof to smoothly switch between JSON, XML, and human-readable (HTML, actually) output, for example; a really good templating system, in this day and age, should in fact be able to transcend the "text-ish" limit and output in protobuf or other binary serialization format.
Which templating system is best entirely depends on your specific situation -- in your shoes, I'd study (and experiment with) a few of them, anyway. Some are designed for cases in which UI designers (who can't program) should be editing them, others for programmer use only; many are specialized to HTML, some to XML, others more general; etc, etc. But SOME one of them (or your own Yet Another one!-) is sure to be better than a bunch of prints!-)

A: A templating engine works on templates to programmatically generate artifacts. A template specifies the skeleton of the artifact and the program fills in the blanks. This can be used to generate almost anything, be it HTML, some kind of script (ie: an RPM SPEC file), Python code which can be executed later, etc.

A: Simply put, templating lets you easily write a human readable document by hand and add very simple markup to identify areas that should be replaced by variables, areas that should repeat, etc.
Typically a templating language can do only basic "template logic", meaning just enough logic to affect the layout of the document, but not enough to affect the data that's fed to the template to populate the document.
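Since Jinja2 comes up in the question, a minimal sketch of what using it looks like -- the template text and values here are invented for illustration:
from jinja2 import Template

template = Template("Hello {{ name }}! You have {{ count }} new messages.")
print template.render(name="World", count=3)
# -> Hello World! You have 3 new messages.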
When to use a Templating Engine in Python?
As a "newbie" to Python, and mainly having a background of writing scripts for automating system administration related tasks, I don't have a lot of sense for where to use certain tools. But I am very interested in developing instincts on where to use specific tools/techniques. I've seen a lot mentioned about templating engines, included zedshaw talking about using Jinja2 in Lamson So I thought I would ask the community to provide me with some advice as to where/when I might know I should consider using a templating engine, such as Jinja2, Genshi, Mako, and co.
[ "As @mikem says, templates help generating whatever form of output you like, in the right conditions. Essentially the first meaningful thing I ever wrote in Python was a templating system -- YAPTU, for Yet Another Python Templating Utility -- and that was 8+ years ago, before other good such systems existed... soon after I had the honor of having it enhanced by none less than Peter Norvig (see here), and now it sports almost 2,000 hits on search engines;-).\nToday's templating engines are much better in many respects (though most are pretty specialized, especially to HTML output), but the fundamental idea remains -- why bother with a lot of print statements and hard-coded strings in Python code, when it's even easier to hve the strings out in editable template files? If you ever want (e.g.) to have the ability to output in French, English, or Italian (that was YAPTU's original motivation during an intense weekend of hacking as I was getting acquainted with Python for the first time...!-), being able to just get your templates from the right directory (where the text is appropriately translated) will make everything SO much easier!!!\nEssentially, I think a templating system is more likely than not to be a good idea, whenever you're outputting text-ish stuff. I've used YAPTU or adaptations thereof to smoothly switch between JSON, XML, and human-readable (HTML, actually) output, for example; a really good templating system, in this day and age, should in fact be able to transcend the \"text-ish\" limit and output in protobuf or other binary serialization format.\nWhich templating system is best entirely depend on your specific situation -- in your shoes, I'd study (and experiment with) a few of them, anyway. Some are designed for cases in which UI designers (who can't program) should be editing them, others for programmer use only; many are specialized to HTML, some to XML, others more general; etc, etc. But SOME one of them (or your own Yet Another one!-) is sure to be better than a bunch of prints!-)\n", "A templating engine works on templates to programmatically generate artifacts. A template specifies the skeleton of the artifact and the program fills in the blanks. This can be used to generate almost anything, be it HTML, some kind of script (ie: an RPM SPEC file), Python code which can be executed later, etc.\n", "Simply put, templating lets you easily write a human readable document by hand and add very simple markup to identify areas that should be replaced by variables, area that should repeat, etc.\nTypically a templating language can do only basic \"template logic\", meaning just enough logic to affect the layout of the document, but not enough to affect the data that's fed to the template to populate the document.\n" ]
[ 9, 3, 0 ]
[]
[]
[ "python", "template_engine", "templates" ]
stackoverflow_0001030622_python_template_engine_templates.txt