content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 35-137 chars)
---|---|---|---|---|---|---|---|---
Q:
How to discard from the middle of a list using list comprehensions?
I could do this using an index but I thought there must be a cleaner way using list comprehensions. I'm a beginner. I hope it's not embarrassingly obvious. Thanks
for x in firstList:
firstFunc(x)
secondFunc(x)
x = process(x)
if x.discard == True:
(get rid of x)
secondList.append(firstList)
A:
Just a thought, and it does little for documentation, but why not try:
def masterFunc(x):
firstFunc(x)
secondFunc(x)
process(x)
return x.discard
secondList = [ x for x in firstList if masterFunc(x) ]
Good news: does what you asked, strictly speaking.
Bad news: it hides firstFunc, secondFunc, and process
It sounds like you already have trouble with side-effects and command/query separation in the example, so I'm thinking that this hack is not as noble as cleaning up the code a bit. You might find that some methods need to be inverted (x.firstFunc() instead of firstFunc(x)) and others need to be broken up. There may even be a nicer way than 'x.discard' to deal with filtering.
A:
You know, your best solution is really to just initialize secondList how you like, and do all three functions in a regular loop, since they're all dependent and contain logic that is not just filtering (you say process sets attributes... I'm assuming you mean other than discard):
# If secondList not initialized...
secondList = []
for x in firstList:
firstFunc(x)
secondFunc(x)
process(x)
if not x.discard:
secondList.append(x)
List comprehensions don't help too much here since you're doing processing in each function (they take a line or two off though; depends on what you're looking for in "clean" code). If all process() did was return True if the item should be in the new list, and False if the item should not be in the new list, then the below would really be better, IMO.
If firstFunc(x) and secondFunc(x) do change the result of x.discard after process(), and the result of process(x) is just x, I would do the following in your situation:
for x in firstList:
firstFunc(x)
secondFunc(x)
secondList = [ x for x in firstList if not process(x).discard ]
If the result of process(x) is different from x though, as your sample appears to indicate, you could also change that last line to the following:
interimList = [ process(x) for x in firstList ]
secondList = [ x for x in interimList if not x.discard ]
Note that if you wanted to append these results to secondList, use secondList.extend([...]).
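For illustration of that note (the values are arbitrary): append nests the whole list as a single element, while extend splices its items in:
secondList = []
secondList.append([1, 2, 3])    # secondList == [[1, 2, 3]]

secondList = []
secondList.extend([1, 2, 3])    # secondList == [1, 2, 3]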
Edit: I realized I erroneously wrote "do not" change, but I meant if they do change the result of process().
Edit 2: Cleanup description / code.
A:
Edit: process(x) is necessary for x.discard, meaning that the answer is:
No there is no cleaner way. And the way you are doing it is already clean.
Old answer:
Not really, no. You can make this:
def process_item(x):
    firstFunc(x)
    secondFunc(x)
    x = process(x)
    return x
def test_item(x):
return x.discard == False
list = [process_item(x) for x in firstList if test_item(x)]
But that is not cleaner, and also it requires x.discard to be set before you process it, which it doesn't seem to be from your code.
List comprehensions are not "cleaner". They are shorter ways of writing simple list processing. Your list processing involves three steps. That's not really "simple". :)
A:
a few things:
appending a whole list adds it as a single element; you need to use extend to add its items.
no need for == True bit, use just if x.discard:
you'd rather create a new list with values that you don't want to discard and don't pollute your loop with removal.
so you'd have something along the lines:
tmp = []
for x in first_list:
x = process(x)
if not x.discard:
tmp.append(x)
second_list.extend(tmp)
a list comprehension would obviously be more pythonic, though:
[i for i in first_list if not process(i).discard]
A:
Sounds like
def allProcessing(x):
    firstFunc(x)
    secondFunc(x)
    return not process(x).discard
newList = filter(allProcessing, oldList)
A:
Write this as two list comprehensions, one which assembles the data that might need filtering, and another which does the filtering. Make firstFunc and secondFunc return x (as process does), and then you can write it like so:
unfilteredList = [secondFunc(firstFunc(x)) for x in firstList]
secondList = [x for x in unfilteredList if not x.discard]
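A minimal sketch of that change (the bodies stand in for the real side effects):
def firstFunc(x):
    # ... existing side effects on x ...
    return x

def secondFunc(x):
    # ... existing side effects on x ...
    return x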
|
How to discard from the middle of a list using list comprehensions?
|
I could do this using an index but I thought there must be a cleaner way using list comprehensions. I'm a beginner. I hope it's not embarrassingly obvious. Thanks
for x in firstList:
firstFunc(x)
secondFunc(x)
x = process(x)
if x.discard == True:
(get rid of x)
secondList.append(firstList)
|
[
"Just a thought, and it does little for documentation, but why not try:\ndef masterFunc(x):\n firstFunc(x)\n secondFunc(x)\n process(x)\n return x.discard\n\nsecondList = [ x for x in firstList if masterFunc(x) ]\n\nGood news: does what you asked, strictly speaking.\nBad news: it hides firstFunc, secondFunc, and process\nIt sounds like you already have trouble with side-effects and command/query separation in the example, so I'm thinking that this hack is not as noble as cleaning up the code a bit. You might find that some methods need inverted (x.firstFunc() instead of firstFunc(x)) and others need broken up. There may even be a nicer way than 'x.discard' to deal with filtering.\n",
"You know, your best solution is really to just initialize secondList how you like, and do all three functions in a regular loop, since they're all dependent and contain logic that is not just filtering (you say process sets attributes... I'm assuming you mean other than discard):\n# If secondList not initialized...\nsecondList = []\nfor x in firstList:\n firstFunc(x)\n secondFunc(x)\n process(x)\n if not x.discard:\n secondList.append(x)\n\nList comprehensions don't help too much here since you're doing processing in each function (they take a line or two off though; depends on what you're looking for in \"clean\" code). If all process() did was return True if the item should be in the new list, and False if the item should not be in the new list, then the below would really be better, IMO.\n\nIf firstFunc(x) and secondFunc(x) do change the result of x.discard after process(), and the result of process(x) is just x, I would do the following in your situation:\nfor x in firstList:\n firstFunc(x)\n secondFunc(x)\nsecondList = [ x for x in firstList if not process(x).discard ]\n\nIf the result of process(x) is different from x though, as your sample appears to indicate, you could also change that last line to the following:\ninterimList = [ process(x) for x in firstList ]\nsecondList = [ x for x in interimList if not x.discard ]\n\nNote that if you wanted to append these results to secondList, use secondList.extend([...]).\nEdit: I realized I erroneously wrote \"do not\" change, but I meant if they do change the result of process().\nEdit 2: Cleanup description / code.\n",
"Edit: process(x) is necessary for x.discard, meaning that the answer is:\nNo there is no cleaner way. And the way you are doing it is already clean.\nOld answer:\nNot really, no. You can make this:\ndef process_item(x):\n firstFunc(x)\n secondFunc(x)\n x = process(x)\n\ndef test_item(x):\n return x.discard == False\n\nlist = [process_item(x) for x in firstList if test_item(x)]\n\nBut that is not cleaner, and also it requires x.discard to be set before you process it, which it doesn't seem to be from your code.\nList comprehensions are not \"cleaner\". They are shorter ways of writing simple list processing. You list processing involves three steps. That's not really \"simple\". :)\n",
"a few things: \n\nyou cannot append a list, you need to use extend.\nno need for == True bit, use just if x.discard:\nyou'd rather create a new list with values that you don't want to discard and don't pollute your loop with removal.\n\nso you'd have something along the lines:\ntmp = []\nfor x in first_list:\n x = process(x)\n if not x.discard:\n tmp.append(x)\nsecond_list.extend(tmp)\n\nlist comprehension would obviously more pythonic, though:\n[i for i in first_list if not process(i).discard]\n\n",
"Sounds like\ndef allProcessing(x)\n firstFunc(x)\n secondFunc(x)\n return !(process(x).discard)\n\nnewList = filter(allProcessing, oldList)\n\n",
"Write this as two list comprehensions, one which assembles the data that might need filtering, and another which does the filtering. Make firstFunc and secondFunc return x (as process does), and then you can write it like so:\nunfilteredList = [secondFunc(firstFunc(x)) for x in firstList]\nsecondList = [x for x in unfilteredList if not x.discard]\n\n"
] |
[
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001405883_python.txt
|
Q:
Why does "enable-shared failed" happen on libjpeg build for os X?
I'm trying to install libjpeg on os X to fix a problem with the Python Imaging Library JPEG setup.
I downloaded libjpeg from http://www.ijg.org/files/jpegsrc.v7.tar.gz
I then began to setup the config file
cp /usr/share/libtool/config.sub .
cp /usr/share/libtool/config.guess .
./configure –enable-shared
However, the enable-shared flag didn't seem to work.
$ ./configure –-enable-shared
configure: WARNING: you should use --build, --host, --target
configure: WARNING: invalid host type: –-enable-shared
checking build system type... Invalid configuration `–-enable-shared': machine `–-enable' not recognized
configure: error: /bin/sh ./config.sub –-enable-shared failed
I've done lots of Google searches and I can't figure out where the error is or how to work around this error.
A:
I had copied the code from a blog.
The flag character there was not a hyphen; it just looked like one:
ord("–")
TypeError: ord() expected a character, but string of length 3 found
I changed it to a proper hyphen and it works fine.
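A quick way to spot this kind of paste problem is to scan the command for non-ASCII characters; this is only an illustrative check, not part of the original fix:
# -*- coding: utf-8 -*-
import unicodedata

cmd = u"./configure –-enable-shared"   # pasted from the blog
for ch in cmd:
    if ord(ch) > 127:
        print ch.encode('utf-8'), hex(ord(ch)), unicodedata.name(ch)
# prints: – 0x2013 EN DASH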
|
Why does "enable-shared failed" happen on libjpeg build for os X?
|
I'm trying to install libjpeg on os X to fix a problem with the Python Imaging Library JPEG setup.
I downloaded libjpeg from http://www.ijg.org/files/jpegsrc.v7.tar.gz
I then began to setup the config file
cp /usr/share/libtool/config.sub .
cp /usr/share/libtool/config.guess .
./configure –enable-shared
However, the enable-shared flag didn't seem to work.
$ ./configure –-enable-shared
configure: WARNING: you should use --build, --host, --target
configure: WARNING: invalid host type: –-enable-shared
checking build system type... Invalid configuration `–-enable-shared': machine `–-enable' not recognized
configure: error: /bin/sh ./config.sub –-enable-shared failed
I've done lot's of google searches and I can't figure out where the error is or how to work around this error.
|
[
"I had copied the code from a blog. \nThe flag character there was not a hyphem , it just looked like one:\nord(\"–\")\n\nTypeError: ord() expected a character, but string of length 3 found\n\nI changed it to a proper hypen and it works fine.\n"
] |
[
3
] |
[] |
[] |
[
"build",
"macos",
"python"
] |
stackoverflow_0001405920_build_macos_python.txt
|
Q:
Consume a Web service through Active Directory
I have a project in which I have to consume a web service, authenticating against Active Directory. My system is written in Python 3, and the python-ldap module has not been ported yet, so I want to know a way to achieve this.
In the worst case, I will create a standalone consumer in Python 2.5, but I want to know how to consume a web service while authenticated against Active Directory.
A:
I've resolved it with python-ntlm, a Google Code project that handles NTLM for WWW-Authenticate: Negotiate (http://code.google.com/p/python-ntlm/). Thanks!
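For reference, a minimal sketch of how python-ntlm is typically wired into urllib2 under Python 2 (the URL, domain, user, and password are placeholders; check the project page for the exact API):
import urllib2
from ntlm import HTTPNtlmAuthHandler

url = 'http://server/service.asmx'     # placeholder service URL
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, url, 'DOMAIN\\user', 'password')

opener = urllib2.build_opener(HTTPNtlmAuthHandler.HTTPNtlmAuthHandler(passman))
urllib2.install_opener(opener)
print urllib2.urlopen(url).read()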
|
Consume a Web service thought an Active Directory
|
I have a project in which I have to consume a web service authenticating against Active Directory. I have my system written in python3, python-ldap module is not ported yet, so I want to know a way to acheive this consumption.
In the worst case, I will create a standalone consumer in python2.5, but I want to know how to consume a web service logged in an active directory.
|
[
"I've resolved it with python-ntlm a project in google that handles ntlm for www-authentication: negotiate, http://code.google.com/p/python-ntlm/ thanks!\n"
] |
[
0
] |
[] |
[] |
[
"ldap",
"python",
"web_services"
] |
stackoverflow_0001405670_ldap_python_web_services.txt
|
Q:
Python: How do I convert a NoneType variable to a string?
I have the following function:
>>> def rule(x):
... rule = bin(x)[2:].zfill(8)
... rule = str(rule)
... print rule
I am trying to convert rule into a string, but when I run the following command, here is what I get:
>>> type(rule(20))
00010100
<type 'NoneType'>
What is a NoneType variable and how can I convert this variable into a string?
Thanks,
A:
you need to do:
def rule(x):
return bin(x)[2:].zfill(8)
All functions return None implicitly if no return statement was encountered during execution of the function (as it is in your case).
A:
Your rule function doesn't return anything -- it only prints. So, rule(20) returns None -- the python object representing 'no value'. You can't turn that into a string, at least not in the way you are expecting.
You need to define rule to return something, like this:
>>> def rule(x):
... rule=bin(x)[2:].zfill(8)
... rule=str(rule)
... return rule
Then it will return a string object.
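With the return statement in place, the result is an ordinary string:
>>> rule(20)
'00010100'
>>> type(rule(20))
<type 'str'>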
|
Python: How do I convert a NonType variable to a String?
|
I have the following function:
>>> def rule(x):
... rule = bin(x)[2:].zfill(8)
... rule = str(rule)
... print rule
I am trying to convert rule into a string, but when I run the following command, here is what I get:
>>> type(rule(20))
00010100
<type 'NoneType'>
What is a NoneType variable and how can I convert this variable into a string?
Thanks,
|
[
"you need to do:\ndef rule(x):\n return bin(x)[2:].zfill(8)\n\nAll functions return None implicitly if no return statement was encountered during execution of the function (as it is in your case).\n",
"Your rule function doesn't return anything -- it only prints. So, rule(20) returns None -- the python object representing 'no value'. You can't turn that into a string, at least not in the way you are expecting.\nYou need to define rule to return something, like this:\n>>> def rule(x):\n... rule=bin(x)[2:].zfill(8)\n... rule=str(rule)\n... return rule\n\nThen it will return a string object.\n"
] |
[
6,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001407557_python.txt
|
Q:
How to generate basic CRUD functionality in Python given a database table
I want to develop a desktop application using Python with basic CRUD operations. Is there any library in Python that can generate code for CRUD functionality and a user interface, given a database table?
A:
Hopefully, this won't be the best option you end up with, but, in the tradition of using web interfaces for desktop applications, you could always try django. I would particularly take a look at the inspectdb command, which will generate the ORM code for you.
The advantage is that it won't require that much code to get off the ground, and if you just want to use it from the desktop, you don't need a webserver; you can use the provided test server. The bundled admin site is easy to get off the ground, and flexible up to a point; past which people seem to invest a lot of time battling it (probably a testimony to how helpful it is at first).
There are many disadvantages, not the least of which is the possibility of having to use html/javascript/css when you want to start customizing a lot.
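To make the inspectdb step concrete, it can also be driven from Python (a sketch; it assumes DJANGO_SETTINGS_MODULE points at a project configured for the existing database):
from django.core.management import call_command

call_command('inspectdb')   # prints generated model classes to stdout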
A:
If it were me, I would consider borrowing django's ORM, but then again, I'm already familiar with it.
Having said that, I like working with it, it's usable outside the framework, and it will give you mysql, postgres, or sqlite support. You could also hook up the django admin site to your models and have a web-based editor.
There are surely other ORMs and code generators out there too (I hope some python gurus will point some out, I'm kind of curious).
A:
If you want something really small and simple, I like the Autumn ORM.
If you use the Django ORM, you can use the automatically-generated Django admin interface, which is really nice. It's basically a web-based GUI for browsing and editing records in your database.
If you think you will need advanced SQL features, SQLAlchemy is a good way to go. I suspect for a desktop application, Django or Autumn would be better.
There are other Python ORMs, such as Storm. Do a Google search on "python ORM". See also the discussion on this web site: What are some good Python ORM solutions?
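As a flavour of how little code the admin route needs (the app and model names here are made up), registering a model with the admin is one line per model:
# admin.py in a hypothetical app; Customer could come from inspectdb or be hand-written
from django.contrib import admin
from myapp.models import Customer

admin.site.register(Customer)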
|
How to generate basic CRUD functionality in python given a database tables
|
I want to develop a desktop application using python with basic crud operation. Is there any library in python that can generate a code for CRUD functionality and user interface given a database table.
|
[
"Hopefully, this won't be the best option you end up with, but, in the tradition of using web-interfaces for desktop applications, you could always try django. I would particularLY take a look at the inspectdb command, which will generate the ORM code for you.\nThe advantage is that it won't require that much code to get off the ground, and if you just want to use it from the desktop, you don't need a webserver; you can use the provided test server. The bundled admin site is easy to get off the ground, and flexible up to a point; past which people seem to invest a lot of time battling it (probably a testimony to how helpful it is at first).\nThere are many disadvantages, not the least of which is the possibility of having to use html/javascript/css when you want to start customizing a lot.\n",
"If it were me, I would consider borrowing django's ORM, but then again, I'm already familiar with it.\nHaving said that, I like working with it, it's usable outside the framework, and it will give you mysql, postgres, or sqlite support. You could also hook up the django admin site to your models and have a web-based editor. \nThere are surely other ORMs and code generators out there too (I hope some python gurus will point some out, I'm kind of curious). \n",
"If you want something really small and simple, I like the Autumn ORM.\nIf you use the Django ORM, you can use the automatically-generated Django admin interface, which is really nice. It's basically a web-based GUI for browsing and editing records in your database.\nIf you think you will need advanced SQL features, SQLAlchemy is a good way to go. I suspect for a desktop application, Django or Autumn would be better.\nThere are other Python ORMs, such as Storm. Do a Google search on \"python ORM\". See also the discussion on this web site: What are some good Python ORM solutions?\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"crud",
"database",
"desktop",
"python"
] |
stackoverflow_0001407016_crud_database_desktop_python.txt
|
Q:
Python urllib, minidom and parsing international characters
When I try to retrieve information from Google weather API with the following URL,
http://www.google.com/ig/api?weather=Munich,Germany&hl=de
and then try to parse it with minidom, I get an error that the document is not well-formed.
I use the following code:
sock = urllib.urlopen(url) # above mentioned url
doc = minidom.parse(sock)
I think the German characters in the response are the cause of the error.
What is the correct way of doing this?
A:
This seems to work:
sock = urllib.urlopen(url)
# There is a nicer way for this, but I don't remember right now:
encoding = sock.headers['Content-type'].split('charset=')[1]
data = sock.read()
dom = minidom.parseString(data.decode(encoding).encode('ascii', 'xmlcharrefreplace'))
I guess minidom doesn't handle anything non-ascii. You might want to look into lxml instead, it does.
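As an aside on the "nicer way" placeholder in that snippet: with Python 2's urllib the response headers behave like a mimetools.Message, so the charset can most likely be pulled out directly (hedged, from memory):
import urllib

url = 'http://www.google.com/ig/api?weather=Munich,Germany&hl=de'
sock = urllib.urlopen(url)
# getparam() returns None when no charset parameter was sent
encoding = sock.headers.getparam('charset') or 'iso-8859-1'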
A:
The encoding sent in the headers is iso-8859-1 according to python's urllib.urlopen (although firefox's live http headers seems to disagree with me in this case - reports utf-8). In the xml itself there is no encoding specified --> that's why xml.dom.minidom assumes it's utf-8.
So the following should fix this specific issue:
import urllib
from xml.dom import minidom
sock = urllib.urlopen('http://www.google.com/ig/api?weather=Munich,Germany&hl=de')
s = sock.read()
encoding = sock.headers['Content-type'].split('charset=')[1] # iso-8859-1
doc = minidom.parseString(s.decode(encoding).encode('utf-8'))
Edit: I've updated this answer after the comment of Glenn Maynard. I took the liberty of taking one line out of the answer of Lennert Regebro.
|
Python urllib, minidom and parsing international characters
|
When I try to retrieve information from Google weather API with the following URL,
http://www.google.com/ig/api?weather=Munich,Germany&hl=de
and then try to parse it with minidom, I get error that the document is not well formed.
I use following code
sock = urllib.urlopen(url) # above mentioned url
doc = minidom.parse(sock)
I think the German characters in the response is the cause of the error.
What is the correct way of doing this ?
|
[
"This seems to work:\nsock = urllib.urlopen(url)\n# There is a nicer way for this, but I don't remember right now:\nencoding = sock.headers['Content-type'].split('charset=')[1]\ndata = sock.read()\ndom = minidom.parseString(data.decode(encoding).encode('ascii', 'xmlcharrefreplace'))\n\nI guess minidom doesn't handle anything non-ascii. You might want to look into lxml instead, it does.\n",
"The encoding sent in the headers is iso-8859-1 according to python's urllib.urlopen (although firefox's live http headers seems to disagree with me in this case - reports utf-8). In the xml itself there is no encoding specified --> that's why xml.dom.minidom assumes it's utf-8. \nSo the following should fix this specific issue:\nimport urllib\nfrom xml.dom import minidom\n\nsock = urllib.urlopen('http://www.google.com/ig/api?weather=Munich,Germany&hl=de')\ns = sock.read()\nencoding = sock.headers['Content-type'].split('charset=')[1] # iso-8859-1\ndoc = minidom.parseString(s.decode(encoding).encode('utf-8'))\n\nEdit: I've updated this answer after the comment of Glenn Maynard. I took the liberty of taking one line out of the answer of Lennert Regebro.\n"
] |
[
2,
1
] |
[] |
[] |
[
"internationalization",
"minidom",
"python",
"urllib"
] |
stackoverflow_0001407874_internationalization_minidom_python_urllib.txt
|
Q:
Create an executable process without using shell on Python 2.5 and below
Just what the title says:
The subprocess module cannot be used as this should work on 2.4 and 2.5
Shell process should not be spawned to pass arguments.
To explain (2), consider the following code:
>>> x=os.system('foo arg')
sh: foo: not found
>>> x=os.popen('foo arg')
sh: foo: not found
>>>
As you can see os.system and os.popen runs the given command ("foo") via a system shell ("sh"). I don't want this to happen (otherwise, ugly 'not found' messages are printed to program stderr without my control).
Finally, I should be able to pass arguments to this program ('arg' in the above example).
How would one go about doing this in Python 2.5 and 2.4?
A:
You probably need to use the subprocess module which is available in Python 2.4
Popen("/home/user/foo" + " arg")
>>> Popen("foo arg", shell=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/subprocess.py", line 595, in __init__
errread, errwrite)
File "/usr/lib/python2.6/subprocess.py", line 1092, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
You will need to include the full path since you aren't using the shell.
http://docs.python.org/library/subprocess.html#replacing-os-system
Alternatively you can also pass subprocess.PIPE to the stderr and stdout to suppress the messages. See the link above for more details.
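A short sketch of that suppression (the path and argument are placeholders); the child's output lands in the pipes instead of on the terminal:
import subprocess

p = subprocess.Popen(["/home/user/foo", "arg"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()   # read everything and wait for the child to exit
print p.returncode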
A:
As previously described, you can (and should) use the subprocess module.
By default, shell parameter is False. This is good, and quite safe. Also, you don't need to pass the full path, just pass the executable name and the arguments as a sequence (tuple or list).
import subprocess
# This works fine
p = subprocess.Popen(["echo","2"])
# These will raise OSError exception:
p = subprocess.Popen("echo 2")
p = subprocess.Popen(["echo 2"])
p = subprocess.Popen(["echa", "2"])
You can also use these two convenience functions already defined in subprocess module:
# Their arguments are the same as the Popen constructor
retcode = subprocess.call(["echo", "2"])
subprocess.check_call(["echo", "2"])
Remember you can redirect stdout and/or stderr to PIPE, and thus it won't be printed to the screen (but the output is still available for reading by your python program). By default, stdout and stderr are both None, which means no redirection, which means they will use the same stdout/stderr as your python program.
Also, you can use shell=True and redirect stdout/stderr to a PIPE, and thus no message will be printed:
# This will work fine, but no output will be printed
p = subprocess.Popen("echo 2", shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# This will NOT raise an exception, and the shell error message is redirected to PIPE
p = subprocess.Popen("echa 2", shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
|
Create an executable process without using shell on Python 2.5 and below
|
Just what the title says:
The subprocess module cannot be used as this should work on 2.4 and 2.5
Shell process should not be spawned to pass arguments.
To explain (2), consider the following code:
>>> x=os.system('foo arg')
sh: foo: not found
>>> x=os.popen('foo arg')
sh: foo: not found
>>>
As you can see os.system and os.popen runs the given command ("foo") via a system shell ("sh"). I don't want this to happen (otherwise, ugly 'not found' messages are printed to program stderr without my control).
Finally, I should be able to pass arguments to this program ('arg' in the above example).
How would one go about doing this in Python 2.5 and 2.4?
|
[
"You probably need to use the subprocess module which is available in Python 2.4\nPopen(\"/home/user/foo\" + \" arg\")\n\n>>> Popen(\"foo arg\", shell=False)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/usr/lib/python2.6/subprocess.py\", line 595, in __init__\n errread, errwrite)\n File \"/usr/lib/python2.6/subprocess.py\", line 1092, in _execute_child\n raise child_exception\nOSError: [Errno 2] No such file or directory\n\nYou will need to include the full path since you aren't using the shell.\nhttp://docs.python.org/library/subprocess.html#replacing-os-system\nAlternatively you can also pass subprocess.PIPE to the stderr and stdout to suppress the messages. See the link above for more details.\n",
"As previously described, you can (and should) use the subprocess module.\nBy default, shell parameter is False. This is good, and quite safe. Also, you don't need to pass the full path, just pass the executable name and the arguments as a sequence (tuple or list).\nimport subprocess\n\n# This works fine\np = subprocess.Popen([\"echo\",\"2\"])\n\n# These will raise OSError exception:\np = subprocess.Popen(\"echo 2\")\np = subprocess.Popen([\"echo 2\"])\np = subprocess.Popen([\"echa\", \"2\"])\n\nYou can also use these two convenience functions already defined in subprocess module:\n# Their arguments are the same as the Popen constructor\nretcode = subprocess.call([\"echo\", \"2\"])\nsubprocess.check_call([\"echo\", \"2\"])\n\nRemember you can redirect stdout and/or stderr to PIPE, and thus it won't be printed to the screen (but the output is still available for reading by your python program). By default, stdout and stderr are both None, which means no redirection, which means they will use the same stdout/stderr as your python program.\nAlso, you can use shell=True and redirect the stdout.stderr to a PIPE, and thus no message will be printed:\n# This will work fine, but no output will be printed\np = subprocess.Popen(\"echo 2\", shell=True,\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n# This will NOT raise an exception, and the shell error message is redirected to PIPE\np = subprocess.Popen(\"echa 2\", shell=True,\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"fork",
"popen",
"process",
"python",
"subprocess"
] |
stackoverflow_0001407992_fork_popen_process_python_subprocess.txt
|
Q:
Nice Python Decorators
How do I nicely write a decorator?
In particular issues include: compatibility with other decorators, preserving of signatures, etc.
I would like to avoid dependency on the decorator module if possible, but if there were sufficient advantages, then I would consider it.
Related
Preserving signatures of decorated functions - much more specific question. The answer here is to use the third-party decorator module annotating the decorator with @decorator.decorator
A:
Use functools to preserve the name and doc. The signature won't be preserved.
Directly from the doc.
>>> from functools import wraps
>>> def my_decorator(f):
... @wraps(f)
... def wrapper(*args, **kwds):
... print 'Calling decorated function'
... return f(*args, **kwds)
... return wrapper
...
>>> @my_decorator
... def example():
... """Docstring"""
... print 'Called example function'
...
>>> example()
Calling decorated function
Called example function
>>> example.__name__
'example'
>>> example.__doc__
'Docstring'
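If preserving the signature matters (as in the related question above), the third-party decorator module mentioned there handles it; roughly, and hedged from memory, its usage looks like this:
from decorator import decorator

@decorator
def my_decorator(f, *args, **kwds):
    print 'Calling decorated function'
    return f(*args, **kwds)

@my_decorator
def example(x, y=1):
    """Docstring"""
    return x + y

# inspect.getargspec(example) should still report (x, y=1)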
A:
Writing a good decorator is no different than writing a good function. That means, ideally, using docstrings and making sure the decorator is included in your testing framework.
You should definitely use either the decorator library or, better, the functools.wraps() decorator in the standard library (since 2.5).
Beyond that, it's best to keep your decorators narrowly focused and well designed. Don't use *args or **kw if your decorator expects specific arguments. And do fill in what arguments you expect, so instead of:
def keep_none(func):
def _exec(*args, **kw):
return None if args[0] is None else func(*args, **kw)
return _exec
... use ...
def keep_none(func):
"""Wraps a function which expects a value as the first argument, and
ensures the function won't get called with *None*. If it is, this
will return *None*.
>>> def f(x):
... return x + 5
>>> f(1)
6
>>> f(None) is None
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
>>> f = keep_none(f)
>>> f(1)
6
>>> f(None) is None
True"""
@wraps(func)
def _exec(value, *args, **kw):
return None if value is None else func(value, *args, **kw)
return _exec
|
Nice Python Decorators
|
How do I nicely write a decorator?
In particular issues include: compatibility with other decorators, preserving of signatures, etc.
I would like to avoid dependency on the decorator module if possible, but if there were sufficient advantages, then I would consider it.
Related
Preserving signatures of decorated functions - much more specific question. The answer here is to use the third-party decorator module annotating the decorator with @decorator.decorator
|
[
"Use functools to preserve the name and doc. The signature won't be preserved.\nDirectly from the doc. \n>>> from functools import wraps\n>>> def my_decorator(f):\n... @wraps(f)\n... def wrapper(*args, **kwds):\n... print 'Calling decorated function'\n... return f(*args, **kwds)\n... return wrapper\n...\n>>> @my_decorator\n... def example():\n... \"\"\"Docstring\"\"\"\n... print 'Called example function'\n...\n>>> example()\nCalling decorated function\nCalled example function\n>>> example.__name__\n'example'\n>>> example.__doc__\n'Docstring'\n\n",
"Writing a good decorator is no different then writing a good function. Which means, ideally, using docstrings and making sure the decorator is included in your testing framework.\nYou should definitely use either the decorator library or, better, the functools.wraps() decorator in the standard library (since 2.5).\nBeyond that, it's best to keep your decorators narrowly focused and well designed. Don't use *args or **kw if your decorator expects specific arguments. And do fill in what arguments you expect, so instead of:\ndef keep_none(func):\n def _exec(*args, **kw):\n return None if args[0] is None else func(*args, **kw)\n\n return _exec\n\n... use ...\ndef keep_none(func):\n \"\"\"Wraps a function which expects a value as the first argument, and\n ensures the function won't get called with *None*. If it is, this \n will return *None*.\n\n >>> def f(x):\n ... return x + 5\n >>> f(1)\n 6\n >>> f(None) is None\n Traceback (most recent call last):\n ...\n TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'\n >>> f = keep_none(f)\n >>> f(1)\n 6\n >>> f(None) is None\n True\"\"\"\n\n @wraps(func)\n def _exec(value, *args, **kw):\n return None if value is None else func(value, *args, **kw)\n\n return _exec\n\n"
] |
[
6,
5
] |
[] |
[] |
[
"decorator",
"python"
] |
stackoverflow_0001408253_decorator_python.txt
|
Q:
NumPy array slice using None
This had me scratching my head for a while. I was unintentionally slicing an array with None and getting something other than an error (I expected an error). Instead, it returns an array with an extra dimension.
>>> import numpy
>>> a = numpy.arange(4).reshape(2,2)
>>> a
array([[0, 1],
[2, 3]])
>>> a[None]
array([[[0, 1],
[2, 3]]])
Is this behavior intentional or a side-effect? If intentional, is there some rationale for it?
A:
Using None is equivalent to using numpy.newaxis, so yes, it's intentional. In fact, they're the same thing, but, of course, newaxis spells it out better.
The docs:
The newaxis object can be used in all slicing operations to create an axis of length one. newaxis is an alias for ‘None’, and ‘None’ can be used in place of this with the same result.
A related SO question.
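A quick check of the equivalence, using the array from the question:
import numpy

a = numpy.arange(4).reshape(2, 2)
print numpy.newaxis is None                  # True -- newaxis is literally None
print a[None].shape                          # (1, 2, 2)
print (a[None] == a[numpy.newaxis]).all()    # True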
|
NumPy array slice using None
|
This had me scratching my head for a while. I was unintentionally slicing an array with None and getting something other than an error (I expected an error). Instead, it returns an array with an extra dimension.
>>> import numpy
>>> a = numpy.arange(4).reshape(2,2)
>>> a
array([[0, 1],
[2, 3]])
>>> a[None]
array([[[0, 1],
[2, 3]]])
Is this behavior intentional or a side-effect? If intentional, is there some rationale for it?
|
[
"Using None is equivalent to using numpy.newaxis, so yes, it's intentional. In fact, they're the same thing, but, of course, newaxis spells it out better.\nThe docs:\n\nThe newaxis object can be used in all slicing operations to create an axis of length one. newaxis is an alias for ‘None’, and ‘None’ can be used in place of this with the same result.\n\nA related SO question.\n"
] |
[
75
] |
[] |
[] |
[
"arrays",
"numpy",
"python"
] |
stackoverflow_0001408311_arrays_numpy_python.txt
|
Q:
send an arbitrary number of inputs from python to a .exe
p = subprocess.Popen(args = "myprog.exe" + " " +
str(input1) + " " +
str(input2) + " " +
str(input3) + " " +
strpoints, stdout = subprocess.PIPE)
in the code above, input1, input2, and input3 are all integers that get converted to strings. the variable "strpoints" is a list of arbitrary length of strings. input1 tells myprog the length of strpoints. of course, when i try to run the above code, i get the following error message:
TypeError: Can't convert 'list' object to str implicitly
how do i pass all the elements of strpoints to myprog.exe? am i doomed to having to do str(strpoints) and then have myprog.exe parse this for commas, apostrophes, etc.? e.g.,
>>> x = ['a', 'b']
>>> str(x)
"['a', 'b']"
or should i create a huge string in advance? e.g.,
>>> x = ['a', 'b']
>>> stringify(x)
' a b'
where stringify would be something like
def stringify(strlist):
rlist = ""
for i in strlist:
rlist = rlist + i + " "
return rlist
A:
args can be a sequence:
p = subprocess.Popen(args = ["myprog.exe"] +
[str(x) for x in [input1,input2,input3]] +
strpoints,stdout = subprocess.PIPE)
This is more correct if your arguments contain shell metacharacters e.g. ' * and you don't want them interpreted as such.
A:
Try using string.join:
p = subprocess.Popen(args = "myprog.exe" + " " +
str(input1) + " " +
str(input2) + " " +
str(input3) + " " +
" ".join(strpoints), stdout = subprocess.PIPE)
A:
Avoid concatenating all the arguments into one string and passing that string.
It's a lot simpler and better and safer to just pass a sequence (list or tuple) of arguments. This is especially true if any argument contains a space character (which is quite common for filenames).
A:
Have you thought about enclosing those strings in quotes, in case they have embedded spaces? Something like:
if strpoints:
finalargs = '"' + '" "'.join(strpoints) + '"'
else:
finalargs = ""
p = subprocess.Popen(args = "myprog.exe" + " " +
str(input1) + " " +
str(input2) + " " +
str(input3) + " " +
finalargs, stdout = subprocess.PIPE)
This makes your string longer, but will preserve the integrity of your individual elements in the list.
|
send an arbitrary number of inputs from python to a .exe
|
p = subprocess.Popen(args = "myprog.exe" + " " +
str(input1) + " " +
str(input2) + " " +
str(input3) + " " +
strpoints, stdout = subprocess.PIPE)
in the code above, input1, input2, and input3 are all integers that get converted to strings. the variable "strpoints" is a list of arbitrary length of strings. input1 tells myprog the length of strpoints. of course, when i try to run the above code, i get the following error message:
TypeError: Can't convert 'list' object to str implicitly
how do i pass all the elements of strpoints to myprog.exe? am i doomed to having to do str(strpoints) and then have myprog.exe parse this for commas, apostrophes, etc.? e.g.,
`>>> x = ['a', 'b']
`>>> str(x)
"['a', 'b']"
or should i create a huge string in advance? e.g.,
'>>> x = ['a', 'b']
'>>> stringify(x)
' a b'
where stringify would be something like
def stringify(strlist):
rlist = ""
for i in strlist:
rlist = rlist + i + " "
return rlist
|
[
"args can be a sequence:\np = subprocess.Popen(args = [\"myprog.exe\"] + \n [str(x) for x in [input1,input2,input3]] + \n strpoints,stdout = subprocess.PIPE)\n\nThis is more correct if your arguments contain shell metacharacters e.g. ' * and you don't want them interpreted as such.\n",
"Try using string.join: \np = subprocess.Popen(args = \"myprog.exe\" + \" \" +\n str(input1) + \" \" +\n str(input2) + \" \" +\n str(input3) + \" \" +\n \" \".join(strpoints), stdout = subprocess.PIPE)\n\n",
"Avoid concatenating all arguments into one string using that string.\nIt's a lot simpler and better and safer to just pass a sequence (list or tuple) of arguments. This is specially true if any argument contains a space character (which is quite common for filenames).\n",
"Any thought that you might want to enclose those strings in quotes, in case they have embedded spaces? Something like:\nif strpoints:\n finalargs = '\"' + '\" \"'.join(strpoints) + '\"'\nelse:\n finalargs = \"\"\np = subprocess.Popen(args = \"myprog.exe\" + \" \" +\n str(input1) + \" \" +\n str(input2) + \" \" +\n str(input3) + \" \" +\n finalargs, stdout = subprocess.PIPE)\n\nThis makes your string longer, but will preserve the integrity of your individual elements in the list.\n"
] |
[
7,
4,
1,
0
] |
[] |
[] |
[
"executable",
"python"
] |
stackoverflow_0001408326_executable_python.txt
|
Q:
Help with Python/Qt4 and QTableWidget column click
I'm trying to learn PyQt4 and GUI design with QtDesigner. I've got my basic GUI designed, and I now want to capture when the user clicks on a column header.
My thought is that I need to override QTableWidget, but I don't know how to attach to the signal. Here's my class so far:
class MyTableWidget(QtGui.QTableWidget):
def __init__(self, parent = None):
super(MyTableWidget, self).__init__(parent)
self.connect(self, SIGNAL('itemClicked(QTreeWidgetItem*)'), self.onClick)
def onClick(self):
print "Here!"
But, setting a breakpoint in the onClick, nothing is firing.
Can somebody please help me?
TIA
Mike
A:
OK, the SIGNAL needed is:
self.connect(self.horizontalHeader(), SIGNAL('sectionClicked(int)'), self.onClick)
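Pulled back into the subclass from the question, that looks roughly like this (old-style signal syntax; the handler name is arbitrary, and sectionClicked passes the clicked column index):
from PyQt4 import QtCore, QtGui

class MyTableWidget(QtGui.QTableWidget):
    def __init__(self, parent=None):
        super(MyTableWidget, self).__init__(parent)
        self.connect(self.horizontalHeader(),
                     QtCore.SIGNAL('sectionClicked(int)'),
                     self.onHeaderClicked)

    def onHeaderClicked(self, column):
        print "Header clicked:", column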
|
Help with Python/Qt4 and QTableWidget column click
|
I'm trying to learn PyQt4 and GUI design with QtDesigner. I've got my basic GUI designed, and I now want to capture when the user clicks on a column header.
My thought is that I need to override QTableWidget, but I don't know how to attach to the signal. Here's my class so far:
class MyTableWidget(QtGui.QTableWidget):
def __init__(self, parent = None):
super(MyTableWidget, self).__init__(parent)
self.connect(self, SIGNAL('itemClicked(QTreeWidgetItem*)'), self.onClick)
def onClick(self):
print "Here!"
But, setting a breakpoint in the onClick, nothing is firing.
Can somebody please help me?
TIA
Mike
|
[
"OK, the SIGNAL needed is:\nself.connect(self.horizontalHeader(), SIGNAL('sectionClicked(int)'), self.onClick)\n\n"
] |
[
2
] |
[] |
[] |
[
"pyqt4",
"python",
"qt4"
] |
stackoverflow_0001408277_pyqt4_python_qt4.txt
|
Q:
Python: NoneType errors. Do they look familiar?
I've been looking for the NoneType for half a day. I've put 'print' and dir() all through the generation of the Object represented by t2. I've looked at the data structure after the crash using 'post mortem' and nowhere can I find a NoneType.
I was wondering if perhaps it's one of those errors that are initiated by some other part of the code (wishful thinking) and I was wondering if anybody recognizes this?
( k2 is an 'int' )
File "C:\Python26\Code\OO.py", line 48, in removeSubtreeFromTree
assert getattr(parent, branch) is subtreenode
TypeError: getattr(): attribute name must be string, not 'NoneType'
File "C:\Python26\Code\OO.py", line 94, in theSwapper
st2, p2, b2 = self.removeSubtreeFromTree(t2, k2)
TypeError: 'NoneType' object is not iterable
A:
NoneType is the type of the None object. So, in the first error, branch is None. The second error is tougher to diagnose without seeing the source code, but suggests that somewhere in t2, the data structure isn't exactly as you believe.
When this comes up for me, I usually find that I've forgotten to end one of my functions with a return statement. Functions without an explicit return will return None.
A:
for some reason, at the point of the assert line, the value of branch is None.
If your second exception is separate, then most likely what is happening is the method call self.removeSubtreeFromTree() is returning None, instead of a sequence (like a tuple), so when Python tries to unpack it into the variables, it fails.
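A tiny reproduction of that failure mode (the names are made up): a function that forgets to return, whose result then gets unpacked:
def remove_subtree(tree, key):
    node = tree.get(key)
    # ... real work on node would happen here ...
    # oops: no return statement, so the caller receives None

st2, p2, b2 = remove_subtree({}, 1)   # TypeError: 'NoneType' object is not iterable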
A:
I agree with Managu that it's likely you've forgotten to return a value from a function. I do that all the time.
As another possibility, I presume you are writing some kind of tree data structure. Is it possible that you're using None to indicate "this node has no children" and you aren't handling that case correctly?
A:
Another one that got me was in-place methods like list.append(): you can't use the result in an expression, because list.append() returns None and mutates the list in place.
I spent the better part of a day and a half chasing that bug....
|
Python: NoneType errors.Do they look familiar
|
I've been looking for the NoneType for half a day. I've put 'print' and dir() all through the generation of the Object represented by t2. I've looked at the data structure after the crash using 'post mortem' and nowhere can I find a NoneType.
I was wondering if perhaps it's one of those errors that are initiated by some other part of the code (wishful thinking) and I was wondering if anybody recognizes this?
( k2 is an 'int' )
File "C:\Python26\Code\OO.py", line 48, in removeSubtreeFromTree
assert getattr(parent, branch) is subtreenode
TypeError: getattr(): attribute name must be string, not 'NoneType
File "C:\Python26\Code\OO.py", line 94, in theSwapper
st2, p2, b2 = self.removeSubtreeFromTree(t2, k2)
TypeError: 'NoneType' object is not iterable
|
[
"NoneType is the type of the None object. So, in the first error, branch is None. The second error is tougher to diagnose without seeing the source code, but suggests that somewhere in t2, the data structure isn't exactly as you believe.\nWhen this comes up for me, I usually find that I've forgotten to end one of my functions with a return statement. Functions without an explicit return will return None.\n",
"for some reason, at the point of the assert line, the value of branch is None. \nIf your second exception is separate, Then most likely what is happening is the method call self.removeSubtreeFromTree() is returning None, instead of a sequence (like a tuple), so when Python tries to unpack it into the variables, it fails.\n",
"I agree with Managu that it's likely you've forgotten to return a value from a function. I do that all the time.\nAs another possibility, I presume you are writing some kind of tree data structure. Is it possible that you're using None to indicate \"this node has no children\" and you aren't handling that case correctly?\n",
"Another one that got me was in-place functions, like list.append() (can't use that in a function call, list.append() returns None and changes the variable).\nI spent the better part of a day and a half chasing that bug....\n"
] |
[
5,
2,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001408286_python.txt
|
Q:
How can you profile a parallelized Python script?
Suppose I have a python script called my_parallel_script.py that involves using multiprocessing to parallelize several things and I run it with the following command:
python -m cProfile my_parallel_script.py
This generates profiling output for the parent process only. Calls made in child processes are not recorded at all. Is it possible to profile the child processes as well?
If the only option is to modify the source, what would be the simplest way to do this?
A:
cProfile only works within a single process, so you will not automatically get the child process profiled.
I would recommend that you tweak the child process code so that you can invoke it separately as a single process. Then run it under the profiler. You probably don't need to run your system multi-process while profiling, and it will simplify the job to have only a single child running.
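A sketch of that suggestion (the module and function names are hypothetical): import the child's entry point and run it alone under cProfile:
import cProfile
import pstats

from my_parallel_script import child_main   # hypothetical single-process entry point

cProfile.run('child_main()', 'child.prof')
pstats.Stats('child.prof').sort_stats('cumulative').print_stats(20)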
|
How can you profile a parallelized Python script?
|
Suppose I have a python script called my_parallel_script.py that involves using multiprocessing to parallelize several things and I run it with the following command:
python -m cProfile my_parallel_script.py
This generates profiling output for the parent process only. Calls made in child processes are not recorded at all. Is it possible to profile the child processes as well?
If the only option is to modify the source, what would be the simplest way to do this?
|
[
"cProfile only works within a single process, so you will not automatically get the child process profiled.\nI would recommend that you tweak the child process code so that you can invoke it separately as a single process. Then run it under the profiler. You probably don't need to run your system multi-process while profiling, and it will simplify the job to have only a single child running.\n"
] |
[
10
] |
[] |
[] |
[
"multiprocessing",
"parallel_processing",
"profile",
"python"
] |
stackoverflow_0001408393_multiprocessing_parallel_processing_profile_python.txt
|
Q:
Track process status with Python
I want to start a number of subprocesses in my Python script and then track when they complete or crash.
subprocess.Popen.poll() seems to return None when the process is still running, 0 on success, and non-zero on failure. Can that be expected on all OS's?
Unfortunately the standard library documentation is lacking for these methods...
Is the subprocess module the most suitable to achieve this goal?
thanks
A:
This may not be a very good answer to your question, but just in case you are at risk of reinventing a wheel, take a look at Supervisor
Supervisor is a client/server system that allows its users to monitor and
control a number of processes on
UNIX-like operating systems.
And it's all written in Python, so if you feel like tinkering with it, you can dig right in!
A:
Yes to all.
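A small polling loop along those lines (the commands are placeholders); poll() returning None means the child is still running, and anything else is its exit status:
import subprocess
import time

procs = [subprocess.Popen(cmd) for cmd in (["task_one"], ["task_two"])]
while procs:
    for p in procs[:]:
        code = p.poll()
        if code is None:
            continue                  # still running
        print "pid %d finished with status %d" % (p.pid, code)
        procs.remove(p)
    time.sleep(0.5)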
|
Track process status with Python
|
I want to start a number of subprocesses in my Python script and then track when they complete or crash.
subprocess.Popen.poll() seems to return None when the process is still running, 0 on success, and non-zero on failure. Can that be expected on all OS's?
Unfortunately the standard library documentation is lacking for these methods...
Is the subprocess module the most suitable to achieve this goal?
thanks
|
[
"This may not be a very good answer to your question, but just in case you are at risk of reinventing a wheel, take a look at Supervisor \n\nSupervisor is a client/server system that allows its users to monitor and\n control a number of processes on\n UNIX-like operating systems.\n\nAnd it's all written in Python, so if you feel like tinkering with it, you can dig right in!\n",
"Yes to all. \n"
] |
[
4,
1
] |
[] |
[] |
[
"crash",
"process",
"python",
"subprocess"
] |
stackoverflow_0001408627_crash_process_python_subprocess.txt
|
Q:
Trimming Python Runtime
We've got a (Windows) application, with which we distribute an entire Python installation (including several 3rd-party modules that we use), so we have consistency and so we don't need to install everything separately. This works pretty well, but the application is pretty huge.
Obviously, we don't use everything available in the runtime. I'd like to trim down the runtime to only include what we really need.
I plan on trying out py2exe, but I'd like to try and find another solution that will just help me remove the unneeded parts of the Python runtime.
A:
One trick I've learned while trimming down .py files to ship: Delete all the .pyc files in the standard library, then run your application thoroughly (that is, enough to be sure all the Python modules it needs will be loaded). If you examine the standard library directories, there will be .pyc files for all the modules that were actually used. .py files without .pyc are ones that you don't need.
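A rough way to turn that trick into a report (a sketch only; it assumes the application has just been exercised so the relevant .pyc files exist):
import os

stdlib_dir = os.path.dirname(os.__file__)
for root, dirs, files in os.walk(stdlib_dir):
    for name in files:
        if name.endswith('.py') and name + 'c' not in files:
            # no .pyc was generated, so this module was never imported
            print os.path.join(root, name)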
A:
Both py2exe and pyinstaller (NOTE: for the latter use the SVN version, the released one is VERY long in the tooth;-) do their "trimming" via modulefinder, the standard library module for finding all modules used by a given Python script; you can of course use the latter yourself to identify all needed modules, if you don't trust pyinstaller or py2exe to do it properly and automatically on your behalf.
A:
This py2exe page on compression suggests using UPX to compress any DLLs or .pyd files (which are actually just DLLs, still). Obviously this doesn't help in trimming out unneeded modules, but it can/will trim down the size of your distribution, if that's a large concern.
|
Trimming Python Runtime
|
We've got a (Windows) application, with which we distribute an entire Python installation (including several 3rd-party modules that we use), so we have consistency and so we don't need to install everything separately. This works pretty well, but the application is pretty huge.
Obviously, we don't use everything available in the runtime. I'd like to trim down the runtime to only include what we really need.
I plan on trying out py2exe, but I'd like to try and find another solution that will just help me remove the unneeded parts of the Python runtime.
|
[
"One trick I've learned while trimming down .py files to ship: Delete all the .pyc files in the standard library, then run your application throughly (that is, enough to be sure all the Python modules it needs will be loaded). If you examine the standard library directories, there will be .pyc files for all the modules that were actually used. .py files without .pyc are ones that you don't need.\n",
"Both py2exe and pyinstaller (NOTE: for the latter use the SVN version, the released one is VERY long in the tooth;-) do their \"trimming\" via modulefinder, the standard library module for finding all modules used by a given Python script; you can of course use the latter yourself to identify all needed modules, if you don't trust pyinstaller or py2exe to do it properly and automatically on your behalf.\n",
"This py2exe page on compression suggests using UPX to compress any DLLs or .pyd files (which are actually just DLLs, still). Obviously this doesn't help in trimming out unneeded modules, but it can/will trim down the size of your distribution, if that's a large concern.\n"
] |
[
6,
5,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001408726_python.txt
|
Q:
Getting another program's output as input on the fly
I've two programs I'm using in this way:
$ c_program | python_program.py
c_program prints something using printf() and python_program.py reads using sys.stdin.readline()
I'd like to make the python_program.py process c_program's output as it prints, immediately, so that it can print its own current output. Unfortunately python_program.py gets its input only after c_program ends.
How can I solve this?
A:
Just set stdout to be line buffered at the beginning of your C program (before performing any output), like this:
#include <stdio.h>
setvbuf(stdout, NULL, _IOLBF, 0);
or
#include <stdio.h>
setlinebuf(stdout);
Either one will work on Linux, but setvbuf is part of the C standard so it will work on more systems.
By default stdout will be block buffered for a pipe or file, or line buffered for a terminal. Since stdout is a pipe in this case, the default will be block buffered. If it is block buffered then the buffer will be flushed when it is full, or when you call fflush(stdout). If it is line buffered then it will be flushed automatically after each line.
A:
What you need is for your C program to call fflush(stdout) after every line. For example, with the GNU grep tool, you can invoke the option '--line-buffered', which causes this behavior. See fflush.
A:
If you can modify your C program, you've already received your answer but i thought i'd include a solution for those that can't/won't modify code.
expect has an example script called unbuffer that will do the trick.
A:
You may want to try flushing the stdout stream in the cpp program.
A:
All the Unix shells (that I know of) implement shell pipelines via something other than a pty
(typically, they use Unix pipes!-); therefore, the C/C++ runtime library in cpp_program will KNOW its output is NOT a terminal, and therefore it WILL buffer the output (in chunks of a few KB at a time). Unless you write your own shell (or semiquasimaybeshelloid) that implements pipelines via ptys, I believe there is no way to do what you require using pipeline notation.
The "shelloid" thing in question might be written in Python (or in C, or Tcl, or...), using the pty module of the standard library or higher-level abstraction based on it such as pexpect, and the fact that the two programs to be connected via a "pty-based pipeline" are written in C++ and Python is pretty irrelevant. The key idea is to trick the program to the left of the pipe into believing its stdout is a terminal (that's why a pty must be at the root of the trick) to fool its runtime library into NOT buffering output. Once you have written such a shelloid, you'd call it with some syntax such as:
$ shelloid 'cpp_program | python_program.py'
Of course it would be easier to provide a "point solution" by writing python_program in the knowledge that it must spawn cpp_program as a sub-process AND trick it into believing its stdout is a terminal (i.e., python_program would then directly use pexpect, for example). But if you have a million of such situations where you want to defeat the normal buffering performed by the system-provided C runtime library, or many cases in which you want to reuse existing filters, etc, writing shelloid might actually be preferable.
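For the "point solution" variant, here is a minimal pexpect sketch (the program name and handler are placeholders): spawning under a pty makes the C runtime line-buffer its output, so lines arrive as they are printed:
import pexpect

child = pexpect.spawn('./c_program arg')   # runs under a pseudo-terminal
while True:
    line = child.readline()
    if not line:                           # empty string signals end of file
        break
    handle_line(line)                      # placeholder for the real processing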
|
Getting another program's output as input on the fly
|
I've two programs I'm using in this way:
$ c_program | python_program.py
c_program prints something using printf() and python_program.py reads using sys.stdin.readline()
I'd like to make the python_program.py process c_program's output as it prints, immediately, so that it can print its own current output. Unfortunately python_program.py gets its input only after c_program ends.
How can I solve this?
|
[
"Just set stdout to be line buffered at the beginning of your C program (before performing any output), like this:\n#include <stdio.h>\nsetvbuf(stdout, NULL, _IOLBF, 0);\n\nor\n#include <stdio.h>\nsetlinebuf(stdout);\n\nEither one will work on Linux, but setvbuf is part of the C standard so it will work on more systems.\nBy default stdout will be block buffered for a pipe or file, or line buffered for a terminal. Since stdout is a pipe in this case, the default will be block buffered. If it is block buffered then the buffer will be flushed when it is full, or when you call fflush(stdout). If it is line buffered then it will be flushed automatically after each line.\n",
"What you need is for your C program to call fflush(stdout) after every line. For example, with the GNU grep tool, you can invoke the option '--line-buffered', which causes this behavior. See fflush.\n",
"If you can modify your C program, you've already received your answer but i thought i'd include a solution for those that can't/won't modify code.\nexpect has an example script called unbuffer that will do the trick.\n",
"You may want to try flushing the stdout stream in the cpp program.\n",
"All the Unix shells (that I know of) implement shell pipelines via something else than a pty\n(typically, they use Unix pipes!-); therefore, the C/C++ runtime library in cpp_program will KNOW its output is NOT a terminal, and therefore it WILL buffer the output (in chunks of a few KB at a time). Unless you write your own shell (or semiquasimaybeshelloid) that implements pipelines via pyt's, I believe there is no way to do what you require using pipeline notation.\nThe \"shelloid\" thing in question might be written in Python (or in C, or Tcl, or...), using the pty module of the standard library or higher-level abstraction based on it such as pexpect, and the fact that the two programs to be connected via a \"pty-based pipeline\" are written in C++ and Python is pretty irrelevant. The key idea is to trick the program to the left of the pipe into believing its stdout is a terminal (that's why a pty must be at the root of the trick) to fool its runtime library into NOT buffering output. Once you have written such a shelloid, you'd call it with some syntax such as:\n$ shelloid 'cpp_program | python_program.py'\nOf course it would be easier to provide a \"point solution\" by writing python_program in the knowledge that it must spawn cpp_program as a sub-process AND trick it into believing its stdout is a terminal (i.e., python_program would then directly use pexpect, for example). But if you have a million of such situations where you want to defeat the normal buffering performed by the system-provided C runtime library, or many cases in which you want to reuse existing filters, etc, writing shelloid might actually be preferable.\n"
] |
[
18,
9,
7,
2,
1
] |
[
"ok this maybe sound stupid but it might work:\noutput your pgm to a file\n$ c_program >> ./out.log\n\ndevelop a python program that read from tail command\nimport os\n\ntailoutput = os.popen(\"tail -n 0 -f ./out.log\")\n\ntry:\n while 1:\n line = tailoutput.readline()\n if len(line) == 0:\n break\n\n #do the rest of your things here\n print line\n\nexcept KeyboardInterrupt:\n print \"Quitting \\n\"\n\n"
] |
[
-1
] |
[
"bash",
"c",
"linux",
"python",
"stdio"
] |
stackoverflow_0001408678_bash_c_linux_python_stdio.txt
|
Q:
Serving static files with Twisted and Django under non-root folders
I am in the process of migrating an application (Sage) from Twisted to Django.
Static documentation is currently served under /doc/static, while live (built on-the-fly) documentation is served under /doc/live.
Is it possible to use Twisted to serve /doc/static only, leaving Django to serve the rest of /doc/*?
A:
Have a look at this link on how to run Django on top of Twisted: (instructions copied from the blog)
easy_install Twisted
easy_install Django
Profit!
django-admin.py startproject foo
Create a myapp.py with the following code:
from django.core.handlers.wsgi import WSGIHandler
application = WSGIHandler()
export DJANGO_SETTINGS_MODULE=foo.settings
twistd -no web --wsgi=myapp.application
Further down in the comments there is also an example of how to serve media directly with Twisted before the request is passed on to Django:
To handle media files just use "static.File" from "twisted.web", like so:
staticrsrc = static.File(os.path.join(os.path.abspath("."), "mydjangosite/media"))
and then add that resource to your root resource, like so:
root.putChild("media", staticrsrc)
Disclaimer: I have not tried this myself, but the blog article seems quite recent and the author willing to respond to questions.
EDIT: There is also another article written on the subject with instructions on how to get it working here, which seems to include serving static files with Twisted directly.
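For reference, a minimal stand-alone sketch of the static.File/putChild idea quoted above (the paths and port number here are placeholders, not taken from the blog):
import os
from twisted.web import static, server
from twisted.internet import reactor

root = static.File(os.path.abspath("."))        # placeholder root resource
staticrsrc = static.File(os.path.join(os.path.abspath("."), "mydjangosite/media"))
root.putChild("media", staticrsrc)              # served under /media

reactor.listenTCP(8080, server.Site(root))
reactor.run()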
A:
It can be done, the degree of elegance just varies... I understand this is for transition, so it might not have to be very beautiful.
If you have to have Twisted serving the static files, then you either have to hack together in django a proxy-through for those files, or throw something in front of the whole thing. Also Perlbal with VPATH could do this, it'll take regexp's of urls and make them hit the right services.
If you don't have to use Twisted, there are lots of different ways to do it. You could still use Perlbal or something similar to serve static files, something you should have anyway in the long run.
|
Serving static files with Twisted and Django under non-root folders
|
I am in the process of migrating an application (Sage) from Twisted to Django.
Static documentation is currently served under /doc/static, while live (built on-the-fly) documentation are served under /doc/live.
Is it possible to use Twisted to serve /doc/static only, leaving Django to serve the rest of /doc/*?
|
[
"Have a look at this link on how to run Django on top of Twisted: (instructions copied from the blog)\n\neasy_install Twisted\neasy_install Django\nProfit!\ndjango-admin.py startproject foo\nCreate a myapp.py with the following code:\nfrom django.core.handlers.wsgi import WSGIHandler\napplication = WSGIHandler()\nexport DJANGO_SETTINGS_MODULE=foo.settings\ntwistd -no web --wsgi=myapp.application\n\nFurther down in the comments there is also an example of how to serve media directly with Twisted before the request is passed on to Django:\n\nTo handle media files just use\n \"static.File\" from \"twisted.web\" like\n so: staticrsrc =\n static.File(os.path.join(os.path.abspath(\".\"),\n \"mydjangosite/media\")) and then add\n that resource to your root resource\n like so: root.putChild(\"media\",\n staticrsrc)\n\nDisclaimer: I have not tried this myself, but the blog article seems quite recent and the author willing to respond to questions.\nEDIT: There is also another article written on the subject with instructions on how to get it working here, which seems to include servering static files with Twisted directly.\n",
"It can be done, the degree of elegance just varies... I understand this is for transition, so it might not have to be very beautiful.\nIf you have to have Twisted serving the static files, then you either have to hack together in django a proxy-through for those files, or throw something in front of the whole thing. Also Perlbal with VPATH could do this, it'll take regexp's of urls and make them hit the right services.\nIf you don't have to use Twisted, there are lots of different ways to do it. You could still use Perlbal or something similar to serve static files, something you should have anyway in the long run. \n"
] |
[
3,
2
] |
[
"Unless I misunderstood the question, why not simply rewrite the /doc/static url to Twisted before it even reaches Django (ie. at the Apache / proxy level)?\nhttp://httpd.apache.org/docs/2.0/mod/mod_rewrite.html\n"
] |
[
-1
] |
[
"django",
"python",
"twisted"
] |
stackoverflow_0001405254_django_python_twisted.txt
|
Q:
Inverse Dict in Python
I am trying to create a new dict using a list of values of an existing dict as individual keys.
So for example:
dict1 = dict({'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]})
and I would like to obtain:
dict2 = dict({1:['a','b','c'], 2:['a','b','c'], 3:['a','b'], 4:['b']})
So far, I've not been able to do this in a very clean way. Any suggestions?
A:
If you are using Python 2.5 or above, use the defaultdict class from the collections module; a defaultdict automatically creates values on the first access to a missing key, so you can use that here to create the lists for dict2, like this:
from collections import defaultdict
dict1 = dict({'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]})
dict2 = defaultdict(list)
for key, values in dict1.items():
    for value in values:
        # The list for dict2[value] is created automatically
        dict2[value].append(key)
Note that the lists in dict2 will not be in any particular order, as dictionaries do not order their key-value pairs.
If you want an ordinary dict out at the end that will raise a KeyError for missing keys, just use dict2 = dict(dict2) after the above.
A:
Notice that you don't need the dict in your examples: the {} syntax gives you a dict:
dict1 = {'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]}
|
Inverse Dict in Python
|
I am trying to create a new dict using a list of values of an existing dict as individual keys.
So for example:
dict1 = dict({'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]})
and I would like to obtain:
dict2 = dict({1:['a','b','c'], 2:['a','b','c'], 3:['a','b'], 4:['b']})
So far, I've not been able to do this in a very clean way. Any suggestions?
|
[
"If you are using Python 2.5 or above, use the defaultdict class from the collections module; a defaultdict automatically creates values on the first access to a missing key, so you can use that here to create the lists for dict2, like this:\nfrom collections import defaultdict\ndict1 = dict({'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]})\ndict2 = defaultdict(list)\nfor key, values in dict1.items():\n for value in values:\n # The list for dict2[value] is created automatically\n dict2[value].append(key)\n\nNote that the lists in dict2 will not be in any particular order, as a dictionaries do not order their key-value pairs.\nIf you want an ordinary dict out at the end that will raise a KeyError for missing keys, just use dict2 = dict(dict2) after the above.\n",
"Notice that you don't need the dict in your examples: the {} syntax gives you a dict:\ndict1 = {'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]}\n\n"
] |
[
8,
4
] |
[
"Other way:\ndict2={}\n[[ (dict2.setdefault(i,[]) or 1) and (dict2[i].append(x)) for i in y ] for (x,y) in dict1.items()] \n\n"
] |
[
-3
] |
[
"data_structures",
"dictionary",
"python"
] |
stackoverflow_0001410087_data_structures_dictionary_python.txt
|
Q:
Python - IronPython dilemma
I'm starting to study Python, and for now I like it very much. But, if you could just answer a few questions for me, which have been troubling me, and I can't find any definite answers to them:
What is the relationship between Python's C implementation (main version from python.org) and IronPython, in terms of language compatibility ? Is it the same language, and do I by learning one, will be able to smoothly cross to another, or is it Java to JavaScript ?
What is the current status to IronPython's libraries ? How much does it lags behind CPython libraries ? I'm mostly interested in numpy/scipy and f2py. Are they available to IronPython ?
What would be the best way to access VB from Python and the other way back (connecting some python libraries to Excel's VBA, to be exact) ?
A:
1) IronPython and CPython share nearly identical language syntax. There is very little difference between them. Transitioning should be trivial.
2) The libraries in IronPython are very different than CPython. The Python libraries are a fair bit behind - quite a few of the CPython-accessible libraries will not work (currently) under IronPython. However, IronPython has clean, direct access to the entire .NET Framework, which means that it has one of the most extensive libraries natively accessible to it, so in many ways, it's far ahead of CPython. Some of the numpy/scipy libraries do not work in IronPython, but due to the .NET implementation, some of the functionality is not necessary, since the perf. characteristics are different.
3) Accessing Excel VBA is going to be easier using IronPython, if you're doing it from VBA. If you're trying to automate excel, IronPython is still easier, since you have access to the Execl Primary Interop Assemblies, and can directly automate it using the same libraries as C# and VB.NET.
A:
What is the relationship between
Python's C implementation (main
version from python.org) and
IronPython, in terms of language
compatibility ? Is it the same
language, and do I by learning one,
will be able to smoothly cross to
another, or is it Java to JavaScript ?
Same language (at 2.5 level for now -- IronPython's not 2.6 yet AFAIK).
What is the current status to
IronPython's libraries ? How much does
it lags behind CPython libraries ? I'm
mostly interested in numpy/scipy and
f2py. Are they available to IronPython
?
Standard libraries are in a great state in today's IronPython, huge third-party extensions like the ones you mention far from it. numpy's starting to get feasible thanks to ironclad, but not production-level as numpy is from IronPython (as witnessed by the 0.5 version number for ironclad, &c). scipy is huge and sprawling and chock full of C-coded and Fortran-coded extensions: I have no first-hand experience but I suspect less than half will even run, much less run flawlessly, under any implementation except CPython.
What would be the best way to access
VB from Python and the other way back
(connecting some python libraries to
Excel's VBA, to be exact) ?
IronPython should make it easier via .NET approaches, but CPython is not that far via COM implementation in win32all.
Last but not least, by all means check out the book IronPython in Action -- as I say every time I recommend it, I'm biased (by having been a tech reviewer for it AND by friendship with one author) but I think it's objectively the best intro to Python for .NET developers AND at the same time the best intro to .NET for Pythonistas.
If you need all of scipy (WOW, but that's some "Renaissance Man" computational scientist!-), CPython is really the only real option today. I'm sure other large extensions (PyQt, say, or Mayavi) are in a similar state. For deep integration to today's Windows, however, I think IronPython may have an edge. For general-purpose uses, CPython may be better (esp. thanks to the many new features in 2.6), unless you're really keen to use many cores to the hilt within a single process, in which case IronPython (which is GIL-less) may prove advantageous again.
One way or another (or even on the JVM via Jython, or in peculiar environments via PyPy) Python is surely an awesome language, whatever implementation(s) you pick for a given application!-) Note that you don't need to stick with ONE implementation (though you should probably pick one VERSION -- 2.5 for maximal compatibility with IronPython, Jython, Google App Engine, etc; 2.6 if you don't care about any deployment options except "CPython on a machine under my own sole or virtual control";-).
A:
IronPython version 2.0.2, the current release, supports Python 2.5 syntax. With the next release, 2.6, which is expected sometime over the next month or so (though I'm not sure the team have set a hard release date; here's the beta), they will support Python 2.6 syntax. So, less Java to JavaScript and more Java to J# :-)
All of the libraries that are themselves written in Python work pretty much perfectly. The ones that are written in C are more problematic; there is an open source project called Ironclad (full disclosure: developed and supported by my company), currently at version 0.8.5, which supports numpy pretty well but doesn't cover all of scipy. I don't think we've tested it with f2py. (The version of Ironclad mentioned below by Alex Martelli is over a year old, please avoid that one!)
Calling regular VB.NET from IronPython is pretty easy -- you can instantiate .NET classes and call methods (static or instance) really easily. Calling existing VBA code might be trickier; there are open source projects called Pyinex and XLW that you might want to take a look at. Alternatively, if you want a spreadsheet you can code in Python, then there's always Resolver One (another one from my company... :-)
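To make the "instantiate .NET classes from IronPython" point concrete, here is a tiny sketch (the assembly and class are arbitrary examples, not specific to VBA or Excel):
import clr
clr.AddReference("System.Windows.Forms")      # load a .NET assembly by name
from System.Windows.Forms import MessageBox

MessageBox.Show("Hello from IronPython")      # call a static .NET method directly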
A:
1) The language implemented by CPython and IronPython are the same, or at most a version or two apart. This is nothing like the situation with Java and Javascript, which are two completely different languages given similar names by some bone-headed marketing decision.
2) 3rd-party libraries implemented in C (such as numpy) will have to be evaluated carefully. IronPython has a facility to execute C extensions (I forget the name), but there are many pitfalls, so you need to check with each library's maintainer
3) I have no idea.
A:
CPython is implemented in C for each platform, such as Windows, Linux or Unix; IronPython is implemented in C# on the Windows .NET Framework, so it can only run on a Windows platform with the .NET Framework.
Grammatically both are the same, but we cannot say they are one and the same language. IronPython can use the .NET Framework easily when you develop on the Windows platform.
As of July 21, 2009, IronPython 2.0.2 is the latest stable release of IronPython; you can refer to http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython.
You can access VB through .NET Framework functions from IronPython. So, if you want to master IronPython, you'd better learn more about the .NET Framework.
|
Python - IronPython dilemma
|
I'm starting to study Python, and for now I like it very much. But, if you could just answer a few questions for me, which have been troubling me, and I can't find any definite answers to them:
What is the relationship between Python's C implementation (main version from python.org) and IronPython, in terms of language compatibility ? Is it the same language, and do I by learning one, will be able to smoothly cross to another, or is it Java to JavaScript ?
What is the current status to IronPython's libraries ? How much does it lags behind CPython libraries ? I'm mostly interested in numpy/scipy and f2py. Are they available to IronPython ?
What would be the best way to access VB from Python and the other way back (connecting some python libraries to Excel's VBA, to be exact) ?
|
[
"1) IronPython and CPython share nearly identical language syntax. There is very little difference between them. Transitioning should be trivial.\n2) The libraries in IronPython are very different than CPython. The Python libraries are a fair bit behind - quite a few of the CPython-accessible libraries will not work (currently) under IronPython. However, IronPython has clean, direct access to the entire .NET Framework, which means that it has one of the most extensive libraries natively accessible to it, so in many ways, it's far ahead of CPython. Some of the numpy/scipy libraries do not work in IronPython, but due to the .NET implementation, some of the functionality is not necessary, since the perf. characteristics are different.\n3) Accessing Excel VBA is going to be easier using IronPython, if you're doing it from VBA. If you're trying to automate excel, IronPython is still easier, since you have access to the Execl Primary Interop Assemblies, and can directly automate it using the same libraries as C# and VB.NET.\n",
"\nWhat is the relationship between\n Python's C implementation (main\n version from python.org) and\n IronPython, in terms of language\n compatibility ? Is it the same\n language, and do I by learning one,\n will be able to smoothly cross to\n another, or is it Java to JavaScript ?\n\nSame language (at 2.5 level for now -- IronPython's not 2.6 yet AFAIK).\n\nWhat is the current status to\n IronPython's libraries ? How much does\n it lags behind CPython libraries ? I'm\n mostly interested in numpy/scipy and\n f2py. Are they available to IronPython\n ?\n\nStandard libraries are in a great state in today's IronPython, huge third-party extensions like the ones you mention far from it. numpy's starting to get feasible thanks to ironclad, but not production-level as numpy is from IronPython (as witnessed by the 0.5 version number for ironclad, &c). scipy is huge and sprawling and chock full of C-coded and Fortran-coded extensions: I have no first-hand experience but I suspect less than half will even run, much less run flawlessly, under any implementation except CPython.\n\nWhat would be the best way to access\n VB from Python and the other way back\n (connecting some python libraries to\n Excel's VBA, to be exact) ?\n\nIronPython should make it easier via .NET approaches, but CPython is not that far via COM implementation in win32all.\nLast but not least, by all means check out the book IronPython in Action -- as I say every time I recommend it, I'm biased (by having been a tech reviewer for it AND by friendship with one author) but I think it's objectively the best intro to Python for .NET developers AND at the same time the best intro to .NET for Pythonistas.\nIf you need all of scipy (WOW, but that's some \"Renaissance Man\" computational scientist!-), CPython is really the only real option today. I'm sure other large extensions (PyQt, say, or Mayavi) are in a similar state. For deep integration to today's Windows, however, I think IronPython may have an edge. For general-purpose uses, CPython may be better (esp. thanks to the many new features in 2.6), unless you're really keen to use many cores to the hilt within a single process, in which case IronPython (which is GIL-less) may prove advantageous again.\nOne way or another (or even on the JVM via Jython, or in peculiar environments via PyPy) Python is surely an awesome language, whatever implementation(s) you pick for a given application!-) Note that you don't need to stick with ONE implementation (though you should probably pick one VERSION -- 2.5 for maximal compatibility with IronPython, Jython, Google App Engine, etc; 2.6 if you don't care about any deployment options except \"CPython on a machine under my own sole or virtual control\";-).\n",
"\nIronPython version 2.0.2, the current release, supports Python 2.5 syntax. With the next release, 2.6, which is expected sometime over the next month or so (though I'm not sure the team have set a hard release date; here's the beta), they will support Python 2.6 syntax. So, less Java to JavaScript and more Java to J# :-)\nAll of the libraries that are themselves written in Python work pretty much perfectly. The ones that are written in C are more problematic; there is an open source project called Ironclad (full disclosure: developed and supported by my company), currently at version 0.8.5, which supports numpy pretty well but doesn't cover all of scipy. I don't think we've tested it with f2py. (The version of Ironclad mentioned below by Alex Martelli is over a year old, please avoid that one!)\nCalling regular VB.NET from IronPython is pretty easy -- you can instantiate .NET classes and call methods (static or instance) really easily. Calling existing VBA code might be trickier; there are open source projects called Pyinex and XLW that you might want to take a look at. Alternatively, if you want a spreadsheet you can code in Python, then there's always Resolver One (another one from my company... :-)\n\n",
"1) The language implemented by CPython and IronPython are the same, or at most a version or two apart. This is nothing like the situation with Java and Javascript, which are two completely different languages given similar names by some bone-headed marketing decision.\n2) 3rd-party libraries implemented in C (such as numpy) will have to be evaluated carefully. IronPython has a facility to execute C extensions (I forget the name), but there are many pitfalls, so you need to check with each library's maintainer\n3) I have no idea.\n",
"\nCPython is implemented by C for corresponding platform, such as Windows, Linux or Unix; IronPython is implemented by C# and Windows .Net Framework, so it can only run on Windows Platform with .Net Framework.\nFor gramma, both are same. But we cannot say they are one same language. IronPython can use .Net Framework essily when you develop on windows platform.\nBy far, July 21, 2009 - IronPython 2.0.2, our latest stable release of IronPython, was released. you can refer to http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython.\nYou can access VB with .Net Framework function by IronPython. So, if you want to master IronPython, you'd better learn more .Net Framework.\n\n"
] |
[
18,
11,
2,
1,
0
] |
[] |
[] |
[
"excel",
"ironpython",
"python",
"python.net",
"vba"
] |
stackoverflow_0001403103_excel_ironpython_python_python.net_vba.txt
|
Q:
Auto-generate form fields for a Form in django
I have some models and I want to generate a multi-selection form from this data.
So the form would contain an entry for each category and the choices would be the skills in that category.
models.py
class SkillCategory(models.Model):
    name = models.CharField(max_length=50)

class Skill(models.Model):
    name = models.CharField(max_length=50)
    category = models.ForeignKey(SkillCategory)
Is there a way to auto-generate the form fields?
I know I can manually add a 'SkillCategory' entry in the form for each SkillCategory, but the reason to have it as a model is so skills and skillcategories can be edited freely.
I want to do something like this:
(I tried this, but didn't get it to work, don't remember the exact error...)
forms.py
class SkillSelectionForm(forms.Form):
    def __init__(*args, **kwargs):
        super(SkillSelectionForm, self).__init__(*args, **kwargs)
        for c in SkillCategory.objects.all():
            category_skills = [(pk, s.name) for s in c.skill_set.all()]
            setattr(self, c.name, forms.MultipleChoiceField(choices=category_skills, widget=forms.CheckboxSelectMultiple))
SOLUTION
This creates a form field entry using the SkillCategory.name and assigns choices as those in Skill. field_name/display_name are used to avoid issues with non-ascii category names.
forms.py
def get_categorized_skills():
    skills = {}
    for s in Skill.objects.values('pk', 'name', 'category__name').order_by('category__name'):
        if s['category__name'] not in skills.keys():
            skills[s['category__name']] = []
        skills[s['category__name']].append((s['pk'], s['name']))
    return skills

class SkillSelectionForm(forms.Form):
    def __init__(self, *args, **kwargs):
        super(SkillSelectionForm, self).__init__(*args, **kwargs)
        skills = get_categorized_skills()
        for idx, cat in enumerate(skills.keys()):
            field_name = u'category-{0}'.format(idx)
            display_name = cat
            self.fields[field_name] = forms.MultipleChoiceField(choices=skills[cat], widget=forms.CheckboxSelectMultiple, label=display_name)
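For completeness, a hypothetical view showing how the dynamically generated fields come back in cleaned_data; the module path, template name and URL wiring are assumptions, not part of the original question:
from django.shortcuts import render_to_response
from myapp.forms import SkillSelectionForm   # assumed location of the form above

def select_skills(request):
    if request.method == 'POST':
        form = SkillSelectionForm(request.POST)
        if form.is_valid():
            # maps 'category-0', 'category-1', ... to lists of selected skill pks
            selected = form.cleaned_data
            # ... save or process `selected` here ...
    else:
        form = SkillSelectionForm()
    return render_to_response('skills.html', {'form': form})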
A:
Okay so you can't set fields like that on forms.Form, for reasons which will become apparent when you see DeclarativeFieldsMetaclass, the metaclass of forms.Form (but not of forms.BaseForm). A solution which may be overkill in your case but an example of how dynamic form construction can be done, is something like this:
base_fields = [
    forms.MultipleChoiceField(choices=[
        (pk, s.name) for s in c.skill_set.all()
    ]) for c in SkillCategory.objects.all()
]
SkillSelectionForm = type('SkillSelectionForm', (forms.BaseForm,), {'base_fields': base_fields})
A:
What you want is a Formset. This will give you a set of rows, each of which maps to a specific Skill.
See the Formset documentation and the page specifically on generating formsets for models.
A:
Take a look at creating dynamic forms in Django, from b-list.org and uswaretech.com. I've had success using these examples to dynamically create form content from models.
|
Auto-generate form fields for a Form in django
|
I have some models and I want to generate a multi-selection form from this data.
So the form would contain an entry for each category and the choices would be the skills in that category.
models.py
class SkillCategory(models.Model):
name = models.CharField(max_length=50)
class Skill(models.Model):
name = models.CharField(max_length=50)
category = models.ForeignKey(SkillCategory)
Is there a way to auto-generate the form fields?
I know I can manually add a 'SkillCategory' entry in the form for each SkillCategory, but the reason to have it as a model is so skills and skillcategories can be edited freely.
I want to do something like this:
(I tried this, but didn't get it to work, don't remember the exact error...)
forms.py
class SkillSelectionForm(forms.Form):
def __init__(*args, **kwargs):
super(SkillSelectionForm, self).__init__(*args, **kwargs)
for c in SkillCategory.objects.all():
category_skills = [(pk, s.name) for s in c.skill_set.all()]
setattr(self, c.name, forms.MultipleChoiceField(choices=category_skills, widget=forms.CheckboxSelectMultiple))
SOLUTION
This creates a form field entry using the SkillCategory.name and assigns choices as those in Skill. field_name/display_name are used to avoid issues with non-ascii category names.
forms.py
def get_categorized_skills():
skills = {}
for s in Skill.objects.values('pk', 'name', 'category__name').order_by('category__name'):
if s['category__name'] not in skills.keys():
skills[s['category__name']] = []
skills[s['category__name']].append((s['pk'], s['name']))
return skills
class SkillSelectionForm(forms.Form):
def __init__(self, *args, **kwargs):
super(SkillSelectionForm, self).__init__(*args, **kwargs)
skills = get_categorized_skills()
for idx, cat in enumerate(skills.keys()):
field_name = u'category-{0}'.format(idx)
display_name = cat
self.fields[field_name] = forms.MultipleChoiceField(choices=skills[cat], widget=forms.CheckboxSelectMultiple, label=display_name)
|
[
"Okay so you can't set fields like that on forms.Form, for reasons which will become apparent when you see DeclarativeFieldsMetaclass, the metaclass of forms.Form (but not of forms.BaseForm). A solution which may be overkill in your case but an example of how dynamic form construction can be done, is something like this:\nbase_fields = [\n forms.MultipleChoiceField(choices=[\n (pk, s.name) for s in c.skill_set.all()\n ]) for c in SkillCategory.objects.all()\n]\nSkillSelectionForm = type('SkillSelectionForm', (forms.BaseForm,), {'base_fields': base_fields})\n\n",
"What you want is a Formset. This will give you a set of rows, each of which maps to a specific Skill.\nSee the Formset documentation and the page specifically on generating formsets for models. \n",
"Take a look at creating dynamic forms in Django, from b-list.org and uswaretech.com. I've had success using these examples to dynamically create form content from models.\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"django",
"django_forms",
"python"
] |
stackoverflow_0001409192_django_django_forms_python.txt
|
Q:
Bypassing buffering of subprocess output with popen in C or Python
I have a general question about popen (and all related functions), applicable to all operating systems. When I write a Python script or some C code and run the resulting executable from the console (Windows or Linux), I can immediately see the output from the process. However, if I run the same executable as a forked process with its stdout redirected into a pipe, the output buffers somewhere, usually up to 4096 bytes, before it is written to the pipe where the parent process can read it.
The following python script will generate output in chunks of 1024 bytes
import os, sys, time
if __name__ == "__main__":
    dye = '@'*1024
    for i in range (0,8):
        print dye
        time.sleep(1)
The following python script will execute the previous script and read the output as soon as it comes to the pipe, byte by byte
import os, sys, subprocess, time, thread
if __name__ == "__main__":
    execArgs = ["c:\\python25\\python.exe", "C:\\Scripts\\PythonScratch\\byte_stream.py"]
    p = subprocess.Popen(execArgs, bufsize=0, stdout=subprocess.PIPE)
    while p.returncode == None:
        data = p.stdout.read(1)
        sys.stdout.write(data)
        p.poll()
Adjust the path for your operating system. When run in this configuration, the output will not appear in chunks of 1024 but in chunks of 4096, despite the buffer size of the popen command being set to 0 (which is the default anyway). Can anyone tell me how to change this behaviour? Is there any way I can force the operating system to treat the output from the forked process in the same way as when it is run from the console, i.e., just feed the data through without buffering?
A:
In general, the standard C runtime library (that's running on behalf of just about every program on every system, more or less;-) detects whether stdout is a terminal or not; if not, it buffers the output (which can be a huge efficiency win, compared to unbuffered output).
If you're in control of the program that's doing the writing, you can (as another answer suggested) flush stdout continuously, or (more elegantly if feasible) try to force stdout to be unbuffered, e.g. by running Python with the -u commandline flag:
-u : unbuffered binary stdout and stderr (also PYTHONUNBUFFERED=x)
see man page for details on internal buffering relating to '-u'
(what the man page adds is a mention of stdin and issues with binary mode[s]).
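When the child happens to be another Python script, as in the question's example, the -u flag can be applied from the reader's side by re-spawning the child interpreter with it; a sketch, with the script path kept as a placeholder:
import subprocess, sys

execArgs = [sys.executable, "-u", "byte_stream.py"]      # -u: unbuffered child stdout
p = subprocess.Popen(execArgs, stdout=subprocess.PIPE)
for line in iter(p.stdout.readline, ''):
    sys.stdout.write(line)                               # arrives line by line
p.wait()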
If you can't or don't want to touch the program that's writing, -u or the like on the program that's just reading is unlikely to help (the buffering that matters most is the one happening on the writer's stdout, not the one on the reader's stdin). The alternative is to trick the writer into believing that it's writing to a terminal (even though in fact it's writing to another program!), via the pty standard library module or the higher-level third party pexpect module (or, for Windows, its port wexpect).
A:
That's correct, and applies to both Windows and Linux (and possibly other systems), with popen() and fopen(). If you want the output buffer to be dispatched before 4096 bytes, use fflush() (in C) or sys.stdout.flush() (Python).
|
Bypassing buffering of subprocess output with popen in C or Python
|
I have a general question about popen (and all related functions), applicable to all operating systems, when I write a python script or some c code and run the resulting executable from the console (win or linux), i can immediately see the output from the process. However, if I run the same executable as a forked process with its stdout redirected into a pipe, the output buffers somewhere, usually up to 4096 bytes before it is written to the pipe where the parent process can read it.
The following python script will generate output in chunks of 1024 bytes
import os, sys, time
if __name__ == "__main__":
dye = '@'*1024
for i in range (0,8):
print dye
time.sleep(1)
The following python script will execute the previous script and read the output as soon as it comes to the pipe, byte by byte
import os, sys, subprocess, time, thread
if __name__ == "__main__":
execArgs = ["c:\\python25\\python.exe", "C:\\Scripts\\PythonScratch\\byte_stream.py"]
p = subprocess.Popen(execArgs, bufsize=0, stdout=subprocess.PIPE)
while p.returncode == None:
data = p.stdout.read(1)
sys.stdout.write(data)
p.poll()
Adjust the path for your operating system. When run in this configuration, the output will not appear in chunks of 1024 but chunks of 4096, despite the buffer size of the popen command being set to 0 (which is the default anyway). Can anyone tell me how to change this behaviour?, is there any way I can force the operating system to treat the output from the forked process in the same way as when it is run from the console?, ie, just feed the data through without buffering?
|
[
"In general, the standard C runtime library (that's running on behalf of just about every program on every system, more or less;-) detects whether stdout is a terminal or not; if not, it buffers the output (which can be a huge efficiency win, compared to unbuffered output).\nIf you're in control of the program that's doing the writing, you can (as another answer suggested) flush stdout continuously, or (more elegantly if feasible) try to force stdout to be unbuffered, e.g. by running Python with the -u commandline flag:\n-u : unbuffered binary stdout and stderr (also PYTHONUNBUFFERED=x)\n see man page for details on internal buffering relating to '-u'\n\n(what the man page adds is a mention of stdin and issues with binary mode[s]).\nIf you can't or don't want to touch the program that's writing, -u or the like on the program that's just reading is unlikely to help (the buffering that matters most is the one happening on the writer's stdout, not the one on the reader's stdin). The alternative is to trick the writer into believing that it's writing to a terminal (even though in fact it's writing to another program!), via the pty standard library module or the higher-level third party pexpect module (or, for Windows, its port wexpect).\n",
"Thats correct, and applies to both Windows and Linux (and possibly other systems), with popen() and fopen(). If you want the output buffer to be dispatched before 4096 bytes, use fflush() (on C) or sys.stdout.flush() (Python).\n"
] |
[
16,
1
] |
[] |
[] |
[
"buffer",
"c",
"pipe",
"python"
] |
stackoverflow_0001410849_buffer_c_pipe_python.txt
|
Q:
Is there something like Ruby's Machinist for Python
Copied from the site http://github.com/notahat/machinist/
Machinist makes it easy to create test data within your tests. It generates data for the fields you don't care about, and constructs any necessary associated objects, leaving you to only specify the fields you do care about in your tests
A simple blueprint might look like this:
Post.blueprint do
  title { Sham.title }
  author { Sham.name }
  body { Sham.body }
end
You can then construct a Post from this blueprint with:
Post.make
When you call make, Machinist calls Post.new, then runs through the attributes in your blueprint, calling the block for each attribute to generate a value. The Post is then saved and reloaded. An exception is thrown if Post can't be saved.
A:
I looked through the whole Python Testing Tools Taxonomy page (which has lots of great stuff) but didn't find much of anything like Machinist.
There is one simple script (called Peckcheck) that's basically unit-testing-with-data-generation, but it doesn't have the Blueprinting and such... so you might say it's just a Sham :)
|
Is there something like Ruby's Machinist for Python
|
Copied from the site http://github.com/notahat/machinist/
Machinist makes it easy to create test data within your tests. It generates data for the fields you don't care about, and constructs any necessary associated objects, leaving you to only specify the fields you do care about in your tests
A simple blueprint might look like this:
Post.blueprint do
title { Sham.title }
author { Sham.name }
body { Sham.body }
end
You can then construct a Post from this blueprint with:
Post.make
When you call make, Machinist calls Post.new, then runs through the attributes in your blueprint, calling the block for each attribute to generate a value. The Post is then saved and reloaded. An exception is thrown if Post can't be saved.
|
[
"I looked through the whole Python Testing Tools Taxonomy page (which has lots of great stuff) but didn't find much of anything like Machinist.\nThere is one simply script (called Peckcheck) that's basically unit-testing-with-data-generation, but it doesn't have the Blueprinting and such... so you might say it's just a Sham :)\n"
] |
[
1
] |
[] |
[] |
[
"python",
"ruby"
] |
stackoverflow_0001404503_python_ruby.txt
|
Q:
Locking in sqlalchemy
I'm confused about how to concurrently modify a table from several different processes. I've tried using Query.with_lockmode(), but it doesn't seem to be doing what I expect it to do, which would be to prevent two processes from simultaneously querying the same rows. Here's what I've tried:
import time
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import *
engine = create_engine('mysql://...?charset=utf8&use_unicode=0', pool_recycle=3600, echo=False)
Base = declarative_base(bind=engine)
session = scoped_session(sessionmaker(engine))
class Test(Base):
    __tablename__ = "TESTXYZ"
    id = Column(Integer, primary_key=True)
    x = Column(Integer)

def keepUpdating():
    test = session.query(Test).filter(Test.id==1).with_lockmode("update").one()
    for counter in range(5):
        test.x += 10
        print test.x
        time.sleep(2)
    session.commit()
keepUpdating()
If I run this script twice simultaneously, I get session.query(Test).filter(Test.id==1).one().x equal to 50, rather than 100 (assuming it was 0 to begin with), which was what I was hoping for. How do I get both processes to either simultaneously update the values or have the second one wait until the first is done?
A:
Are you by accident using MyISAM tables? This works fine with InnoDB tables, but would have the described behavior (silent failure to respect isolation) with MyISAM.
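One quick way to check is a fragment that can be dropped into the same script, reusing the engine from the question (the statement underneath is plain MySQL, and the table name matches the model above):
result = engine.execute("SHOW TABLE STATUS LIKE 'TESTXYZ'")
print result.fetchone()   # the Engine column should read InnoDB, not MyISAM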
|
Locking in sqlalchemy
|
I'm confused about how to concurrently modify a table from several different processes. I've tried using Query.with_lockmode(), but it doesn't seem to be doing what I expect it to do, which would be to prevent two processes from simultaneously querying the same rows. Here's what I've tried:
import time
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import *
engine = create_engine('mysql://...?charset=utf8&use_unicode=0', pool_recycle=3600, echo=False)
Base = declarative_base(bind=engine)
session = scoped_session(sessionmaker(engine))
class Test(Base):
__tablename__ = "TESTXYZ"
id = Column(Integer, primary_key=True)
x = Column(Integer)
def keepUpdating():
test = session.query(Test).filter(Test.id==1).with_lockmode("update").one()
for counter in range(5):
test.x += 10
print test.x
time.sleep(2)
session.commit()
keepUpdating()
If I run this script twice simultaneously, I get session.query(Test).filter(Test.id==1).one().x equal to 50, rather than 100 (assuming it was 0 to begin with), which was what I was hoping for. How do I get both processes to either simultaneously update the values or have the second one wait until the first is done?
|
[
"Are you by accident using MyISAM tables? This works fine with InnoDB tables, but would have the described behavior (silent failure to respect isolation) with MyISAM.\n"
] |
[
5
] |
[] |
[] |
[
"locking",
"python",
"sql",
"sqlalchemy"
] |
stackoverflow_0001411350_locking_python_sql_sqlalchemy.txt
|
Q:
Having problem with IronPython to instantiate a class in IronPython Console
I'm trying to learn IronPython. I created an extremely simple class like this one:
class Test:
    def testMethod(self):
        print "test"
Next I'm trying to use it in IronPython Console:
>>> import Test
>>> t = Test()
After the second line I get following error:
TypeError: Scope is not callable
What am I doing wrong?
A:
You need to from filename import Test, where filename is the basename of the file that class Test is saved in.
e.g.: class Test is saved in test.py
then:
from test import Test
t = Test()
will run as expected.
A:
import Test loads the module named Test, defined in a file called Test.py(c|d). This module in turn contains your class named Test. You're trying to instantiate the module called Test. To instantiate the class Test in module Test, you need to use:
t = Test.Test()
This concept can be quite tricky, especially if you have a background in other languages. Took me a while to figure out too :)
|
Having problem with IronPython to instantiate a class in IronPython Console
|
I'm trying to learn IronPython. I created an extremely simple class like this one:
class Test:
def testMethod(self):
print "test"
Next I'm trying to use it in IronPython Console:
>>> import Test
>>> t = Test()
After the second line I get following error:
TypeError: Scope is not callable
What I'm doing wrong?
|
[
"you need to from filename import Test where filename is a basename of file class Test is saved in.\ne.g.: class Test is saved in test.py\nthen:\nfrom test import Test\nt = Test()\n\nwill run as expected.\n",
"import Test loads the module named Test, defined in a file called Test.py(c|d). This module in turn contains your class named Test. You're trying to instantiate the module called Test. To instantiate the class Test in module Test, you need to use:\nt = Test.Test()\nThis concept can be quite tricky, especially if you have a background in other languages. Took me a while to figure out too :)\n"
] |
[
4,
2
] |
[] |
[] |
[
"ironpython",
"python"
] |
stackoverflow_0001411442_ironpython_python.txt
|
Q:
Help with basic Python function
I have a function to connect to a database. This code works:
def connect():
    return MySQLdb.connect("example.com", "username", "password", "database")
But this doesn't:
def connect():
    host = "example.com"
    user = "username"
    pass = "password"
    base = "database"
    return MySQLdb.connect(host, user, pass, base)
Why so?
A:
pass is a reserved keyword.
Pick different variable names and your code should work fine.
Maybe something like:
def connect():
    _host = "example.com"
    _user = "username"
    _pass = "password"
    _base = "database"
    return MySQLdb.connect(_host, _user, _pass, _base)
|
Help with basic Python function
|
I have a function to connect to a database. This code works:
def connect():
return MySQLdb.connect("example.com", "username", "password", "database")
But this doesn't:
def connect():
host = "example.com"
user = "username"
pass = "password"
base = "database"
return MySQLdb.connect(host, user, pass, base)
Why so?
|
[
"pass is a reserved keyword. \nPick different variable names and your code should work fine.\nMaybe something like:\ndef connect():\n _host = \"example.com\"\n _user = \"username\"\n _pass = \"password\"\n _base = \"database\"\n return MySQLdb.connect(_host, _user, _pass, _base)\n\n"
] |
[
8
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001411553_python.txt
|
Q:
Is there a way to reopen a socket?
I create many "short-term" sockets in some code that looks like this:
nb=1000
for i in range(nb):
    sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sck.connect((adr, prt))
    sck.send('question %i' % i)
    sck.shutdown(socket.SHUT_WR)
    answer = sck.recv(4096)
    print 'answer %i : %s' % (i, answer)
    sck.close()
This works fine, as long as nb is "small" enough.
As nb might be quite large though, I'd like to do something like this
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.connect((adr, prt))
for i in range(nb):
    reopen(sck) # ? ? ?
    sck.send('question %i' % i)
    sck.shutdown(socket.SHUT_WR)
    answer = sck.recv(4096)
    print 'answer %i : %s' % (i, answer)
sck.close()
So the question is :
Is there any way to "reuse" a socket that has been shutdown ?
A:
No, this is a limitation of the underlying C sockets (and the TCP/IP protocol, for that matter). My question to you is: why are you shutting them down when you can architect your application to use them?
The problem with many short-term sockets is that shutting them down puts them in a state where they cannot be used for a while (basically, twice the packet lifetime, to ensure any packets in the network either arrive and are discarded, or get discarded by the network itself). Basically what happens is that, in the 4-tuple that needs to be unique (source ip, source port, destination ip, destination port), the first one and last two tend to always be the same so, when you run out of source ports, you're hosed.
We've struck this problem in software before where it only became evident when we ran on faster machines (since we could use many more sessions).
Why don't you just open up the socket and continue to use it? It looks like your protocol is a simple request/response one, which should be easily doable with that approach.
Something like:
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.connect((adr, prt))
for i in range(nb):
    sck.send('question %i' % i)
    answer = sck.recv(4096)
    print 'answer %i : %s' % (i, answer)
sck.close()
Update:
One possibility (and we've done this before) if you're running out of connection due to this continual open/close, is to detect the problem and throttle it. Consider the following code (the stuff I've added is more pseudo-code than Python since I haven't touched Python for quite a while):
for i in range(nb):
    sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sck.connect((adr, prt))

    while sck.error() == NO_SOCKETS_AVAIL:
        sleep 250 milliseconds
        sck.connect((adr, prt))

    sck.send('question %i' % i)
    sck.shutdown(socket.SHUT_WR)
    answer = sck.recv(4096)
    print 'answer %i : %s' % (i, answer)
    sck.close()
Basically, it lets you run at full speed while there are plenty of resources but slows down when you strike your problem area. This is actually what we did to our product to "fix" the problem of failing when resources got low. We would have re-architected it except for the fact it was a legacy product approaching end of life and we were basically in the fix-at-minimal-cost mode for service.
A:
I'm not sure what the extra overhead would be like, but you could fully close and reopen the socket. You need to set SO_REUSEADDR, and bind to a specific port you can reuse.
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
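A sketch of what that might look like in the question's setting; the local port number is arbitrary, and adr/prt are the server address and port from the question:
import socket

sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sck.bind(('', 50007))        # fixed local port so it can be reused (example value)
sck.connect((adr, prt))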
A:
If you keep opening and closing sockets for the same port, it is better to open the socket once and keep it open; you will get much better performance, since opening and closing take some time.
If you have many short-term sockets, you may also consider datagram sockets (UDP).
Note that you do not have a guarantee of arrival in this case; also, the order of the packets is not guaranteed.
A:
You cannot reuse the socket but it would not help if you could, since you are running out of ports, not sockets. Each port will stay in TIME_WAIT state for twice the maximum segment lifetime after you initiate the shutdown. It would be best not to require so many ports within such a short period, but if you need to use a large number you may be able to increase the ephemeral port range.
Port numbers are 16 bits and therefore there are only 65536 of them. If you are on Windows or Mac OS X then by default, ephemeral ports are chosen from the range 49152 to 65535. This is the official range designated by IANA, but on Linux and Solaris (often used for high traffic servers) the default range starts at 32768 to allow for more ports. You may want to make a similar change to your system if it is not already set that way and you are in need of more ephemeral ports.
It is also possible to reduce the maximum segment lifetime on your system, reducing the amount of time that each socket is in TIME_WAIT state, or to use SO_REUSEADDR or SO_LINGER in some cases to reuse ports before the time has expired. However this can at least theoretically cause older connections to be mixed up with newer connections that happen to be using the same port number, if some packets from the older connections are slow to arrive, so is generally not a good idea.
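On Linux, the current ephemeral port range can be inspected (and, as root, widened) through proc; a small sketch, with an example range:
# read the current range, e.g. "32768 61000"
print open('/proc/sys/net/ipv4/ip_local_port_range').read()

# widening it requires root, e.g.:
#   echo "15000 61000" > /proc/sys/net/ipv4/ip_local_port_range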
|
Is there a way to reopen a socket?
|
I create many "short-term" sockets in some code that look like that :
nb=1000
for i in range(nb):
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.connect((adr, prt)
sck.send('question %i'%i)
sck.shutdown(SHUT_WR)
answer=sck.recv(4096)
print 'answer %i : %s' % (%i, answer)
sck.close()
This works fine, as long as nb is "small" enough.
As nb might be quite large though, I'd like to do something like this
sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.connect((adr, prt)
for i in range(nb):
reopen(sck) # ? ? ?
sck.send('question %i'%i)
sck.shutdown(SHUT_WR)
answer=sck.recv(4096)
print 'answer %i : %s' % (%i, answer)
sck.close()
So the question is :
Is there any way to "reuse" a socket that has been shutdown ?
|
[
"No, this is a limitation of the underlying C sockets (and the TCP/IP protocol, for that matter). My question to you is: why are you shutting them down when you can architect your application to use them?\nThe problem with many short-term sockets is that shutting them down puts them in a state where they cannot be used for a while (basically, twice the packet lifetime, to ensure any packets in the network either arrive and are discarded, or get discarded by the network itself). Basically what happens is that, in the 4-tuple that needs to be unique (source ip, source port, destination ip, destination port), the first one and last two tend to always be the same so, when you run out of source ports, you're hosed.\nWe've struck this problem in software before where it only became evident when we ran on faster machines (since we could use many more sessions).\nWhy dont you just open up the socket and continue to use it? It looks like your protocol is a simple request/response one, which should be easily do-able with that approach.\nSomething like:\nsck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nsck.connect((adr, prt)\nfor i in range(nb):\n sck.send('question %i'%i)\n answer=sck.recv(4096)\n print 'answer %i : %s' % (%i, answer)\nsck.close()\n\nUpdate:\nOne possibility (and we've done this before) if you're running out of connection due to this continual open/close, is to detect the problem and throttle it. Consider the following code (the stuff I've added is more pseudo-code than Python since I haven't touched Python for quite a while):\nfor i in range(nb):\n sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n sck.connect((adr, prt)\n\n while sck.error() == NO_SOCKETS_AVAIL:\n sleep 250 milliseconds\n sck.connect((adr, prt)\n\n sck.send('question %i'%i)\n sck.shutdown(SHUT_WR)\n answer=sck.recv(4096)\n print 'answer %i : %s' % (%i, answer)\n sck.close()\n\nBasically, it lets you run at full speed while there are plenty of resources but slows down when you strike your problem area. This is actually what we did to our product to \"fix\" the problem of failing when resources got low. We would have re-architected it except for the fact it was a legacy product approaching end of life and we were basically in the fix-at-minimal-cost mode for service.\n",
"I'm not sure what the extra overhead would be like, but you could fully close and reopen the socket. You need to set SO_REUSEADDR, and bind to a specific port you can reuse.\nsck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nsck.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n\n",
"If you keep opening and closing sockets for the same port then it is better to open this socket once and keep it open, then you will have much better performance, since opening and closing will take some time.\nIf you have many short-term sockets, you may also consider datagram sockets (UDP).\nNote that you do not have a guarantee for arrival in this case, also order of the packets is not guaranteed.\n",
"You cannot reuse the socket but it would not help if you could, since you are running out of ports, not sockets. Each port will stay in TIME_WAIT state for twice the maximum segment lifetime after you initiate the shutdown. It would be best not to require so many ports within such a short period, but if you need to use a large number you may be able to increase the ephemeral port range.\nPort numbers are 16 bits and therefore there are only 65536 of them. If you are on Windows or Mac OS X then by default, ephemeral ports are chosen from the range 49152 to 65535. This is the official range designated by IANA, but on Linux and Solaris (often used for high traffic servers) the default range starts at 32768 to allow for more ports. You may want to make a similar change to your system if it is not already set that way and you are in need of more ephemeral ports.\nIt is also possible to reduce the maximum segment lifetime on your system, reducing the amount of time that each socket is in TIME_WAIT state, or to use SO_REUSEADDR or SO_LINGER in some cases to reuse ports before the time has expired. However this can at least theoretically cause older connections to be mixed up with newer connections that happen to be using the same port number, if some packets from the older connections are slow to arrive, so is generally not a good idea.\n"
] |
[
18,
3,
1,
0
] |
[] |
[] |
[
"python",
"sockets"
] |
stackoverflow_0001410723_python_sockets.txt
|
Q:
How to get '\x01' to 1
I am getting this:
_format_ = "7c7sc"
print struct.unpack(self._format_, data)
gives
('\x7f', 'E', 'L', 'F', '\x01', '\x01', '\x01', '\x00\x00\x00\x00\x00\x00\x00', '\x00')
I want to take '\x01' and get 1 from it, i.e., convert it to an int. Any ideas?
Thanks
A:
ord("\x01") will return 1.
A:
Perhaps you are thinking of the ord function?
>>> ord("\x01")
1
>>> ord("\x02")
2
>>> ord("\x7f")
127
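A different option, not mentioned in the answers above, is to ask struct for unsigned bytes ('B') instead of characters ('c'), so those values come back as ints directly; the format string below is adjusted to match the 15-byte data from the question:
import struct

data = '\x7fELF\x01\x01\x01' + '\x00' * 8
print struct.unpack('4c3B7sB', data)
# ('\x7f', 'E', 'L', 'F', 1, 1, 1, '\x00\x00\x00\x00\x00\x00\x00', 0)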
|
How to get '\x01' to 1
|
I am getting this:
_format_ = "7c7sc"
print struct.unpack(self._format_, data)
gives
('\x7f', 'E', 'L', 'F', '\x01', '\x01', '\x01', '\x00\x00\x00\x00\x00\x00\x00', '\x00')
I want to take '\x01' and get 1 from it, i.e., convert to ``int. Any ideas?
Thanks
|
[
"ord(\"\\x01\") will return 1.\n",
"Perhaps you are thinking of the ord function?\n>>> ord(\"\\x01\")\n1\n>>> ord(\"\\x02\")\n2\n>>> ord(\"\\x7f\")\n127\n\n"
] |
[
26,
4
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001411658_python.txt
|
Q:
Python CGI returning an http status code, such as 403?
How can my python cgi return a specific http status code, such as 403 or 418?
I tried the obvious (print "Status:403 Forbidden") but it doesn't work.
A:
print 'Status: 403 Forbidden'
print
Works for me. You do need the second print though, as you need a double-newline to end the HTTP response headers. Otherwise your web server may complain you aren't sending it a complete set of headers.
sys.stdout.write('Status: 403 Forbidden\r\n\r\n')
may be technically more correct, according to RFC (assuming that your CGI script isn't running in text mode on Windows). However both line endings seem to work everywhere.
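Putting that together, a minimal CGI script might look like this (a rough sketch, not taken from the answer above):
#!/usr/bin/python
# send the status line, any other headers, a blank line, then the body
print 'Status: 403 Forbidden'
print 'Content-Type: text/plain'
print
print 'You are not allowed to see this.'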
A:
I guess, you're looking for send_error. It would be located in http.server in py3k.
|
Python CGI returning an http status code, such as 403?
|
How can my python cgi return a specific http status code, such as 403 or 418?
I tried the obvious (print "Status:403 Forbidden") but it doesn't work.
|
[
"print 'Status: 403 Forbidden'\nprint\n\nWorks for me. You do need the second print though, as you need a double-newline to end the HTTP response headers. Otherwise your web server may complain you aren't sending it a complete set of headers.\nsys.stdout('Status: 403 Forbidden\\r\\n\\r\\n')\n\nmay be technically more correct, according to RFC (assuming that your CGI script isn't running in text mode on Windows). However both line endings seem to work everywhere.\n",
"I guess, you're looking for send_error. It would be located in http.server in py3k.\n"
] |
[
21,
0
] |
[] |
[] |
[
"cgi",
"http",
"http_status_codes",
"python"
] |
stackoverflow_0001411867_cgi_http_http_status_codes_python.txt
|
Q:
What difference it makes when I set python thread as a Daemon
What difference does it make when I set a Python thread as a daemon, using thread.setDaemon(True)?
A:
A daemon thread will not prevent the application from exiting. The program ends when all non-daemon threads (main thread included) are complete.
So generally, if you're doing something in the background, you might want to set the thread as daemon so you don't have to explicitly have that thread's function return before the app can exit.
For example, if you are writing a GUI application and the user closes the main window, the program should quit. But if you have non-daemon threads hanging around, it won't.
From the docs: http://docs.python.org/library/threading.html#threading.Thread.daemon
Its initial value is inherited from
the creating thread; the main thread
is not a daemon thread and therefore
all threads created in the main thread
default to daemon = False.
The entire Python program exits when
no alive non-daemon threads are left.
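To see the difference concretely, here is a small sketch (not from the docs or the answer above): with setDaemon(True) the program exits as soon as the main thread finishes; comment that line out and it lingers for the full ten seconds.
import threading, time

def background():
    time.sleep(10)            # pretend to do slow background work

t = threading.Thread(target=background)
t.setDaemon(True)             # make it a daemon so it cannot keep the process alive
t.start()
print 'main thread done'      # with a daemon worker, the program exits right here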
|
What difference it makes when I set python thread as a Daemon
|
What difference does it make when I set a Python thread as a daemon, using thread.setDaemon(True)?
|
[
"A daemon thread will not prevent the application from exiting. The program ends when all non-daemon threads (main thread included) are complete.\nSo generally, if you're doing something in the background, you might want to set the thread as daemon so you don't have to explicitly have that thread's function return before the app can exit.\nFor example, if you are writing a GUI application and the user closes the main window, the program should quit. But if you have non-daemon threads hanging around, it won't.\nFrom the docs: http://docs.python.org/library/threading.html#threading.Thread.daemon\n\nIts initial value is inherited from\n the creating thread; the main thread\n is not a daemon thread and therefore\n all threads created in the main thread\n default to daemon = False.\nThe entire Python program exits when\n no alive non-daemon threads are left.\n\n"
] |
[
24
] |
[] |
[] |
[
"daemon",
"multithreading",
"python"
] |
stackoverflow_0001411860_daemon_multithreading_python.txt
|
Q:
Python Warning control
I would like some kind of warning to be raised as an error, but only on the first occurrence. How do I do that?
I read http://docs.python.org/library/warnings.html and I don't know how to combine these two types of behaviour.
A:
Looking at the code to warnings.py, you can't assign more than one filter action to a warning, and you can't (easily) define your own actions, like 'raise_once'.
However, if you want to raise a warning as an exception, but just once, that means that you are catching the exception. Why not put a line in your except clause that sets an 'ignore' action on that particular warning?
#!/usr/bin/python
import warnings
warnings.filterwarnings('error','Test')
for i in range(2):
try:
warnings.warn('Test');
except UserWarning, e:
print "Error caught"
warnings.filterwarnings('ignore','Test')
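If you want that pattern in a reusable form, one option is a small helper (a rough sketch, not part of the answer above; the function name is made up):
import warnings

def call_with_error_once(message_regex, func, *args, **kwargs):
    # raise the first matching warning as an exception, silence later ones
    warnings.filterwarnings('error', message_regex)
    try:
        return func(*args, **kwargs)
    except UserWarning:
        warnings.filterwarnings('ignore', message_regex)
        raise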
|
Python Warning control
|
I would like some kind of warning to be raised as an error, but only on the first occurrence. How do I do that?
I read http://docs.python.org/library/warnings.html and I don't know how to combine these two types of behaviour.
|
[
"Looking at the code to warnings.py, you can't assign more than one filter action to a warning, and you can't (easily) define your own actions, like 'raise_once'.\nHowever, if you want to raise a warning as an exception, but just once, that means that you are catching the exception. Why not put a line in your except clause that sets an 'ignore' action on that particular warning?\n#!/usr/bin/python\n\nimport warnings\n\nwarnings.filterwarnings('error','Test')\nfor i in range(2):\n try:\n warnings.warn('Test');\n except UserWarning, e:\n print \"Error caught\"\n warnings.filterwarnings('ignore','Test')\n\n"
] |
[
7
] |
[] |
[] |
[
"python",
"warnings"
] |
stackoverflow_0001412575_python_warnings.txt
|
Q:
Python: what's the pythonic way to perform this loop?
What is the pythonic way to perform this loop. I'm trying to pick a random key that will return a subtree and not the root. Hence: 'parent == None' cannot be true. Nor can 'isRoot==True' be true.
thekey = random.choice(tree.thedict.keys())
while (tree.thedict[thekey].parent == None)or(tree.thedict[thekey].isRoot == True):
thekey = random.choice(tree.thedict.keys())
.......
edit: it works now
A:
get a random subtree that is not the
root
not_root_nodes = [(key, node) for key, node in tree.thedict.iteritems() if not (node.parent is None or node.isRoot)]
item = random.choice( not_root_nodes )
A:
key = random.choice([key for key, subtree in tree.thedict.items()
if subtree.parent and not subtree.isRoot])
(Corrected after comments and question edition)
A:
thekey = random.choice(tree.thedict.keys())
parent = tree.thedict[thekey].parent
while parent is None or parent.isRoot:
thekey = random.choice(tree.thedict.keys())
    parent = tree.thedict[thekey].parent
A:
I think that's a bit better:
theDict = tree.thedict
def getKey():
return random.choice(theDict.keys())
theKey = getKey()
while theDict[thekey].parent in (None, True):
thekey = getKey()
What do you think?
A:
def is_root(v):
assert (v.parent != None) == (v.isRoot)
return v.isRoot
#note how dumb this function looks when you guarantee that assertion
def get_random_nonroot_key():
while True:
thekey = random.choice(tree.thedict.keys())
value = tree.thedict[thekey]
        if not is_root(value): return thekey
or a refactoring of Roberto Bonvallet's answer
def get_random_nonroot_key():
eligible_keys = [k for k, v in tree.thedict.items() if not is_root(v)]
return random.choice(eligible_keys)
A:
I think your while condition is flawed:
I think you expect this: tree.thedict[thekey].parent == None
should be equal to this: tree.thedict[thekey].parent.isRoot == True
When in fact, for both to mean "this node is not the root", you should change the second statement to: tree.thedict[thekey].isRoot == True
As written, your conditional test says "while this node is the root OR this node's parent is the root". If your tree structure is a single root node with many leaf nodes, you should expect an infinite loop in this case.
Here's a rewrite:
thekey = random.choice([k for k, v in tree.thedict.items() if not v.isRoot])
A:
thekey = random.choice(tree.thedict.keys())
parent = tree.thedict[thekey].parent
while parent is None or tree.thedict[thekey].isRoot:
thekey = random.choice(tree.thedict.keys())
    parent = tree.thedict[thekey].parent
A:
Personally, I don't like the repetition of initializing thekey before the while loop and then again inside the loop. It's a possible source of bugs; what happens if someone edits one of the two initializations and forgets to edit the other? Even if that never happens, anyone reading the code needs to check carefully to make sure both initializations match perfectly.
I would write it like so:
while True:
thekey = random.choice(tree.thedict.keys())
subtree = tree.thedict[thekey]
if subtree.parent is not None and not subtree.isRoot:
break
P.S. If you really just want the subtree, and don't care about the key needed to lookup the subtree, you could even do this:
while True:
subtree = random.choice(tree.thedict.values())
if subtree.parent is not None and not subtree.isRoot:
break
Some people may not like the use of "while True:" but that is the standard Python idiom for "loop forever until something runs break". IMHO this is simple, clear, idiomatic Python.
P.P.S. This code should really be wrapped in an if statement that checks that the tree has more than one node. If the tree only has a root node, this code would loop forever.
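Following up on that last point, here is a sketch that builds the candidate list first (attribute names taken from the question; tree is assumed to exist as in the original code), so a tree with only a root node simply yields no pick instead of looping forever:
import random

candidates = [v for v in tree.thedict.values()
              if v.parent is not None and not v.isRoot]
subtree = random.choice(candidates) if candidates else None   # None: only a root node exists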
|
Python: what's the pythonic way to perform this loop?
|
What is the pythonic way to perform this loop. I'm trying to pick a random key that will return a subtree and not the root. Hence: 'parent == None' cannot be true. Nor can 'isRoot==True' be true.
thekey = random.choice(tree.thedict.keys())
while (tree.thedict[thekey].parent == None)or(tree.thedict[thekey].isRoot == True):
thekey = random.choice(tree.thedict.keys())
.......
edit: it works now
|
[
"\nget a random subtree that is not the\n root\n\nnot_root_nodes = [key, node for key,node in tree.thedict.iteritems() if not ( node.parent is None or node.isRoot)]\nitem = random.choice( not_root_nodes )\n\n",
"key = random.choice([key for key, subtree in tree.thedict.items()\n if subtree.parent and not subtree.isRoot])\n\n(Corrected after comments and question edition)\n",
"thekey = random.choice(tree.thedict.keys())\nparent = thedict[thekey].parent\nwhile parent is None or parent.isRoot:\n thekey = random.choice(tree.thedict.keys())\n parent = thedict[thekey].parent\n\n",
"I think that's a bit better:\ntheDict = tree.thedict\n\ndef getKey():\n return random.choice(theDict.keys())\n\ntheKey = getKey()\n\nwhile theDict[thekey].parent in (None, True):\n thekey = getKey()\n\nWhat do you think?\n",
"def is_root(v): \n assert (v.parent != None) == (v.isRoot)\n return v.isRoot\n #note how dumb this function looks when you guarantee that assertion\n\ndef get_random_nonroot_key():\n while True:\n thekey = random.choice(tree.thedict.keys())\n value = tree.thedict[thekey]\n if not is_root(value): return key\n\nor a refactoring of Roberto Bonvallet's answer\ndef get_random_nonroot_key():\n eligible_keys = [k for k, v in tree.thedict.items() if not is_root(v)]\n return random.choice(eligible_keys)\n\n",
"I think your while condition is flawed:\nI think you expect this: tree.thedict[thekey].parent == None\nshould be equal to this: tree.thedict[thekey].parent.isRoot == True\nWhen in fact, for both to mean \"this node is not the root\", you should change the second statement to: tree.thedict[thekey].isRoot == True\nAs written, your conditional test says \"while this node is the root OR this node's parent is the root\". If your tree structure is a single root node with many leaf nodes, you should expect an infinite loop in this case.\nHere's a rewrite:\nthekey = random.choice(k for k in tree.thedict.keys() if not k.isRoot)\n\n",
"thekey = random.choice(tree.thedict.keys())\nparent = tree.thedict[thekey].parent\nwhile parent is None or tree.thedict[thekey].isRoot:\n thekey = random.choice(tree.thedict.keys())\n parent = thedict[thekey].parent\n\n",
"Personally, I don't like the repetition of initializing thekey before the while loop and then again inside the loop. It's a possible source of bugs; what happens if someone edits one of the two initializations and forgets to edit the other? Even if that never happens, anyone reading the code needs to check carefully to make sure both initializations match perfectly.\nI would write it like so:\nwhile True:\n thekey = random.choice(tree.thedict.keys())\n subtree = tree.thedict[thekey]\n if subtree.parent is not None and not subtree.isRoot:\n break\n\nP.S. If you really just want the subtree, and don't care about the key needed to lookup the subtree, you could even do this:\nwhile True:\n subtree = random.choice(tree.thedict.values())\n if subtree.parent is not None and not subtree.isRoot:\n break\n\nSome people may not like the use of \"while True:\" but that is the standard Python idiom for \"loop forever until something runs break\". IMHO this is simple, clear, idiomatic Python.\nP.P.S. This code should really be wrapped in an if statement that checks that the tree has more than one node. If the tree only has a root node, this code would loop forever.\n"
] |
[
3,
3,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001411943_python.txt
|
Q:
iTunes COM - How to access Lyrics
I have been messing around with iTunes COM from python.
However, I haven't been able to access the Lyrics of any track.
I have been using python for this. Here is the code:
>>> import win32com.client
>>> itunes = win32com.client.Dispatch("iTunes.Application")
>>> lib = itunes.LibraryPlaylist
>>> tracks = lib.Tracks
>>> tracks
<win32com.gen_py.iTunes 1.12 Type Library.IITTrackCollection instance at 0x16726176>
>>> tracks[1]
<win32com.gen_py.iTunes 1.12 Type Library.IITTrack instance at 0x16746256>
>>> tracks[1].Lyrics
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "D:\Programas\Python26\lib\site-packages\win32com\client\__init__.py", line 462, in __getattr__
raise AttributeError("'%s' object has no attribute '%s'" % (repr(self), attr))
AttributeError: '<win32com.gen_py.iTunes 1.12 Type Library.IITTrack instance at 0x16780824>' object has no attribute 'Lyrics'
tracks[1] has no attribute 'Lyrics' because it is of type 'IITTrack'. Only 'IITFileOrCDTrack', which is a sub-type of 'IITTrack' has this attribute. My question is how to access the 'IITFileOrCDTrack's? Or how to convert a 'IITTrack' to a 'IITFileOrCDTrack'?
Any help on this is greatly appreciated. Thanks.
P.S: Info on how to download documentation of iTunes COM interface here.
A:
Try to convert it like this (not tested):
track_converted = win32com.client.CastTo(tracks[1], "IITFileOrCDTrack")
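For example, a rough (equally untested) sketch that walks the library and reads lyrics wherever the cast succeeds -- tracks that are not file/CD tracks will raise, hence the try/except:
import win32com.client

itunes = win32com.client.Dispatch("iTunes.Application")
tracks = itunes.LibraryPlaylist.Tracks
for i in range(1, tracks.Count + 1):          # iTunes COM collections are 1-based
    track = tracks[i]
    try:
        file_track = win32com.client.CastTo(track, "IITFileOrCDTrack")
        print file_track.Name, '-->', file_track.Lyrics
    except Exception:
        pass                                   # not an IITFileOrCDTrack, no Lyrics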
|
iTunes COM - How to access Lyrics
|
I have been messing around with iTunes COM from python.
However, I haven't been able to access the Lyrics of any track.
I have been using python for this. Here is the code:
>>> import win32com.client
>>> itunes = win32com.client.Dispatch("iTunes.Application")
>>> lib = itunes.LibraryPlaylist
>>> tracks = lib.Tracks
>>> tracks
<win32com.gen_py.iTunes 1.12 Type Library.IITTrackCollection instance at 0x16726176>
>>> tracks[1]
<win32com.gen_py.iTunes 1.12 Type Library.IITTrack instance at 0x16746256>
>>> tracks[1].Lyrics
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "D:\Programas\Python26\lib\site-packages\win32com\client\__init__.py", line 462, in __getattr__
raise AttributeError("'%s' object has no attribute '%s'" % (repr(self), attr))
AttributeError: '<win32com.gen_py.iTunes 1.12 Type Library.IITTrack instance at 0x16780824>' object has no attribute 'Lyrics'
tracks[1] has no attribute 'Lyrics' because it is of type 'IITTrack'. Only 'IITFileOrCDTrack', which is a sub-type of 'IITTrack' has this attribute. My question is how to access the 'IITFileOrCDTrack's? Or how to convert a 'IITTrack' to a 'IITFileOrCDTrack'?
Any help on this is greatly appreciated. Thanks.
P.S: Info on how to download documentation of iTunes COM interface here.
|
[
"Try to convert it like this (not tested):\ntrack_converted = win32com.client.CastTo(tracks[1], \"IITFileOrCDTrack\")\n\n"
] |
[
3
] |
[] |
[] |
[
"com",
"itunes",
"python"
] |
stackoverflow_0001412689_com_itunes_python.txt
|
Q:
Django extends template
I have a simple Django/Python app and I have one page - create.html. I want to extend this page to use index.html. Everything works (no errors), and when the page is loaded all data from create.html and all text from index.html is present, but no formatting is applied - the images and CSS that should be loaded from index.html are not loaded. When I load index.html in the browser it looks OK. Can someone help me?
Thanks!
here is the code of templates:
create.html
{% extends "index.html" %}
{% block title %}Projects{% endblock %}
{% block content %}
{% if projects %}
<table border="1">
<tr>
<td align="center">Name</td>
<td align="center">Description</td>
<td align="center">Priority</td>
<td align="center">X</td>
</tr>
{% for p in projects %}
<tr>
<td> <a href="/tasks/{{p.id}}/">{{p.Name}}</a> </td>
<td>{{p.Description}} </td>
<td> {{p.Priority.Name}} </td>
<td> <a href="/editproject/{{p.id}}/">Edit</a> <a href="/deleteproject/{{p.id}}/">Delete</a> </td>
<tr>
{% endfor %}
</table>
{% else %}
<p>No active projects.</p>
{% endif %}
{% endblock %}
and index.html:
<html>
<head>
{% block title %}{% endblock %}
<link rel="stylesheet" href="style.css" type="text/css" media="screen" />
</head>
<body>
{% block content %}{% endblock %}
<div class="PostContent">
<img class="article" src="images/spectacles.gif" alt="an image" style="float: left" />
<h1>Heading 1</h1>
<h2>Heading 2</h2>
<h3>Heading 3</h3>
<h4>Heading 4</h4>
<h5>Heading 5</h5>
<h6>Heading 6</h6>
<p>Lorem ipsum dolor sit amet,
<a href="#" title="link">link</a>, <a class="visited" href="#" title="visited link">visited link</a>,
<a class="hover" href="#" title="hovered link">hovered link</a> consectetuer
adipiscing elit. Quisque sed felis. Aliquam sit amet felis. Mauris semper,
velit semper laoreet dictum, quam diam dictum urna, nec placerat elit nisl
in quam. Etiam augue pede, molestie eget, rhoncus at, convallis ut, eros.</p>
....
</body>
</html>
A:
It looks like you are extending base.html and not index.html.
A:
More specifically, look at the first line of your content.html:
{% extends "base.html" %}
Change this to
{% extends "index.html" %}
(or rename index.html to be base.html)
A:
Aha, I found where the problem is!
MEDIA_ROOT and MEDIA_URL were not set up :-( After editing them everything works OK.
Django template can't see CSS files
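For reference, the kind of values involved look roughly like this (a sketch only -- the paths are placeholders, and how the files are actually served depends on your deployment):
# settings.py -- illustrative values only
MEDIA_ROOT = '/path/to/project/media/'   # directory on disk holding style.css and images/
MEDIA_URL = '/media/'                    # URL prefix the browser will request them from
Then the template should reference the stylesheet through that prefix (for example /media/style.css) rather than the bare relative path style.css.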
|
Django extends template
|
I have a simple Django/Python app and I have one page - create.html. I want to extend this page to use index.html. Everything works (no errors), and when the page is loaded all data from create.html and all text from index.html is present, but no formatting is applied - the images and CSS that should be loaded from index.html are not loaded. When I load index.html in the browser it looks OK. Can someone help me?
Thanks!
here is the code of templates:
create.html
{% extends "index.html" %}
{% block title %}Projects{% endblock %}
{% block content %}
{% if projects %}
<table border="1">
<tr>
<td align="center">Name</td>
<td align="center">Description</td>
<td align="center">Priority</td>
<td align="center">X</td>
</tr>
{% for p in projects %}
<tr>
<td> <a href="/tasks/{{p.id}}/">{{p.Name}}</a> </td>
<td>{{p.Description}} </td>
<td> {{p.Priority.Name}} </td>
<td> <a href="/editproject/{{p.id}}/">Edit</a> <a href="/deleteproject/{{p.id}}/">Delete</a> </td>
<tr>
{% endfor %}
</table>
{% else %}
<p>No active projects.</p>
{% endif %}
{% endblock %}
and index.html:
<html>
<head>
{% block title %}{% endblock %}
<link rel="stylesheet" href="style.css" type="text/css" media="screen" />
</head>
<body>
{% block content %}{% endblock %}
<div class="PostContent">
<img class="article" src="images/spectacles.gif" alt="an image" style="float: left" />
<h1>Heading 1</h1>
<h2>Heading 2</h2>
<h3>Heading 3</h3>
<h4>Heading 4</h4>
<h5>Heading 5</h5>
<h6>Heading 6</h6>
<p>Lorem ipsum dolor sit amet,
<a href="#" title="link">link</a>, <a class="visited" href="#" title="visited link">visited link</a>,
<a class="hover" href="#" title="hovered link">hovered link</a> consectetuer
adipiscing elit. Quisque sed felis. Aliquam sit amet felis. Mauris semper,
velit semper laoreet dictum, quam diam dictum urna, nec placerat elit nisl
in quam. Etiam augue pede, molestie eget, rhoncus at, convallis ut, eros.</p>
....
</body>
</html>
|
[
"It looks like you are extending base.html and not index.html.\n",
"More specifically, look at the first line of your content.html:\n {% extends \"base.html\" %}\n\nChange this to\n {% extends \"index.html\" %}\n\n(or rename index.html to be base.html)\n",
"Ahaa find where is the problem!\nMEDIA_ROOT and MEDIA_URL was not set up :-( after edit them everything work ok.\nDjango template can't see CSS files\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"django",
"extends",
"python"
] |
stackoverflow_0001412476_django_extends_python.txt
|
Q:
python mysql fetch query
def dispcar ( self, reg ):
print ("The car information for '%s' is: "), (reg)
numrows = int(self.dbc.rowcount) #get the count of total rows
self.dbc.execute("select * from car where reg='%s'") %(reg)
for x in range(0, numrows):
car_info = self.dbc.fetchone()
print row[0], "-->", row[1]
the above code gives this error:
self.dbc.execute("select * from car where reg='%s' " %(reg)
TypeError: unsupported operand type(s) for %: 'long' and 'str'
can anyone please help me understand why am i getting this error?
FYI: reg is a raw_input var i input from user in the function getitem and pass the reg var as an argument to this function.
A:
This confuses just about everyone who works with MySQLDB. You are passing arguments to the execute function, not doing python string substitution. The %s in the query string is used more like a prepared statement than a python string substitution. This also prevents SQL injection as MySQLDB will do the escaping for you. As you had it before (using % and string substitution), you are vulnerable to injection.
Don't use quotes. MySQLDB will put them there (if needed).
Use a , instead of a %. Again, you are passing a tuple as an argument to the execute function.
self.dbc.execute("select * from car where reg=%s" , (reg,))
A:
I think this line simply has the parens in the wrong place:
self.dbc.execute("select * from car where reg='%s'") %(reg)
You are using % on the result of execute(), and reg.
Change it to:
self.dbc.execute("select * from car where reg='%s'" % reg)
or
self.dbc.execute("select * from car where reg='%s'", reg)
depending on whether it will do the param substitution for you.
A:
You got the brackets wrong:
self.dbc.execute("select * from car where reg=%s" , (reg,))
Any particular reason you are looping using fetchone (in this ugly loop with a range based on a rowcount which will probably be zero as you get it before you execute the query)?
Just do
for car_info in self.dbc.fetchall():
....
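Putting the two suggestions together, a rough rewrite of the method might look like this (a sketch only, keeping the structure from the question):
def dispcar(self, reg):
    print "The car information for '%s' is:" % reg
    # let the driver do the quoting/escaping by passing a parameter tuple
    self.dbc.execute("select * from car where reg = %s", (reg,))
    for row in self.dbc.fetchall():
        print row[0], "-->", row[1]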
|
python mysql fetch query
|
def dispcar ( self, reg ):
print ("The car information for '%s' is: "), (reg)
numrows = int(self.dbc.rowcount) #get the count of total rows
self.dbc.execute("select * from car where reg='%s'") %(reg)
for x in range(0, numrows):
car_info = self.dbc.fetchone()
print row[0], "-->", row[1]
the above code gives this error:
self.dbc.execute("select * from car where reg='%s' " %(reg)
TypeError: unsupported operand type(s) for %: 'long' and 'str'
can anyone please help me understand why am i getting this error?
FYI: reg is a raw_input var i input from user in the function getitem and pass the reg var as an argument to this function.
|
[
"This confuses just about everyone who works with MySQLDB. You are passing arguments to the execute function, not doing python string substitution. The %s in the query string is used more like a prepared statement than a python string substitution. This also prevents SQL injection as MySQLDB will do the escaping for you. As you had it before (using % and string substitution), you are vulnerable to injection.\n\nDon't use quotes. MySQLDB will put them there (if needed).\nUse a , instead of a %. Again, you are passing a tuple as an argument to the execute function.\nself.dbc.execute(\"select * from car where reg=%s\" , (reg,))\n\n",
"I think this line simply has the parens in the wrong place:\nself.dbc.execute(\"select * from car where reg='%s'\") %(reg)\n\nYou are using % on the result of execute(), and reg.\nChange it to:\nself.dbc.execute(\"select * from car where reg='%s'\" % reg)\n\nor\nself.dbc.execute(\"select * from car where reg='%s'\", reg)\n\ndepending on whether it will do the param substitution for you.\n",
"You got the brackets wrong:\nself.dbc.execute(\"select * from car where reg=%s\" , (reg,))\n\nAny particular reason you are looping using fetchone (in this ugly loop with a range based on a rowcount which will probably be zero as you get it before you execute the query)?\nJust do\nfor car_info in self.dbc.fetchall():\n ....\n\n"
] |
[
4,
3,
1
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0001413057_mysql_python.txt
|
Q:
Grant anonymous access to a specific url/action in Plone
I am running Plone 3.2.3 and I have installed HumaineMailman so that the users on the website can subscribe and unsubscribe themselves from our various mailing lists. HumaineMailman works very simply. There is a special URL/action that gives you a plain-text list of all e-mail addresses that are subscribed to a list. For example:
http://www.example.org/[email protected]&password=secret
You're supposed to simply wget that URL and feed the plain text list into Mailman's sync_members. Easy.
The problem is that Plone does not allow me to access that URL anonymously. When I am logged in as administrator I can access the URL in my browser and see the list of e-mail addresses. But when I am not logged in (and when retrieving that URL using wget) then Plone redirects me to the login page.
How do I tell plone that I want to allow anonymous access to that URL/action? The action itself (in code) is defined in Products/HumaineMailman/skins/mailman_autolist_update.py.
Thanks in advance!
A:
There are a couple ways to address this without apache or redeclaring security (which would make me nervous too)
http://www.example.org:8080/[email protected]&password=secret&__ac_name=**USERNAME**&__ac_password=**PASSWORD**&pwd_empty=0&cookies_enabled=1&js_enabled=0&form.submitted=1"
I frequently use this trick in scripts with a special user only does "services". There is also a HTTP Auth trick that looks like http://**USERNAME:PASSWORD@**www.example.org/[email protected]&password=secret which may or may not be supported depending on your client lib.
Alternatively, if that code is running in a (script) Python then you can add a metadata file (myScript.py.metadata) and give that script a proxy permission of Manager.i.e.
[default]
title = Do something useful in the c/py that requires elevated privs
proxy = Manager
A:
Figure out what permission is protecting that page, and give that permission to the Anonymous role in the Plone root.
A:
HumaineMailman needs ManagePortal permissions. Those are too much to give to Anonymous, so Lennart's answer didn't solve it for me. Instead, I edited HumaineMailman and redeclared the respective function calls as public. This is a slight security risk though. My Plone is behind an Apache proxy, so I compensated by only allowing access to the member list from localhost (where the wget synchronisation script and mailman itself are running as well).
|
Grant anonymous access to a specific url/action in Plone
|
I am running Plone 3.2.3 and I have installed HumaineMailman so that the users on the website can subscribe and unsubscribe themselves from our various mailing lists. HumaineMailman works very simply. There is a special URL/action that gives you a plain-text list of all e-mail addresses that are subscribed to a list. For example:
http://www.example.org/[email protected]&password=secret
You're supposed to simply wget that URL and feed the plain text list into Mailman's sync_members. Easy.
The problem is that Plone does not allow me to access that URL anonymously. When I am logged in as administrator I can access the URL in my browser and see the list of e-mail addresses. But when I am not logged in (and when retrieving that URL using wget) then Plone redirects me to the login page.
How do I tell plone that I want to allow anonymous access to that URL/action? The action itself (in code) is defined in Products/HumaineMailman/skins/mailman_autolist_update.py.
Thanks in advance!
|
[
"There are a couple ways to address this without apache or redeclaring security (which would make me nervous too)\nhttp://www.example.org:8080/[email protected]&password=secret&__ac_name=**USERNAME**&__ac_password=**PASSWORD**&pwd_empty=0&cookies_enabled=1&js_enabled=0&form.submitted=1\"\n\nI frequently use this trick in scripts with a special user only does \"services\". There is also a HTTP Auth trick that looks like http://**USERNAME:PASSWORD@**www.example.org/[email protected]&password=secret which may or may not be supported depending on your client lib.\nAlternatively, if that code is running in a (script) Python then you can add a metadata file (myScript.py.metadata) and give that script a proxy permission of Manager.i.e.\n[default]\ntitle = Do something useful in the c/py that requires elevated privs\nproxy = Manager\n\n",
"Figure out what permission is protecting that page, and give that permission to the Anonymous role in the Plone root.\n",
"HumaineMailman needs ManagePortal permissions. Those are too much to give to Anonymous so Lennarts answer didn't solve it for me. Instead, I edited HumaineMailman and redeclared the respective function calls as public. This is a slight security risk though. My Plone is behind an Apache proxy so I compensated by only allow access to the memberlist from localhost (where the wget synchronisation script and mailman itself are running as well).\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"mailman",
"plone",
"python",
"zope"
] |
stackoverflow_0001321828_mailman_plone_python_zope.txt
|
Q:
Python Unicode and MIME
Can someone who is way smarter than I tell me what I'm doing wrong.. Shouldn't this simply process...
# encoding: utf-8
from email.MIMEText import MIMEText
msg = MIMEText("hi")
msg.set_charset('utf-8')
print msg.as_string()
a = 'Ho\xcc\x82tel Ste\xcc\x81phane '
b = unicode(a, "utf-8")
print b
msg = MIMEText(b)
msg.set_charset('utf-8')
print msg.as_string()
I'm stumped...
A:
Assuming Python 2.* (alas, you don't tell us whether you're on Python 3, but as you're using print as a statement it looks like you aren't): MIMEText takes a string -- a plain string, NOT a Unicode object. So, use b.encode('utf-8') as the argument if what you start with is a Unicode object b.
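So the failing part of the snippet becomes something like this (a rough sketch of the fix):
# encode the unicode object back to a UTF-8 byte string before handing it to MIMEText
msg = MIMEText(b.encode('utf-8'), 'plain', 'utf-8')
print msg.as_string()
Passing the charset to the constructor also takes care of the Content-Type header, so the separate set_charset call is no longer needed.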
|
Python Unicode and MIME
|
Can someone who is way smarter than I tell me what I'm doing wrong.. Shouldn't this simply process...
# encoding: utf-8
from email.MIMEText import MIMEText
msg = MIMEText("hi")
msg.set_charset('utf-8')
print msg.as_string()
a = 'Ho\xcc\x82tel Ste\xcc\x81phane '
b = unicode(a, "utf-8")
print b
msg = MIMEText(b)
msg.set_charset('utf-8')
print msg.as_string()
I'm stumped...
|
[
"Assuming Python 2.* (alas, you don't tell us whether you're on Python 3, but as you're using print as a statement it looks like you aren't): MIMEText\" takes a string -- a plain string, NOT a Unicode object. So, useb.encode('utf-8')as the argument if what you start with is a Unicode objectb`.\n"
] |
[
2
] |
[] |
[] |
[
"email",
"python",
"unicode"
] |
stackoverflow_0001413735_email_python_unicode.txt
|
Q:
Improving a Python Script that Updates Apache DocumentRoot
I'm tired of going through all the steps it takes (for me) to change the DocumentRoot in Apache. I'm trying to facilitate the process with the following Python script...
#!/usr/bin/python
import sys, re
if len(sys.argv) == 2:
f = open('/tmp/apachecdr', 'w')
f.write(open('/etc/apache2/httpd.conf').read())
f = open('/tmp/apachecdr', 'r')
r = re.sub('DocumentRoot "(.*?)"',
'DocumentRoot "' + sys.argv[1] + '"',
f.read())
f = open('/etc/apache2/httpd.conf', 'w')
f.write(r)
else:
print "Please supply the new DocumentRoot path."
I've saved this as /usr/bin/apachecdr so that I could simply open a shell and "sudo apachecdr /new/documentroot/path" and then restart with apachectl. My question is how would you write this?
It's my first time posting on Stack Overflow (and I'm new to Python) so please let me know if this is not specific enough of a question.
A:
You're doing a lot of file work for not much benefit. Why do you write /tmp/apachecdr and then immediately read it again? If you are copying httpd.conf to the tmp directory as a backup, the shutil module provides functions to copy files:
#!/usr/bin/python
import shutil, sys, re
HTTPD_CONF = '/etc/apache2/httpd.conf'
BACKUP = '/tmp/apachecdr'
if len(sys.argv) == 2:
shutil.copyfile(BACKUP, HTTPD_CONF)
conf = open(HTTPD_CONF).read()
new_conf = re.sub('DocumentRoot "(.*?)"',
'DocumentRoot "' + sys.argv[1] + '"',
conf)
open(HTTPD_CONF, 'w').write(new_conf)
else:
print "Please supply the new DocumentRoot path."
A:
In general Python programmers prefer explicit over implicit (see http://www.python.org/dev/peps/pep-0020/).
With that in mind you may want to rewrite this line
f.write(open('/etc/apache2/httpd.conf').read())
To:
infile = open('/etc/apache2/httpd.conf')
f.write(infile.read())
infile.close()
Another suggestion would be to add some error checking. For example checking that the file /etc/apache2/httpd.conf exists.
Normally one would use a try/except around the open line, but if you want to use the combined read and the write command you could use a os.stat for checking.
Example
import os, sys
try:
    os.stat('/etc/apache2/httpd.conf')
except OSError:
    print "/etc/apache2/httpd.conf does not exist"
    sys.exit(1)  # exit the program
A:
Thank you both for your answers. Ned, I was hoping you'd say something like that. The reason I was doing it like that is simply because I didn't know better. Thanks for providing your code, it was exactly what I was looking for.
Btw, the error is happening because "BACKUP" and "HTTPD_CONF" are reversed on line 8. In other words, switch "shutil.copyfile(BACKUP, HTTPD_CONF)" to "shutil.copyfile(HTTPD_CONF, BACKUP)" on line 8 of Ned's script.
Thanks again guys! This was a great first experience posting on this site.
|
Improving a Python Script that Updates Apache DocumentRoot
|
I'm tired of going through all the steps it takes (for me) to change the DocumentRoot in Apache. I'm trying to facilitate the process with the following Python script...
#!/usr/bin/python
import sys, re
if len(sys.argv) == 2:
f = open('/tmp/apachecdr', 'w')
f.write(open('/etc/apache2/httpd.conf').read())
f = open('/tmp/apachecdr', 'r')
r = re.sub('DocumentRoot "(.*?)"',
'DocumentRoot "' + sys.argv[1] + '"',
f.read())
f = open('/etc/apache2/httpd.conf', 'w')
f.write(r)
else:
print "Please supply the new DocumentRoot path."
I've saved this as /usr/bin/apachecdr so that I could simply open a shell and "sudo apachecdr /new/documentroot/path" and then restart with apachectl. My question is how would you write this?
It's my first time posting on Stack Overflow (and I'm new to Python) so please let me know if this is not specific enough of a question.
|
[
"You're doing a lot of file work for not much benefit. Why do you write /tmp/apachecdr and then immediately read it again? If you are copying httpd.conf to the tmp directory as a backup, the shutil module provides functions to copy files:\n#!/usr/bin/python\n\nimport shutil, sys, re\n\nHTTPD_CONF = '/etc/apache2/httpd.conf'\nBACKUP = '/tmp/apachecdr'\nif len(sys.argv) == 2:\n shutil.copyfile(BACKUP, HTTPD_CONF)\n conf = open(HTTPD_CONF).read()\n\n new_conf = re.sub('DocumentRoot \"(.*?)\"', \n 'DocumentRoot \"' + sys.argv[1] + '\"', \n conf)\n open(HTTPD_CONF, 'w').write(new_conf)\nelse:\n print \"Please supply the new DocumentRoot path.\"\n\n",
"In general Python programmers prefer explicit over implicit (see http://www.python.org/dev/peps/pep-0020/). \nWith that in mind you may want to rewrite this line\nf.write(open('/etc/apache2/httpd.conf').read())\n\nTo:\ninfile = open('/etc/apache2/httpd.conf').read()\nf.write(infile)\ninfile.close()\n\nAnother suggestion would be to add some error checking. For example checking that the file /etc/apache2/httpd.conf exists.\nNormally one would use a try/except around the open line, but if you want to use the combined read and the write command you could use a os.stat for checking.\nExample\nimport os\ntry:\n os.stat('/etc/apache2/httpd.conf) \nexcept OSError:\n print \"/etc/apache2/httpd.conf does not exist\"\n return #will exit the program\n\n",
"Thank you both for your answers. Ned, I was hoping you'd say something like that. The reason I was doing it like that is simply because I didn't know better. Thanks for providing your code, it was exactly what I was looking for. \nBtw, the error is happening because \"BACKUP\" and \"HTTPD_CONF\" are reversed on line 8. In other words, switch \"shutil.copyfile(BACKUP, HTTPD_CONF)\" to \"shutil.copyfile(HTTPD_CONF, BACKUP)\" on line 8 of Ned's script.\nThanks again guys! This was a great first experience posting on this site.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"apache",
"document",
"python",
"root"
] |
stackoverflow_0001408424_apache_document_python_root.txt
|
Q:
Python with matplotlib - reusing drawing functions
I have a follow up question to this question.
Is it possible to streamline the figure generation by having multiple python scripts that work on different parts of the figure?
For example, if I have the following functions:
FunctionA: Draw a histogram of something
FunctionB: Draw a box with a text in it
FunctionC: Draw a plot of something C
FunctionD: Draw a plot of something D
How do I go about reusing the above functions in different scripts? If I wanted, for instance, to create a figure with a histogram with a plot of something C, I would somehow call FunctionA and FunctionC from my script. Or, if I wanted a figure with the two plots, I'd call FunctionC and FunctionD.
I'm not sure if I'm explaining myself clearly, but another way of asking this question is this: how do I pass a figure object to a function and then have the function draw something to the passed figure object and then return it back to the main script to add other things like the title or something?
A:
Here you want to use the Artist objects, and pass them as needed to the functions:
import numpy as np
import matplotlib.pyplot as plt
def myhist(ax, color):
ax.hist(np.log(np.arange(1, 10, .1)), facecolor=color)
def say_something(ax, words):
t = ax.text(.2, 20., words)
make_a_dim_yellow_bbox(t)
def make_a_dim_yellow_bbox(txt):
txt.set_bbox(dict(facecolor='yellow', alpha=.2))
fig = plt.figure()
ax0 = fig.add_subplot(1,2,1)
ax1 = fig.add_subplot(1,2,2)
myhist(ax0, 'blue')
myhist(ax1, 'green')
say_something(ax0, 'this is the blue plot')
say_something(ax1, 'this is the green plot')
plt.show()
A:
Okay, I've figured out how to do this. It was a lot simpler than I had imagined. It just required a bit of reading here with the figure and axes classes.
In your main script:
import pylab as plt
import DrawFns
fig = plt.figure()
(do something with fig)
DrawFns.WriteText(fig, 'Testing')
plt.show()
In your DrawFns.py:
def WriteText(_fig, _text):
    _fig.text(0, 0, _text)
And that's it! And I can add more functions in DrawFns.py and call them from any script as long as they are included with import call. :D
|
Python with matplotlib - reusing drawing functions
|
I have a follow up question to this question.
Is it possible to streamline the figure generation by having multiple python scripts that work on different parts of the figure?
For example, if I have the following functions:
FunctionA: Draw a histogram of something
FunctionB: Draw a box with a text in it
FunctionC: Draw a plot of something C
FunctionD: Draw a plot of something D
How do I go about reusing the above functions in different scripts? If I wanted, for instance, to create a figure with a histogram with a plot of something C, I would somehow call FunctionA and FunctionC from my script. Or, if I wanted a figure with the two plots, I'd call FunctionC and FunctionD.
I'm not sure if I'm explaining myself clearly, but another way of asking this question is this: how do I pass a figure object to a function and then have the function draw something to the passed figure object and then return it back to the main script to add other things like the title or something?
|
[
"Here you want to use the Artist objects, and pass them as needed to the functions:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef myhist(ax, color):\n ax.hist(np.log(np.arange(1, 10, .1)), facecolor=color)\n\ndef say_something(ax, words):\n t = ax.text(.2, 20., words)\n make_a_dim_yellow_bbox(t)\n\ndef make_a_dim_yellow_bbox(txt):\n txt.set_bbox(dict(facecolor='yellow', alpha=.2))\n\nfig = plt.figure()\nax0 = fig.add_subplot(1,2,1)\nax1 = fig.add_subplot(1,2,2)\n\nmyhist(ax0, 'blue')\nmyhist(ax1, 'green')\n\nsay_something(ax0, 'this is the blue plot')\nsay_something(ax1, 'this is the green plot')\n\nplt.show()\n\n\n",
"Okey, I've figured out how to do this. It was a lot simpler than what I had imagined. It just required a bit of reading here with the figure and axes classes.\nIn your main script:\nimport pylab as plt \nimport DrawFns \nfig = plt.figure() \n(do something with fig) \nDrawFns.WriteText(fig, 'Testing') \nplt.show()\n\nIn your DrawFns.py: \ndef WriteText(_fig, _text): \n[indent]_fig.text(0, 0, _text)\n\nAnd that's it! And I can add more functions in DrawFns.py and call them from any script as long as they are included with import call. :D\n"
] |
[
8,
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0001413681_matplotlib_python.txt
|
Q:
Python decorate a class to change parent object type
Suppose you have two classes X & Y. You want to decorate those classes by adding attributes to the class to produce new classes X1 and Y1.
For example:
class X1(X):
new_attribute = 'something'
class Y1(Y):
new_attribute = 'something'
new_attribute will always be the same for both X1 and Y1. X & Y are not related in any meaningful way, except that multiple inheritance is not possible. There are a set of other attributes as well, but this is degenerate to illustrate.
I feel like I'm overcomplicating this, but I had thought to use a decorator, somewhat like so:
def _xywrap(cls):
class _xy(cls):
new_attribute = 'something'
return _xy
@_xywrap(X)
class X1():
pass
@_xywrap(Y)
class Y1():
pass
It feels like I'm missing a fairly common pattern, and I'd be much obliged for thoughts, input and feedback.
Thank you for reading.
Brian
EDIT: Example:
Here is a relevant extract that may illuminate. The common classes are as follows:
from google.appengine.ext import db
# I'm including PermittedUserProperty because it may have pertinent side-effects
# (albeit unlikely), which is documented here: [How can you limit access to a
# GAE instance to the current user][1].
class _AccessBase:
users_permitted = PermittedUserProperty()
owner = db.ReferenceProperty(User)
class AccessModel(db.Model, _AccessBase):
pass
class AccessExpando(db.Expando, _AccessBase):
pass
# the order of _AccessBase/db.* doesn't seem to resolve the issue
class AccessPolyModel(_AccessBase, polymodel.PolyModel):
pass
Here's a sub-document:
class Thing(AccessExpando):
it = db.StringProperty()
Sometimes Thing will have the following properties:
Thing { it: ... }
And other times:
Thing { it: ..., users_permitted:..., owner:... }
I've been unable to figure out why Thing would sometimes have its _AccessParent properties, and other times not.
A:
Use 3-arguments type:
def makeSomeNicelyDecoratedSubclass(someclass):
return type('MyNiceName', (someclass,), {'new_attribute':'something'})
This is indeed, as you surmised, a reasonably popular idiom.
Edit: in the general case if someclass has a custom metaclass you may need to extract and use it (with a 1-argument type) in lieu of type itself, to preserve it (this may be the case for your Django and App Engine models):
def makeSomeNicelyDecoratedSubclass(someclass):
mcl = type(someclass)
return mcl('MyNiceName', (someclass,), {'new_attribute':'something'})
This also works where the simpler version above does (since in simple cases w/no custom metaclasses type(someclass) is type).
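A quick usage sketch (the class X here is made up purely to show the call):
class X(object):
    pass

X1 = makeSomeNicelyDecoratedSubclass(X)
print X1.__name__, X1.new_attribute, issubclass(X1, X)   # MyNiceName something True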
A:
Responding to your comments on voyager's answer:
from google.appengine.ext import db
class Mixin(object):
"""Mix in attributes shared by different types of models."""
foo = 1
bar = 2
baz = 3
class Person(db.Model, Mixin):
name = db.StringProperty()
class Dinosaur(db.polymodel.PolyModel, Mixin):
height = db.IntegerProperty()
p = Person(name='Buck Armstrong, Dinosaur Hunter')
d = Dinosaur(height=5000)
print p.name, p.foo, p.bar, p.baz
print d.height, d.foo, d.bar, d.baz
Running that results in
Buck Armstrong, Dinosaur Hunter 1 2 3
5000 1 2 3
Is that not what you had in mind?
A:
Why can't you use multiple inheritance?
class Origin:
new_attribute = 'something'
class X:
pass
class Y:
pass
class X1(Origin, X):
pass
class Y1(Origin, Y):
pass
|
Python decorate a class to change parent object type
|
Suppose you have two classes X & Y. You want to decorate those classes by adding attributes to the class to produce new classes X1 and Y1.
For example:
class X1(X):
new_attribute = 'something'
class Y1(Y):
new_attribute = 'something'
new_attribute will always be the same for both X1 and Y1. X & Y are not related in any meaningful way, except that multiple inheritance is not possible. There are a set of other attributes as well, but this is degenerate to illustrate.
I feel like I'm overcomplicating this, but I had thought to use a decorator, somewhat like so:
def _xywrap(cls):
class _xy(cls):
new_attribute = 'something'
return _xy
@_xywrap(X)
class X1():
pass
@_xywrap(Y)
class Y1():
pass
It feels like I'm missing a fairly common pattern, and I'd be much obliged for thoughts, input and feedback.
Thank you for reading.
Brian
EDIT: Example:
Here is a relevant extract that may illuminate. The common classes are as follows:
from google.appengine.ext import db
# I'm including PermittedUserProperty because it may have pertinent side-effects
# (albeit unlikely), which is documented here: [How can you limit access to a
# GAE instance to the current user][1].
class _AccessBase:
users_permitted = PermittedUserProperty()
owner = db.ReferenceProperty(User)
class AccessModel(db.Model, _AccessBase):
pass
class AccessExpando(db.Expando, _AccessBase):
pass
# the order of _AccessBase/db.* doesn't seem to resolve the issue
class AccessPolyModel(_AccessBase, polymodel.PolyModel):
pass
Here's a sub-document:
class Thing(AccessExpando):
it = db.StringProperty()
Sometimes Thing will have the following properties:
Thing { it: ... }
And other times:
Thing { it: ..., users_permitted:..., owner:... }
I've been unable to figure out why Thing would sometimes have its _AccessParent properties, and other times not.
|
[
"Use 3-arguments type:\ndef makeSomeNicelyDecoratedSubclass(someclass):\n return type('MyNiceName', (someclass,), {'new_attribute':'something'})\n\nThis is indeed, as you surmised, a reasonably popular idiom.\nEdit: in the general case if someclass has a custom metaclass you may need to extract and use it (with a 1-argument type) in lieu of type itself, to preserve it (this may be the case for your Django and App Engine models):\ndef makeSomeNicelyDecoratedSubclass(someclass):\n mcl = type(someclass)\n return mcl('MyNiceName', (someclass,), {'new_attribute':'something'})\n\nThis also works where the simpler version above does (since in simple cases w/no custom metaclasses type(someclass) is type).\n",
"Responding to your comments on voyager's answer:\nfrom google.appengine.ext import db\n\nclass Mixin(object):\n \"\"\"Mix in attributes shared by different types of models.\"\"\"\n foo = 1\n bar = 2\n baz = 3\n\nclass Person(db.Model, Mixin):\n name = db.StringProperty()\n\nclass Dinosaur(db.polymodel.PolyModel, Mixin):\n height = db.IntegerProperty()\n\np = Person(name='Buck Armstrong, Dinosaur Hunter')\nd = Dinosaur(height=5000)\n\nprint p.name, p.foo, p.bar, p.baz\nprint d.height, d.foo, d.bar, d.baz\n\nRunning that results in\nBuck Armstrong, Dinosaur Hunter 1 2 3\n5000 1 2 3\n\nIs that not what you had in mind?\n",
"Why can't you use multiple inheritance?\nclass Origin:\n new_attribute = 'something'\n\nclass X:\n pass\n\nclass Y:\n pass\n\nclass X1(Origin, X):\n pass\n\nclass Y1(Origin, Y):\n pass\n\n"
] |
[
5,
3,
2
] |
[] |
[] |
[
"decorator",
"multiple_inheritance",
"python"
] |
stackoverflow_0001412210_decorator_multiple_inheritance_python.txt
|
Q:
python serialize string
I am needing to unserialize a string into an array in python just like php and then serialize it back.
A:
If you mean PHP's explode, try this
>>> list("foobar")
['f', 'o', 'o', 'b', 'a', 'r']
>>> ''.join(['f', 'o', 'o', 'b', 'a', 'r'])
'foobar'
A:
Take a look at the pickle module. It is probably what you're looking for.
import pickle
# Unserialize the string to an array
my_array = pickle.loads(serialized_data)
# Serialized back
serialized_data = pickle.dumps(my_array)
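Note that pickle's format is Python-specific, so if the point is to exchange data with PHP itself rather than with other Python code, a language-neutral format such as JSON is one option (a rough sketch, Python 2.6+; PHP can read it with json_decode):
import json

serialized_data = json.dumps(['f', 'o', 'o'])   # '["f", "o", "o"]'
my_list = json.loads(serialized_data)           # back to a Python list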
A:
What "serialized" the data in question into the string in the first place? Do you really mean an array (and if so, of what simple type?), or do you actually mean a list (and if so, what are the list's items supposed to be?)... etc, etc...
From the OP's comments it looks like he has zin, a tuple, and is trying to treat it as if it was, instead, a str into which data was serialized by pickling. So he's trying to unserialize the tuple via pickle.loads, and obviously that can't work -- pickle.loads wants an str (that's what the s MEANS), NOT a tuple -- it can't work with a tuple, it has even no idea of what to DO with a tuple.
Of course, neither do we, having been given zero indication about where that tuple comes from, why is it supposed to be a string instead, etc, etc. The OP should edit his answer to show more code (how is zin procured or fetched) and especially the code where zin is supposed to be PRODUCED (via pickle.dumps, I imagine) and how the communication from the producer to this would-be consumer is happening (or failing to happen;-).
A:
A string is already a sequence type in Python. You can iterate over the string one character at a time like this:
for char in somestring:
do_something(char)
The question is... what did you want to do with it? Maybe we can give more details with a more detailed question.
|
python serialize string
|
I am needing to unserialize a string into an array in python just like php and then serialize it back.
|
[
"If you mean PHP's explode, try this\n>>> list(\"foobar\")\n['f', 'o', 'o', 'b', 'a', 'r']\n\n>>> ''.join(['f', 'o', 'o', 'b', 'a', 'r'])\n'foobar'\n\n",
"Take a look at the pickle module. It is probably what you're looking for. \nimport pickle\n\n# Unserialize the string to an array\nmy_array = pickle.loads(serialized_data)\n\n# Serialized back\nserialized_data = pickle.dumps(my_array)\n\n",
"What \"serialized\" the data in question into the string in the first place? Do you really mean an array (and if so, of what simple type?), or do you actually mean a list (and if so, what are the list's items supposed to be?)... etc, etc...\nFrom the OP's comments it looks like he has zin, a tuple, and is trying to treat it as if it was, instead, a str into which data was serialized by pickling. So he's trying to unserialize the tuple via pickle.loads, and obviously that can't work -- pickle.loads wants an str (that's what the s MEANS), NOT a tuple -- it can't work with a tuple, it has even no idea of what to DO with a tuple.\nOf course, neither do we, having been given zero indication about where that tuple comes from, why is it supposed to be a string instead, etc, etc. The OP should edit his answer to show more code (how is zin procured or fetched) and especially the code where zin is supposed to be PRODUCED (via pickle.dumps, I imagine) and how the communication from the producer to this would-be consumer is happening (or failing to happen;-).\n",
"A string is already a sequence type in Python. You can iterate over the string one character at a time like this:\nfor char in somestring:\n do_something(char)\n\nThe question is... what did you want to do with it? Maybe we can give more details with a more detailed question.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"python",
"serialization"
] |
stackoverflow_0001413763_python_serialization.txt
|
Q:
help with python forking child server for doing ajax push, long polling
Alright, I only know some basic python but if I can get help with this then I am considering making it open source.
What I am trying to do:
- (Done) Ajax send for init content
- Python server recv command "init" to send most recent content
- (Done) Ajax recv content and then immediately calls back to python server
- Python server recv command "wait", sets up child, and waits for command "new" from ajax
- (Done) Ajax sends "new" command
- Python server wakes up all waiting children and sends newest content
- (Done) Ajax sends "wait", and so forth
I have already written the Python server part in PHP, but it uses 100% CPU, so I knew I had to use a forking socket daemon to be able to have multiple processes sitting there waiting. Now, I could write this with PHP, but the extensions it needs have to be installed manually, which can be a problem when asking a host to install them on shared accounts and so forth. So I turned to Python, which would also give more flexibility and run faster. Plus, more people could use it.
So, if anyone could help with this, or give some direction, that would be great.
I am working on the code myself but just do not know it well enough. I can add the if statements in for the different commands and add in the MySQL connection myself. If I end up having any problems, I will ask here. I love this site.
A:
Look at subprocess.
Read all of these related questions on StackOverflow: https://stackoverflow.com/search?q=[python]+web+subprocess
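If the forking-daemon part is what you are after, the standard library's SocketServer module already provides one; here is a rough sketch (Unix only, and the command names/protocol are only illustrative guesses based on the question):
import SocketServer

class CommandHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        command = self.rfile.readline().strip()
        if command == 'init':
            self.wfile.write('most recent content\n')
        elif command == 'wait':
            # block here until new content is available, then write it out
            self.wfile.write('new content\n')

server = SocketServer.ForkingTCPServer(('', 8000), CommandHandler)
server.serve_forever()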
|
help with python forking child server for doing ajax push, long polling
|
Alright, I only know some basic python but if I can get help with this then I am considering making it open source.
What I am trying to do:
- (Done) Ajax send for init content
- Python server recv command "init" to send most recent content
- (Done) Ajax recv content and then immediately calls back to python server
- Python server recv command "wait", sets up child, and waits for command "new" from ajax
- (Done) Ajax sends "new" command
- Python server wakes up all waiting children and sends newest content
- (Done) Ajax sends "wait", and so forth
I have already written the Python server part in PHP, but it uses 100% CPU, so I knew I had to use a forking socket daemon to be able to have multiple processes sitting there waiting. Now, I could write this with PHP, but the extensions it needs have to be installed manually, which can be a problem when asking a host to install them on shared accounts and so forth. So I turned to Python, which would also give more flexibility and run faster. Plus, more people could use it.
So, if anyone could help with this, or give some direction, that would be great.
I am working on the code myself but just do not know it well enough. I can add the if statements in for the different commands and add in the MySQL connection myself. If I end up having any problems, I will ask here. I love this site.
|
[
"Look at subprocess.\nRead all of these related questions on StackOverflow: https://stackoverflow.com/search?q=[python]+web+subprocess\n"
] |
[
1
] |
[] |
[] |
[
"fork",
"long_polling",
"python",
"sockets"
] |
stackoverflow_0001414098_fork_long_polling_python_sockets.txt
|
Q:
Expected LP_c_double instance instead of c_double_Array - python ctypes error
I have a function in a DLL that I have to wrap with python code. The function is expecting a pointer to an array of doubles. This is the error I'm getting:
Traceback (most recent call last):
File "C:\....\.FROGmoduleTEST.py", line 243, in <module>
FROGPCGPMonitorDLL.ReturnPulse(ptrpulse, ptrtdl, ptrtdP,ptrfdl,ptrfdP)
ArgumentError: argument 1: <type 'exceptions.TypeError'>: expected LP_c_double instance instead of c_double_Array_0_Array_2
I tried casting it like so:
ptrpulse = cast(ptrpulse, ctypes.LP_c_double)
but I get:
NameError: name 'LP_c_double' is not defined
Any help or direction is greatly appreciated.
Thanks all!
A:
LP_c_double is created dynamically by ctypes when you create a pointer to a double. i.e.
LP_c_double = POINTER(c_double)
At this point, you've created a C type. You can now create instances of these pointers.
my_pointer_one = LP_c_double()
But here's the kicker. Your function isn't expecting a pointer to a double. It's expecting an array of doubles. In C, an array of type X is represented by a pointer (of type X) to the first element in that array.
In other words, to create a pointer to a double suitable for passing to your function, you actually need to allocate an array of doubles of some finite size (the documentation for ReturnPulse should indicate how much to allocate), and then pass that element directly (do not cast, do not de-reference).
i.e.
size = GetSize()
# create the array type
array_of_size_doubles = c_double*size
# allocate several instances of that type
ptrpulse = array_of_size_doubles()
ptrtdl = array_of_size_doubles()
ptrtdP = array_of_size_doubles()
ptrfdl = array_of_size_doubles()
ptrfdP = array_of_size_doubles()
ReturnPulse(ptrpulse, ptrtdl, ptrtdP, ptrfdl, ptrfdP)
Now the five arrays should be populated with the values returned by ReturnPulse.
A:
Are you writing the wrapper in Python yourself? The error saying "expected LP_c_double instance" means it's expecting a pointer to a single double, not an array as you've suggested.
>>> ctypes.POINTER(ctypes.c_double * 10)()
<__main__.LP_c_double_Array_10 object at 0xb7eb24f4>
>>> ctypes.POINTER(ctypes.c_double * 20)()
<__main__.LP_c_double_Array_20 object at 0xb7d3a194>
>>> ctypes.POINTER(ctypes.c_double)()
<__main__.LP_c_double object at 0xb7eb24f4>
Either you need to fix your argtypes to correctly expect a pointer to an array of doubles, or you need to pass in a pointer to a single double like the function currently expects.
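If the DLL really does take plain double pointers, a rough sketch of the first option is to declare that explicitly and allocate one-dimensional buffers to pass in (the size here is only an assumption; ctypes accepts a c_double array where POINTER(c_double) is declared in argtypes):
import ctypes

# declare that ReturnPulse takes five pointers to double
FROGPCGPMonitorDLL.ReturnPulse.argtypes = [ctypes.POINTER(ctypes.c_double)] * 5

size = 64                              # assumed length; the DLL's docs give the real one
ptrpulse = (ctypes.c_double * size)()  # a one-dimensional buffer, acceptable for the argtype above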
|
Expected LP_c_double instance instead of c_double_Array - python ctypes error
|
I have a function in a DLL that I have to wrap with python code. The function is expecting a pointer to an array of doubles. This is the error I'm getting:
Traceback (most recent call last):
File "C:\....\.FROGmoduleTEST.py", line 243, in <module>
FROGPCGPMonitorDLL.ReturnPulse(ptrpulse, ptrtdl, ptrtdP,ptrfdl,ptrfdP)
ArgumentError: argument 1: <type 'exceptions.TypeError'>: expected LP_c_double instance instead of c_double_Array_0_Array_2
I tried casting it like so:
ptrpulse = cast(ptrpulse, ctypes.LP_c_double)
but I get:
NameError: name 'LP_c_double' is not defined
Any help or direction is greatly appreciated.
Thanks all!
|
[
"LP_c_double is created dynamically by ctypes when you create a pointer to a double. i.e.\nLP_c_double = POINTER(c_double)\n\nAt this point, you've created a C type. You can now create instances of these pointers.\nmy_pointer_one = LP_c_double()\n\nBut here's the kicker. Your function isn't expecting a pointer to a double. It's expecting an array of doubles. In C, an array of type X is represented by a pointer (of type X) to the first element in that array.\nIn other words, to create a pointer to a double suitable for passing to your function, you actually need to allocate an array of doubles of some finite size (the documentation for ReturnPulse should indicate how much to allocate), and then pass that element directly (do not cast, do not de-reference).\ni.e.\nsize = GetSize()\n# create the array type\narray_of_size_doubles = c_double*size\n# allocate several instances of that type\nptrpulse = array_of_size_doubles()\nptrtdl = array_of_size_doubles()\nptrtdP = array_of_size_doubles()\nptrfdl = array_of_size_doubles()\nptrfdP = array_of_size_doubles()\nReturnPulse(ptrpulse, ptrtdl, ptrtdP, ptrfdl, ptrfdP)\n\nNow the five arrays should be populated with the values returned by ReturnPulse.\n",
"Are you writing the wrapper in Python yourself? The error saying \"expected LP_c_double instance\" means it's expecting a pointer to a single double, not an array as you've suggested. \n>>> ctypes.POINTER(ctypes.c_double * 10)()\n<__main__.LP_c_double_Array_10 object at 0xb7eb24f4>\n>>> ctypes.POINTER(ctypes.c_double * 20)()\n<__main__.LP_c_double_Array_20 object at 0xb7d3a194>\n>>> ctypes.POINTER(ctypes.c_double)()\n<__main__.LP_c_double object at 0xb7eb24f4>\n\nEither you need to fix your argtypes to correctly expect a pointer to an array of doubles, or you need to pass in a pointer to a single double like the function currently expects.\n"
] |
[
11,
2
] |
[] |
[] |
[
"ctypes",
"python",
"types"
] |
stackoverflow_0001413851_ctypes_python_types.txt
|
Q:
Python code readability
I have programming experience with statically typed languages. Now, writing code in Python, I have difficulties with its readability. Let's say I have a class Host:
class Host(object):
def __init__(self, name, network_interface):
self.name = name
self.network_interface = network_interface
I can't tell from this definition what "network_interface" should be. Is it a string, like "eth0", or is it an instance of a class NetworkInterface? The only way I can think of to solve this is documenting the code with a "docstring". Something like this:
class Host(object):
''' Attributes:
@name: a string
@network_interface: an instance of class NetworkInterface'''
Or maybe there are naming conventions for things like that?
A:
Using dynamic languages will teach you something about static languages: all the help you got from the static language that you now miss in the dynamic language, it wasn't all that helpful.
To use your example, in a static language, you'd know that the parameter was a string, and in Python you don't. So in Python you write a docstring. And while you're writing it, you realize you had more to say about it than, "it's a string". You need to say what data is in the string, and what format it should have, and what the default is, and something about error conditions.
And then you realize you should have written all that down for your static language as well. Sure, Java would force you know that it was a string, but there's all these other details that need to be specified, and you have to manually do that work in any language.
A:
The docstring conventions are at PEP 257.
The example there follows this format for specifying arguments, you can add the types if they matter:
def complex(real=0.0, imag=0.0):
"""Form a complex number.
Keyword arguments:
real -- the real part (default 0.0)
imag -- the imaginary part (default 0.0)
"""
if imag == 0.0 and real == 0.0: return complex_zero
...
There was also a rejected PEP for docstrings for attributes ( rather than constructor arguments ).
A:
The most pythonic solution is to document with examples. If possible, state what operations an object must support to be acceptable, rather than a specific type.
class Host(object):
    def __init__(self, name, network_interface):
"""Initialise host with given name and network_interface.
network_interface -- must support the same operations as NetworkInterface
>>> network_interface = NetworkInterface()
>>> host = Host("my_host", network_interface)
"""
...
At this point, hook your source up to doctest to make sure your doc examples continue to work in future.
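The standard hookup is just a couple of lines at the bottom of the module; running the file directly then checks every example in the docstrings:
if __name__ == "__main__":
    import doctest
    doctest.testmod()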
A:
Personally, I have found it very useful to run pylint to validate my code.
If you follow pylint's suggestions, your code almost automatically becomes more readable,
you will improve your Python writing skills, and you will respect naming conventions. You can also define your own naming conventions and so on. It's very useful, especially for a Python beginner.
I suggest you use it.
A:
Python, though not as overtly typed as C or Java, is still typed and will throw exceptions if you're doing things with types that simply do not play nice together.
To that end, if you're concerned about your code being used correctly, maintained correctly, etc. simply use docstrings, comments, or even more explicit variable names to indicate what the type should be.
Even better yet, include code that will allow it to handle whichever type it may be passed as long as it yields a usable result.
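A small sketch of that last suggestion, applied to the Host class from the question (this assumes NetworkInterface can be constructed from an interface name, which is only an illustration):
class Host(object):
    def __init__(self, name, network_interface):
        # accept either an interface name like "eth0" or a ready-made object
        if isinstance(network_interface, str):
            network_interface = NetworkInterface(network_interface)  # assumed constructor
        self.name = name
        self.network_interface = network_interface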
A:
One benefit of static typing is that types are a form of documentation. When programming in Python, you can document more flexibly and fluently. Of course, in your example you want to say that network_interface should implement NetworkInterface, but in many cases the type is obvious from the context, variable name, or by convention, and in these cases by omitting the obvious you can produce more readable code. It is common to describe the meaning of a parameter and give the type implicitly.
For example:
def Bar(foo, count):
"""Bar the foo the given number of times."""
...
This describes the function tersely and precisely. What foo and bar mean will be obvious from context, and that count is a (positive) integer is implicit.
For your example, I'd just mention the type in the document string:
"""Create a named host on the given NetworkInterface."""
This is shorter, more readable, and contains more information than a listing of the types.
|
Python code readability
|
I have a programming experience with statically typed languages. Now writing code in Python I feel difficulties with its readability. Lets say I have a class Host:
class Host(object):
def __init__(self, name, network_interface):
self.name = name
self.network_interface = network_interface
I don't understand from this definition, what "network_interface" should be. Is it a string, like "eth0" or is it an instance of a class NetworkInterface? The only way I'm thinking about to solve this is a documenting the code with a "docstring". Something like this:
class Host(object):
''' Attributes:
@name: a string
@network_interface: an instance of class NetworkInterface'''
Or may be there are name conventions for things like that?
|
[
"Using dynamic languages will teach you something about static languages: all the help you got from the static language that you now miss in the dynamic language, it wasn't all that helpful.\nTo use your example, in a static language, you'd know that the parameter was a string, and in Python you don't. So in Python you write a docstring. And while you're writing it, you realize you had more to say about it than, \"it's a string\". You need to say what data is in the string, and what format it should have, and what the default is, and something about error conditions. \nAnd then you realize you should have written all that down for your static language as well. Sure, Java would force you know that it was a string, but there's all these other details that need to be specified, and you have to manually do that work in any language.\n",
"The docstring conventions are at PEP 257. \nThe example there follows this format for specifying arguments, you can add the types if they matter:\ndef complex(real=0.0, imag=0.0):\n \"\"\"Form a complex number.\n\n Keyword arguments:\n real -- the real part (default 0.0)\n imag -- the imaginary part (default 0.0)\n\n \"\"\"\n if imag == 0.0 and real == 0.0: return complex_zero\n ...\n\nThere was also a rejected PEP for docstrings for attributes ( rather than constructor arguments ). \n",
"The most pythonic solution is to document with examples. If possible, state what operations an object must support to be acceptable, rather than a specific type.\nclass Host(object):\n def __init__(self, name, network_interface)\n \"\"\"Initialise host with given name and network_interface.\n\n network_interface -- must support the same operations as NetworkInterface\n\n >>> network_interface = NetworkInterface()\n >>> host = Host(\"my_host\", network_interface)\n\n \"\"\"\n ...\n\nAt this point, hook your source up to doctest to make sure your doc examples continue to work in future.\n",
"Personally I found very usefull to use pylint to validate my code.\nIf you follow pylint suggestion almost automatically your code become more readable,\nyou will improve your python writing skills, respect naming conventions. You can also define your own naming conventions and so on. It's very useful specially for a python beginner.\nI suggest you to use.\n",
"Python, though not as overtly typed as C or Java, is still typed and will throw exceptions if you're doing things with types that simply do not play nice together.\nTo that end, if you're concerned about your code being used correctly, maintained correctly, etc. simply use docstrings, comments, or even more explicit variable names to indicate what the type should be.\nEven better yet, include code that will allow it to handle whichever type it may be passed as long as it yields a usable result.\n",
"One benefit of static typing is that types are a form of documentation. When programming in Python, you can document more flexibly and fluently. Of course in your example you want to say that network_interface should implement NetworkInterface, but in many cases the type is obvious from the context, variable name, or by convention, and in these cases by omitting the obvious you can produce more readable code. Common is to describe the meaning of a parameter and implicitly giving the type.\nFor example:\ndef Bar(foo, count):\n \"\"\"Bar the foo the given number of times.\"\"\"\n ...\n\nThis describes the function tersely and precisely. What foo and bar mean will be obvious from context, and that count is a (positive) integer is implicit.\nFor your example, I'd just mention the type in the document string:\n\"\"\"Create a named host on the given NetworkInterface.\"\"\"\n\nThis is shorter, more readable, and contains more information than a listing of the types.\n"
] |
[
22,
10,
9,
4,
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001409821_python.txt
|
Q:
embedding plot within Qt gui
How do you embed a vpython plot (animated) within your Qt GUI? so that it has its own display area and would not need to created a new window anymore.
A:
vpython's FAQs claim that vpython's architecture makes any embedding a problem...:
Q: Is there a way to embed VPython in another environment?
This is difficult because VPython has
two threads, your computational thread
and a rendering thread which about 25
times per second paints the scene
using the current attributes of the
graphics objects. However, Stef
Mientki has managed to embed VPython
in a wxPython window on Windows; see
the contributed section.
So if with wxPython it takes heroic efforts ("has managed to" doesn't sound like a trivial achievement;-) AND only works on a single platform, I fear it won't be any easier with Qt... one hard, uphill slog separately on each and every single platform.
If you're up for a SERIOUS challenge, deeply familiar with vpython, reasonably familiar with Qt, and acquainted with the underlying window-level architecture on all platforms you care about (and with a minor in wxPython), the place to start is Mientki's amazing contribution. He's actually working well below wxPython's level of abstraction, and in terms of win32gui calls, win32con constants, plus "a finite state-machine, clocked by a wx.Timer" at 100 milliseconds (though he does admit that the result from the latter Frankenstein surgery is... "not perfect";-). Extremely similar approaches should see you home (in a similarly "not perfect" way) on any other framework on Windows, including Qt.
However, nobody's yet offered any ports of this to Mac OS X, nor to any window manager of the many that are popular on Linux and Unix-like architectures (I'm not sure whether the feat could be achieved just at xlib level -- window decoration aspects do seem to be involved, and in the X11 world those DO tend to need window manager cooperation).
So, the literal answer to your question is, "with a huge amount of work requiring lots of skills and/or incredible perseverance, and probably in a platform-dependent way that will require redoing on each and every platform of interest"... sorry to be the bearer of pretty bad news, but I prefer to call them as I see them.
A:
I contacted the maintainer of VPython and he confirmed that he is not aware of any working solution where Visual is embedded into a Qt window.
That turned me to try VTK, and so far I'm pretty happy: no problems using VTK within the PyQt framework.
|
embedding plot within Qt gui
|
How do you embed a vpython plot (animated) within your Qt GUI? so that it has its own display area and would not need to created a new window anymore.
|
[
"vpython's FAQs claim that vpython's architecture make any embedding a problem...:\nQ: Is there a way to embed VPython in another environment?\n\nThis is difficult because VPython has\n two threads, your computational thread\n and a rendering thread which about 25\n times per second paints the scene\n using the current attributes of the\n graphics objects. However, Stef\n Mientki has managed to embed VPython\n in a wxPython window on Windows; see\n the contributed section.\n\nSo if with wxPython it takes heroic efforts (\"has managed to\" doesn't sound like a trivial achievement;-) AND only works on a single platform, I fear it won't be any easier with Qt... one hard, uphill slog separately on each and every single platform.\nIf you're up for a SERIOUS challenge, deeply familiar with vpython, reasonably familiar with Qt, and acquainted with the underlying window-level architecture on all platforms you care about (and with a minor in wxPython), the place to start is Mientki's amazing contribution. He's actually working well below wxPython's level of abstraction, and in terms of win32gui calls, win32con constants, plus \"a finite state-machine, clocked by a wx.Timer\" at 100 milliseconds (though he does admit that the result from the latter Frankenstein surgery is... \"not perfect\";-). Extremely similar approaches should see you home (in a similarly \"not perfect\" way) on any other framework on Windows, including Qt.\nHowever, nobody's yet offered any ports of this to Mac OS X, nor to any window manager of the many that are popular on Linux and Unix-like architectures (I'm not sure whether the feat could be achieved just at xlib level -- window decoration aspects do seem to be involved, and in the X11 world those DO tend to need window manager cooperation).\nSo, the literal answer to your question is, \"with a huge amount of work requiring lots of skills and/or incredible perseverance, and probably in a platform-dependent way that will require redoing on each and every platform of interest\"... sorry to be the bearer of pretty bad news, but I prefer to call them as I see them.\n",
"I contacted maintainer of VPython and he confirmed, that he is not aware of any working solution where Visual is embedded into QT window.\nThat turned me to try VTK and so far I'm pretty happy, no problem with using VTK within PyQT framework.\n"
] |
[
3,
1
] |
[] |
[] |
[
"python",
"qt",
"user_interface"
] |
stackoverflow_0001397553_python_qt_user_interface.txt
|
Q:
Profiling a python multiprocessing pool
I'm trying to run cProfile.runctx() on each process in a multiprocessing pool, to get an idea of what the multiprocessing bottlenecks are in my source. Here is a simplified example of what I'm trying to do:
from multiprocessing import Pool
import cProfile
def square(i):
return i*i
def square_wrapper(i):
cProfile.runctx("result = square(i)",
globals(), locals(), "file_"+str(i))
# NameError happens here - 'result' is not defined.
return result
if __name__ == "__main__":
pool = Pool(8)
results = pool.map_async(square_wrapper, range(15)).get(99999)
print results
Unfortunately, trying to execute "result = square(i)" in the profiler does not affect 'result' in the scope it was called from. How can I accomplish what I am trying to do here?
A:
Try this:
def square_wrapper(i):
result = [None]
cProfile.runctx("result[0] = square(i)", globals(), locals(), "file_%d" % i)
return result[0]
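If you'd rather avoid the string-exec form altogether, the Profile object API gives the same result without the extra indirection (enable, disable and dump_stats are standard cProfile.Profile methods):
def square_wrapper(i):
    prof = cProfile.Profile()
    prof.enable()
    result = square(i)        # profile only the call we care about
    prof.disable()
    prof.dump_stats("file_%d" % i)
    return result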
|
Profiling a python multiprocessing pool
|
I'm trying to run cProfile.runctx() on each process in a multiprocessing pool, to get an idea of what the multiprocessing bottlenecks are in my source. Here is a simplified example of what I'm trying to do:
from multiprocessing import Pool
import cProfile
def square(i):
return i*i
def square_wrapper(i):
cProfile.runctx("result = square(i)",
globals(), locals(), "file_"+str(i))
# NameError happens here - 'result' is not defined.
return result
if __name__ == "__main__":
pool = Pool(8)
results = pool.map_async(square_wrapper, range(15)).get(99999)
print results
Unfortunately, trying to execute "result = square(i)" in the profiler does not affect 'result' in the scope it was called from. How can I accomplish what I am trying to do here?
|
[
"Try this:\ndef square_wrapper(i):\n result = [None]\n cProfile.runctx(\"result[0] = square(i)\", globals(), locals(), \"file_%d\" % i)\n return result[0]\n\n"
] |
[
8
] |
[] |
[] |
[
"multiprocessing",
"pool",
"profiling",
"python"
] |
stackoverflow_0001414841_multiprocessing_pool_profiling_python.txt
|
Q:
Prompt on exit in PyQt application
Is there any way to prompt the user before exiting a GUI program written in Python?
Something like "Are you sure you want to exit the program?"
I'm using PyQt.
A:
Yes. You need to override the default close behaviour of the QWidget representing your application so that it doesn't immediately accept the event. The basic structure you want is something like this:
def closeEvent(self, event):
quit_msg = "Are you sure you want to exit the program?"
reply = QtGui.QMessageBox.question(self, 'Message',
quit_msg, QtGui.QMessageBox.Yes, QtGui.QMessageBox.No)
if reply == QtGui.QMessageBox.Yes:
event.accept()
else:
event.ignore()
The PyQt tutorial mentioned by las3rjock has a nice discussion of this. Also check out the links from the PyQt page at Python.org, in particular the official reference, to learn more about events and how to handle them.
|
Prompt on exit in PyQt application
|
Is there any way to promt user to exit the gui-program written in Python?
Something like "Are you sure you want to exit the program?"
I'm using PyQt.
|
[
"Yes. You need to override the default close behaviour of the QWidget representing your application so that it doesn't immediately accept the event. The basic structure you want is something like this:\ndef closeEvent(self, event):\n\n quit_msg = \"Are you sure you want to exit the program?\"\n reply = QtGui.QMessageBox.question(self, 'Message', \n quit_msg, QtGui.QMessageBox.Yes, QtGui.QMessageBox.No)\n\n if reply == QtGui.QMessageBox.Yes:\n event.accept()\n else:\n event.ignore()\n\nThe PyQt tutorial mentioned by las3rjock has a nice discussion of this. Also check out the links from the PyQt page at Python.org, in particular the official reference, to learn more about events and how to handle them.\n"
] |
[
61
] |
[] |
[] |
[
"exit",
"pyqt",
"python"
] |
stackoverflow_0001414781_exit_pyqt_python.txt
|
Q:
sqlalchemy easy way to insert or update?
I have a sequence of new objects. They all look similar to this:
Foo(pk_col1=x, pk_col2=y, val='bar')
Some of those are Foo that exist (i.e. only val differs from the row in the db)
and should generate update queries. The others should generate inserts.
I can think of a few ways of doing this, the best being:
pk_cols = Foo.table.primary_key.keys()
for f1 in foos:
f2 = Foo.get([getattr(f1, c) for c in pk_cols])
if f2 is not None:
f2.val = f1.val # update
# XXX do we need to do session.add(f2)
# (or at least keep f2 alive until after the commit?)
else:
session.add(f1) # insert
session.commit()
Is there an easier way?
A:
I think you are after new_obj = session.merge(obj). This will merge an object in a detached state into the session if the primary keys match and will make a new one otherwise. So session.save(new_obj) will work for both insert and update.
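So the loop from the question collapses to something like this sketch (merge looks the object up by primary key: an existing row is updated, a missing one is inserted on flush):
for f1 in foos:
    session.merge(f1)   # same PK already in the table -> update; otherwise -> insert
session.commit()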
|
sqlalchemy easy way to insert or update?
|
I have a sequence of new objects. They all look like similar to this:
Foo(pk_col1=x, pk_col2=y, val='bar')
Some of those are Foo that exist (i.e. only val differs from the row in the db)
and should generate update queries. The others should generate inserts.
I can think of a few ways of doing this, the best being:
pk_cols = Foo.table.primary_key.keys()
for f1 in foos:
f2 = Foo.get([getattr(f1, c) for c in pk_cols])
if f2 is not None:
f2.val = f1.val # update
# XXX do we need to do session.add(f2)
# (or at least keep f2 alive until after the commit?)
else:
session.add(f1) # insert
session.commit()
Is there an easier way?
|
[
"I think you are after new_obj = session.merge(obj). This will merge an object in a detached state into the session if the primary keys match and will make a new one otherwise. So session.save(new_obj) will work for both insert and update.\n"
] |
[
59
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0001382469_python_sqlalchemy.txt
|
Q:
Does anybody use DjVu files in their production tools?
When it's about archiving and doc portability, it's all about PDF. I heard about DjVu some years ago, and it now seems to be mature enough for serious usage. The benefits seem to be a small file size and a fast open / read experience.
But I have absolutely no feedback on how good / bad it is in the real world:
Is it technically hard to implement in traditional information management tools?
Is it worth learning / implementing a solution to generate / parse it when you know PDF?
Is the end-user feedback good when it comes to day-to-day use?
How do you manage exchanges with the external world (the one with a PDF-only state of mind)?
As a programmer, what are the pros and cons?
And what would you use to convince your boss to (or not to) use DjVu?
And overall, what gains did you notice after including DjVu in your workflow?
Bonus question: do you know some good Python libs to hack together some quick and dirty scripts as a beginning?
EDIT: doing some research, I found that Wikimedia uses it internally to store its book collection, but I can't find any feedback about it. Anybody involved in that project around here?
A:
I've found DjVu to be ideal for image-intensive documents. I used to sell books of highly detailed maps, and those were always in DjVu. PDF, however, works really well; it's a standard, and -everybody- will be able to open it without installing additional software.
There's more info at:
http://print-driver.com/news/pdf-vs-djvu-i1909.html
Personally, I'd say unless it's graphics-rich documents, just stick to PDF.
|
Does anybody use DjVu files in their production tools?
|
When it's about archiving and doc portability, it's all about PDF. I heard about DjVu somes years ago, and it seems to be now mature enough for serious usages. The benefits seems to be a small size format and a fast open / read experience.
But I have absolutely no feedback on how good / bad it is in the real world :
Is it technically hard to implement in traditional information management tools ?
Is is worth learning / implementing solution to generate / parse it when you now PDF ?
Is the final user feedback good when it comes to day to day use ?
How do you manage exchanges with the external world (the one with a PDF only state of mind) ?
As a programmer, what are the pro and cons ?
And what would you use to convince your boss to (or not to) use DjVU ?
And globally, what gain did you noticed after including DjVu in your workflow ?
Bonus question : do you know some good Python libs to hack some quick and dirty scripts as a begining ?
EDIT : doing some research, I ended up getting that Wikimedia use it to internally store its book collection but can't find any feedback about it. Anybody involved in that project around here ?
|
[
"I've found DjVu to be ideal for image-intensive documents. I used to sell books of highly details maps, and those were always in DjVu. PDF however works really well; it's a standard, and -everybody- will be able to open it without installing additional software.\nThere's more info at:\nhttp://print-driver.com/news/pdf-vs-djvu-i1909.html\nPersonally, I'd say until its graphic-rich documents, just stick to PDF.\n"
] |
[
3
] |
[] |
[] |
[
"djvu",
"pdf",
"python"
] |
stackoverflow_0001415400_djvu_pdf_python.txt
|
Q:
Read WordPerfect files with Python?
I really need to work with information contained in WordPerfect 12 files without using WordPerfect's sluggish visual interface, but I can't find any detailed documentation about the file format or any Python modules for reading/writing the files. I found a post on the web that seems to explain how to convert WordPerfect to text, but I didn't understand much about how it works.
http://mail.python.org/pipermail/python-list/2000-February/023093.html
How do I accomplish this?
A:
The relevant part of your link is this:
os.system( "%s %s %s" % ( WPD_TO_TEXT_CMD, "/tmp/tmpfile", "/tmp/tmpfile.txt" ) )
Which is making a system call to an outside program called "wp2txt". Googling for that program produces active hits.
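If you go that route, the same call is a little cleaner through subprocess (a sketch; wp2txt is an external converter you would have to install, and the file names here are placeholders):
import subprocess
subprocess.call(["wp2txt", "input.wpd", "output.txt"])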
A:
OpenOffice.org should read WordPerfect files, I think.
And you can script OpenOffice with Python.
A:
OK, here's what I did. I read the file in binary mode, converted the data into a string representation of the hex values, and used unofficial WordPerfect documentation to create regular expressions to strip out all the hex strings representing non-text formatting codes and metadata, then converted everything back into text.
A dirty piece of hacking, but it got the job done.
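A rough sketch of that approach; the regular expression shown is a made-up placeholder, since the real byte codes come from the unofficial WordPerfect format notes:
import binascii, re

raw = open("document.wpd", "rb").read()
hexstr = binascii.hexlify(raw)
# strip one (hypothetical) three-byte formatting code; repeat for each documented code
hexstr = re.sub(r"c0..04", "", hexstr)
text = binascii.unhexlify(hexstr)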
|
Read WordPerfect files with Python?
|
I really need to work with information contained in WordPerfect 12 files without using WordPerfect's sluggish visual interface, but I can't find any detailed documentation about the file format or any Python modules for reading/writing the files. I found a post on the web that seems to explain how to convert WordPerfect to text, but I didn't understand much about how it works.
http://mail.python.org/pipermail/python-list/2000-February/023093.html
How do I accomplish this?
|
[
"The relevant part of your link is this:\nos.system( \"%s %s %s\" % ( WPD_TO_TEXT_CMD, \"/tmp/tmpfile\", \"/tmp/tmpfile.txt\" ) )\n\nWhich is making a system call to an outside program called \"wp2txt\". Googling for that program produces active hits.\n",
"OpenOffice.org should read WordPerfect files, I think.\nAnd you can script OpenOffice with Python.\n",
"OK, here's what I did. I read the file in binary mode, converted by the data into a string representation of the hex values, and used unofficial WordPerfect documentation to create regular expressions to swap out all the hex strings representing non-text formatting codes and meta data, then converted everything back into text.\nA dirty piece of hacking, but it got the job done.\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"python",
"wordperfect"
] |
stackoverflow_0001297466_python_wordperfect.txt
|
Q:
How can I obtain pattern string from compiled regexp pattern in python?
I have some code like this one:
>>> import re
>>> p = re.compile('my pattern')
>>> print p
_sre.SRE_Pattern object at 0x02274380
Is it possible to get string "my pattern" from p variable?
A:
p.pattern
Read more about re module here:
http://docs.python.org/library/re.html
A:
From the "Regular Expression Objects" section of the re module documentation:
RegexObject.pattern
The pattern string from which the RE object was compiled.
For example:
>>> import re
>>> p = re.compile('my pattern')
>>> p
<_sre.SRE_Pattern object at 0x1001ba818>
>>> p.pattern
'my pattern'
With the re module in Python 3.0 and above, you can find this by doing a simple dir(p):
>>> print(dir(p))
['__class__', '__copy__', '__deepcopy__', '__delattr__', '__doc__',
'__eq__', '__format__', '__ge__', '__getattribute__', '__gt__',
'__hash__', '__init__', '__le__', '__lt__', '__ne__', '__new__',
'__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__',
'__str__', '__subclasshook__', 'findall', 'finditer', 'flags',
'groupindex', 'groups', 'match', 'pattern', 'scanner', 'search',
'split', 'sub', 'subn']
This however does not work on Python 2.6 (or 2.5) - the dir command isn't perfect, so it's always worth checking the docs!
>>> print dir(p)
['__copy__', '__deepcopy__', 'findall', 'finditer', 'match', 'scanner',
'search', 'split', 'sub', 'subn']
A:
Yes:
print p.pattern
hint, use the dir function in python to obtain a list of members:
dir(p)
this lists:
['__class__', '__copy__', '__deepcopy__', '__delattr__', '__doc__', '__eq__',
'__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__',
'__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__',
'__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__',
'findall', 'finditer', 'flags', 'groupindex', 'groups', 'match', 'pattern',
'scanner', 'search', 'split', 'sub', 'subn']
|
How can I obtain pattern string from compiled regexp pattern in python?
|
I have some code like this one:
>>> import re
>>> p = re.compile('my pattern')
>>> print p
_sre.SRE_Pattern object at 0x02274380
Is it possible to get string "my pattern" from p variable?
|
[
"p.pattern\n\nRead more about re module here:\nhttp://docs.python.org/library/re.html\n",
"From the \"Regular Expression Objects\" section of the re module documentation:\n\nRegexObject.pattern\nThe pattern string from which the RE object was compiled.\n\nFor example:\n>>> import re\n>>> p = re.compile('my pattern')\n>>> p\n<_sre.SRE_Pattern object at 0x1001ba818>\n>>> p.pattern\n'my pattern'\n\nWith the re module in Python 3.0 and above, you can find this by doing a simple dir(p):\n>>> print(dir(p))\n['__class__', '__copy__', '__deepcopy__', '__delattr__', '__doc__',\n'__eq__', '__format__', '__ge__', '__getattribute__', '__gt__',\n'__hash__', '__init__', '__le__', '__lt__', '__ne__', '__new__',\n'__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__',\n'__str__', '__subclasshook__', 'findall', 'finditer', 'flags',\n'groupindex', 'groups', 'match', 'pattern', 'scanner', 'search',\n'split', 'sub', 'subn']\n\nThis however does not work on Python 2.6 (or 2.5) - the dir command isn't perfect, so it's always worth checking the docs!\n>>> print dir(p)\n['__copy__', '__deepcopy__', 'findall', 'finditer', 'match', 'scanner',\n'search', 'split', 'sub', 'subn']\n\n",
"Yes:\nprint p.pattern\n\nhint, use the dir function in python to obtain a list of members:\ndir(p)\n\nthis lists:\n['__class__', '__copy__', '__deepcopy__', '__delattr__', '__doc__', '__eq__',\n'__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__',\n'__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__',\n'__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__',\n'findall', 'finditer', 'flags', 'groupindex', 'groups', 'match', 'pattern',\n'scanner', 'search', 'split', 'sub', 'subn']\n\n"
] |
[
126,
22,
9
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001415924_python_regex.txt
|
Q:
Installing Pinax on Windows
Can I install Pinax on Windows Environment?
Is there a easy way?
Which environment do you recommend?
A:
I have pinax 0.7rc1 installed and working on windows 7, with no problems.
Check out this video for a great example on how to do this. He uses pinax 0.7beta3 on windows XP.
http://www.vimeo.com/6098872
Here are the steps I followed.
download and install python
download and install python image library
download pinax at http://pinaxproject.com
extract the download to some working directory <pinax-directory> (maybe c:\pinax ?)
make sure you have python in your path (c:\pythonXX)
make sure you have the python scripts folder in your path (c:\pythonXX\scripts)
open a command prompt
cd to <pinax-directory>\scripts folder
run python pinax-boot.py <pinax-env> (I used "../pinax-env")
wait for pinax-boot process to finish
-- technically pinax is installed and ready to use, but the next steps will get you up and running with pinax social app (any other app will also work fine)
cd to your <pinax-env>\scripts directory
execute the activate.bat script
execute python clone_project social <pinax-env>\social
cd to <pinax-env>\social
execute python manage.py syncdb
execute python manage.py runserver
open your browser to the server and you should see your new pinax site
Voila!! Pinax on Windows.
A:
Provided you have Python and Django installed, Pinax should install fine. According to the documentation there is one step that you have to do specifically on Windows however (Under the "Note To Windows Users" heading):
http://pinaxproject.com/docs/0.5.1/install.html
A:
I spent a while trying to get the .7 beta working in Windows and ran into a lot of trouble. However, it looks like the 3rd beta release of .7 (the latest beta release) focuses on Windows support. So try that, instead of the 'stable' version - it's close to being released as stable anyway, and is recommended now for use.
In the end though, I switched to Ubuntu and haven't been happier. Python development is much nicer in Linux. It's easier to install many Python packages, I run into fewer configuration issues, and there's better support and documentation available.
|
Installing Pinax on Windows
|
Can I install Pinax on Windows Environment?
Is there a easy way?
Which environment do you recommend?
|
[
"I have pinax 0.7rc1 installed and working on windows 7, with no problems.\nCheck out this video for a great example on how to do this. He uses pinax 0.7beta3 on windows XP.\nhttp://www.vimeo.com/6098872\nHere are the steps I followed.\n\ndownload and install python\ndownload and install python image library\ndownload pinax at http://pinaxproject.com\nextract the download to some working directory <pinax-directory> (maybe c:\\pinax ?)\nmake sure you have python in your path (c:\\pythonXX)\nmake sure you have the python scripts folder in your path (c:\\pythonXX\\scripts)\nopen a command prompt\ncd to <pinax-directory>\\scripts folder\nrun python pinax-boot.py <pinax-env> (I used \"../pinax-env\")\nwait for pinax-boot process to finish\n\n-- technically pinax is installed and ready to use, but the next steps will get you up and running with pinax social app (any other app will also work fine)\n\ncd to your <pinax-env>\\scripts directory\nexecute the activate.bat script\nexecute python clone_project social <pinax-env>\\social\ncd to <pinax-env>\\social\nexecute python manage.py syncdb\nexecute python manage.py runserver\nopen your browser to the server and you should see your new pinax site\n\nVoila!! Pinax on Windows.\n",
"Provided you have Python and Django installed, Pinax should install fine. According to the documentation there is one step that you have to do specifically on Windows however (Under the \"Note To Windows Users\" heading):\nhttp://pinaxproject.com/docs/0.5.1/install.html\n",
"I spent a while trying to get the .7 beta working in Windows and ran into a lot of trouble. However, it looks like the 3rd beta release of .7 (the latest beta release) focuses on Windows support. So try that, instead of the 'stable' version - it's close to being released as stable anyway, and is recommended now for use.\nIn the end though, I switched to Ubuntu and haven't been happier. Python development is much nicer in Linux. It's easier to install many Python packages, I run into fewer configuration issues, and there's better support and documentation available.\n"
] |
[
7,
0,
0
] |
[] |
[] |
[
"django",
"pinax",
"python",
"web_applications"
] |
stackoverflow_0001233670_django_pinax_python_web_applications.txt
|
Q:
How to improve speed of this readline loop in python?
I'm importing several parts of a database dump in text format into MySQL; the problem is
that before the interesting data there is a lot of non-interesting stuff in front of it.
I wrote this loop to get to the needed data:
def readloop(DBFILE):
txtdb=open(DBFILE, 'r')
sline = ""
# loop till 1st "customernum:" is found
while sline.startswith("customernum: ") is False:
sline = txtdb.readline()
while sline.startswith("customernum: "):
data = []
data.append(sline)
sline = txtdb.readline()
while sline.startswith("customernum: ") is False:
data.append(sline)
sline = txtdb.readline()
if len(sline) == 0:
break
customernum = getitem(data, "customernum: ")
street = getitem(data, "street: ")
country = getitem(data, "country: ")
zip = getitem(data, "zip: ")
The text file is pretty huge, so just looping until the first wanted entry takes a very long time. Does anyone have an idea whether this could be done faster (or whether the way I approached this is not the best idea)?
Many thanks in advance!
A:
Please do not write this code:
while condition is False:
Boolean conditions are boolean for cryin' out loud, so they can be tested (or negated and tested) directly:
while not condition:
Your second while loop isn't written as "while condition is True:", I'm curious why you felt the need to test "is False" in the first one.
Pulling out the dis module, I thought I'd dissect this a little further. In my pyparsing experience, function calls are total performance killers, so it would be nice to avoid function calls if possible. Here is your original test:
>>> test = lambda t : t.startswith('customernum') is False
>>> dis.dis(test)
1 0 LOAD_FAST 0 (t)
3 LOAD_ATTR 0 (startswith)
6 LOAD_CONST 0 ('customernum')
9 CALL_FUNCTION 1
12 LOAD_GLOBAL 1 (False)
15 COMPARE_OP 8 (is)
18 RETURN_VALUE
Two expensive things happen here, CALL_FUNCTION and LOAD_GLOBAL. You could cut back on LOAD_GLOBAL by defining a local name for False:
>>> test = lambda t,False=False : t.startswith('customernum') is False
>>> dis.dis(test)
1 0 LOAD_FAST 0 (t)
3 LOAD_ATTR 0 (startswith)
6 LOAD_CONST 0 ('customernum')
9 CALL_FUNCTION 1
12 LOAD_FAST 1 (False)
15 COMPARE_OP 8 (is)
18 RETURN_VALUE
But what if we just drop the 'is' test completely?:
>>> test = lambda t : not t.startswith('customernum')
>>> dis.dis(test)
1 0 LOAD_FAST 0 (t)
3 LOAD_ATTR 0 (startswith)
6 LOAD_CONST 0 ('customernum')
9 CALL_FUNCTION 1
12 UNARY_NOT
13 RETURN_VALUE
We've collapsed a LOAD_xxx and COMPARE_OP with a simple UNARY_NOT. "is False" certainly isn't helping the performance cause any.
Now what if we can do some gross elimination of a line without doing any function calls at all. If the first character of the line is not a 'c', there is no way it will startswith('customernum'). Let's try that:
>>> test = lambda t : t[0] != 'c' and not t.startswith('customernum')
>>> dis.dis(test)
1 0 LOAD_FAST 0 (t)
3 LOAD_CONST 0 (0)
6 BINARY_SUBSCR
7 LOAD_CONST 1 ('c')
10 COMPARE_OP 3 (!=)
13 JUMP_IF_FALSE 14 (to 30)
16 POP_TOP
17 LOAD_FAST 0 (t)
20 LOAD_ATTR 0 (startswith)
23 LOAD_CONST 2 ('customernum')
26 CALL_FUNCTION 1
29 UNARY_NOT
>> 30 RETURN_VALUE
(Note that using [0] to get the first character of a string does not create a slice - this is in fact very fast.)
Now, assuming there are not a large number of lines starting with 'c', the rough-cut filter can eliminate a line using all fairly fast instructions. In fact, by testing "t[0] != 'c'" instead of "not t[0] == 'c'" we save ourselves an extraneous UNARY_NOT instruction.
So using this learning about short-cut optimization and I suggest changing this code:
while sline.startswith("customernum: ") is False:
sline = txtdb.readline()
while sline.startswith("customernum: "):
... do the rest of the customer data stuff...
To this:
for sline in txtdb:
if sline[0] == 'c' and \
sline.startswith("customernum: "):
... do the rest of the customer data stuff...
Note that I have also removed the .readline() function call, and just iterate over the file using "for sline in txtdb".
I realize Alex has provided a different body of code entirely for finding that first 'customernum' line, but I would try optimizing within the general bounds of your algorithm, before pulling out big but obscure block reading guns.
A:
The general idea for optimization is to proceed "by big blocks" (mostly-ignoring line structure) to locate the first line of interest, then move on to by-line processing for the rest). It's somewhat finicky and error-prone (off-by-one and the like) so it really needs testing, but the general idea is as follows...:
import itertools
def readloop(DBFILE):
txtdb=open(DBFILE, 'r')
tag = "customernum: "
BIGBLOCK = 1024 * 1024
# locate first occurrence of tag at line-start
# (assumes the VERY FIRST line doesn't start that way,
# else you need a special-case and slight refactoring)
blob = ''
while True:
blob = blob + txtdb.read(BIGBLOCK)
if not blob:
# tag not present at all -- warn about that, then
return
where = blob.find('\n' + tag)
if where != -1: # found it!
blob = blob[where+1:] + txtdb.readline()
break
blob = blob[-len(tag):]
# now make a by-line iterator over the part of interest
thelines = itertools.chain(blob.splitlines(1), txtdb)
sline = next(thelines, '')
while sline.startswith(tag):
data = []
data.append(sline)
sline = next(thelines, '')
while not sline.startswith(tag):
data.append(sline)
sline = next(thelines, '')
if not sline:
break
customernum = getitem(data, "customernum: ")
street = getitem(data, "street: ")
country = getitem(data, "country: ")
zip = getitem(data, "zip: ")
Here, I've tried to keep as much of your structure intact as feasible, doing only minor enhancements beyond the "big idea" of this refactoring.
A:
I guess you are writing this import script and it gets boring to wait while testing it, since the data stays the same all the time.
You can run the script once to detect the actual positions in the file you want to jump to, with print txtdb.tell(). Write those down and replace the searching code with txtdb.seek(pos). Basically that's building an index for the file ;-)
Another, more conventional way would be to read data in larger chunks, a few MB at a time, not just the few bytes of a line.
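A sketch of the first idea applied to the loop from the question (one discovery run prints the offset, later runs seek straight past the preamble):
# discovery run: find and note the offset of the first "customernum: " line
txtdb = open(DBFILE, 'r')
while True:
    pos = txtdb.tell()
    sline = txtdb.readline()
    if not sline or sline.startswith("customernum: "):
        print pos            # write this number down, or store it in a small index file
        break

# later runs: skip the preamble entirely
txtdb = open(DBFILE, 'r')
txtdb.seek(pos)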
A:
This might help: Python Performance Part 2: Parsing Large Strings for 'A Href' Hypertext
A:
Tell us more about the file.
Can you use file.seek to do a binary search? Seek to the halfway mark, read a few lines, determine if you are before or after the part you need, recurse. That will turn your O(n) search into O(logn).
|
How to improve speed of this readline loop in python?
|
i'm importing several parts of a Databasedump in text Format into MySQL, the problem is
that before the interesting Data there is very much non-interesting stuff infront.
I wrote this loop to get to the needed data:
def readloop(DBFILE):
txtdb=open(DBFILE, 'r')
sline = ""
# loop till 1st "customernum:" is found
while sline.startswith("customernum: ") is False:
sline = txtdb.readline()
while sline.startswith("customernum: "):
data = []
data.append(sline)
sline = txtdb.readline()
while sline.startswith("customernum: ") is False:
data.append(sline)
sline = txtdb.readline()
if len(sline) == 0:
break
customernum = getitem(data, "customernum: ")
street = getitem(data, "street: ")
country = getitem(data, "country: ")
zip = getitem(data, "zip: ")
The Textfile is pretty huge, so just looping till the first wanted entry takes very much time. Anyone has an idea if this could be done faster (or if the whole way i fixed this is not the best idea) ?
Many thanks in advance!
|
[
"Please do not write this code:\nwhile condition is False:\n\nBoolean conditions are boolean for cryin' out loud, so they can be tested (or negated and tested) directly:\nwhile not condition:\n\nYour second while loop isn't written as \"while condition is True:\", I'm curious why you felt the need to test \"is False\" in the first one.\nPulling out the dis module, I thought I'd dissect this a little further. In my pyparsing experience, function calls are total performance killers, so it would be nice to avoid function calls if possible. Here is your original test:\n>>> test = lambda t : t.startswith('customernum') is False\n>>> dis.dis(test)\n 1 0 LOAD_FAST 0 (t)\n 3 LOAD_ATTR 0 (startswith)\n 6 LOAD_CONST 0 ('customernum')\n 9 CALL_FUNCTION 1\n 12 LOAD_GLOBAL 1 (False)\n 15 COMPARE_OP 8 (is)\n 18 RETURN_VALUE\n\nTwo expensive things happen here, CALL_FUNCTION and LOAD_GLOBAL. You could cut back on LOAD_GLOBAL by defining a local name for False:\n>>> test = lambda t,False=False : t.startswith('customernum') is False\n>>> dis.dis(test)\n 1 0 LOAD_FAST 0 (t)\n 3 LOAD_ATTR 0 (startswith)\n 6 LOAD_CONST 0 ('customernum')\n 9 CALL_FUNCTION 1\n 12 LOAD_FAST 1 (False)\n 15 COMPARE_OP 8 (is)\n 18 RETURN_VALUE\n\nBut what if we just drop the 'is' test completely?:\n>>> test = lambda t : not t.startswith('customernum')\n>>> dis.dis(test)\n 1 0 LOAD_FAST 0 (t)\n 3 LOAD_ATTR 0 (startswith)\n 6 LOAD_CONST 0 ('customernum')\n 9 CALL_FUNCTION 1\n 12 UNARY_NOT\n 13 RETURN_VALUE\n\nWe've collapsed a LOAD_xxx and COMPARE_OP with a simple UNARY_NOT. \"is False\" certainly isn't helping the performance cause any.\nNow what if we can do some gross elimination of a line without doing any function calls at all. If the first character of the line is not a 'c', there is no way it will startswith('customernum'). Let's try that:\n>>> test = lambda t : t[0] != 'c' and not t.startswith('customernum')\n>>> dis.dis(test)\n 1 0 LOAD_FAST 0 (t)\n 3 LOAD_CONST 0 (0)\n 6 BINARY_SUBSCR\n 7 LOAD_CONST 1 ('c')\n 10 COMPARE_OP 3 (!=)\n 13 JUMP_IF_FALSE 14 (to 30)\n 16 POP_TOP\n 17 LOAD_FAST 0 (t)\n 20 LOAD_ATTR 0 (startswith)\n 23 LOAD_CONST 2 ('customernum')\n 26 CALL_FUNCTION 1\n 29 UNARY_NOT\n >> 30 RETURN_VALUE\n\n(Note that using [0] to get the first character of a string does not create a slice - this is in fact very fast.)\nNow, assuming there are not a large number of lines starting with 'c', the rough-cut filter can eliminate a line using all fairly fast instructions. In fact, by testing \"t[0] != 'c'\" instead of \"not t[0] == 'c'\" we save ourselves an extraneous UNARY_NOT instruction. \nSo using this learning about short-cut optimization and I suggest changing this code:\nwhile sline.startswith(\"customernum: \") is False:\n sline = txtdb.readline()\n\nwhile sline.startswith(\"customernum: \"):\n ... do the rest of the customer data stuff...\n\nTo this:\nfor sline in txtdb:\n if sline[0] == 'c' and \\ \n sline.startswith(\"customernum: \"):\n ... do the rest of the customer data stuff...\n\nNote that I have also removed the .readline() function call, and just iterate over the file using \"for sline in txtdb\".\nI realize Alex has provided a different body of code entirely for finding that first 'customernum' line, but I would try optimizing within the general bounds of your algorithm, before pulling out big but obscure block reading guns.\n",
"The general idea for optimization is to proceed \"by big blocks\" (mostly-ignoring line structure) to locate the first line of interest, then move on to by-line processing for the rest). It's somewhat finicky and error-prone (off-by-one and the like) so it really needs testing, but the general idea is as follows...:\nimport itertools\n\ndef readloop(DBFILE):\n txtdb=open(DBFILE, 'r')\n tag = \"customernum: \"\n BIGBLOCK = 1024 * 1024\n # locate first occurrence of tag at line-start\n # (assumes the VERY FIRST line doesn't start that way,\n # else you need a special-case and slight refactoring)\n blob = ''\n while True:\n blob = blob + txtdb.read(BIGBLOCK)\n if not blob:\n # tag not present at all -- warn about that, then\n return\n where = blob.find('\\n' + tag)\n if where != -1: # found it!\n blob = blob[where+1:] + txtdb.readline()\n break\n blob = blob[-len(tag):]\n # now make a by-line iterator over the part of interest\n thelines = itertools.chain(blob.splitlines(1), txtdb)\n sline = next(thelines, '')\n while sline.startswith(tag):\n data = []\n data.append(sline)\n sline = next(thelines, '')\n while not sline.startswith(tag):\n data.append(sline)\n sline = next(thelines, '')\n if not sline:\n break\n customernum = getitem(data, \"customernum: \")\n street = getitem(data, \"street: \")\n country = getitem(data, \"country: \")\n zip = getitem(data, \"zip: \")\n\nHere, I've tried to keep as much of your structure intact as feasible, doing only minor enhancements beyond the \"big idea\" of this refactoring.\n",
"I guess you are writing this import script and it gets boring to wait during testing it, so the data stays the same all the time.\nYou can run the script once to detect the actual positions in the file you want to jump to, with print txtdb.tell(). Write those down and replace the searching code with txtdb.seek( pos ). Basically that's builing an index for the file ;-)\nAnother more convetional way would be to read data in larger chunks, a few MB at a time, not just the few bytes on a line.\n",
"This might help: Python Performance Part 2: Parsing Large Strings for 'A Href' Hypertext\n",
"Tell us more about the file.\nCan you use file.seek to do a binary search? Seek to the halfway mark, read a few lines, determine if you are before or after the part you need, recurse. That will turn your O(n) search into O(logn).\n"
] |
[
5,
2,
1,
0,
0
] |
[] |
[] |
[
"loops",
"python",
"readline"
] |
stackoverflow_0001415369_loops_python_readline.txt
|
Q:
How to create hover effect on StaticBitmap in wxpython?
I want to create a hover effect on a StaticBitmap: if the mouse cursor is over the bitmap, show one image; if not, show a second image. It's a trivial program (it works perfectly with a button). However, StaticBitmap doesn't emit EVT_ENTER_WINDOW / EVT_LEAVE_WINDOW events.
I can work with EVT_MOTION, but if images are switched when the cursor is on the edge of the image, the switch sometimes doesn't work (mainly when moving quickly over the edge).
Example code:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import wx
def onWindow(event):
print "window event:", event.m_x, event.m_y
def onMotion(event):
print "motion event:", event.m_x, event.m_y
app = wx.App()
imageA = wx.Image("b.gif", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
imageB = wx.Image("a.gif", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
frame = wx.Frame(None, wx.ID_ANY, title="Hover effect", size=(100+imageA.GetWidth(), 100+imageA.GetHeight()))
w = wx.Window(frame)
bmp = wx.StaticBitmap(w, -1, imageA, (50, 50), (imageA.GetWidth(), imageA.GetHeight()))
bmp.Bind(wx.EVT_MOTION, onMotion)
bmp.Bind(wx.EVT_ENTER_WINDOW, onWindow)
bmp.Bind(wx.EVT_LEAVE_WINDOW, onWindow)
frame.Show()
app.MainLoop()
A:
It looks like this is a wxGTK bug, ENTER and LEAVE events work fine on windows. You should direct the attention of the core developers to the problem, a good place to do this is their bug tracker. This is an issue you should not have to work around IMHO.
I have found that GenericButtons do not have this problem on wxGTK, so maybe you can use that until StaticBitmap gets fixed.
#!/usr/bin/python
# -*- coding: utf-8 -*-
import wx
from wx.lib import buttons
def onWindow(event):
print "window event:", event.m_x, event.m_y
def onMotion(event):
print "motion event:", event.m_x, event.m_y
app = wx.App()
imageA = wx.Image("b.gif", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
imageB = wx.Image("a.gif", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
frame = wx.Frame(None, wx.ID_ANY, title="Hover effect", size=(100+imageA.GetWidth(), 100+imageA.GetHeight()))
w = wx.Window(frame)
#bmp = wx.StaticBitmap(w, -1, imageA, (50, 50), (imageA.GetWidth(), imageA.GetHeight()))
bmp = buttons.GenBitmapButton(w, -1, imageA, style=wx.BORDER_NONE)
#bmp.Bind(wx.EVT_MOTION, onMotion)
bmp.Bind(wx.EVT_ENTER_WINDOW, onWindow)
bmp.Bind(wx.EVT_LEAVE_WINDOW, onWindow)
frame.Show()
app.MainLoop()
A:
There may be a bug in the wxStaticBitmap implementation, but if wxBitmapButton works you can use it for the same effect, with less code:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import wx
app = wx.App()
frame = wx.Frame(None, wx.ID_ANY, title="Hover effect")
w = wx.Window(frame)
c = wx.BitmapButton(w, -1, wx.EmptyBitmap(25,25), style = wx.NO_BORDER)
c.SetBitmapHover(wx.EmptyBitmap(3,3))
frame.Show()
app.MainLoop()
|
How to create hover effect on StaticBitmap in wxpython?
|
I want to create hover effect on StaticBitmap - If the cursor of mouse is over the the bitmap, shows one image, if not, shows second image. It's trivial program (works perfectly with a button). However, StaticBitmap doesn't emit EVT_WINDOW_ENTER, EVT_WINDOW_LEAVE events.
I can work with EVT_MOTION. If images are switched when the cursor is on the edge of image, switch sometimes doesn't work. (Mainly with fast moving over the edge).
Example code:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import wx
def onWindow(event):
print "window event:", event.m_x, event.m_y
def onMotion(event):
print "motion event:", event.m_x, event.m_y
app = wx.App()
imageA = wx.Image("b.gif", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
imageB = wx.Image("a.gif", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
frame = wx.Frame(None, wx.ID_ANY, title="Hover effect", size=(100+imageA.GetWidth(), 100+imageA.GetHeight()))
w = wx.Window(frame)
bmp = wx.StaticBitmap(w, -1, imageA, (50, 50), (imageA.GetWidth(), imageA.GetHeight()))
bmp.Bind(wx.EVT_MOTION, onMotion)
bmp.Bind(wx.EVT_ENTER_WINDOW, onWindow)
bmp.Bind(wx.EVT_LEAVE_WINDOW, onWindow)
frame.Show()
app.MainLoop()
|
[
"It looks like this is a wxGTK bug, ENTER and LEAVE events work fine on windows. You should direct the attention of the core developers to the problem, a good place to do this is their bug tracker. This is an issue you should not have to work around IMHO.\nI have found that GenericButtons do not have this problem on wxGTK, so maybe you can use that until StaticBitmap gets fixed.\n#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\nimport wx\nfrom wx.lib import buttons\n\ndef onWindow(event):\n print \"window event:\", event.m_x, event.m_y\n\ndef onMotion(event):\n print \"motion event:\", event.m_x, event.m_y\n\napp = wx.App()\n\nimageA = wx.Image(\"b.gif\", wx.BITMAP_TYPE_ANY).ConvertToBitmap()\nimageB = wx.Image(\"a.gif\", wx.BITMAP_TYPE_ANY).ConvertToBitmap()\n\nframe = wx.Frame(None, wx.ID_ANY, title=\"Hover effect\", size=(100+imageA.GetWidth(), 100+imageA.GetHeight()))\n\nw = wx.Window(frame)\n#bmp = wx.StaticBitmap(w, -1, imageA, (50, 50), (imageA.GetWidth(), imageA.GetHeight()))\nbmp = buttons.GenBitmapButton(w, -1, imageA, style=wx.BORDER_NONE)\n#bmp.Bind(wx.EVT_MOTION, onMotion)\nbmp.Bind(wx.EVT_ENTER_WINDOW, onWindow)\nbmp.Bind(wx.EVT_LEAVE_WINDOW, onWindow)\n\nframe.Show()\napp.MainLoop()\n\n",
"There may be bug in wxStaticBitmap implementation, but if wxBitmapButton works you can use it for same effect, with less code\n#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\nimport wx\n\napp = wx.App()\n\nframe = wx.Frame(None, wx.ID_ANY, title=\"Hover effect\")\nw = wx.Window(frame)\nc = wx.BitmapButton(w, -1, wx.EmptyBitmap(25,25), style = wx.NO_BORDER)\nc.SetBitmapHover(wx.EmptyBitmap(3,3))\nframe.Show()\n\napp.MainLoop()\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"hover",
"image",
"python",
"wxpython",
"wxwidgets"
] |
stackoverflow_0001415727_hover_image_python_wxpython_wxwidgets.txt
|
Q:
What is the least resource intense data structure to distribute with a Python Application
I am building an application to distribute to fellow academics. The application will take three parameters that the user submits and output a list of dates and codes related to those events. I have been building this using a dictionary and intended to build the application so that the dictionary is loaded from a pickle file when the application calls for it. The parameters supplied by the user will be used to look up the needed output.
I selected this structure because I have gotten pretty comfortable with dictionaries and pickle files and I see this going out the door with the smallest learning curve on my part. There might be as many as two million keys in the dictionary. I have been satisfied with the performance on my machine with a reasonable subset. I have already thought through how to break the dictionary apart if I have any performance concerns when the whole thing is put together. I am not really that worried about the amount of disk space on their machines, as we are working with terabyte storage values.
Having said all of that, I have been poking around in the docs and am wondering if I need to invest some time to learn and implement an alternative data storage format. The only reason I can think of is if there is an alternative that could increase the lookup speed by a factor of three to five or more.
A:
The standard shelve module will give you a persistent dictionary that is stored in a dbm style database. Providing that your keys are strings and your values are picklable (since you're using pickle already, this must be true), this could be a better solution than simply storing the entire dictionary in a single pickle.
Example:
>>> import shelve
>>> d = shelve.open('mydb')
>>> d['key1'] = 12345
>>> d['key2'] = 'value2'
>>> print d['key1']
12345
>>> d.close()
I'd also recommend Durus, but that requires some extra learning on your part. It'll let you create a PersistentDictionary. From memory, keys can be any pickleable object.
A:
To get fast lookups, use the standard Python dbm module (see http://docs.python.org/library/dbm.html) to build your database file, and do lookups in it. The dbm file format may not be cross-platform, so you may want to distribute your data in Pickle or repr or JSON or YAML or XML format, and build the dbm database when the user first runs your program.
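A minimal sketch of that approach (assuming Python 2, where the generic module is called anydbm; the file name and keys here are made up):
import anydbm

db = anydbm.open('events.dbm', 'c')        # 'c' creates the file if it doesn't exist
db['2001-01-15'] = 'CODE42'                # dbm keys and values must be strings
db.close()

db = anydbm.open('events.dbm', 'r')        # later: read-only lookups
print db['2001-01-15']
db.close()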
A:
How much memory can your application reasonably use? Is this going to be running on each user's desktop, or will there just be one deployment somewhere?
A python dictionary in memory can certainly cope with two million keys. You say that you've got a subset of the data; do you have the whole lot? Maybe you should throw the full dataset at it and see whether it copes.
I just tested creating a two million record dictionary; the total memory usage for the process came in at about 200MB. If speed is your primary concern and you've got the RAM to spare, you're probably not going to do better than an in-memory python dictionary.
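A rough sketch of how such a test might look (the key/value shapes are invented, and the resource module is Unix-only -- on Windows you would just watch Task Manager instead):
import resource

d = dict(('key%07d' % i, ('2009-01-01', 'CODE')) for i in xrange(2000000))
print len(d)
print resource.getrusage(resource.RUSAGE_SELF).ru_maxrss   # peak resident size; units vary by OS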
A:
See this solution at SourceForge, esp. the "endnotes" documentation:
y_serial.py module :: warehouse Python objects with SQLite
"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."
http://yserial.sourceforge.net
A:
Here are three things you can try:
Compress the pickled dictionary with zlib. pickle.dumps(dict).encode("zlib")
Make your own serializing format (shouldn't be too hard).
Load the data in a sqlite database.
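For instance, the first option might look roughly like this (file name and sample data are made up):
import cPickle, zlib

table = {'2001-01-15': ['CODE1', 'CODE7']}
open('lookup.pkl.z', 'wb').write(zlib.compress(cPickle.dumps(table, -1)))

restored = cPickle.loads(zlib.decompress(open('lookup.pkl.z', 'rb').read()))
print restored['2001-01-15']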
|
What is the least resource intense data structure to distribute with a Python Application
|
I am building an application to distribute to fellow academics. The application will take three parameters that the user submits and output a list of dates and codes related to those events. I have been building this using a dictionary and intended to build the application so that the dictionary is loaded from a pickle file when the application calls for it. The parameters supplied by the user will be used to look up the needed output.
I selected this structure because I have gotten pretty comfortable with dictionaries and pickle files and I see this going out the door with the smallest learning curve on my part. There might be as many as two million keys in the dictionary. I have been satisfied with the performance on my machine with a reasonable subset. I have already thought through how to break the dictionary apart if I have any performance concerns when the whole thing is put together. I am not really that worried about the amount of disk space on their machines, as we are working with terabyte storage values.
Having said all of that, I have been poking around in the docs and am wondering if I need to invest some time to learn and implement an alternative data storage format. The only reason I can think of is if there is an alternative that could increase the lookup speed by a factor of three to five or more.
|
[
"The standard shelve module will give you a persistent dictionary that is stored in a dbm style database. Providing that your keys are strings and your values are picklable (since you're using pickle already, this must be true), this could be a better solution that simply storing the entire dictionary in a single pickle.\nExample:\n>>> import shelve\n>>> d = shelve.open('mydb')\n>>> d['key1'] = 12345\n>>> d['key2'] = value2\n>>> print d['key1']\n12345\n>>> d.close()\n\nI'd also recommend Durus, but that requires some extra learning on your part. It'll let you create a PersistentDictionary. From memory, keys can be any pickleable object.\n",
"To get fast lookups, use the standard Python dbm module (see http://docs.python.org/library/dbm.html) to build your database file, and do lookups in it. The dbm file format may not be cross-platform, so you may want to to distrubute your data in Pickle or repr or JSON or YAML or XML format, and build the dbm database the user runs your program.\n",
"How much memory can your application reasonably use? Is this going to be running on each user's desktop, or will there just be one deployment somewhere?\nA python dictionary in memory can certainly cope with two million keys. You say that you've got a subset of the data; do you have the whole lot? Maybe you should throw the full dataset at it and see whether it copes.\nI just tested creating a two million record dictionary; the total memory usage for the process came in at about 200MB. If speed is your primary concern and you've got the RAM to spare, you're probably not going to do better than an in-memory python dictionary.\n",
"See this solution at SourceForge, esp. the \"endnotes\" documentation:\ny_serial.py module :: warehouse Python objects with SQLite\n\"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful \"standard\" module for a database to store schema-less data.\"\nhttp://yserial.sourceforge.net\n",
"Here are three things you can try:\n\nCompress the pickled dictionary with zlib. pickle.dumps(dict).encode(\"zlib\")\nMake your own serializing format (shouldn't be too hard).\nLoad the data in a sqlite database.\n\n"
] |
[
6,
2,
2,
1,
0
] |
[] |
[] |
[
"database",
"dictionary",
"python"
] |
stackoverflow_0000885625_database_dictionary_python.txt
|
Q:
Using SQLite in a Python program
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.
What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better).
I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn).
Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
A:
Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements. SQLite is just a file you access with SQL, it's much simpler.
Do the following.
Add a table to your database for "Components" or "Versions" or "Configuration" or "Release" or something administrative like that.
CREATE TABLE REVISION(
RELEASE_NUMBER CHAR(20)
);
In your application, connect to your database normally.
Execute a simple query against the revision table. Here's what can happen.
The query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.
The query succeeds but returns no rows or the release number is lower than expected: your database exists, but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this.
The query succeeds, and the release number is the expected value. Do nothing more, your database is configured correctly.
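A minimal sketch of that check with sqlite3 (build_schema and migrate are hypothetical hooks standing in for your own CREATE/ALTER scripts):
import sqlite3

def open_db(path, expected_release='1.0'):
    con = sqlite3.connect(path)
    try:
        row = con.execute('SELECT RELEASE_NUMBER FROM REVISION').fetchone()
    except sqlite3.OperationalError:
        build_schema(con)                      # hypothetical: run your CREATE statements
        con.execute('INSERT INTO REVISION VALUES (?)', (expected_release,))
        con.commit()
    else:
        if row is None or row[0] < expected_release:
            migrate(con, row)                  # hypothetical: run your DROP/CREATE/ALTER scripts
    return con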
A:
AFAIK an SQLITE database is just a file.
To check if the database exists, check for file existence.
When you open a SQLITE database it will automatically create one if the file that backs it up is not in place.
If you try and open a file as a sqlite3 database that is NOT a database, you will get this:
"sqlite3.DatabaseError: file is encrypted or is not a database"
so check to see if the file exists and also make sure to try and catch the exception in case the file is not a sqlite3 database
A:
SQLite automatically creates the database file the first time you try to use it. The SQL statements for creating tables can use IF NOT EXISTS to make the commands only take effect if the table has not been created. This way you don't need to check for the database's existence beforehand: SQLite can take care of that for you.
The main thing I would still be worried about is that executing CREATE TABLE IF NOT EXISTS for every web transaction (say) would be inefficient; you can avoid that by having the program keep an (in-memory) flag saying whether it has already created the tables, so it runs the CREATE TABLE script only once per run. This would still allow for you to delete the database and start over during debugging.
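A minimal sketch, assuming the stocks table layout used in the other answers here:
import sqlite3

con = sqlite3.connect('mydb.sqlite')
con.execute('''CREATE TABLE IF NOT EXISTS stocks
               (date text, trans text, symbol text, qty real, price real)''')
con.commit()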
A:
As @diciu pointed out, the database file will be created by sqlite3.connect.
If you want to take a special action when the file is not there, you'll have to explicitly check for existence:
import os
import sqlite3
if not os.path.exists(mydb_path):
#create new DB, create table stocks
con = sqlite3.connect(mydb_path)
con.execute('''create table stocks
(date text, trans text, symbol text, qty real, price real)''')
else:
#use existing DB
con = sqlite3.connect(mydb_path)
...
A:
SQLite doesn't throw an exception if you create a new database with a name that already exists; it will just connect to it. Since SQLite is a file-based database, I suggest you just check for the existence of the file.
About your second problem, to check whether a table has already been created, just catch the exception. An exception "sqlite3.OperationalError: table TEST already exists" is thrown if the table already exists.
import sqlite3
import os
database_name = "newdb.db"
if not os.path.isfile(database_name):
    print "the database does not exist yet"
db_connection = sqlite3.connect(database_name)
db_cursor = db_connection.cursor()
try:
db_cursor.execute('CREATE TABLE TEST (a INTEGER);')
except sqlite3.OperationalError, msg:
print msg
A:
Working with raw SQL is painful in every language I've picked up. SQLAlchemy has turned out to be the easiest option, because querying and committing with it are clean and trouble-free.
Here are some basic steps for actually using SQLAlchemy in your app; better details can be found in the documentation.
provide table definitions and create ORM-mappings
load database
ask it to create tables from the definitions (won't do so if they exist)
create session maker (optional)
create session
After creating a session, you can commit and query from the database.
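A minimal sketch of those steps (the Event model, column names and database file are invented; the declarative syntax assumes a reasonably recent SQLAlchemy):
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Event(Base):                      # table definition + ORM mapping in one step
    __tablename__ = 'events'
    id = Column(Integer, primary_key=True)
    code = Column(String)

engine = create_engine('sqlite:///events.db')
Base.metadata.create_all(engine)        # creates missing tables, skips existing ones
Session = sessionmaker(bind=engine)
session = Session()

session.add(Event(code='X1'))
session.commit()
print session.query(Event).count()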
A:
See this solution at SourceForge which covers your question in a tutorial manner, with instructive source code :
y_serial.py module :: warehouse Python objects with SQLite
"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."
http://yserial.sourceforge.net
A:
Yes, I was nuking out the problem. All I needed to do was check for the file and catch the IOError if it didn't exist.
Thanks for all the other answers. They may come in handy in the future.
|
Using SQLite in a Python program
|
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.
What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better).
I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn).
Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
|
[
"Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements. SQLite is just a file you access with SQL, it's much simpler.\nDo the following.\n\nAdd a table to your database for \"Components\" or \"Versions\" or \"Configuration\" or \"Release\" or something administrative like that. \nCREATE TABLE REVISION(\n RELEASE_NUMBER CHAR(20)\n);\nIn your application, connect to your database normally.\nExecute a simple query against the revision table. Here's what can happen.\n\n\nThe query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.\nThe query succeeds but returns no rows or the release number is lower than expected: your database exists, but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this.\nThe query succeeds, and the release number is the expected value. Do nothing more, your database is configured correctly.\n\n\n",
"AFAIK an SQLITE database is just a file.\nTo check if the database exists, check for file existence.\nWhen you open a SQLITE database it will automatically create one if the file that backs it up is not in place.\nIf you try and open a file as a sqlite3 database that is NOT a database, you will get this:\n\"sqlite3.DatabaseError: file is encrypted or is not a database\"\nso check to see if the file exists and also make sure to try and catch the exception in case the file is not a sqlite3 database\n",
"SQLite automatically creates the database file the first time you try to use it. The SQL statements for creating tables can use IF NOT EXISTS to make the commands only take effect if the table has not been created This way you don't need to check for the database's existence beforehand: SQLite can take care of that for you.\nThe main thing I would still be worried about is that executing CREATE TABLE IF EXISTS for every web transaction (say) would be inefficient; you can avoid that by having the program keep an (in-memory) variable saying whether it has created the database today, so it runs the CREATE TABLE script once per run. This would still allow for you to delete the database and start over during debugging.\n",
"As @diciu pointed out, the database file will be created by sqlite3.connect.\nIf you want to take a special action when the file is not there, you'll have to explicitly check for existance:\nimport os\nimport sqlite3\nif not os.path.exists(mydb_path):\n #create new DB, create table stocks\n con = sqlite3.connect(mydb_path)\n con.execute('''create table stocks\n (date text, trans text, symbol text, qty real, price real)''')\nelse:\n #use existing DB\n con = sqlite3.connect(mydb_path)\n...\n\n",
"\nSqlite doesn't throw an exception if you create a new database with the same name, it will just connect to it. Since sqlite is a file based database, I suggest you just check for the existence of the file.\nAbout your second problem, to check if a table has been already created, just catch the exception. An exception \"sqlite3.OperationalError: table TEST already exists\" is thrown if the table already exist.\n\nimport sqlite3\nimport os\ndatabase_name = \"newdb.db\"\nif not os.path.isfile(database_name):\n print \"the database already exist\"\ndb_connection = sqlite3.connect(database_name)\ndb_cursor = db_connection.cursor()\ntry:\n db_cursor.execute('CREATE TABLE TEST (a INTEGER);')\nexcept sqlite3.OperationalError, msg:\n print msg\n\n",
"Doing SQL in overall is horrible in any language I've picked up. SQLalchemy has shown to be easiest from them to use because actual query and committing with it is so clean and absent from troubles.\nHere's some basic steps on actually using sqlalchemy in your app, better details can be found from the documentation.\n\nprovide table definitions and create ORM-mappings\nload database\nask it to create tables from the definitions (won't do so if they exist)\ncreate session maker (optional)\ncreate session\n\nAfter creating a session, you can commit and query from the database.\n",
"See this solution at SourceForge which covers your question in a tutorial manner, with instructive source code :\ny_serial.py module :: warehouse Python objects with SQLite\n\"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful \"standard\" module for a database to store schema-less data.\"\nhttp://yserial.sourceforge.net\n",
"Yes, I was nuking out the problem. All I needed to do was check for the file and catch the IOError if it didn't exist.\nThanks for all the other answers. They may come in handy in the future.\n"
] |
[
29,
13,
7,
5,
4,
3,
2,
0
] |
[] |
[] |
[
"exception",
"python",
"sqlite"
] |
stackoverflow_0000211501_exception_python_sqlite.txt
|
Q:
How to pickle numpy's Inf objects?
When trying to pickle the object Inf as defined in numpy (I think), the dumping goes Ok but the loading fails:
>>> cPickle.dump(Inf, file("c:/temp/a.pcl",'wb'))
>>> cPickle.load(file("c:/temp/a.pcl",'rb'))
Traceback (most recent call last):
File "<pyshell#257>", line 1, in <module>
cPickle.load(file("c:/temp/a.pcl",'rb'))
ValueError: could not convert string to float
>>> type(Inf)
<type 'float'>
Why is that?
And moreover - is there a way to fix that? I want to pickle something that has Inf in it - changing it to something else will flaw the elegance of the program...
Thanks
A:
If you specify a pickle protocol more than zero, it will work. Protocol is often specified as -1, meaning use the latest and greatest protocol:
>>> cPickle.dump(Inf, file("c:/temp/a.pcl",'wb'), -1)
>>> cPickle.load(file("c:/temp/a.pcl",'rb'))
1.#INF -- may be platform dependent what prints here.
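As a small follow-up sketch (assuming Python 2.6+, where float('inf') is parsed portably), the same thing with the protocol constant spelled by name:
>>> import cPickle
>>> blob = cPickle.dumps(float('inf'), cPickle.HIGHEST_PROTOCOL)
>>> cPickle.loads(blob)
inf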
|
How to pickle numpy's Inf objects?
|
When trying to pickle the object Inf as defined in numpy (I think), the dumping goes Ok but the loading fails:
>>> cPickle.dump(Inf, file("c:/temp/a.pcl",'wb'))
>>> cPickle.load(file("c:/temp/a.pcl",'rb'))
Traceback (most recent call last):
File "<pyshell#257>", line 1, in <module>
cPickle.load(file("c:/temp/a.pcl",'rb'))
ValueError: could not convert string to float
>>> type(Inf)
<type 'float'>
Why is that?
And moreover - is there a way to fix that? I want to pickle something that has Inf in it - changing it to something else will flaw the elegance of the program...
Thanks
|
[
"If you specify a pickle protocol more than zero, it will work. Protocol is often specified as -1, meaning use the latest and greatest protocol:\n>>> cPickle.dump(Inf, file(\"c:/temp/a.pcl\",'wb'), -1)\n>>> cPickle.load(file(\"c:/temp/a.pcl\",'rb'))\n1.#INF -- may be platform dependent what prints here.\n\n"
] |
[
5
] |
[
"Try this solution at SourceForge which will work for any arbitrary Python object:\ny_serial.py module :: warehouse Python objects with SQLite\n\"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful \"standard\" module for a database to store schema-less data.\"\nhttp://yserial.sourceforge.net\n"
] |
[
-1
] |
[
"numpy",
"pickle",
"python"
] |
stackoverflow_0001250367_numpy_pickle_python.txt
|
Q:
pythonic way to convert variable to list
I have a function whose input argument can either be an element or a list of elements. If this argument is a single element then I put it in a list so I can iterate over the input in a consistent manner.
Currently I have this:
def my_func(input):
if not isinstance(input, list): input = [input]
for e in input:
...
I am working with an existing API so I can't change the input parameters. Using isinstance() feels hacky, so is there a proper way to do this?
A:
Typically, strings (plain and unicode) are the only iterables that you want to nevertheless consider as "single elements" -- the basestring builtin exists SPECIFICALLY to let you test for either kind of strings with isinstance, so it's very UN-grotty for that special case;-).
So my suggested approach for the most general case is:
if isinstance(input, basestring): input = [input]
else:
try: iter(input)
except TypeError: input = [input]
else: input = list(input)
This is THE way to treat EVERY iterable EXCEPT strings as a list directly, strings and numbers and other non-iterables as scalars (to be normalized into single-item lists).
I'm explicitly making a list out of every kind of iterable so you KNOW you can further on perform EVERY kind of list trick - sorting, iterating more than once, adding or removing items to facilitate iteration, etc, all without altering the ACTUAL input list (if list indeed it was;-). If all you need is a single plain for loop then that last step is unnecessary (and indeed unhelpful if e.g. input is a huge open file) and I'd suggest an auxiliary generator instead:
def justLoopOn(input):
if isinstance(input, basestring):
yield input
else:
try:
for item in input:
yield item
except TypeError:
yield input
now in every single one of your functions needing such argument normalization, you just use:
for item in justLoopOn(input):
You can use an auxiliary normalizing-function even in the other case (where you need a real list for further nefarious purposes); actually, in such (rarer) cases, you can just do:
thelistforme = list(justLoopOn(input))
so that the (inevitably) somewhat-hairy normalization logic is just in ONE place, just as it should be!-)
A:
I like Andrei Vajna's suggestion of hasattr(var,'__iter__'). Note these results from some typical Python types:
>>> hasattr("abc","__iter__")
False
>>> hasattr((0,),"__iter__")
True
>>> hasattr({},"__iter__")
True
>>> hasattr(set(),"__iter__")
True
This has the added advantage of treating a string as a non-iterable - strings are a grey area, as sometimes you want to treat them as an element, other times as a sequence of characters.
Note that in Python 3 the str type does have the __iter__ attribute and this does not work:
>>> hasattr("abc", "__iter__")
True
A:
First, there is no general method that could tell a "single element" from a "list of elements", since by definition a list can be an element of another list.
I would say you need to define what kinds of data you might have, so that you might have:
any descendant of list against anything else
Test with isinstance(input, list) (so your example is correct)
any sequence type except strings (basestring in Python 2.x, str in Python 3.x)
Use the Sequence abstract base class: isinstance(myvar, collections.Sequence) and not isinstance(myvar, str) (see the sketch below)
some sequence type against known cases, like int, str, MyClass
Test with isinstance(input, (int, str, MyClass))
any iterable except strings:
Test with:
try:
input = iter(input) if not isinstance(input, str) else [input]
except TypeError:
input = [input]
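A minimal normalizing helper along those lines (assuming Python 2.6+, where collections.Sequence is available; the function name is made up):
import collections

def ensure_list(value):
    if isinstance(value, basestring):       # strings count as single elements
        return [value]
    if isinstance(value, collections.Sequence):
        return list(value)
    return [value]                          # ints, None, arbitrary objects, ...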
A:
You can put * before your argument, this way you'll always get a tuple:
def a(*p):
print type(p)
print p
a(4)
>>> <type 'tuple'>
>>> (4,)
a(4, 5)
>>> <type 'tuple'>
>>> (4,5,)
But that will force you to call your function with variable parameters; I don't know if that's acceptable for you.
A:
You can do direct type comparisons using type().
def my_func(input):
if not type(input) is list:
input = [input]
for e in input:
# do something
However, the way you have it (with isinstance) will allow any type derived from the list type to be passed through, thus preventing derived types from accidentally being wrapped.
A:
Your approach seems right to me.
It's similar to how you use atom? in Lisp when you iterate over lists and check the current item to see if it is a list or not, because if it is a list you want to process its items, too.
So, yeah, don't see anything wrong with that.
A:
That is an ok way to do it (don't forget to include tuples).
However, you may also want to consider if the argument has a __iter__ method or __getitem__ method. (note that strings have __getitem__ instead of __iter__.)
hasattr(arg, '__iter__') or hasattr(arg, '__getitem__')
This is probably the most general requirement for a list-like type than only checking the type.
|
pythonic way to convert variable to list
|
I have a function whose input argument can either be an element or a list of elements. If this argument is a single element then I put it in a list so I can iterate over the input in a consistent manner.
Currently I have this:
def my_func(input):
if not isinstance(input, list): input = [input]
for e in input:
...
I am working with an existing API so I can't change the input parameters. Using isinstance() feels hacky, so is there a proper way to do this?
|
[
"Typically, strings (plain and unicode) are the only iterables that you want to nevertheless consider as \"single elements\" -- the basestring builtin exists SPECIFICALLY to let you test for either kind of strings with isinstance, so it's very UN-grotty for that special case;-).\nSo my suggested approach for the most general case is:\n if isinstance(input, basestring): input = [input]\n else:\n try: iter(input)\n except TypeError: input = [input]\n else: input = list(input)\n\nThis is THE way to treat EVERY iterable EXCEPT strings as a list directly, strings and numbers and other non-iterables as scalars (to be normalized into single-item lists).\nI'm explicitly making a list out of every kind of iterable so you KNOW you can further on perform EVERY kind of list trick - sorting, iterating more than once, adding or removing items to facilitate iteration, etc, all without altering the ACTUAL input list (if list indeed it was;-). If all you need is a single plain for loop then that last step is unnecessary (and indeed unhelpful if e.g. input is a huge open file) and I'd suggest an auxiliary generator instead:\ndef justLoopOn(input):\n if isinstance(input, basestring):\n yield input\n else:\n try:\n for item in input:\n yield item\n except TypeError:\n yield input\n\nnow in every single one of your functions needing such argument normalization, you just use:\n for item in justLoopOn(input):\n\nYou can use an auxiliary normalizing-function even in the other case (where you need a real list for further nefarious purposes); actually, in such (rarer) cases, you can just do:\n thelistforme = list(justLoopOn(input))\n\nso that the (inevitably) somewhat-hairy normalization logic is just in ONE place, just as it should be!-)\n",
"I like Andrei Vajna's suggestion of hasattr(var,'__iter__'). Note these results from some typical Python types:\n>>> hasattr(\"abc\",\"__iter__\")\nFalse\n>>> hasattr((0,),\"__iter__\")\nTrue\n>>> hasattr({},\"__iter__\")\nTrue\n>>> hasattr(set(),\"__iter__\")\nTrue\n\nThis has the added advantage of treating a string as a non-iterable - strings are a grey area, as sometimes you want to treat them as an element, other times as a sequence of characters.\nNote that in Python 3 the str type does have the __iter__ attribute and this does not work:\n>>> hasattr(\"abc\", \"__iter__\")\nTrue\n\n",
"First, there is no general method that could tell a \"single element\" from \"list of elements\" since by definition list can be an element of another list.\nI would say you need to define what kinds of data you might have, so that you might have:\n\nany descendant of list against anything else\n\n\nTest with isinstance(input, list) (so your example is correct)\n\nany sequence type except strings (basestring in Python 2.x, str in Python 3.x)\n\n\nUse sequence metaclass: isinstance(myvar, collections.Sequence) and not isinstance(myvar, str)\n\nsome sequence type against known cases, like int, str, MyClass\n\nTest with isinstance(input, (int, str, MyClass))\n\nany iterable except strings:\n\n\nTest with \n\n\n.\n try: \n input = iter(input) if not isinstance(input, str) else [input]\n except TypeError:\n input = [input]\n\n",
"You can put * before your argument, this way you'll always get a tuple:\ndef a(*p):\n print type(p)\n print p\n\na(4)\n>>> <type 'tuple'>\n>>> (4,)\n\na(4, 5)\n>>> <type 'tuple'>\n>>> (4,5,)\n\nBut that will force you to call your function with variable parameters, I don't know if that 's acceptable for you.\n",
"You can do direct type comparisons using type().\ndef my_func(input):\n if not type(input) is list:\n input = [input]\n for e in input:\n # do something\n\nHowever, the way you have it will allow any type derived from the list type to be passed through. Thus preventing the any derived types from accidentally being wrapped.\n",
"Your aproach seems right to me.\nIt's similar to how you use atom? in Lisp when you iterate over lists and check the current item to see if it is a list or not, because if it is a list you want to process its items, too.\nSo, yeah, don't see anything wrong with that.\n",
"That is an ok way to do it (don't forget to include tuples).\nHowever, you may also want to consider if the argument has a __iter__ method or __getitem__ method. (note that strings have __getitem__ instead of __iter__.)\nhasattr(arg, '__iter__') or hasattr(arg, '__getitem__')\n\nThis is probably the most general requirement for a list-like type than only checking the type.\n"
] |
[
15,
10,
4,
2,
1,
0,
0
] |
[
"This seems like a reasonable way to do it. You're wanting to test if the element is a list, and this accomplishes that directly. It gets more complicated if you want to support other 'list-like' data types, too, for example:\nisinstance(input, (list, tuple))\n\nor more generally, abstract away the question:\ndef iterable(obj):\n try:\n len(obj)\n return True\n except TypeError:\n return False\n\nbut again, in summary, your method is simple and correct, which sounds good to me!\n"
] |
[
-1
] |
[
"arguments",
"list",
"python"
] |
stackoverflow_0001416646_arguments_list_python.txt
|
Q:
Simple data storing in Python
I'm looking for a simple solution using Python to store data as a flat file, such that each line is a string representation of an array that can be easily parsed.
I'm sure Python has a library for doing such a task easily, but so far all the approaches I have found seemed like they would be sloppy to get working, and I'm sure there is a better approach. So far I've tried:
the array.tofile() method, but I couldn't figure out how to get it to work with nested arrays of strings; it seemed geared towards integer data.
Lists and sets do not have a tofile method built in, so I would have had to parse and encode them manually.
CSV seemed like a good approach, but this would also require manually parsing it, and did not allow me to simply append new lines at the end - so any new calls to the CSV writer would overwrite the file's existing data.
I'm really trying to avoid using databases (maybe SQLite but it seems a bit overkill) because I'm trying to develop this to have no software prerequisites besides Python.
A:
In addition to pickle (mentioned above), there's json (built in to 2.6, available via simplejson before that), and marshal. Also, there's a reader in the same csv module the writer is in.
UPDATE: As S. Lott pointed out in a comment, there's also YAML, available via PyYAML, among others.
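For the one-array-per-line requirement, a sketch with json (file name and sample row are made up; on Python < 2.6 you'd import simplejson as json instead):
import json

f = open('rows.txt', 'a')                   # append mode: new lines go at the end
f.write(json.dumps(['chair', 'oak', 42]) + '\n')
f.close()

rows = [json.loads(line) for line in open('rows.txt')]
print rows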
A:
http://docs.python.org/library/pickle.html
A:
Must the file be human readable? If not, shelve is really easy to use.
A:
I'm looking for a simple solution using Python to store data as a flat file, such that each line is a string representation of an array that can be easily parsed.
Is the data only ever going to be parsed by Python programs? If not, then I'd avoid pickle et al (shelve and marshal) since they're very Python specific. JSON and YAML have the important advantage that parsers are easily available for most any language.
A:
This solution at SourceForge uses only standard Python modules:
y_serial.py module :: warehouse Python objects with SQLite
"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."
http://yserial.sourceforge.net
SQLite is not "overkill" at all -- you will be amazed how simple it is; plus it solves more general data persistance issues.
|
Simple data storing in Python
|
I'm looking for a simple solution using Python to store data as a flat file, such that each line is a string representation of an array that can be easily parsed.
I'm sure Python has a library for doing such a task easily, but so far all the approaches I have found seemed like they would be sloppy to get working, and I'm sure there is a better approach. So far I've tried:
the array.tofile() method, but I couldn't figure out how to get it to work with nested arrays of strings; it seemed geared towards integer data.
Lists and sets do not have a tofile method built in, so I would have had to parse and encode them manually.
CSV seemed like a good approach, but this would also require manually parsing it, and did not allow me to simply append new lines at the end - so any new calls to the CSV writer would overwrite the file's existing data.
I'm really trying to avoid using databases (maybe SQLite but it seems a bit overkill) because I'm trying to develop this to have no software prerequisites besides Python.
|
[
"In addition to pickle (mentioned above), there's json (built in to 2.6, available via simplejson before that), and marshal. Also, there's a reader in the same csv module the writer is in.\nUPDATE: As S. Lott pointed out in a comment, there's also YAML, available via PyYAML, among others.\n",
"http://docs.python.org/library/pickle.html\n",
"Must the file be human readable? If not, shelve is really easy to use.\n",
"\nI'm looking for a simple solution using Python to store data as a flat file, such that each line is a string representation of an array that can be easily parsed.\n\nIs the data only ever going to be parsed by Python programs? If not, then I'd avoid pickle et al (shelve and marshal) since they're very Python specific. JSON and YAML have the important advantage that parsers are easily available for most any language.\n",
"This solution at SourceForge uses only standard Python modules:\ny_serial.py module :: warehouse Python objects with SQLite\n\"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful \"standard\" module for a database to store schema-less data.\"\nhttp://yserial.sourceforge.net\nSQLite is not \"overkill\" at all -- you will be amazed how simple it is; plus it solves more general data persistance issues.\n"
] |
[
10,
6,
4,
2,
1
] |
[] |
[] |
[
"csv",
"file_io",
"fileparsing",
"multidimensional_array",
"python"
] |
stackoverflow_0000875228_csv_file_io_fileparsing_multidimensional_array_python.txt
|
Q:
Why use **kwargs in python? What are some real world advantages over using named arguments?
I come from a background in static languages. Can someone explain (ideally through example) the real world advantages of using **kwargs over named arguments?
To me it only seems to make the function call more ambiguous. Thanks.
A:
You may want to accept nearly-arbitrary named arguments for a series of reasons -- and that's what the **kw form lets you do.
The most common reason is to pass the arguments right on to some other function you're wrapping (decorators are one case of this, but FAR from the only one!) -- in this case, **kw loosens the coupling between wrapper and wrappee, as the wrapper doesn't have to know or care about all of the wrappee's arguments. Here's another, completely different reason:
d = dict(a=1, b=2, c=3, d=4)
if all the names had to be known in advance, then obviously this approach just couldn't exist, right? And btw, when applicable, I much prefer this way of making a dict whose keys are literal strings to:
d = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
simply because the latter is quite punctuation-heavy and hence less readable.
When none of the excellent reasons for accepting **kwargs applies, then don't accept it: it's as simple as that. IOW, if there's no good reason to allow the caller to pass extra named args with arbitrary names, don't allow that to happen -- just avoid putting a **kw form at the end of the function's signature in the def statement.
As for using **kw in a call, that lets you put together the exact set of named arguments that you must pass, each with corresponding values, in a dict, independently of a single call point, then use that dict at the single calling point. Compare:
if x: kw['x'] = x
if y: kw['y'] = y
f(**kw)
to:
if x:
if y:
f(x=x, y=y)
else:
f(x=x)
else:
if y:
f(y=y)
else:
f()
Even with just two possibilities (and of the very simplest kind!), the lack of **kw is already making the second option absolutely untenable and intolerable -- just imagine how it plays out when there are half a dozen possibilities, possibly in slightly richer interaction... without **kw, life would be absolute hell under such circumstances!
A:
Another reason you might want to use **kwargs (and *args) is if you're extending an existing method in a subclass. You want to pass all the existing arguments onto the superclass's method, but want to ensure that your class keeps working even if the signature changes in a future version:
class MySubclass(Superclass):
def __init__(self, *args, **kwargs):
self.myvalue = kwargs.pop('myvalue', None)
super(MySubclass, self).__init__(*args, **kwargs)
A:
Real-world examples:
Decorators - they're usually generic, so you can't specify the arguments upfront:
def decorator(old):
def new(*args, **kwargs):
# ...
return old(*args, **kwargs)
return new
Places where you want to do magic with an unknown number of keyword arguments. Django's ORM does that, e.g.:
Model.objects.filter(foo__lt = 4, bar__iexact = 'bar')
A:
There are two common cases:
First: You are wrapping another function which takes a number of keyword argument, but you are just going to pass them along:
def my_wrapper(a, b, **kwargs):
do_something_first(a, b)
the_real_function(**kwargs)
Second: You are willing to accept any keyword argument, for example, to set attributes on an object:
class OpenEndedObject:
def __init__(self, **kwargs):
for k, v in kwargs.items():
setattr(self, k, v)
foo = OpenEndedObject(a=1, foo='bar')
assert foo.a == 1
assert foo.foo == 'bar'
A:
**kwargs are good if you don't know in advance the name of the parameters. For example the dict constructor uses them to initialize the keys of the new dictionary.
dict(**kwargs) -> new dictionary initialized with the name=value pairs
in the keyword argument list. For example: dict(one=1, two=2)
In [3]: dict(one=1, two=2)
Out[3]: {'one': 1, 'two': 2}
A:
Here's an example, I used in CGI Python. I created a class that took **kwargs to the __init__ function. That allowed me to emulate the DOM on the server-side with classes:
document = Document()
document.add_stylesheet('style.css')
document.append(Div(H1('Imagist\'s Page Title'), id = 'header'))
document.append(Div(id='body'))
The only problem is that you can't do the following, because class is a Python keyword.
Div(class = 'foo')
The solution is to access the underlying dictionary.
Div(**{'class':'foo'})
I'm not saying that this is a "correct" usage of the feature. What I'm saying is that there are all kinds of unforseen ways in which features like this can be used.
A:
And here's another typical example:
MESSAGE = "Lo and behold! A message {message!r} came from {object_} with data {data!r}."
def proclaim(object_, message, data):
print(MESSAGE.format(**locals()))
A:
One example is implementing python-argument-binders, used like this:
>>> from functools import partial
>>> def f(a, b):
... return a+b
>>> p = partial(f, 1, 2)
>>> p()
3
>>> p2 = partial(f, 1)
>>> p2(7)
8
This is from the functools.partial python docs: partial is 'relatively equivalent' to this impl:
def partial(func, *args, **keywords):
def newfunc(*fargs, **fkeywords):
newkeywords = keywords.copy()
newkeywords.update(fkeywords)
return func(*(args + fargs), **newkeywords)
newfunc.func = func
newfunc.args = args
newfunc.keywords = keywords
return newfunc
|
Why use **kwargs in python? What are some real world advantages over using named arguments?
|
I come from a background in static languages. Can someone explain (ideally through example) the real world advantages of using **kwargs over named arguments?
To me it only seems to make the function call more ambiguous. Thanks.
|
[
"You may want to accept nearly-arbitrary named arguments for a series of reasons -- and that's what the **kw form lets you do.\nThe most common reason is to pass the arguments right on to some other function you're wrapping (decorators are one case of this, but FAR from the only one!) -- in this case, **kw loosens the coupling between wrapper and wrappee, as the wrapper doesn't have to know or care about all of the wrappee's arguments. Here's another, completely different reason:\nd = dict(a=1, b=2, c=3, d=4)\n\nif all the names had to be known in advance, then obviously this approach just couldn't exist, right? And btw, when applicable, I much prefer this way of making a dict whose keys are literal strings to:\nd = {'a': 1, 'b': 2, 'c': 3, 'd': 4}\n\nsimply because the latter is quite punctuation-heavy and hence less readable.\nWhen none of the excellent reasons for accepting **kwargs applies, then don't accept it: it's as simple as that. IOW, if there's no good reason to allow the caller to pass extra named args with arbitrary names, don't allow that to happen -- just avoid putting a **kw form at the end of the function's signature in the def statement.\nAs for using **kw in a call, that lets you put together the exact set of named arguments that you must pass, each with corresponding values, in a dict, independently of a single call point, then use that dict at the single calling point. Compare:\nif x: kw['x'] = x\nif y: kw['y'] = y\nf(**kw)\n\nto:\nif x:\n if y:\n f(x=x, y=y)\n else:\n f(x=x)\nelse:\n if y:\n f(y=y)\n else:\n f()\n\nEven with just two possibilities (and of the very simplest kind!), the lack of **kw is aleady making the second option absolutely untenable and intolerable -- just imagine how it plays out when there half a dozen possibilities, possibly in slightly richer interaction... without **kw, life would be absolute hell under such circumstances!\n",
"Another reason you might want to use **kwargs (and *args) is if you're extending an existing method in a subclass. You want to pass all the existing arguments onto the superclass's method, but want to ensure that your class keeps working even if the signature changes in a future version:\nclass MySubclass(Superclass):\n def __init__(self, *args, **kwargs):\n self.myvalue = kwargs.pop('myvalue', None)\n super(MySubclass, self).__init__(*args, **kwargs)\n\n",
"Real-world examples:\nDecorators - they're usually generic, so you can't specify the arguments upfront:\ndef decorator(old):\n def new(*args, **kwargs):\n # ...\n return old(*args, **kwargs)\n return new\n\nPlaces where you want to do magic with an unknown number of keyword arguments. Django's ORM does that, e.g.:\nModel.objects.filter(foo__lt = 4, bar__iexact = 'bar')\n\n",
"There are two common cases:\nFirst: You are wrapping another function which takes a number of keyword argument, but you are just going to pass them along:\ndef my_wrapper(a, b, **kwargs):\n do_something_first(a, b)\n the_real_function(**kwargs)\n\nSecond: You are willing to accept any keyword argument, for example, to set attributes on an object:\nclass OpenEndedObject:\n def __init__(self, **kwargs):\n for k, v in kwargs.items():\n setattr(self, k, v)\n\nfoo = OpenEndedObject(a=1, foo='bar')\nassert foo.a == 1\nassert foo.foo == 'bar'\n\n",
"**kwargs are good if you don't know in advance the name of the parameters. For example the dict constructor uses them to initialize the keys of the new dictionary. \n\ndict(**kwargs) -> new dictionary initialized with the name=value pairs\n in the keyword argument list. For example: dict(one=1, two=2)\n\n\nIn [3]: dict(one=1, two=2)\nOut[3]: {'one': 1, 'two': 2}\n\n",
"Here's an example, I used in CGI Python. I created a class that took **kwargs to the __init__ function. That allowed me to emulate the DOM on the server-side with classes:\ndocument = Document()\ndocument.add_stylesheet('style.css')\ndocument.append(Div(H1('Imagist\\'s Page Title'), id = 'header'))\ndocument.append(Div(id='body'))\n\nThe only problem is that you can't do the following, because class is a Python keyword.\nDiv(class = 'foo')\n\nThe solution is to access the underlying dictionary.\nDiv(**{'class':'foo'})\n\nI'm not saying that this is a \"correct\" usage of the feature. What I'm saying is that there are all kinds of unforseen ways in which features like this can be used.\n",
"And here's another typical example:\nMESSAGE = \"Lo and behold! A message {message!r} came from {object_} with data {data!r}.\"\n\ndef proclaim(object_, message, data):\n print(MESSAGE.format(**locals()))\n\n",
"One example is implementing python-argument-binders, used like this:\n\n>>> from functools import partial\n>>> def f(a, b):\n... return a+b\n>>> p = partial(f, 1, 2)\n>>> p()\n3\n>>> p2 = partial(f, 1)\n>>> p2(7)\n8\n\n\nThis is from the functools.partial python docs: partial is 'relatively equivalent' to this impl:\n\ndef partial(func, *args, **keywords):\n def newfunc(*fargs, **fkeywords):\n newkeywords = keywords.copy()\n newkeywords.update(fkeywords)\n return func(*(args + fargs), **newkeywords)\n newfunc.func = func\n newfunc.args = args\n newfunc.keywords = keywords\n return newfunc\n\n\n"
] |
[
73,
46,
41,
15,
5,
3,
1,
0
] |
[] |
[] |
[
"keyword_argument",
"python"
] |
stackoverflow_0001415812_keyword_argument_python.txt
|
Q:
What are the benefits of not using cPickle to create a persistent storage for data?
I'm considering the idea of creating a persistent storage like a dbms engine; what would be the benefits of creating a custom binary format over directly cPickling the object and/or using the shelve module?
A:
Pickling is a two-face coin.
On one side, you have a way to store your object in a very easy way. Just four lines of code and you pickle. You have the object exactly as it is.
On the other side, it can become a compatibility nightmare. You cannot unpickle objects if they are not defined in your code, exactly as they were defined when pickled. This strongly limits your ability to refactor the code, or rearrange stuff in your modules.
Also, not everything can be pickled, and if you are not strict on what gets pickled and the client of your code has full freedom of including any object, sooner or later it will pass something unpicklable to your system, and the system will go boom.
Be very careful about its use. There's no better definition of quick and dirty.
A:
One reason to define your own custom binary format could be optimization. pickle (and shelve, which uses pickle) is a generic serialization framework; it can store almost any Python data. It's easy to use pickle in a lot of situations, but it takes time to inspect all the objects and serialize their data and the data itself is stored in a generic, verbose format. If you are storing specific known data a custom-built serializer can be both faster and more concise.
It takes 37 bytes to pickle an object with a single integer value:
>>> import pickle
>>> class Foo: pass
...
>>> foo = Foo()
>>> foo.x = 3
>>> print repr(pickle.dumps(foo))
"(i__main__\nFoo\np0\n(dp1\nS'x'\np2\nI3\nsb."
Embedded in that data is the name of the property and its type. A custom serializer for Foo (and Foo alone) could dispense with that and just store the number, saving both time and space.
Another reason for a custom serialization framework is you can easily do custom validation and versioning of data. If you change your object types and need to load an old version of data it can be tricky via pickle. Your own code can be easily customized to handle older data formats.
In practice, I'd build something using the generic cPickle module and only replace it if profiling indicated it was really important. Maintaining a separate serialization framework is a significant amount of work.
One final resource you may find useful: some synthetic serializer benchmarks. cPickle is pretty fast.
A:
Note that not all objects may be directly pickled - only basic types, or objects that have defined the pickle protocol.
Using your own binary format would allow you to potentially store any kind of object.
Just for note, Zope Object DB (ZODB) is following that very same approach, storing objects with the Pickle format. You may be interested in getting their implementations.
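For completeness, a minimal sketch of opting a class into pickling by defining part of that protocol (the Connection class and its attributes are invented):
import cPickle

class Connection(object):
    def __init__(self, host):
        self.host = host
        self.sock = None                    # stand-in for something unpicklable
    def __getstate__(self):
        state = self.__dict__.copy()
        del state['sock']                   # drop the unpicklable part
        return state
    def __setstate__(self, state):
        self.__dict__.update(state)
        self.sock = None                    # re-create lazily when needed

restored = cPickle.loads(cPickle.dumps(Connection('example.com'), -1))
print restored.host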
A:
The potential advantages of a custom format over a pickle are:
you can selectively get individual objects, rather than having to incarnate the full set of objects
you can query subsets of objects by properties, and only load those objects that match your criteria
Whether these advantages materialize depends on how you design the storage, of course.
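A rough sketch of one such design -- pickled blobs indexed by a queryable column in SQLite (the table layout and helper names are made up):
import sqlite3, cPickle

con = sqlite3.connect('objects.db')
con.execute('CREATE TABLE IF NOT EXISTS objs (key TEXT PRIMARY KEY, kind TEXT, blob BLOB)')

def save(key, kind, obj):
    con.execute('INSERT OR REPLACE INTO objs VALUES (?, ?, ?)',
                (key, kind, buffer(cPickle.dumps(obj, -1))))
    con.commit()

def load_kind(kind):                        # only incarnates the matching objects
    rows = con.execute('SELECT blob FROM objs WHERE kind = ?', (kind,))
    return [cPickle.loads(str(blob)) for (blob,) in rows]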
A:
If you are going to do that (implement your own binary format), you should first know that python has a good library to handle HDF5, a binary format used in physics and astronomy to dump huge amounts of data.
This is the home page of the library:
http://www.pytables.org/moin
Basically, you could think of HDF5 as a hierarchical database, in which a table column can itself contain an inner table: the table Populations has a column called Individual, which is a table containing the information for every individual, etc...
PyTables has also its own implementation of the cPickle module, you can access it with:
$ easy_install tables
$ python
>>> import tables
>>> tables.cPickle
I have never used PyTables' pickle, but I think it should be straightforward to learn how it works, so you may want to have a look at it before implementing your own format.
A:
See this solution at SourceForge:
y_serial.py module :: warehouse Python objects with SQLite
"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."
http://yserial.sourceforge.net
[The commentary included with the source endnotes discusses why pickle was selected over json.]
A:
Will you ever need to process data from untrusted sources? If so, you should know that the pickle format is actually a virtual machine that is capable of executing arbitrary code on behalf of the process doing the unpickling.
|
What are the benefits of not using cPickle to create a persistent storage for data?
|
I'm considering the idea of creating a persistent storage like a dbms engine; what would be the benefits of creating a custom binary format over directly cPickling the object and/or using the shelve module?
|
[
"Pickling is a two-face coin.\nOn one side, you have a way to store your object in a very easy way. Just four lines of code and you pickle. You have the object exactly as it is.\nOn the other side, it can become a compatibility nightmare. You cannot unpickle objects if they are not defined in your code, exactly as they were defined when pickled. This strongly limits your ability to refactor the code, or rearrange stuff in your modules.\nAlso, not everything can be pickled, and if you are not strict on what gets pickled and the client of your code has full freedom of including any object, sooner or later it will pass something unpicklable to your system, and the system will go boom.\nBe very careful about its use. there's no better definition of quick and dirty.\n",
"One reason to define your own custom binary format could be optimization. pickle (and shelve, which uses pickle) is a generic serialization framework; it can store almost any Python data. It's easy to use pickle in a lot of situations, but it takes time to inspect all the objects and serialize their data and the data itself is stored in a generic, verbose format. If you are storing specific known data a custom-built serializer can be both faster and more concise. \nIt takes 37 bytes to pickle an object with a single integer value:\n>>> import pickle\n>>> class Foo: pass... \n>>> foo = Foo()\n>>> foo.x = 3\n>>> print repr(pickle.dumps(foo))\n\"(i__main__\\nFoo\\np0\\n(dp1\\nS'x'\\np2\\nI3\\nsb.\"\n\nEmbedded in that data is the name of the property and its type. A custom serializer for Foo (and Foo alone) could dispense with that and just store the number, saving both time and space.\nAnother reason for a custom serialization framework is you can easily do custom validation and versioning of data. If you change your object types and need to load an old version of data it can be tricky via pickle. Your own code can be easily customized to handle older data formats. \nIn practice, I'd build something using the generic cPickle module and only replace it if profiling indicated it was really important. Maintaining a separate serialization framework is a significant amount of work.\nOne final resource you may find useful: some synthetic serializer benchmarks. cPickle is pretty fast.\n",
"Note that not all objects may be directly pickled - only basic types, or objects that have defined the pickle protocol.\nUsing your own binary format would allow you to potentially store any kind of object.\nJust for note, Zope Object DB (ZODB) is following that very same approach, storing objects with the Pickle format. You may be interested in getting their implementations.\n",
"The potential advantages of a custom format over a pickle are:\n\nyou can selectively get individual objects, rather than having to incarnate the full set of objects\nyou can query subsets of objects by properties, and only load those objects that match your criteria\n\nWhether these advantages materialize depends on how you design the storage, of course.\n",
"If you are going to do that (implement your own binary format), you should first know that python has a good library to handle HDF5, a binary format used in physics and astronomy to dump huge amounts of data.\nThis is the home page of the library:\n\nhttp://www.pytables.org/moin\n\nBasically, you could think of HDF5 as an hierarchical database, in which a table column can contain an inner table by itself: the table Populations has a column called Individual, which is a table containing the informations of every individuals, etc...\nPyTables has also its own implementation of the cPickle module, you can access it with:\n$ easy_install tables\n$ python\n>>> import tables\n>>> tables.cPickle\n\nI have never used pytable's pickle, but I think it may be straightforward for you to learn how does it work, so you may have a look at it before implementing your own format.\n",
"See this solution at SourceForge:\ny_serial.py module :: warehouse Python objects with SQLite\n\"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful \"standard\" module for a database to store schema-less data.\"\nhttp://yserial.sourceforge.net\n[The commentary included with the source endnotes discusses why pickle was selected over json.]\n",
"Will you ever need to process data from untrusted sources? If so, you should know that the pickle format is actually a virtual machine that is capable of executing arbitrary code on behalf of the process doing the unpickling.\n"
] |
[
10,
3,
2,
1,
1,
1,
0
] |
[] |
[] |
[
"data_structures",
"database",
"persistence",
"python"
] |
stackoverflow_0001188585_data_structures_database_persistence_python.txt
|
Q:
Google appengine string replacement in template file
I'm using google appengine (python, of course :) ) and I'd like to do a string.replace on a string from the template file.
{% for item in items %}
<p>{{ item.code.replace( '_', ' ' ) }}</p>
{% endfor %}
But that isn't working. So we cannot execute anything other than basic checks in the App Engine templates. Is that correct?
Another related problem is I'm trying to shorten a string and make it available to the template.
Each furniture object has a name and a longer description field. In this view I'm rendering, I want only the first 50 characters of the description field.
So I tried something like
items = db.GqlQuery( 'select * from furniture' )
# edit: if you change the above line to
# items = db.GqlQuery( 'select * from furniture' ).fetch( 1000 )
# the .fetch() command makes the addition of dynamic properties work!
for item in items :
item.shortdescr = item.description[ 0:50 ]
# pass data off to template for rendering
self.response.out.write(
template.render( 'furnitureAll.html', { 'items' : items } )
)
Template goes
{% for item in items %}
<p>{{ item.name }}</p>
<p>{{ item.shortdescr }}</p>
<!-- items.shortdescr does not exist here,
probably because I did not .put() it previously. -->
{% endfor %}
Since that didn't work, I tried changing the Gql Query to shorten the string instead. But I'm quickly realizing Gql isn't like SQL. I'm trying to write queries like
select name,LEFT( description, 50 ) from furniture
With little success
A:
Apart from the argument-less .fetch() call in your code, which I believe can't possibly work (you ALWAYS have to pass fetch an argument -- the max number of entities you're willing to fetch!), I can't reproduce your problem -- assigning a new attribute (including one obtained by processing existing ones) to each item just works fine in my tests.
Can you please reproduce your observed problem in as tiny a compass as possible and edit your question to include all relevant files? It seems to be the only way we could help you with your weird observed bug!
BTW, select name,LEFT( description, 50 ) or whatever OF COURSE won't work in GQL -- GQL, very very explicitly, ONLY supports select * to get whole entities, or select __key__ to get just the entities' keys -- that's all; NO selectivity of columns in the select, much less any OPERATIONS on them!-)
A:
I have little experience with Google AppEngine, but my understanding is that it is very closely related to Django. Your templates do not actually contain Python code, even if some of the structures you use in them look like it.
Both of your questions should be solved using template filters. If it was Django, I would use something like this for your second question:
{{ item.description|truncatewords:10 }}
For your first question (string replace), there may be no built-in filter you can use for that. You will need to write your own. Something like this:
from google.appengine.ext.webapp.template import create_template_register
register = create_template_register()
@register.filter
def replace_underscores(strng):
return strng.replace('_', ' ')
Then, in your template, you can do this:
{{ item.code|replace_underscores }}
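If I recall the old webapp SDK correctly, the module holding the filter also has to be registered with the template system before rendering; treat the module path below as a placeholder:
from google.appengine.ext.webapp import template
template.register_template_library('myapp.templatefilters')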
|
Google appengine string replacement in template file
|
I'm using google appengine (python, of course :) ) and I'd like to do a string.replace on a string from the template file.
{% for item in items %}
<p>{{ item.code.replace( '_', ' ' ) }}</p>
{% endfor %}
But that isn't working. So we cannot execute anything other than basic checks in the App Engine templates. Is that correct?
Another related problem is I'm trying to shorten a string and make it available to the template.
Each furniture object has a name and a longer description field. In this view I'm rendering, I want only the first 50 characters of the description field.
So I tried something like
items = db.GqlQuery( 'select * from furniture' )
# edit: if you change the above line to
# items = db.GqlQuery( 'select * from furniture' ).fetch( 1000 )
# the .fetch() command makes the addition of dynamic properties work!
for item in items :
item.shortdescr = item.description[ 0:50 ]
# pass data off to template for rendering
self.response.out.write(
template.render( 'furnitureAll.html', { 'items' : items } )
)
Template goes
{% for item in items %}
<p>{{ item.name }}</p>
<p>{{ item.shortdescr }}</p>
<!-- items.shortdescr does not exist here,
probably because I did not .put() it previously. -->
{% endfor %}
Since that didn't work, I tried changing the Gql Query to shorten the string instead. But I'm quickly realizing Gql isn't like SQL. I'm trying to write queries like
select name,LEFT( description, 50 ) from furniture
With little success
|
[
"Apart from the argument-less .fetch() call in your code, which I believe can't possibly work (you ALWAYS have to pass fetch an argument -- the max number of entities you're willing to fetch!), I can't reproduce your problem -- assigning a new attribute (including one obtained by processing existing ones) to each item just works fine in my tests.\nCan you please reproduce your observed problem in as tiny as compass as possible and edit your question to include all relevant files pls? Seems to be the only way we could help you with your weird observed bug!\nBTW, select name,LEFT( description, 50 ) or whatever OF COURSE won't work in GQL -- GQL, very very explicitly, ONLY supports select * to get whole entities, or select __key__ to get just the entities' keys -- that's all; NO selectivity of columns in the select, much less any OPERATIONS on them!-)\n",
"I have little experience with Google AppEngine, but my understanding is that it is very closely related to Django. Your templates do not actually contain Python code, even if some of the structures you use in them look like it.\nBoth of your questions should be solved using template filters. If it was Django, I would use something like this for your second question:\n{{ item.description|truncatewords:10 }}\n\nFor your first question (string replace), there may be no built-in filter you can use for that. You will need to write your own. Something like this;\nfrom google.appengine.ext.webapp.template import create_template_register\n\nregister = create_template_register()\n\[email protected]\ndef replace_underscores(strng):\n return strng.replace('_', ' ')\n\nThen, in your template, you can do this:\n{{ item.code|replace_underscores }}\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001416921_google_app_engine_python.txt
|
Q:
Riddle: The Square Puzzle
Last couple of days, I have refrained myself from master's studies and have been focusing on this (seemingly simple) puzzle:
There is this 10*10 grid which constitutes a square of 100 available places to go. The aim is to start from a corner and traverse through all the places with respect to some simple "traverse rules" and reach number 100 (or 99 if you're a programmer and start with 0 instead :)
The rules for traversing are:
1. Two spaces hop along the vertical and horizontal axis
2. One space hop along the diagonals
3. You can visit each square only once
To visualise better, here is a valid example traverse (up to the 8th step):
Example Traverse http://img525.imageshack.us/img525/280/squarepuzzle.png
Manually, I have been working on this puzzle out of boredom. For years, I have tried to solve it by hand from time to time, but I have never gone beyond 96. Sounds easy? Try it yourself and see :)
Thus, in order to solve the problem, I have developed a short (around 100 lines of code) program in Python. I am a beginner in this language and I wanted to see what I could do.
The program simply applies exhaustive try & error solving technique. In other words: brute force depth first search.
My question arises from here on: The program, unfortunately, cannot solve the problem because the state space is so big that the search never ends without ever finding a solution. It can go up to number 98 (and prints that) without much difficulty, but nonetheless it is not a complete solution.
The program also prints out the length of the search tree it has covered so far. In a couple of minutes, the traverse list from, say, the 65th element is covered till the end, for just one single path. This number decreases in exponentially increasing time periods. I have run the code for quite some time and could not get beyond the 50 barrier, and now I am convinced.
It seems that this simple approach will not be enough unless I run it for ever. So, how can I improve my code to be faster and more efficient so that it comes up with solutions?
Basically, I am looking forward to see ideas on how to:
Capture and exploit domain knowledge specific to this problem
Apply programming techniques/tricks to overcome exhaustion
...and finally arrive at a substantial solution.
Thanks in advance.
Revision
Thanks to Dave Webb for relating the problem to the domain it belongs to:
This is very similar to the Knight's
Tour problem which relates moving a
knight around a chess board without
revisiting the same square. Basically
it's the same problem but with
different "Traverse Rules".
A:
This is very similar to the Knight's Tour problem which relates moving a knight around a chess board without revisiting the same square. Basically it's the same problem but with different "Traverse Rules".
The key optimisation I remember from tackling the Knights Tour recursively is take your next moves in increasing order of the number of available moves on the destination square. This encourages the search to try and move densely in one area and filling it rather than zooming all over the board and leaving little island squares that can never be visited. (This is Warnsdorff's algorithm.)
Also make sure you have considered symmetry where you can. For example, at the simplest level the x and y of your starting square only need to go up to 5 since (10,10) is the same as (1,1) with the board rotated.
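As a rough illustration of that move-ordering idea (this is not the OP's code; the moves_from helper and its arguments are assumptions), the Warnsdorff ordering can be expressed as:
def ordered_moves(pos, visited, moves_from):
    # moves_from(p) yields the legal destinations from p under the puzzle's traverse rules
    candidates = [p for p in moves_from(pos) if p not in visited]
    # Warnsdorff's heuristic: try the destination with the fewest onward moves first
    def onward(p):
        return sum(1 for q in moves_from(p) if q not in visited)
    return sorted(candidates, key=onward)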
A:
I decided to look at the problem and see if I could break it into 5x5 solutions with the ending of a solution one jump away from the corner of another.
First assumption was that 5x5 is solvable. It is and fast.
So I ran solve(0,5) and looked at the results. I drew a 10x10 numbered grid in Excel with a 5x5 numbered grid for translation. Then I just searched the results for #] (ending cells) that would be a jump away from the start of the next 5x5. (ex. for the first square, I searched for "13]".)
For reference:
10 x 10 grid 5 x 5 grid
0 1 2 3 4 | 5 6 7 8 9 0 1 2 3 4
10 11 12 13 14 | 15 16 17 18 19 5 6 7 8 9
20 21 22 23 24 | 25 26 27 28 29 10 11 12 13 14
30 31 32 33 34 | 35 36 37 38 39 15 16 17 18 19
40 41 42 43 44 | 45 46 47 48 49 20 21 22 23 24
---------------+---------------
50 51 52 53 54 | 55 56 57 58 59
60 61 62 63 64 | 65 66 67 68 69
70 71 72 73 74 | 75 76 77 78 79
80 81 82 83 84 | 85 86 87 88 89
90 91 92 93 94 | 95 96 97 98 99
Here is a possible solution:
First square: [0, 15, 7, 19, 16, 1, 4, 12, 20, 23, 8, 5, 17, 2, 10, 22, 14, 11, 3, 18, 6, 9, 24, 21, 13] puts it a diagonal jump up to 5 (in 10x10) the first corner of the next 5 x 5.
Second Square: [0, 12, 24, 21, 6, 9, 17, 2, 14, 22, 7, 15, 18, 3, 11, 23, 20, 5, 8, 16, 19, 4, 1, 13, 10] puts it with last square of 25 in the 10x10, which is two jumps away from 55.
Third Square: [0, 12, 24, 21, 6, 9, 17, 5, 20, 23, 8, 16, 19, 4, 1, 13, 10, 2, 14, 11, 3, 18, 15, 7, 22] puts it with last square of 97 in the 10x10, which is two jumps away from 94.
Fourth Square can be any valid solution, because the end point doesn't matter. However, the mapping of the solution from 5x5 to 10x10 is harder, as the square is starting on the opposite corner. Instead of translating, I ran solve(24,5) and picked one at random: [24, 9, 6, 21, 13, 10, 2, 17, 5, 20, 23, 8, 16, 1, 4, 12, 0, 15, 18, 3, 11, 14, 22, 7, 19]
It should be possible to do all of this programmatically, now that the 5x5 solutions are known to be valid with endpoints a legal move away from the next 5x5 corner. The number of 5x5 solutions was 552, which means storing the solutions for further calculation and remapping is pretty easy.
Unless I did this wrong, this gives you one possible solution (defined above 5x5 solutions as one through four respectively):
def trans5(i, col5, row5):
if i < 5: return 5 * col5 + 50 * row5 + i
if i < 10: return 5 + 5 * col5 + 50 * row5 + i
if i < 15: return 10 + 5 * col5 + 50 * row5 + i
if i < 20: return 15 + 5 * col5 + 50 * row5 + i
if i < 25: return 20 + 5 * col5 + 50 * row5 + i
>>> [trans5(i, 0, 0) for i in one] + [trans5(i, 1, 0) for i in two] + [trans5(i, 0, 1) for i in three] + [trans5(i, 1, 1) for i in four]
[0, 30, 12, 34, 31, 1, 4, 22, 40, 43, 13, 10, 32, 2, 20, 42, 24, 21, 3, 33, 11, 14, 44, 41, 23, 5, 27, 49, 46, 16, 19, 37, 7, 29, 47, 17, 35, 38, 8, 26, 48, 45, 15, 18, 36, 39, 9, 6, 28, 25, 50, 72, 94, 91, 61, 64, 82, 60, 90, 93, 63, 81, 84, 54, 51, 73, 70, 52, 74, 71, 53, 83, 80, 62, 92, 99, 69, 66, 96, 78, 75, 57, 87, 65, 95, 98, 68, 86, 56, 59, 77, 55, 85, 88, 58, 76, 79, 97, 67, 89]
Can someone double-check the methodology? I think this is a valid solution and method of breaking up the problem.
A:
Eventually, I have come up with modified Python code to overcome the problem. I've run the code for a couple of hours and it has already found half a million solutions.
The full set of solutions still requires a total exhaustive search, i.e. letting the program run until it finishes with all combinations. However, reaching "a" legitimate solution can be reduced to "linear time".
First, things I have learned:
Thanks to Dave Webb's answer and ammoQ's answer. The problem is indeed an extension of Hamiltonian Path problem as it is NP-Hard. There is no "easy" solution to begin with. There is a famous riddle of Knight's Tour which is simply the same problem with a different size of board/grid and different traverse-rules. There are many things said and done to elaborate the problem and methodologies and algorithms have been devised.
Thanks to Joe's answer. The problem can be approached in a bottom-up sense and can be sliced down into solvable sub-problems. Solved sub-problems can be connected in an entry-exit point notion (one's exit point can be connected to another's entry point) so that the main problem could be solved as a constitution of smaller-scale problems. This approach is sound and practical, but not complete: it cannot guarantee finding an answer if one exists.
Upon exhaustive brute-force search, here are key points I have developed on the code:
Warnsdorff's algorithm: This algorithm is the key point for reaching a handy number of solutions quickly. It simply states that you should pick your next move to the "least accessible" place and populate your "to go" list in ascending order of accessibility. The least accessible place means the place with the least number of possible following moves.
Below is the pseudocode (from Wikipedia):
Some definitions:
A position Q is accessible from a position P if P can move to Q by a single knight's move, and Q has not yet been visited.
The accessibility of a position P is the number of positions accessible from P.
Algorithm:
set P to be a random initial position on the board
mark the board at P with the move number "1"
for each move number from 2 to the number of squares on the board:
let S be the set of positions accessible from the input position
set P to be the position in S with minimum accessibility
mark the board at P with the current move number
return the marked board -- each square will be marked with the move number on which it is visited.
Checking for islands: A nice exploit of domain knowledge here proved to be handy. If a move (unless it is the last one) would cause any of its neighbors to become an island, i.e. not accessible by any other square, then that branch is no longer investigated. This saves a considerable amount of time (very roughly 25%) when combined with Warnsdorff's algorithm.
And here is my code in Python which solves the riddle (to an acceptable degree, considering that the problem is NP-hard). The code is easy to understand, as I consider myself at a beginner level in Python. The comments are straightforward in explaining the implementation. Solutions can be displayed on a simple grid by a basic GUI (guidelines in the code).
# Solve square puzzle
import operator
class Node:
# Here is how the squares are defined
def __init__(self, ID, base):
self.posx = ID % base
self.posy = ID / base
self.base = base
def isValidNode(self, posx, posy):
return (0<=posx<self.base and 0<=posy<self.base)
def getNeighbors(self):
neighbors = []
if self.isValidNode(self.posx + 3, self.posy): neighbors.append(self.posx + 3 + self.posy*self.base)
if self.isValidNode(self.posx + 2, self.posy + 2): neighbors.append(self.posx + 2 + (self.posy+2)*self.base)
if self.isValidNode(self.posx, self.posy + 3): neighbors.append(self.posx + (self.posy+3)*self.base)
if self.isValidNode(self.posx - 2, self.posy + 2): neighbors.append(self.posx - 2 + (self.posy+2)*self.base)
if self.isValidNode(self.posx - 3, self.posy): neighbors.append(self.posx - 3 + self.posy*self.base)
if self.isValidNode(self.posx - 2, self.posy - 2): neighbors.append(self.posx - 2 + (self.posy-2)*self.base)
if self.isValidNode(self.posx, self.posy - 3): neighbors.append(self.posx + (self.posy-3)*self.base)
if self.isValidNode(self.posx + 2, self.posy - 2): neighbors.append(self.posx + 2 + (self.posy-2)*self.base)
return neighbors
# the nodes go like this:
# 0 => bottom left
# (base-1) => bottom right
# base*(base-1) => top left
# base**2 -1 => top right
def solve(start_nodeID, base):
all_nodes = []
#Traverse list is the list to keep track of which moves are made (the id numbers of nodes in a list)
traverse_list = [start_nodeID]
for i in range(0, base**2): all_nodes.append(Node(i, base))
togo = dict()
#Togo is a dictionary with (nodeID:[list of neighbors]) tuples
togo[start_nodeID] = all_nodes[start_nodeID].getNeighbors()
solution_count = 0
while(True):
# The search is exhausted
if not traverse_list:
print "Somehow, the search tree is exhausted and you have reached the divine salvation."
print "Number of solutions:" + str(solution_count)
break
# Get the next node to hop
try:
current_node_ID = togo[traverse_list[-1]].pop(0)
except IndexError:
del togo[traverse_list.pop()]
continue
# end condition check
traverse_list.append(current_node_ID)
if(len(traverse_list) == base**2):
#OMG, a solution is found
#print traverse_list
solution_count += 1
#Print solution count at a steady rate
if(solution_count%100 == 0):
print solution_count
# The solution list can be returned (to visualize the solution in a simple GUI)
#return traverse_list
# get valid neighbors
valid_neighbor_IDs = []
candidate_neighbor_IDs = all_nodes[current_node_ID].getNeighbors()
valid_neighbor_IDs = filter(lambda id: not id in traverse_list, candidate_neighbor_IDs)
# if no valid neighbors, take a step back
if not valid_neighbor_IDs:
traverse_list.pop()
continue
# if there exists a neighbor which is accessible only through the current node (island)
# and it is not the last one to go, the situation is not promising; so just eliminate that
stuck_check = True
if len(traverse_list) != base**2-1 and any(not filter(lambda id: not id in traverse_list, all_nodes[n].getNeighbors()) for n in valid_neighbor_IDs): stuck_check = False
# if stuck
if not stuck_check:
traverse_list.pop()
continue
# sort the neighbors according to accessibility (the least accessible first)
neighbors_ncount = []
for neighbor in valid_neighbor_IDs:
candidate_nn = all_nodes[neighbor].getNeighbors()
valid_nn = [id for id in candidate_nn if not id in traverse_list]
neighbors_ncount.append(len(valid_nn))
n_dic = dict(zip(valid_neighbor_IDs, neighbors_ncount))
sorted_ndic = sorted(n_dic.items(), key=operator.itemgetter(1))
sorted_valid_neighbor_IDs = []
for (node, ncount) in sorted_ndic: sorted_valid_neighbor_IDs.append(node)
# if current node does have valid neighbors, add them to the front of togo list
# in a sorted way
togo[current_node_ID] = sorted_valid_neighbor_IDs
# To display a solution simply
def drawGUI(size, solution):
# GUI Code (If you can call it a GUI, though)
import Tkinter
root = Tkinter.Tk()
canvas = Tkinter.Canvas(root, width=size*20, height=size*20)
#canvas.create_rectangle(0, 0, size*20, size*20)
canvas.pack()
for x in range(0, size*20, 20):
canvas.create_line(x, 0, x, size*20)
canvas.create_line(0, x, size*20, x)
cnt = 1
for el in solution:
canvas.create_text((el % size)*20 + 4,(el / size)*20 + 4,text=str(cnt), anchor=Tkinter.NW)
cnt += 1
root.mainloop()
print('Start of run')
# it is the moment
solve(0, 10)
#Optional, to draw a returned solution
#drawGUI(10, solve(0, 10))
raw_input('End of Run...')
Thanks to everybody for sharing their knowledge and ideas.
A:
This is just an example of the http://en.wikipedia.org/wiki/Hamiltonian_path problem. German wikipedia claims that it is NP-hard.
A:
An optimization can be made to check for islands (i.e. non-visited spaces with no valid neighbors) and back out of the traverse until the island is eliminated. This would occur near the "cheap" side of a given tree traverse. I guess the question is whether the reduction is worth the expense.
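A minimal sketch of that pruning idea (the helper names and signature are assumptions, not taken from any answer's code):
def creates_island(all_squares, visited, moves_from):
    # True if some unvisited square is left with no unvisited neighbour,
    # i.e. the remainder of the tour can never reach it
    for sq in all_squares:
        if sq in visited:
            continue
        if not any(n not in visited for n in moves_from(sq)):
            return True
    return False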
A:
I wanted to see if I could write a program that would come up with all possible solutions.
#! /usr/bin/env perl
use Modern::Perl;
{
package Grid;
use Scalar::Util qw'reftype';
sub new{
my($class,$width,$height) = @_;
$width ||= 10;
$height ||= $width;
my $self = bless [], $class;
for( my $x = 0; $x < $width; $x++ ){
for( my $y = 0; $y < $height; $y++ ){
$self->[$x][$y] = undef;
}
}
for( my $x = 0; $x < $width; $x++ ){
for( my $y = 0; $y < $height; $y++ ){
$self->[$x][$y] = Grid::Elem->new($self,$x,$y);;
}
}
return $self;
}
sub elem{
my($self,$x,$y) = @_;
no warnings 'uninitialized';
if( @_ == 2 and reftype($x) eq 'ARRAY' ){
($x,$y) = (@$x);
}
die "Attempted to use undefined var" unless defined $x and defined $y;
my $return = $self->[$x][$y];
die unless $return;
return $return;
}
sub done{
my($self) = @_;
for my $col (@$self){
for my $item (@$col){
return 0 unless $item->visit(undef);
}
}
return 1;
}
sub reset{
my($self) = @_;
for my $col (@$self){
for my $item (@$col){
$item->reset;
}
}
}
sub width{
my($self) = @_;
return scalar @$self;
}
sub height{
my($self) = @_;
return scalar @{$self->[0]};
}
}{
package Grid::Elem;
use Scalar::Util 'weaken';
use overload qw(
"" stringify
eq equal
== equal
);
my %dir = (
# x, y
n => [ 0, 2],
s => [ 0,-2],
e => [ 2, 0],
w => [-2, 0],
ne => [ 1, 1],
nw => [-1, 1],
se => [ 1,-1],
sw => [-1,-1],
);
sub new{
my($class,$parent,$x,$y) = @_;
weaken $parent;
my $self = bless {
parent => $parent,
pos => [$x,$y]
}, $class;
$self->_init_possible;
return $self;
}
sub _init_possible{
my($self) = @_;
my $parent = $self->parent;
my $width = $parent->width;
my $height = $parent->height;
my($x,$y) = $self->pos;
my @return;
for my $dir ( keys %dir ){
my($xd,$yd) = @{$dir{$dir}};
my $x = $x + $xd;
my $y = $y + $yd;
next if $y < 0 or $height <= $y;
next if $x < 0 or $width <= $x;
push @return, $dir;
$self->{$dir} = [$x,$y];
}
return @return if wantarray;
return \@return;
}
sub list_possible{
my($self) = @_;
return unless defined wantarray;
# only return keys which are
my @return = grep {
$dir{$_} and defined $self->{$_}
} keys %$self;
return @return if wantarray;
return \@return;
}
sub parent{
my($self) = @_;
return $self->{parent};
}
sub pos{
my($self) = @_;
my @pos = @{$self->{pos}};
return @pos if wantarray;
return \@pos;
}
sub visit{
my($self,$v) = @_;
my $return = $self->{visit} || 0;
$v = 1 if @_ == 1;
$self->{visit} = $v?1:0 if defined $v;
return $return;
}
sub all_neighbors{
my($self) = @_;
return $self->neighbor( $self->list_possible );
}
sub neighbor{
my($self,@n) = @_;
return unless defined wantarray;
return unless @n;
@n = map { exists $dir{$_} ? $_ : undef } @n;
my $parent = $self->parent;
my @return = map {
$parent->elem($self->{$_}) if defined $_
} @n;
if( @n == 1){
my($return) = @return;
#die unless defined $return;
return $return;
}
return @return if wantarray;
return \@return;
}
BEGIN{
for my $dir ( qw'n ne e se s sw w nw' ){
no strict 'refs';
*$dir = sub{
my($self) = @_;
my($return) = $self->neighbor($dir);
die unless $return;
return $return;
}
}
}
sub stringify{
my($self) = @_;
my($x,$y) = $self->pos;
return "($x,$y)";
}
sub equal{
my($l,$r) = @_;
"$l" eq "$r";
}
sub reset{
my($self) = @_;
delete $self->{visit};
return $self;
}
}
# Main code block
{
my $grid = Grid->new();
my $start = $grid->elem(0,0);
my $dest = $grid->elem(-1,-1);
my @all = solve($start,$dest);
#say @$_ for @all;
say STDERR scalar @all;
}
sub solve{
my($current,$dest,$return,@stack) = @_;
$return = [] unless $return;
my %visit;
$visit{$_} = 1 for @stack;
die if $visit{$current};
push @stack, $current->stringify;
if( $dest == $current ){
say @stack;
push @$return, [@stack];
}
my @possible = $current->all_neighbors;
@possible = grep{
! $visit{$_}
} @possible;
for my $next ( @possible ){
solve($next,$dest,$return,@stack);
}
return @$return if wantarray;
return $return;
}
This program came up with more than 100,000 possible solutions before it was terminated. I sent STDOUT to a file, and it was more than 200 MB.
A:
You could count the number of solutions exactly with a sweep-line dynamic programming algorithm.
|
Riddle: The Square Puzzle
|
Last couple of days, I have refrained myself from master's studies and have been focusing on this (seemingly simple) puzzle:
There is this 10*10 grid which constitutes a square of 100 available places to go. The aim is to start from a corner and traverse through all the places with respect to some simple "traverse rules" and reach number 100 (or 99 if you're a programmer and start with 0 instead :)
The rules for traversing are:
1. Two spaces hop along the vertical and horizontal axis
2. One space hop along the diagonals
3. You can visit each square only once
To visualise better, here is a valid example traverse (up to the 8th step):
Example Traverse http://img525.imageshack.us/img525/280/squarepuzzle.png
Manually, I have been working on this puzzle out of boredom. For years, I have tried to solve it by hand from time to time, but I have never gone beyond 96. Sounds easy? Try it yourself and see :)
Thus, in order to solve the problem, I have developed a short (around 100 lines of code) program in Python. I am a beginner in this language and I wanted to see what I could do.
The program simply applies exhaustive try & error solving technique. In other words: brute force depth first search.
My question arises from here on: The program, unfortunately, cannot solve the problem because the state space is so big that the search never ends without ever finding a solution. It can go up to number 98 (and prints that) without much difficulty, but nonetheless it is not a complete solution.
The program also prints out the length of the search tree it has covered so far. In a couple of minutes, the traverse list from, say, the 65th element is covered till the end, for just one single path. This number decreases in exponentially increasing time periods. I have run the code for quite some time and could not get beyond the 50 barrier, and now I am convinced.
It seems that this simple approach will not be enough unless I run it for ever. So, how can I improve my code to be faster and more efficient so that it comes up with solutions?
Basically, I am looking forward to see ideas on how to:
Capture and exploit domain knowledge specific to this problem
Apply programming techniques/tricks to overcome exhaustion
...and finally arrive at a substantial solution.
Thanks in advance.
Revision
Thanks to Dave Webb for relating the problem to the domain it belongs to:
This is very similar to the Knight's
Tour problem which relates moving a
knight around a chess board without
revisiting the same square. Basically
it's the same problem but with
different "Traverse Rules".
|
[
"This is very similar to the Knight's Tour problem which relates moving a knight around a chess board without revisiting the same square. Basically it's the same problem but with different \"Traverse Rules\".\nThe key optimisation I remember from tackling the Knights Tour recursively is take your next moves in increasing order of the number of available moves on the destination square. This encourages the search to try and move densely in one area and filling it rather than zooming all over the board and leaving little island squares that can never be visited. (This is Warnsdorff's algorithm.)\nAlso make sure you have considered symmetry where you can. For example, at the simplest level the x and y of your starting square only need to go up to 5 since (10,10) is the same as (1,1) with the board rotated.\n",
"I decided to look at the problem and see if I could break it into 5x5 solutions with the ending of a solution one jump away from the corner of another. \nFirst assumption was that 5x5 is solvable. It is and fast.\nSo I ran solve(0,5) and looked at the results. I drew a 10x10 numbered grid in Excel with a 5x5 numbered grid for translation. Then I just searched the results for #] (ending cells) that would be a jump away from the start of the next 5x5. (ex. for the first square, I searched for \"13]\".)\nFor reference:\n10 x 10 grid 5 x 5 grid \n 0 1 2 3 4 | 5 6 7 8 9 0 1 2 3 4\n10 11 12 13 14 | 15 16 17 18 19 5 6 7 8 9\n20 21 22 23 24 | 25 26 27 28 29 10 11 12 13 14\n30 31 32 33 34 | 35 36 37 38 39 15 16 17 18 19\n40 41 42 43 44 | 45 46 47 48 49 20 21 22 23 24\n---------------+---------------\n50 51 52 53 54 | 55 56 57 58 59\n60 61 62 63 64 | 65 66 67 68 69\n70 71 72 73 74 | 75 76 77 78 79\n80 81 82 83 84 | 85 86 87 88 89\n90 91 92 93 94 | 95 96 97 98 99\n\nHere is a possible solution:\nFirst square: [0, 15, 7, 19, 16, 1, 4, 12, 20, 23, 8, 5, 17, 2, 10, 22, 14, 11, 3, 18, 6, 9, 24, 21, 13] puts it a diagonal jump up to 5 (in 10x10) the first corner of the next 5 x 5.\nSecond Square: [0, 12, 24, 21, 6, 9, 17, 2, 14, 22, 7, 15, 18, 3, 11, 23, 20, 5, 8, 16, 19, 4, 1, 13, 10] puts it with last square of 25 in the 10x10, which is two jumps away from 55.\nThird Square: [0, 12, 24, 21, 6, 9, 17, 5, 20, 23, 8, 16, 19, 4, 1, 13, 10, 2, 14, 11, 3, 18, 15, 7, 22] puts it with last square of 97 in the 10x10, which is two jumps away from 94. \nFourth Square can be any valid solution, because end point doesn't matter. However, the mapping of the solution from 5x5 to 10x10 is harder, as the square is starting on the opposite corner. Instead of translating, ran solve(24,5) and picked one at random: [24, 9, 6, 21, 13, 10, 2, 17, 5, 20, 23, 8, 16, 1, 4, 12, 0, 15, 18, 3, 11, 14, 22, 7, 19]\nThis should be possible to all do programatically, now that 5x5 solutions are know to be valid with endpoints legal moves to the next 5x5 corner. Number of 5x5 solutions was 552, which means storing the solutions for further calculation and remapping is pretty easy.\nUnless I did this wrong, this gives you one possible solution (defined above 5x5 solutions as one through four respectively):\ndef trans5(i, col5, row5):\n if i < 5: return 5 * col5 + 50 * row5 + i\n if i < 10: return 5 + 5 * col5 + 50 * row5 + i\n if i < 15: return 10 + 5 * col5 + 50 * row5 + i\n if i < 20: return 15 + 5 * col5 + 50 * row5 + i\n if i < 25: return 20 + 5 * col5 + 50 * row5 + i\n\n>>> [trans5(i, 0, 0) for i in one] + [trans5(i, 1, 0) for i in two] + [trans5(i, 0, 1) for i in three] + [trans5(i, 1, 1) for i in four]\n [0, 30, 12, 34, 31, 1, 4, 22, 40, 43, 13, 10, 32, 2, 20, 42, 24, 21, 3, 33, 11, 14, 44, 41, 23, 5, 27, 49, 46, 16, 19, 37, 7, 29, 47, 17, 35, 38, 8, 26, 48, 45, 15, 18, 36, 39, 9, 6, 28, 25, 50, 72, 94, 91, 61, 64, 82, 60, 90, 93, 63, 81, 84, 54, 51, 73, 70, 52, 74, 71, 53, 83, 80, 62, 92, 99, 69, 66, 96, 78, 75, 57, 87, 65, 95, 98, 68, 86, 56, 59, 77, 55, 85, 88, 58, 76, 79, 97, 67, 89]\n\nCan some one double check the methodology? I think this is a valid solution and method of breaking up the problem.\n",
"Eventually, I have come up with the modified Python code to overcome the problem. I've tun the code for a couple of hours and it has already found half a million solutions in a couple of hours.\nThe full set of solutions still require a total exhaustive search, i.e. to let the program run until it finishes with all combinations. However, reaching \"a\" legitimate solution can be reduced to \"linear time\".\nFirst, things I have learned: \n\nThanks to Dave Webb's answer and ammoQ's answer. The problem is indeed an extension of Hamiltonian Path problem as it is NP-Hard. There is no \"easy\" solution to begin with. There is a famous riddle of Knight's Tour which is simply the same problem with a different size of board/grid and different traverse-rules. There are many things said and done to elaborate the problem and methodologies and algorithms have been devised.\nThanks to Joe's answer. The problem can be approached in a bottom-up sense and can be sliced down to solvable sub-problems. Solved sub-problems can be connected in an entry-exit point notion (one's exit point can be connected to one other's entry point) so that the main problem could be solved as a constitution of smaller scale problems. This approach is sound and practical but not complete, though. It can not guarantee to find an answer if it exists.\n\nUpon exhaustive brute-force search, here are key points I have developed on the code:\n\nWarnsdorff's algorithm: This\nalgorithm is the key point to reach\nto a handy number of solutions in a\nquick way. It simply states that, you\nshould pick your next move to the\n\"least accessible\" place and populate\nyour \"to go\" list with ascending\norder or accesibility. Least\naccessible place means the place with\nleast number of possible following\nmoves.\nBelow is the pseudocode (from Wikipedia):\n\n\nSome definitions:\n\nA position Q is accessible from a position P if P can move to Q by a single knight's move, and Q has not yet been visited.\nThe accessibility of a position P is the number of positions accessible from P.\n\nAlgorithm:\n\nset P to be a random initial position\n on the board mark the board at P with\n the move number \"1\" for each move\n number from 2 to the number of squares\n on the board, let S be the set of\n positions accessible from the input\n position set P to be the position in\n S with minimum accessibility mark the\n board at P with the current move\n number return the marked board -- each\n square will be marked with the move\n number on which it is visited.\n\n\n\nChecking for islands: A nice exploit of domain knowledge here proved to be handy. If a move (unless it is the last one) would cause any of its neighbors to become an island, i.e. not accessible by any other, then that branch is no longer investigated. Saves considerable amount of time (very roughly 25%) combined with Warnsdorff's algorithm.\n\nAnd here is my code in Python which solves the riddle (to an acceptable degree considering that the problem is NP-Hard). The code is easy to understand as I consider myself at beginner level in Python. The comments are straightforward in explaining the implementation. 
Solutions can be displayed on a simple grid by a basic GUI (guidelines in the code).\n# Solve square puzzle\nimport operator\n\nclass Node:\n# Here is how the squares are defined\n def __init__(self, ID, base):\n self.posx = ID % base\n self.posy = ID / base\n self.base = base\n def isValidNode(self, posx, posy):\n return (0<=posx<self.base and 0<=posy<self.base)\n\n def getNeighbors(self):\n neighbors = []\n if self.isValidNode(self.posx + 3, self.posy): neighbors.append(self.posx + 3 + self.posy*self.base)\n if self.isValidNode(self.posx + 2, self.posy + 2): neighbors.append(self.posx + 2 + (self.posy+2)*self.base)\n if self.isValidNode(self.posx, self.posy + 3): neighbors.append(self.posx + (self.posy+3)*self.base)\n if self.isValidNode(self.posx - 2, self.posy + 2): neighbors.append(self.posx - 2 + (self.posy+2)*self.base)\n if self.isValidNode(self.posx - 3, self.posy): neighbors.append(self.posx - 3 + self.posy*self.base)\n if self.isValidNode(self.posx - 2, self.posy - 2): neighbors.append(self.posx - 2 + (self.posy-2)*self.base)\n if self.isValidNode(self.posx, self.posy - 3): neighbors.append(self.posx + (self.posy-3)*self.base)\n if self.isValidNode(self.posx + 2, self.posy - 2): neighbors.append(self.posx + 2 + (self.posy-2)*self.base)\n return neighbors\n\n\n# the nodes go like this:\n# 0 => bottom left\n# (base-1) => bottom right\n# base*(base-1) => top left\n# base**2 -1 => top right\ndef solve(start_nodeID, base):\n all_nodes = []\n #Traverse list is the list to keep track of which moves are made (the id numbers of nodes in a list)\n traverse_list = [start_nodeID]\n for i in range(0, base**2): all_nodes.append(Node(i, base))\n togo = dict()\n #Togo is a dictionary with (nodeID:[list of neighbors]) tuples\n togo[start_nodeID] = all_nodes[start_nodeID].getNeighbors()\n solution_count = 0\n\n\n while(True):\n # The search is exhausted\n if not traverse_list:\n print \"Somehow, the search tree is exhausted and you have reached the divine salvation.\"\n print \"Number of solutions:\" + str(solution_count)\n break\n\n # Get the next node to hop\n try:\n current_node_ID = togo[traverse_list[-1]].pop(0)\n except IndexError:\n del togo[traverse_list.pop()]\n continue\n\n # end condition check\n traverse_list.append(current_node_ID)\n if(len(traverse_list) == base**2):\n #OMG, a solution is found\n #print traverse_list\n solution_count += 1\n #Print solution count at a steady rate\n if(solution_count%100 == 0): \n print solution_count\n # The solution list can be returned (to visualize the solution in a simple GUI)\n #return traverse_list\n\n\n # get valid neighbors\n valid_neighbor_IDs = []\n candidate_neighbor_IDs = all_nodes[current_node_ID].getNeighbors()\n valid_neighbor_IDs = filter(lambda id: not id in traverse_list, candidate_neighbor_IDs)\n\n # if no valid neighbors, take a step back\n if not valid_neighbor_IDs:\n traverse_list.pop()\n continue\n\n # if there exists a neighbor which is accessible only through the current node (island)\n # and it is not the last one to go, the situation is not promising; so just eliminate that\n stuck_check = True\n if len(traverse_list) != base**2-1 and any(not filter(lambda id: not id in traverse_list, all_nodes[n].getNeighbors()) for n in valid_neighbor_IDs): stuck_check = False\n\n # if stuck\n if not stuck_check:\n traverse_list.pop()\n continue\n\n # sort the neighbors according to accessibility (the least accessible first)\n neighbors_ncount = []\n for neighbor in valid_neighbor_IDs:\n candidate_nn = all_nodes[neighbor].getNeighbors()\n 
valid_nn = [id for id in candidate_nn if not id in traverse_list]\n neighbors_ncount.append(len(valid_nn))\n n_dic = dict(zip(valid_neighbor_IDs, neighbors_ncount))\n sorted_ndic = sorted(n_dic.items(), key=operator.itemgetter(1))\n\n sorted_valid_neighbor_IDs = []\n for (node, ncount) in sorted_ndic: sorted_valid_neighbor_IDs.append(node)\n\n\n\n # if current node does have valid neighbors, add them to the front of togo list\n # in a sorted way\n togo[current_node_ID] = sorted_valid_neighbor_IDs\n\n\n# To display a solution simply\ndef drawGUI(size, solution):\n # GUI Code (If you can call it a GUI, though)\n import Tkinter\n root = Tkinter.Tk()\n canvas = Tkinter.Canvas(root, width=size*20, height=size*20)\n #canvas.create_rectangle(0, 0, size*20, size*20)\n canvas.pack()\n\n for x in range(0, size*20, 20):\n canvas.create_line(x, 0, x, size*20)\n canvas.create_line(0, x, size*20, x)\n\n cnt = 1\n for el in solution:\n canvas.create_text((el % size)*20 + 4,(el / size)*20 + 4,text=str(cnt), anchor=Tkinter.NW)\n cnt += 1\n root.mainloop()\n\n\nprint('Start of run')\n\n# it is the moment\nsolve(0, 10)\n\n#Optional, to draw a returned solution\n#drawGUI(10, solve(0, 10))\n\nraw_input('End of Run...')\n\nThanks to all everybody sharing their knowledge and ideas.\n",
"This is just an example of the http://en.wikipedia.org/wiki/Hamiltonian_path problem. German wikipedia claims that it is NP-hard.\n",
"An optimization can me made to check for islands (i.e. non-visited spaces with no valid neighbors.) and back out of the traverse until the island is eliminated. This would occur near the \"cheap\" side of a certain tree traverse. I guess the question is if the reduction is worth the expense.\n",
"I wanted to see if I could write a program that would come up with all possible solutions.\n#! /usr/bin/env perl\nuse Modern::Perl;\n\n{\n package Grid;\n use Scalar::Util qw'reftype';\n\n sub new{\n my($class,$width,$height) = @_;\n $width ||= 10;\n $height ||= $width;\n\n my $self = bless [], $class;\n\n for( my $x = 0; $x < $width; $x++ ){\n for( my $y = 0; $y < $height; $y++ ){\n $self->[$x][$y] = undef;\n }\n }\n\n for( my $x = 0; $x < $width; $x++ ){\n for( my $y = 0; $y < $height; $y++ ){\n $self->[$x][$y] = Grid::Elem->new($self,$x,$y);;\n }\n }\n\n return $self;\n }\n\n sub elem{\n my($self,$x,$y) = @_;\n no warnings 'uninitialized';\n if( @_ == 2 and reftype($x) eq 'ARRAY' ){\n ($x,$y) = (@$x);\n }\n die \"Attempted to use undefined var\" unless defined $x and defined $y;\n my $return = $self->[$x][$y];\n die unless $return;\n return $return;\n }\n\n sub done{\n my($self) = @_;\n for my $col (@$self){\n for my $item (@$col){\n return 0 unless $item->visit(undef);\n }\n }\n return 1;\n }\n\n sub reset{\n my($self) = @_;\n for my $col (@$self){\n for my $item (@$col){\n $item->reset;\n }\n }\n }\n\n sub width{\n my($self) = @_;\n return scalar @$self;\n }\n\n sub height{\n my($self) = @_;\n return scalar @{$self->[0]};\n }\n}{\n package Grid::Elem;\n use Scalar::Util 'weaken';\n\n use overload qw(\n \"\" stringify\n eq equal\n == equal\n );\n\n my %dir = (\n # x, y\n n => [ 0, 2],\n s => [ 0,-2],\n e => [ 2, 0],\n w => [-2, 0],\n\n ne => [ 1, 1],\n nw => [-1, 1],\n\n se => [ 1,-1],\n sw => [-1,-1],\n );\n\n sub new{\n my($class,$parent,$x,$y) = @_;\n weaken $parent;\n my $self = bless {\n parent => $parent,\n pos => [$x,$y]\n }, $class;\n\n $self->_init_possible;\n\n return $self;\n }\n\n sub _init_possible{\n my($self) = @_;\n my $parent = $self->parent;\n my $width = $parent->width;\n my $height = $parent->height;\n my($x,$y) = $self->pos;\n\n my @return;\n for my $dir ( keys %dir ){\n my($xd,$yd) = @{$dir{$dir}};\n my $x = $x + $xd;\n my $y = $y + $yd;\n\n next if $y < 0 or $height <= $y;\n next if $x < 0 or $width <= $x;\n\n push @return, $dir;\n $self->{$dir} = [$x,$y];\n }\n return @return if wantarray;\n return \\@return;\n }\n\n sub list_possible{\n my($self) = @_;\n return unless defined wantarray;\n\n # only return keys which are\n my @return = grep {\n $dir{$_} and defined $self->{$_}\n } keys %$self;\n\n return @return if wantarray;\n return \\@return;\n }\n\n sub parent{\n my($self) = @_;\n return $self->{parent};\n }\n\n sub pos{\n my($self) = @_;\n my @pos = @{$self->{pos}};\n return @pos if wantarray;\n return \\@pos;\n }\n\n sub visit{\n my($self,$v) = @_;\n my $return = $self->{visit} || 0;\n\n $v = 1 if @_ == 1;\n $self->{visit} = $v?1:0 if defined $v;\n\n return $return;\n }\n\n sub all_neighbors{\n my($self) = @_;\n return $self->neighbor( $self->list_possible );\n }\n sub neighbor{\n my($self,@n) = @_;\n return unless defined wantarray;\n return unless @n;\n\n @n = map { exists $dir{$_} ? 
$_ : undef } @n;\n\n my $parent = $self->parent;\n\n my @return = map {\n $parent->elem($self->{$_}) if defined $_\n } @n;\n\n if( @n == 1){\n my($return) = @return;\n #die unless defined $return;\n return $return;\n }\n return @return if wantarray;\n return \\@return;\n }\n\n BEGIN{\n for my $dir ( qw'n ne e se s sw w nw' ){\n no strict 'refs';\n *$dir = sub{\n my($self) = @_;\n my($return) = $self->neighbor($dir);\n die unless $return;\n return $return;\n }\n }\n }\n\n sub stringify{\n my($self) = @_;\n my($x,$y) = $self->pos;\n return \"($x,$y)\";\n }\n\n sub equal{\n my($l,$r) = @_;\n \"$l\" eq \"$r\";\n }\n\n sub reset{\n my($self) = @_;\n delete $self->{visit};\n return $self;\n }\n}\n\n# Main code block\n{\n my $grid = Grid->new();\n\n my $start = $grid->elem(0,0);\n my $dest = $grid->elem(-1,-1);\n\n my @all = solve($start,$dest);\n #say @$_ for @all;\n say STDERR scalar @all;\n}\n\nsub solve{\n my($current,$dest,$return,@stack) = @_;\n $return = [] unless $return;\n my %visit;\n $visit{$_} = 1 for @stack;\n\n die if $visit{$current};\n\n push @stack, $current->stringify;\n\n if( $dest == $current ){\n say @stack;\n\n push @$return, [@stack];\n }\n\n my @possible = $current->all_neighbors;\n @possible = grep{\n ! $visit{$_}\n } @possible;\n\n for my $next ( @possible ){\n solve($next,$dest,$return,@stack);\n }\n\n return @$return if wantarray;\n return $return;\n}\n\nThis program came up with more than 100,000 possible solutions before it was terminated. I sent STDOUT to a file, and it was more than 200 MB.\n",
"You could count the number of solutions exactly with a sweep-line dynamic programming algorithm.\n"
] |
[
15,
10,
8,
5,
1,
1,
0
] |
[] |
[] |
[
"knights_tour",
"python"
] |
stackoverflow_0000767912_knights_tour_python.txt
|
Q:
Python code to Daemonize a process?
Can anyone share an efficient code snippet to daemonize a process in Python?
A:
From http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
(Wayback link)
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import sys, os, time, atexit
from signal import SIGTERM
class Daemon:
"""
A generic daemon class.
Usage: subclass the Daemon class and override the run() method
"""
def __init__(self, pidfile, stdin='/dev/null',
stdout='/dev/null', stderr='/dev/null'):
self.stdin = stdin
self.stdout = stdout
self.stderr = stderr
self.pidfile = pidfile
def daemonize(self):
"""
do the UNIX double-fork magic, see Stevens' "Advanced
Programming in the UNIX Environment" for details (ISBN 0201563177)
http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16
"""
try:
pid = os.fork()
if pid > 0:
# exit first parent
sys.exit(0)
except OSError, e:
sys.stderr.write(
"fork #1 failed: %d (%s)\n" % (e.errno, e.strerror))
sys.exit(1)
# decouple from parent environment
os.chdir("/")
os.setsid()
os.umask(0)
# do second fork
try:
pid = os.fork()
if pid > 0:
# exit from second parent
sys.exit(0)
except OSError, e:
sys.stderr.write(
"fork #2 failed: %d (%s)\n" % (e.errno, e.strerror))
sys.exit(1)
# redirect standard file descriptors
sys.stdout.flush()
sys.stderr.flush()
si = file(self.stdin, 'r')
so = file(self.stdout, 'a+')
se = file(self.stderr, 'a+', 0)
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
# write pidfile
atexit.register(self.delpid)
pid = str(os.getpid())
file(self.pidfile,'w+').write("%s\n" % pid)
def delpid(self):
os.remove(self.pidfile)
def start(self):
"""
Start the daemon
"""
# Check for a pidfile to see if the daemon already runs
try:
pf = file(self.pidfile,'r')
pid = int(pf.read().strip())
pf.close()
except IOError:
pid = None
if pid:
message = "pidfile %s already exist. Daemon already running?\n"
sys.stderr.write(message % self.pidfile)
sys.exit(1)
# Start the daemon
self.daemonize()
self.run()
def stop(self):
"""
Stop the daemon
"""
# Get the pid from the pidfile
try:
pf = file(self.pidfile,'r')
pid = int(pf.read().strip())
pf.close()
except IOError:
pid = None
if not pid:
message = "pidfile %s does not exist. Daemon not running?\n"
sys.stderr.write(message % self.pidfile)
return # not an error in a restart
# Try killing the daemon process
try:
while 1:
os.kill(pid, SIGTERM)
time.sleep(0.1)
except OSError, err:
err = str(err)
if err.find("No such process") > 0:
if os.path.exists(self.pidfile):
os.remove(self.pidfile)
else:
print str(err)
sys.exit(1)
def restart(self):
"""
Restart the daemon
"""
self.stop()
self.start()
def run(self):
"""
You should override this method when you subclass Daemon.
It will be called after the process has been
daemonized by start() or restart().
"""
|
Python code to Daemonize a process?
|
Can anyone share an efficient code snippet to daemonize a process in Python?
|
[
"From http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/\n(Wayback link)\n#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\n\nimport sys, os, time, atexit\nfrom signal import SIGTERM \n\nclass Daemon:\n \"\"\"\n A generic daemon class.\n\n Usage: subclass the Daemon class and override the run() method\n \"\"\"\n def __init__(self, pidfile, stdin='/dev/null', \n stdout='/dev/null', stderr='/dev/null'):\n self.stdin = stdin\n self.stdout = stdout\n self.stderr = stderr\n self.pidfile = pidfile\n\n def daemonize(self):\n \"\"\"\n do the UNIX double-fork magic, see Stevens' \"Advanced \n Programming in the UNIX Environment\" for details (ISBN 0201563177)\n http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16\n \"\"\"\n try: \n pid = os.fork() \n if pid > 0:\n # exit first parent\n sys.exit(0) \n except OSError, e: \n sys.stderr.write(\n \"fork #1 failed: %d (%s)\\n\" % (e.errno, e.strerror))\n sys.exit(1)\n\n # decouple from parent environment\n os.chdir(\"/\") \n os.setsid() \n os.umask(0) \n\n # do second fork\n try: \n pid = os.fork() \n if pid > 0:\n # exit from second parent\n sys.exit(0) \n except OSError, e: \n sys.stderr.write(\n \"fork #2 failed: %d (%s)\\n\" % (e.errno, e.strerror))\n sys.exit(1) \n\n # redirect standard file descriptors\n sys.stdout.flush()\n sys.stderr.flush()\n si = file(self.stdin, 'r')\n so = file(self.stdout, 'a+')\n se = file(self.stderr, 'a+', 0)\n os.dup2(si.fileno(), sys.stdin.fileno())\n os.dup2(so.fileno(), sys.stdout.fileno())\n os.dup2(se.fileno(), sys.stderr.fileno())\n\n # write pidfile\n atexit.register(self.delpid)\n pid = str(os.getpid())\n file(self.pidfile,'w+').write(\"%s\\n\" % pid)\n\n def delpid(self):\n os.remove(self.pidfile)\n\n def start(self):\n \"\"\"\n Start the daemon\n \"\"\"\n # Check for a pidfile to see if the daemon already runs\n try:\n pf = file(self.pidfile,'r')\n pid = int(pf.read().strip())\n pf.close()\n except IOError:\n pid = None\n\n if pid:\n message = \"pidfile %s already exist. Daemon already running?\\n\"\n sys.stderr.write(message % self.pidfile)\n sys.exit(1)\n\n # Start the daemon\n self.daemonize()\n self.run()\n\n def stop(self):\n \"\"\"\n Stop the daemon\n \"\"\"\n # Get the pid from the pidfile\n try:\n pf = file(self.pidfile,'r')\n pid = int(pf.read().strip())\n pf.close()\n except IOError:\n pid = None\n\n if not pid:\n message = \"pidfile %s does not exist. Daemon not running?\\n\"\n sys.stderr.write(message % self.pidfile)\n return # not an error in a restart\n\n # Try killing the daemon process \n try:\n while 1:\n os.kill(pid, SIGTERM)\n time.sleep(0.1)\n except OSError, err:\n err = str(err)\n if err.find(\"No such process\") > 0:\n if os.path.exists(self.pidfile):\n os.remove(self.pidfile)\n else:\n print str(err)\n sys.exit(1)\n\n def restart(self):\n \"\"\"\n Restart the daemon\n \"\"\"\n self.stop()\n self.start()\n\n def run(self):\n \"\"\"\n You should override this method when you subclass Daemon. \n It will be called after the process has been\n daemonized by start() or restart().\n \"\"\"\n\n"
] |
[
4
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001417631_python.txt
|
Q:
Python: Warnings and logging verbose limit
I want to unify the whole logging facility of my app. Any warning raises an exception; I then catch it and pass it to the logger. But the question: is there any mute facility in logging? Sometimes the logger becomes too verbose. And since warnings can get too noisy, is there any verbosity limit in warnings?
http://docs.python.org/library/logging.html
http://docs.python.org/library/warnings.html
A:
Not only are there log levels, but there is a really flexible way of configuring them. If you are using named logger objects (e.g., logger = logging.getLogger(...)) then you can configure them appropriately. That will let you configure verbosity on a subsystem-by-subsystem basis where a subsystem is defined by the logging hierarchy.
The other option is to use logging.Filter and Warning filters to limit the output. I haven't used this method before but it looks like it might be a better fit for your needs.
Give PEP-282 a read for a good prose description of the Python logging package. I think that it describes the functionality much better than the module documentation does.
Edit after Clarification
You might be able to handle the logging portion of this using a custom class based on logging.Logger and registered with logging.setLoggerClass(). It really sounds like you want something similar to syslog's "Last message repeated 9 times". Unfortunately I don't know of an implementation of this anywhere. You might want to see if twisted.python.log supports this functionality.
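For the named-logger approach, here is a short sketch of per-subsystem verbosity using the standard logging hierarchy (the "myapp.*" names are placeholders):
import logging

logging.basicConfig(level=logging.INFO)  # default level for the whole app
logging.getLogger("myapp.network").setLevel(logging.WARNING)  # quiet a chatty subsystem
logging.getLogger("myapp.db").setLevel(logging.DEBUG)         # verbose where needed

log = logging.getLogger("myapp.network")
log.info("suppressed at WARNING level")
log.warning("still emitted")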
A:
From the very source you mentioned:
there are the log levels; use them wisely ;-)
LEVELS = {'debug': logging.DEBUG,
'info': logging.INFO,
'warning': logging.WARNING,
'error': logging.ERROR,
'critical': logging.CRITICAL}
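For instance, one common way to wire that dict to a command-line verbosity option (the argument handling here is only a sketch):
import sys
import logging

level_name = sys.argv[1] if len(sys.argv) > 1 else 'warning'
logging.basicConfig(level=LEVELS.get(level_name, logging.WARNING))
logging.warning('visible at the default level')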
A:
This will be a problem if you plan to make all logging calls from some blind error handler that doesn't know anything about the code that raised the error, which is what your question sounds like. How will you decide which logging calls get made and which don't?
The more standard practice is to use such blocks to recover if possible, and log an error (really, if it is an error that you weren't specifically prepared for, you want to know about it; use a high level). But don't rely on these blocks for all your state/debug information. Better to sprinkle your code with logging calls before it gets to the error-handler. That way, you can observe useful run-time information about a system when it is NOT failing and you can make logging calls of different severity. For example:
import logging
from traceback import format_exc
logger = logging.getLogger() # Gives the root logger. Change this for better organization
# Add your appenders or what have you
def handle_error(e):
logger.error("Unexpected error found")
logger.warn(format_exc()) #put the traceback in the log at lower level
... #Your recovery code
def do_stuff():
logger.info("Program started")
... #Your main code
logger.info("Stuff done")
if __name__ == "__main__":
try:
do_stuff()
except Exception,e:
handle_error(e)
|
Python: Warnings and logging verbose limit
|
I want to unify the whole logging facility of my app. Any warning raises an exception; I then catch it and pass it to the logger. But the question: is there any mute facility in logging? Sometimes the logger becomes too verbose. And since warnings can get too noisy, is there any verbosity limit in warnings?
http://docs.python.org/library/logging.html
http://docs.python.org/library/warnings.html
|
[
"Not only are there log levels, but there is a really flexible way of configuring them. If you are using named logger objects (e.g., logger = logging.getLogger(...)) then you can configure them appropriately. That will let you configure verbosity on a subsystem-by-subsystem basis where a subsystem is defined by the logging hierarchy.\nThe other option is to use logging.Filter and Warning filters to limit the output. I haven't used this method before but it looks like it might be a better fit for your needs.\nGive PEP-282 a read for a good prose description of the Python logging package. I think that it describes the functionality much better than the module documentation does.\nEdit after Clarification\nYou might be able to handle the logging portion of this using a custom class based on logging.Logger and registered with logging.setLoggerClass(). It really sounds like you want something similar to syslog's \"Last message repeated 9 times\". Unfortunately I don't know of an implementation of this anywhere. You might want to see if twisted.python.log supports this functionality.\n",
"from the very source you mentioned.\nthere are the log-levels, use the wisely ;-)\nLEVELS = {'debug': logging.DEBUG,\n 'info': logging.INFO,\n 'warning': logging.WARNING,\n 'error': logging.ERROR,\n 'critical': logging.CRITICAL}\n\n",
"This will be a problem if you plan to make all logging calls from some blind error handler that doesn't know anything about the code that raised the error, which is what your question sounds like. How will you decide which logging calls get made and which don't?\nThe more standard practice is to use such blocks to recover if possible, and log an error (really, if it is an error that you weren't specifically prepared for, you want to know about it; use a high level). But don't rely on these blocks for all your state/debug information. Better to sprinkle your code with logging calls before it gets to the error-handler. That way, you can observe useful run-time information about a system when it is NOT failing and you can make logging calls of different severity. For example:\nimport logging\nfrom traceback import format_exc\nlogger = logging.getLogger() # Gives the root logger. Change this for better organization\n# Add your appenders or what have you\ndef handle_error(e):\n logger.error(\"Unexpected error found\")\n logger.warn(format_exc()) #put the traceback in the log at lower level\n ... #Your recovery code\ndef do_stuff():\n logger.info(\"Program started\")\n ... #Your main code\n logger.info(\"Stuff done\")\nif __name__ == \"__main__\":\n try:\n do_stuff()\n except Exception,e:\n handle_error(e)\n\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"logging",
"python",
"warnings"
] |
stackoverflow_0001417665_logging_python_warnings.txt
|
Q:
Sending cookies in a SOAP request using Suds
I'm trying to access a SOAP API using Suds. The SOAP API documentation states that I have to provide three cookies with some login data. How can I accomplish this?
A:
Set a "Cookie" HTTP request header containing the required name/value pairs. This is how cookie values are usually transmitted in HTTP-based systems. You can add multiple key/value pairs in the same HTTP header.
Single Cookie
Cookie: name1=value1
Multiple Cookies (separated by semicolons)
Cookie: name1=value1; name2=value2
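With Suds specifically, one way to attach that header is via the client options (a sketch — it assumes a suds version whose Client.set_options accepts an http headers dict, and the WSDL URL, cookie names, and operation name are placeholders):
from suds.client import Client

client = Client('http://example.com/service?wsdl')            # placeholder WSDL URL
cookies = 'sessionid=abc123; user=jdoe; token=xyz'            # the three login cookies
client.set_options(headers={'Cookie': cookies})               # sent with every SOAP request
result = client.service.SomeOperation()                       # hypothetical operation name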
|
Sending cookies in a SOAP request using Suds
|
I'm trying to access a SOAP API using Suds. The SOAP API documentation states that I have to provide three cookies with some login data. How can I accomplish this?
|
[
"Set a \"Cookie\" HTTP Request Header having the required name/value pairs. This is how Cookie values are usually transmitted in HTTP Based systems. You can add multiple key/value pairs in the same http header.\nSingle Cookie\n\nCookie: name1=value1\n\nMultiple Cookies (seperated by semicolons)\n\nCookie: name1=value1; name2=value2\n\n"
] |
[
4
] |
[] |
[] |
[
"cookies",
"python",
"soap",
"suds"
] |
stackoverflow_0001417902_cookies_python_soap_suds.txt
|
Q:
Twisted - listen to multiple ports for multiple processes with one reactor
I need to run multiple instances of my server app, each on its own port. It's not a problem if I start these with os.system or subprocess.Popen, but I'd like to have some process communication with multiprocessing.
I'd like to somehow dynamically set up listening on a different port from each process. Just calling reactor.listenTCP doesn't do it, because I get a strange Errno 22 while stopping the reactor. I'm also pretty sure it's not the correct way to do it. I looked for examples, but couldn't find anything. Any help is appreciated.
EDIT:
Thanks Tzury, it's kind of what I'd like to get. But I have to dynamically add ports to listen on. For example:
from twisted.internet import reactor
from multiprocessing import Process
def addListener(self, port, site):
    ''' Called when I have to add new port to listen to.
    site - factory handling input, NevowSite in my case'''
    p = Process(target=f, args=(port, func))
    p.start()

def f(self, port, func):
    ''' Runs as a new process'''
    reactor.listenTCP(port, func)
I need a way to neatly stop such processes. Just calling reactor.stop() to stop a child process doesn't do it.
This is the error I'm getting when I try to stop a process:
--- <exception caught here> ---
File "/usr/share/exe/twisted/internet/tcp.py", line 755, in doRead
skt, addr = self.socket.accept()
File "/usr/lib/python2.6/socket.py", line 195, in accept
sock, addr = self._sock.accept()
<class 'socket.error'>: [Errno 22] Invalid argument
Dimitri.
A:
I am not sure what error you are getting.
The following is an example from the Twisted site (modified).
As you can see, it listens on two ports, and can listen on many more.
from twisted.internet.protocol import Protocol, Factory
from twisted.internet import reactor

class QOTD(Protocol):

    def connectionMade(self):
        self.transport.write("An apple a day keeps the doctor away\r\n")
        self.transport.loseConnection()

# Next lines are magic:
factory = Factory()
factory.protocol = QOTD

# 8007 is the port you want to run under. Choose something >1024
reactor.listenTCP(8007, factory)
reactor.listenTCP(8008, factory)
reactor.run()
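If the real goal is to add and remove listening ports at runtime inside a single reactor, the object returned by listenTCP can be kept and shut down later — a rough sketch of that idea (the helper names here are made up):
from twisted.internet import reactor

open_ports = {}

def add_listener(port_number, factory):
    # listenTCP returns an object implementing IListeningPort; keep it around
    open_ports[port_number] = reactor.listenTCP(port_number, factory)

def remove_listener(port_number):
    # stopListening returns a Deferred that fires once the socket is closed
    return open_ports.pop(port_number).stopListening()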
|
Twisted - listen to multiple ports for multiple processes with one reactor
|
i need to run multiple instances of my server app each on it's own port. It's not a problem if i start these with os.system or subprocess.Popen, but i'd like to have some process communication with multiprocessing.
I'd like to somehow dynamically set up listening to different port from different processes. Just calling reactor.listenTCP doesn't do it, because i getting strange Errno 22 while stopping reactor. I'm also pretty sure it's not the correct way to do it. I looked for examples, but couldn't find anything. Any help is appreciated.
EDIT:
Thanks Tzury, it's kinda what i'd like to get. But i have to dynamicly add ports to listen. For Example
from twisted.internet import reactor
from multiprocessing import Process
def addListener(self, port, site):
''' Called when I have to add new port to listen to.
site - factory handling input, NevowSite in my case'''
p = Process(target=f, args=(port, func))
p.start()
def f(self, port, func):
''' Runs as a new process'''
reactor.listenTCP(port, func)
I need a way to neatly stop such processes. Just calling reactor.stop() stop a child process doesn't do it.
This is the error i'm gettin when i trying to stop a process
--- <exception caught here> ---
File "/usr/share/exe/twisted/internet/tcp.py", line 755, in doRead
skt, addr = self.socket.accept()
File "/usr/lib/python2.6/socket.py", line 195, in accept
sock, addr = self._sock.accept()
<class 'socket.error'>: [Errno 22] Invalid argument
Dimitri.
|
[
"I am not sure what error you are getting.\nThe following is an example from twisted site (modified)\nAnd as you can see, it listen on two ports, and can listen to many more.\nfrom twisted.internet.protocol import Protocol, Factory\nfrom twisted.internet import reactor\n\nclass QOTD(Protocol):\n\n def connectionMade(self):\n self.transport.write(\"An apple a day keeps the doctor away\\r\\n\") \n self.transport.loseConnection()\n\n# Next lines are magic:\nfactory = Factory()\nfactory.protocol = QOTD\n\n# 8007 is the port you want to run under. Choose something >1024\nreactor.listenTCP(8007, factory)\nreactor.listenTCP(8008, factory)\nreactor.run()\n\n"
] |
[
12
] |
[] |
[] |
[
"process",
"python",
"twisted"
] |
stackoverflow_0001411281_process_python_twisted.txt
|
Q:
Time out error while creating cgi.FieldStorage object
Hey, any idea about the timeout error I am getting here:
Error trace:
File "/array/purato/python2.6/lib/python2.6/site-packages/cherrypy/_cprequest.py", line 606, in respond
cherrypy.response.body = self.handler()
File "/array/purato/python2.6/lib/python2.6/site-packages/cherrypy/_cpdispatch.py", line 25, in __call__
return self.callable(*self.args, **self.kwargs)
File "sync_server.py", line 853, in put_file
return RequestController_v1_0.put_file(self, *args, **kw)
File "sync_server.py", line 409, in put_file
saved_path, tgt_path, root_folder = self._save_file(client_id, theFile)
File "sync_server.py", line 404, in _save_file
saved_path, tgt_path, root_folder = get_posted_file(cherrypy.request, 'theFile', staging_path)
File "sync_server.py", line 1031, in get_posted_file
, keep_blank_values=True)
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 496, in __init__
self.read_multi(environ, keep_blank_values, strict_parsing)
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 620, in read_multi
environ, keep_blank_values, strict_parsing)
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 498, in __init__
self.read_single()
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 635, in read_single
self.read_lines()
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 657, in read_lines
self.read_lines_to_outerboundary()
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 685, in read_lines_to_outerboundary
line = self.fp.readline(1<<16)
File "/array/purato/python2.6/lib/python2.6/site-packages/cherrypy/wsgiserver/__init__.py", line 206, in readline
data = self.rfile.readline(size)
File "/array/purato/python2.6/lib/python2.6/site-packages/cherrypy/wsgiserver/__init__.py", line 868, in readline
data = self.recv(self._rbufsize)
File "/array/purato/python2.6/lib/python2.6/site-packages/cherrypy/wsgiserver/__init__.py", line 747, in recv
return self._sock.recv(size)
timeout: timed out
Here is the code which is getting called:
def get_posted_file(request, form_field_name, tgt_folder, tgt_fname=None):
    logger.debug('get_posted_file: %s' % request.headers['Last-Modified'])
    lowerHeaderMap = {}
    for key, value in request.headers.items():
        lowerHeaderMap[key.lower()] = value
--->    dataDict = TmpFieldStorage(fp=request.rfile, headers=lowerHeaderMap, environ={'REQUEST_METHOD':'POST'}
                                   , keep_blank_values=True)
and:
class TmpFieldStorage(cgi.FieldStorage):
    """
    Use a named temporary file to allow creation of hard link to final destination
    """
    def make_file(self, binary=None):
        tmp_folder = os.path.join(get_filer_root(cherrypy.request.login), 'sync_tmp')
        if not os.path.exists(tmp_folder):
            os.makedirs(tmp_folder)
        return tempfile.NamedTemporaryFile(dir=tmp_folder)
A:
environ={'REQUEST_METHOD':'POST'}
That seems a rather deficient environ. The CGI spec requires many more environment variables to be in there, some of which the cgi module is going to need.
In particular there is no CONTENT_LENGTH header. Without it, cgi is defaulting to reading the entire contents of the stream up until EOF. But since it is (probably) a network stream rather than a file there will be no EOF (or at least not one directly at the end of the submission), so the form reader will be sitting there waiting for more input that will never come. Timeout.
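A sketch of the kind of environ the answer is pointing at — passing the request's own Content-Length (and Content-Type) through so cgi knows where the body ends; untested, and it reuses the poster's variable names:
dataDict = TmpFieldStorage(
    fp=request.rfile,
    headers=lowerHeaderMap,
    environ={'REQUEST_METHOD': 'POST',
             'CONTENT_TYPE': lowerHeaderMap.get('content-type', ''),
             'CONTENT_LENGTH': lowerHeaderMap.get('content-length', '-1')},
    keep_blank_values=True)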
|
Time out error while creating cgi.FieldStorage object
|
Hey, any idea about what is the timeout error which I am getting here:
Error trace:
File "/array/purato/python2.6/lib/python2.6/site-packages/cherrypy/_cprequest.py", line 606, in respond
cherrypy.response.body = self.handler()
File "/array/purato/python2.6/lib/python2.6/site-packages/cherrypy/_cpdispatch.py", line 25, in __call__
return self.callable(*self.args, **self.kwargs)
File "sync_server.py", line 853, in put_file
return RequestController_v1_0.put_file(self, *args, **kw)
File "sync_server.py", line 409, in put_file
saved_path, tgt_path, root_folder = self._save_file(client_id, theFile)
File "sync_server.py", line 404, in _save_file
saved_path, tgt_path, root_folder = get_posted_file(cherrypy.request, 'theFile', staging_path)
File "sync_server.py", line 1031, in get_posted_file
, keep_blank_values=True)
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 496, in __init__
self.read_multi(environ, keep_blank_values, strict_parsing)
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 620, in read_multi
environ, keep_blank_values, strict_parsing)
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 498, in __init__
self.read_single()
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 635, in read_single
self.read_lines()
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 657, in read_lines
self.read_lines_to_outerboundary()
File "/array/purato/python2.6/lib/python2.6/cgi.py", line 685, in read_lines_to_outerboundary
line = self.fp.readline(1<<16)
File "/array/purato/python2.6/lib/python2.6/site-packages/cherrypy/wsgiserver/__init__.py", line 206, in readline
data = self.rfile.readline(size)
File "/array/purato/python2.6/lib/python2.6/site-packages/cherrypy/wsgiserver/__init__.py", line 868, in readline
data = self.recv(self._rbufsize)
File "/array/purato/python2.6/lib/python2.6/site-packages/cherrypy/wsgiserver/__init__.py", line 747, in recv
return self._sock.recv(size)
timeout: timed out
Here is the code which is getting called:
def get_posted_file(request, form_field_name, tgt_folder, tgt_fname=None):
logger.debug('get_posted_file: %s' % request.headers['Last-Modified'])
lowerHeaderMap = {}
for key, value in request.headers.items():
lowerHeaderMap[key.lower()] = value
---> dataDict = TmpFieldStorage(fp=request.rfile, headers=lowerHeaderMap, environ={'REQUEST_METHOD':'POST'}
, keep_blank_values=True)
and:
class TmpFieldStorage(cgi.FieldStorage):
"""
Use a named temporary file to allow creation of hard link to final destination
"""
def make_file(self, binary=None):
tmp_folder = os.path.join(get_filer_root(cherrypy.request.login), 'sync_tmp')
if not os.path.exists(tmp_folder):
os.makedirs(tmp_folder)
return tempfile.NamedTemporaryFile(dir=tmp_folder)
|
[
"\nenviron={'REQUEST_METHOD':'POST'} \n\nThat seems a rather deficient environ. The CGI spec requires many more environment variables to be in there, some of which the cgi module is going to need.\nIn particular there is no CONTENT_LENGTH header. Without it, cgi is defaulting to reading the entire contents of the stream up until EOF. But since it is (probably) a network stream rather than a file there will be no EOF (or at least not one directly at the end of the submission), so the form reader will be sitting there waiting for more input that will never come. Timeout.\n"
] |
[
0
] |
[] |
[] |
[
"cgi",
"cherrypy",
"python"
] |
stackoverflow_0001417918_cgi_cherrypy_python.txt
|
Q:
django newbie. Having trouble with ModelForm
I'm trying to write a very simple Django app. I can't get it to show my form inside my template.
<form ...>
{{form.as_p}}
</form>
it shows absolutely nothing. If I add a submit button, it only shows that.
Do I have to declare a form object that inherits from forms.Form? Can't it be done with ModelForms?
[UPDATE] Solved! (Apologies for wasting your time.)
In my urls file I had:
(r'login/$', direct_to_template, {'template':'register.html'})
Switched to:
(r'login/$','portal.views.register')
And yes, I feel terrible.
Background:
I have a Student model, and I have a registration page. When it is accessed, it should display a text field asking for the student's name. If the student completes that field, it gets saved.
#models.py
class Student(models.Model):
    name = models.CharField(max_length=50)

#forms.py
class StudentForm(forms.ModelForm):
    class Meta:
        model = Student
So, here is my view:
def register(request):
    if request.method == 'POST':
        form = StudentForm(request.POST)
        if form.is_valid():
            form.save()
            return render_to_response('/thanks/')
    else:
        student = Student()
        form = StudentForm(instance=student)
    return render_to_response('register.html', {'form': form})
A:
The problem is in your view. You will have no existing student object to retrieve from the database. The following code sample will help you implement a "create" view.
As a side note, you might like using the direct_to_template generic view function to make your life a bit easier.
def add_student(request):
    if request.method == 'POST':
        form = StudentForm(request.POST)
        if form.is_valid():
            new_student = form.save()

            return HttpResponseRedirect('/back/to/somewhere/on/success/')
    else:
        form = StudentForm()

    return direct_to_template(request,
                              'register.html',
                              {'form':form})
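For completeness, the view still has to be wired to a URL that actually calls it, which was the poster's real problem — roughly like this (Django 1.0/1.1-era urls.py syntax, with the module path taken from the question):
from django.conf.urls.defaults import patterns

urlpatterns = patterns('',
    (r'^login/$', 'portal.views.register'),
)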
|
django newbie. Having trouble with ModelForm
|
i'm trying to write a very simple django app. I cant get it to show my form inside my template.
<form ...>
{{form.as_p}}
</form>
it shows absolutely nothing. If I add a submit button, it only shows that.
Do I have to declare a form object that inherits from forms.Form ? Cant it be done with ModelForms?
[UPDATE]Solved! (apologize for wasting your time)
In my urls file I had:
(r'login/$',direct_to_template, {'template':'register.html'}
Switched to:
(r'login/$','portal.views.register')
And yes, I feel terrible.
Background:
I have a Student model, and I have a registration page. When it is accessed, it should display a textfield asking for students name. If the student completes that field, then it saves it.
#models.py
class Student(models.Model):
name = models.CharField(max_length =50)
#forms.py
class StudentForm (forms.ModelForm):
class Meta:
model = Student
So, here is my view:
def register(request):
if request.method == 'POST':
form = StudentForm(request.POST)
if form.is_valid():
form.save()
return render_to_response('/thanks/')
else:
student = Student()
form = StudentForm(instance =student)
return render_to_response('register.html',{'form':form})
|
[
"The problem is in your view. You will have no existing student object to retrieve from the database. The following code sample will help you implement an \"create\" view.\nAs a side note, you might like using the direct_to_template generic view function to make your life a bit easier.\ndef add_student(request):\n if request.method == 'POST':\n form = StudentForm(request.POST)\n if form.is_valid():\n new_student = form.save()\n\n return HttpResponseRedirect('/back/to/somewhere/on/success/')\n else:\n form = StudentForm()\n\n return direct_to_template(request,\n 'register.html',\n {'form':form})\n\n"
] |
[
2
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001418149_django_python.txt
|
Q:
Which is more pythonic for array removal?
I'm removing an item from an array if it exists.
Two ways I can think of to do this
Way #1
# x array, r item to remove
if r in x :
    x.remove( r )
Way #2
try :
    x.remove( r )
except :
    pass
Timing it shows the try/except way can be faster
(sometimes I'm getting:)
1.16225508968e-06
8.80804972547e-07
1.14314196588e-06
8.73752536492e-07
import timeit
runs = 10000
x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',
'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',
'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',
'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',
]
r = 'a'
code1 ="""
x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',
'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',
'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',
'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',
]
r = 'a'
if r in x :
    x.remove(r)
"""
print timeit.Timer( code1 ).timeit( runs ) / runs
code2 ="""
x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',
'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',
'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',
'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',
]
r = 'a'
try :
    x.remove( r )
except :
    pass
"""
print timeit.Timer( code2 ).timeit( runs ) / runs
Which is more pythonic?
A:
I've always gone with the first method. if in reads far more clearly than exception handling does.
A:
that would be:
try:
    x.remove(r)
except ValueError:
    pass
btw, you should have tried to remove an item that is not in the list, to have a comprehensive comparison.
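Following that suggestion, a rough variant of the benchmark for the miss case — 'zzz' is deliberately absent, so the try/except version pays for the exception on every pass:
import timeit

setup = "x = ['a', 'b', 'c']; r = 'zzz'"   # 'zzz' is never in the list

code1 = """
if r in x :
    x.remove( r )
"""
code2 = """
try :
    x.remove( r )
except :
    pass
"""
print timeit.Timer(code1, setup).timeit(100000) / 100000
print timeit.Timer(code2, setup).timeit(100000) / 100000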
A:
Speed depends on the ratio of hits to misses. To be pythonic, choose the clearer method.
Personally I think way #1 is clearer (it takes fewer lines to have an 'if' block rather than an exception block and also uses less brain space). It will also be faster when there are more hits than misses (an exception is more expensive than skipping an if block).
A:
The try/except way
A:
The first way looks cleaner. The second looks like a lot of extra effort just to remove an item from a list.
There's nothing about it in PEP-8, though, so whichever you prefer is the 'real' answer.
Speaking of PEP-8... having that space before the colon falls under the definition of 'extraneous whitespace'.
|
Which is more pythonic for array removal?
|
I'm removing an item from an array if it exists.
Two ways I can think of to do this
Way #1
# x array, r item to remove
if r in x :
x.remove( r )
Way #2
try :
x.remove( r )
except :
pass
Timing it shows the try/except way can be faster
(some times i'm getting:)
1.16225508968e-06
8.80804972547e-07
1.14314196588e-06
8.73752536492e-07
import timeit
runs = 10000
x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',
'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',
'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',
'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',
]
r = 'a'
code1 ="""
x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',
'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',
'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',
'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',
]
r = 'a'
if r in x :
x.remove(r)
"""
print timeit.Timer( code1 ).timeit( runs ) / runs
code2 ="""
x = [ '101', '102', '103', '104', '105', 'a', 'b', 'c',
'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', '111', '112', '113',
'x', 'y', 'z', 'w', 'wwwwwww', 'aeiojwaef', 'iweojfoigj', 'oiowow',
'oiweoiwioeiowe', 'oiwjaoigjoaigjaowig',
]
r = 'a'
try :
x.remove( r )
except :
pass
"""
print timeit.Timer( code2 ).timeit( runs ) / runs
Which is more pythonic?
|
[
"I've always gone with the first method. if in reads far more clearly than exception handling does.\n",
"that would be:\ntry:\n x.remove(r)\nexcept ValueError:\n pass\n\nbtw, you should have tried to remove an item that is not in the list, to have a comprehensive comparison.\n",
"Speed depends on the ratio of hits to misses. To be pythonic choose the clearer method.\nPersonally I think way#1 is clearer (It takes less lines to have an 'if' block rather than an exception block and also uses less brain space). It will also be faster when there are more hits than misses (an exception is more expensive than skipping a if block).\n",
"The try/except way\n",
"The first way looks cleaner. The second looks like a lot of extra effort just to remove an item from a list.\nThere's nothing about it in PEP-8, though, so whichever you prefer is the 'real' answer.\nSpeaking of PEP-8... having that space before the colon falls under the definition of 'extraneous whitespace'.\n"
] |
[
6,
5,
3,
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001418266_python.txt
|
Q:
Access an instance from Terminal
Can't figure this out. In Terminal, I import a module which instantiates a class, and I haven't figured out how to access that instance. Of course, I can always instantiate it in Terminal:
Server=Data.ServerData()
Then I can get a result:
Server.Property().DefaultChart
However, I want to skip that step, getting the result directly from the instance already running in the module. I think Data.Server in this case should load the Server instance from when I imported Data:
Data.Server.Property().DefaultChart
>>> AttributeError: 'module' object has no attribute 'Server'
So how to access the running instance from Terminal?
A:
If importing Data.py implicitly creates an instance of the Data.ServerData class (somewhat dubious, but OK in certain cases), that still tells us nothing about how that module chose to name that one instance. Do dir(Data) at the >>> prompt to see all the names defined in the Data module; if you want to see what names (if any!) have values that are instances of Data.ServerData, e.g.:
>>> [n for n in dir(Data) if isinstance(getattr(Data,n), Data.ServerData)]
Reading Data.py's source code might be simpler, but you do have many other options for such introspection to find out exactly what's going on (and how it differs from what you EXPECTED [[not sure on what basis!]] to be going on).
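For a concrete picture, the situation being probed would look roughly like this — a hypothetical Data.py that binds its single instance to some module-level name at import time (the name server_instance is invented here):
# Data.py (hypothetical layout)
class ServerData(object):
    def Property(self):
        return self          # stand-in; the real method returns a settings object

server_instance = ServerData()   # created once, the first time "import Data" runs
Whatever that module-level name really is, it is what you would reach from the Terminal session that imported the module, e.g. Data.server_instance.Property().DefaultChart.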
|
Access an instance from Terminal
|
Can't figure this out. In Terminal, I import a module which instantiates a class, which I haven't figured out how to access. Of course, I can always instantiate in Terminal:
Server=Data.ServerData()
Then I can get a result:
Server.Property().DefaultChart
However, I want to skip that step getting the result directly from the instance already running in the module. I think Data.Server in this case should load the Server instance from when I imported Data:
Data.Server.Property().DefaultChart
>>> AttributeError: 'module' object has no attribute 'Server'
So how to access the running instance from Terminal?
|
[
"If importing Data.py implicitly creates an instance of the Data.ServerData class (somewhat dubious, but OK in certain cases), that still tells us nothing about how that module chose to name that one instance. Do dir(Data) at the >>> prompt to see all the names defined in the Data module; if you want to see what names (if any!) have values that are instances of Data.ServerData, e.g.:\n>>> [n for n in dir(Data) if isinstance(getattr(Data,n), Data.ServerData)]\n\nReading Data.py's source code might be simpler, but you do have many other options for such introspection to find out exactly what's going on (and how it differ from what you EXPECTED [[not sure on what basis!]] to be going on).\n"
] |
[
2
] |
[] |
[] |
[
"class",
"instance",
"interactive",
"module",
"python"
] |
stackoverflow_0001418262_class_instance_interactive_module_python.txt
|
Q:
Python implementation question
Hey. I have a problem I'm trying to solve in Python and I can't think of a clever implementation. The input is a string of letters. Some of them represent variables, others represent operators, and I want to iterate over a large number of values for the variables (different configurations).
I think an equivalent question would be that you get an equation with many variables stored as a string and you want to see the solution when you substitute x, y, z etc. for many different values, or to satisfy a boolean formula.
I can't think of any clever data structure that would implement this - just 'setting' the formula every time, substituting variables, takes more time than actually evaluating it.
I know it's a bit abstract - but does anyone have any ideas or experience with something similar?
Someone suggested I want to implement an evaluator. That's true but beside the point. For the sake of the question, suppose the eval(string) function is already implemented. What is an efficient data structure that will hold the values and allow them to be changed cleanly and efficiently every time? Surely not a string! It has to be a modifiable list. And if it has a few instances of the variable x, it should be able to access them all quickly and change their value before evaluation, etc.
A:
As Ira said, you need an expression evaluator, built with something like Lex/Yacc, to do this job; there are plenty available here:
http://wiki.python.org/moin/LanguageParsing
I have been using pyparsing and it works great for that kind of thing.
A:
What you want is obviously an expression evaluator.
Either you use a built-in facility in the language, typically called "eval" (I don't know if Python offers this), or you build an expression parser/evaluator.
To get some of ideas of how to build such an expression evaluator,
you can look at the following SO golf exercise:
Code Golf: Mathematical expression evaluator (that respects PEMDAS)
To do what you want, you'd have to translate the basic ideas (they're all pretty much the same) to Pythonese, and add a facility to parse variable names and look up their values.
That should be a very straightforward extension.
A:
Parse that string into a tree (or equivalently interpretable data structure), just once, and then repeatedly use a function to "interpret the tree" for each variable-assignment-set of interest. (You could even generate Python bytecode as the "interpretable data structure" so you can just use eval as the "interpret the tree" -- that makes for slow generation, but that's needed only once, and fast interpretation).
As you say that's a bit abstract so let's give a concrete, if over-simplistic, example. Say for example that the variables are the letters x, y, z, t, and the operators are a for addition and s for subtraction -- strings of adjacent letters implicitly mean high-priority multiplication, as in common mathematical convention; no parentheses, and strict left-to-right execution (i.e. no operator precedence, beyond multiplication). Every character except these 6 must be ignored. Here, then, is a very ad-hoc parser and Python bytecode generator:
class BadString(Exception): pass

def makeBytecode(thestring):
    theoperator = dict(a='+', s='-')
    python = []
    state = 'start'
    for i, letter in enumerate(thestring):
        if letter in 'xyzt':
            if state == 'start':
                python.append(letter)
                state = 'variable'
            elif state == 'variable':
                python.append('*')
                python.append(letter)
        elif letter in 'as':
            if state == 'start':
                raise BadString(
                    'Unexpected operator %r at column %d' % (letter, i))
            python.append(theoperator[letter])
            state = 'start'

    if state != 'variable':
        raise BadString(
            'Unexpected operator %r at end of string' % letter)
    python = ''.join(python)
    # sanity check
    # print 'Python for %r is %r' % (thestring, python)
    return compile(python, thestring, 'eval')
Now, you can simply call eval with the result of this as the first argument and the dict associating values to x, y, z and t as the second argument. For example (having imported the above module as par and uncommented the sanity check):
>>> c=par.makeBytecode('xyax')
Python for 'xyax' is 'x*y+x'
>>> for x in range(4):
...     for y in range(5):
...         print 'x=%s, y=%s: result=%s' % (x,y,eval(c,dict(x=x,y=y)))
...
x=0, y=0: result=0
x=0, y=1: result=0
x=0, y=2: result=0
x=0, y=3: result=0
x=0, y=4: result=0
x=1, y=0: result=1
x=1, y=1: result=2
x=1, y=2: result=3
x=1, y=3: result=4
x=1, y=4: result=5
x=2, y=0: result=2
x=2, y=1: result=4
x=2, y=2: result=6
x=2, y=3: result=8
x=2, y=4: result=10
x=3, y=0: result=3
x=3, y=1: result=6
x=3, y=2: result=9
x=3, y=3: result=12
x=3, y=4: result=15
>>>
For more sophisticated, yet still simple!, parsing of the string & building of the rapidly interpretable data structure, see for example pyparsing.
A:
I see that the eval idea has already been mentioned, and this won't help if you have operators to substitute, but I've used the following in the past:
def evaluate_expression(expr, context):
    try:
        return eval(expr, {'__builtins__': None}, context)
    except (TypeError, ZeroDivisionError, SyntaxError):
        # TypeError is when combining items in the wrong way (ie, None + 4)
        # ZeroDivisionError is when the denominator happens to be zero (5/0)
        # SyntaxError is when the expression is invalid
        return None
You could then do something like:
values = {
    'a': 1.0,
    'b': 5.0,
    'c': 1.0,
}
evaluate_expression('a + b - (1/c)', values)
which would evaluate to 1.0 + 5.0 - 1/1.0 == 5.0
Again, this won't let you substitute in operators, that is, you can't let 'd' evaluate to +, but it gives you a safe way of using the eval function in Python (you can't run "while True" for example).
See this example for more information on using eval safely.
|
Python implementation question
|
Hey. I have a problem I'm trying to solve in Python and I can't think of a clever implementation. The input is a string of letters. Some of them represent variables, others represent operators, and I want to iterate over a large amount of values for the variables (different configurations).
I think an equivalent question would be that you get an equation with many variables stored as a string and you want to see the solution when you substitute x,y,z etc. for many different values, or satisfying a boolean formula.
I can't think of any clever data structure that would implement this - just 'setting' the formula every time, substiuting variables, takes more time than actually evaluating it.
I know it's a bit abstract - but anyone has any ideas or has experience with somtehing similar?
Someone usggested I want to implement an evaluator. It's true but beside the point. For the sake of the question supposed the eval(string) function is already implemented. What is an efficient data structure that will hold the values and allow to change them every time cleanly and efficiently? Surely not a string! It has to be a modifiable list. And if it has a few instances of the variable x, it should be able to accsess them all fast and change their value before evaluation, etc, etc.
|
[
"as Ira said you need an expression evaluator like Lex/Yacc to do this job, you have plenty available here :\nhttp://wiki.python.org/moin/LanguageParsing\nI have been using pyparsing and works greats for that kind of things\n",
"What you want is obviously an expression evaluator.\nOther you use a built in facility in the language typically called \"eval\" (I don't know if Python offers this) or you build an expression parser/evaluator.\nTo get some of ideas of how to build such an expression evaluator,\nyou can look at the following SO golf exercise:\nCode Golf: Mathematical expression evaluator (that respects PEMDAS)\nTo do what you want, you'd have to translate the basic ideas (they're all pretty much the same) to Pythonese, and add a facility to parse variable names and look up their values.\nThat should be a very straightforward extension.\n",
"Parse that string into a tree (or equivalently interpretable data structure), just once, and then repeatedly use a function to \"interpret the tree\" for each variable-assignment-set of interest. (You could even generate Python bytecode as the \"interpretable data structure\" so you can just use eval as the \"interpret the tree\" -- that makes for slow generation, but that's needed only once, and fast interpretation).\nAs you say that's a bit abstract so let's give a concrete, if over-simplistic, example. Say for example that the variables are the letters x, y, z, t, and the operators are a for addition and s for subtraction -- strings of adjacent letters implicitly mean high-priority multiplication, as in common mathematical convention; no parentheses, and strict left-to-right execution (i.e. no operator precedence, beyond multiplication). Every character except these 6 must be ignored. Here, then, is a very ad-hoc parser and Python bytecode generator:\nclass BadString(Exception): pass\n\ndef makeBytecode(thestring):\n theoperator = dict(a='+', s='-')\n python = []\n state = 'start'\n for i, letter in enumerate(thestring):\n if letter in 'xyzt':\n if state == 'start':\n python.append(letter)\n state = 'variable'\n elif state == 'variable':\n python.append('*')\n python.append(letter)\n elif letter in 'as':\n if state == 'start':\n raise BadString(\n 'Unexpected operator %r at column %d' % (letter, i))\n python.append(theoperator[letter])\n state = 'start'\n\n if state != 'variable':\n raise BadString(\n 'Unexpected operator %r at end of string' % letter)\n python = ''.join(python)\n # sanity check\n # print 'Python for %r is %r' % (thestring, python)\n return compile(python, thestring, 'eval')\n\nNow, you can simply call eval with the result of this as the first argument and the dict associating values to x, y, z and t as the second argument. For example (having imported the above module as par and uncommented the sanity check):\n>>> c=par.makeBytecode('xyax')\nPython for 'xyax' is 'x*y+x'\n>>> for x in range(4):\n... for y in range(5):\n... print 'x=%s, y=%s: result=%s' % (x,y,eval(c,dict(x=x,y=y)))\n... \nx=0, y=0: result=0\nx=0, y=1: result=0\nx=0, y=2: result=0\nx=0, y=3: result=0\nx=0, y=4: result=0\nx=1, y=0: result=1\nx=1, y=1: result=2\nx=1, y=2: result=3\nx=1, y=3: result=4\nx=1, y=4: result=5\nx=2, y=0: result=2\nx=2, y=1: result=4\nx=2, y=2: result=6\nx=2, y=3: result=8\nx=2, y=4: result=10\nx=3, y=0: result=3\nx=3, y=1: result=6\nx=3, y=2: result=9\nx=3, y=3: result=12\nx=3, y=4: result=15\n>>> \n\nFor more sophisticated, yet still simple!, parsing of the string & building of the rapidly interpretable data structure, see for example pyparsing.\n",
"I see that the eval idea has already been mentioned, and this won't help if you have operators to substitute, but I've used the following in the past:\ndef evaluate_expression(expr, context):\n try:\n return eval(expr, {'__builtins__': None}, context)\n except (TypeError, ZeroDivisionError, SyntaxError):\n # TypeError is when combining items in the wrong way (ie, None + 4)\n # ZeroDivisionError is when the denominator happens to be zero (5/0)\n # SyntaxError is when the expression is invalid\n return None\n\nYou could then do something like:\nvalues = {\n 'a': 1.0,\n 'b': 5.0,\n 'c': 1.0,\n}\nevaluate_expression('a + b - (1/c)', **values)\n\nwhich would evaluate to 1.0 + 5.0 - 1/1.0 == 5.0\nAgain, this won't let you substitute in operators, that is, you can't let 'd' evaluate to +, but it gives you a safe way of using the eval function in Python (you can't run \"while True\" for example).\nSee this example for more information on using eval safely.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"data_structures",
"formula",
"implementation",
"python"
] |
stackoverflow_0001418255_data_structures_formula_implementation_python.txt
|
Q:
Any experiences with Protocol Buffers?
I was just looking through some information about Google's protocol buffers data interchange format. Has anyone played around with the code or even created a project around it?
I'm currently using XML in a Python project for structured content created by hand in a text editor, and I was wondering what the general opinion was on Protocol Buffers as a user-facing input format. The speed and brevity benefits definitely seem to be there, but there are so many factors when it comes to actually generating and processing the data.
A:
If you are looking for user facing interaction, stick with xml. It has more support, understanding, and general acceptance currently. If it's internal, I would say that protocol buffers are a great idea.
Maybe in a few years as more tools come out to support protocol buffers, then start looking towards that for a public facing api. Until then... JSON?
A:
Protocol buffers are intended to optimize communications between machines. They are really not intended for human interaction. Also, the format is binary, so it could not replace XML in that use case.
I would also recommend JSON as being the most compact text-based format.
A:
Another drawback of a binary format like PB is that a single bit error can make the entire data file unparsable, but with JSON or XML, as a last resort you can still manually fix the error because the format is human-readable and has redundancy built in.
A:
From your brief description, it sounds like protocol buffers is not the right fit. The phrase "structured content created by hand in a text editor" pretty much screams for XML.
But if you want efficient, low latency communications with data structures that are not shared outside your organization, binary serialization such as protocol buffers can offer a huge win.
|
Any experiences with Protocol Buffers?
|
I was just looking through some information about Google's protocol buffers data interchange format. Has anyone played around with the code or even created a project around it?
I'm currently using XML in a Python project for structured content created by hand in a text editor, and I was wondering what the general opinion was on Protocol Buffers as a user-facing input format. The speed and brevity benefits definitely seem to be there, but there are so many factors when it comes to actually generating and processing the data.
|
[
"If you are looking for user facing interaction, stick with xml. It has more support, understanding, and general acceptance currently. If it's internal, I would say that protocol buffers are a great idea.\nMaybe in a few years as more tools come out to support protocol buffers, then start looking towards that for a public facing api. Until then... JSON?\n",
"Protocol buffers are intended to optimize communications between machines. They are really not intended for human interaction. Also, the format is binary, so it could not replace XML in that use case. \nI would also recommend JSON as being the most compact text-based format.\n",
"Another drawback of binary format like PB is that if there is a single bit of error, the entire data file is not parsable, but with JSON or XML, as the last resort you can still manually fix the error because it is human readable and has redundancy built-in..\n",
"From your brief description, it sounds like protocol buffers is not the right fit. The phrase \"structured content created by hand in a text editor\" pretty much screams for XML.\nBut if you want efficient, low latency communications with data structures that are not shared outside your organization, binary serialization such as protocol buffers can offer a huge win.\n"
] |
[
13,
11,
4,
3
] |
[] |
[] |
[
"database",
"protocol_buffers",
"python",
"xml"
] |
stackoverflow_0000001734_database_protocol_buffers_python_xml.txt
|
Q:
Call Python from C++
I'm trying to call a function in a Python script from my main C++ program. The python function takes a string as the argument and returns nothing (ok.. 'None').
It works perfectly well (never thought it would be that easy..) as long as the previous call is finished before the function is called again, otherwise there is an access violation at pModule = PyImport_Import(pName).
There are a lot of tutorials how to embed python in C and vice versa but I found nothing about that problem.
int callPython(TCHAR* title){
    PyObject *pName, *pModule, *pFunc;
    PyObject *pArgs, *pValue;
    Py_Initialize();
    pName = PyUnicode_FromString("Main");
    /* Name of Pythonfile */
    pModule = PyImport_Import(pName);
    Py_DECREF(pName);
    if (pModule != NULL) {
        pFunc = PyObject_GetAttrString(pModule, "writeLyricToFile");
        /* function name. pFunc is a new reference */
        if (pFunc && PyCallable_Check(pFunc)) {
            pArgs = PyTuple_New(1);
            pValue = PyUnicode_FromWideChar(title, -1);
            if (!pValue) {
                Py_DECREF(pArgs);
                Py_DECREF(pModule);
                showErrorBox(_T("pValue is false"));
                return 1;
            }
            PyTuple_SetItem(pArgs, 0, pValue);
            pValue = PyObject_CallObject(pFunc, pArgs);
            Py_DECREF(pArgs);
            if (pValue != NULL) {
                //worked as it should!
                Py_DECREF(pValue);
            }
            else {
                Py_DECREF(pFunc);
                Py_DECREF(pModule);
                PyErr_Print();
                showErrorBox(_T("pValue is null"));
                return 1;
            }
        }
        else {
            if (PyErr_Occurred()) PyErr_Print();
            showErrorBox(_T("pFunc null or not callable"));
            return 1;
        }
        Py_XDECREF(pFunc);
        Py_DECREF(pModule);
    }
    else {
        PyErr_Print();
        showErrorBox(_T("pModule is null"));
        return 1;
    }
    Py_Finalize();
    return 0;
}
A:
When you say "as long as the previous call is finished before the function is called again", I can only assume that you have multiple threads calling from C++ into Python. Python is not thread safe in that sense, so this is going to fail!
Read up on the Global Interpreter Lock (GIL) in the Python manual. Perhaps the following links will help:
http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock
http://docs.python.org/c-api/init.html#PyEval_InitThreads
http://docs.python.org/c-api/init.html#PyEval_AcquireLock
http://docs.python.org/c-api/init.html#PyEval_ReleaseLock
The GIL is mentioned on Wikipedia:
http://en.wikipedia.org/wiki/Global_Interpreter_Lock
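A rough sketch of what that usually looks like in practice (not the poster's code; it assumes PyEval_InitThreads() and Py_Initialize() were already called once at startup):
/* Any C++ thread that wants to touch the interpreter wraps its work like this. */
void callIntoPython(void)
{
    PyGILState_STATE gstate = PyGILState_Ensure();   /* acquire the GIL for this thread */

    /* ... PyImport_Import / PyObject_CallObject work goes here ... */

    PyGILState_Release(gstate);                      /* give the GIL back */
}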
A:
Thank you for your help!
Yes, you're right, there are several C threads. I never thought I'd need a mutex for the interpreter itself - the GIL is a completely new concept for me (and isn't mentioned even once in the whole tutorial).
After reading the reference (for sure not the easiest part of it, although the PyGILState_* functions simplify the whole thing a lot), I added a
void initPython(){
    PyEval_InitThreads();
    Py_Initialize();
    PyEval_ReleaseLock();
}
function to initialise the interpreter correctly.
Every thread creates its data structure, acquires the lock and releases it afterwards as shown in the reference.
Works as it should, but when calling Py_Finalize() before terminating the process I get a segfault.. any problems with just leaving it?
|
Call Python from C++
|
I'm trying to call a function in a Python script from my main C++ program. The python function takes a string as the argument and returns nothing (ok.. 'None').
It works perfectly well (never thought it would be that easy..) as long as the previous call is finished before the function is called again, otherwise there is an access violation at pModule = PyImport_Import(pName).
There are a lot of tutorials how to embed python in C and vice versa but I found nothing about that problem.
int callPython(TCHAR* title){
PyObject *pName, *pModule, *pFunc;
PyObject *pArgs, *pValue;
Py_Initialize();
pName = PyUnicode_FromString("Main");
/* Name of Pythonfile */
pModule = PyImport_Import(pName);
Py_DECREF(pName);
if (pModule != NULL) {
pFunc = PyObject_GetAttrString(pModule, "writeLyricToFile");
/* function name. pFunc is a new reference */
if (pFunc && PyCallable_Check(pFunc)) {
pArgs = PyTuple_New(1);
pValue = PyUnicode_FromWideChar(title, -1);
if (!pValue) {
Py_DECREF(pArgs);
Py_DECREF(pModule);
showErrorBox(_T("pValue is false"));
return 1;
}
PyTuple_SetItem(pArgs, 0, pValue);
pValue = PyObject_CallObject(pFunc, pArgs);
Py_DECREF(pArgs);
if (pValue != NULL) {
//worked as it should!
Py_DECREF(pValue);
}
else {
Py_DECREF(pFunc);
Py_DECREF(pModule);
PyErr_Print();
showErrorBox(_T("pValue is null"));
return 1;
}
}
else {
if (PyErr_Occurred()) PyErr_Print();
showErrorBox(_T("pFunc null or not callable"));
return 1;
}
Py_XDECREF(pFunc);
Py_DECREF(pModule);
}
else {
PyErr_Print();
showErrorBox(_T("pModule is null"));
return 1;
}
Py_Finalize();
return 0;
}
|
[
"When you say \"as long as the previous call is finished before the function is called again\", I can only assume that you have multiple threads calling from C++ into Python. The python is not thread safe, so this is going to fail!\nRead up on the Global Interpreter Lock (GIL) in the Python manual. Perhaps the following links will help:\n\nhttp://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock\nhttp://docs.python.org/c-api/init.html#PyEval_InitThreads\nhttp://docs.python.org/c-api/init.html#PyEval_AcquireLock\nhttp://docs.python.org/c-api/init.html#PyEval_ReleaseLock\n\nThe GIL is mentioned on Wikipedia:\n\nhttp://en.wikipedia.org/wiki/Global_Interpreter_Lock\n\n",
"Thank you for your help!\nYes you're right, there are several C threads. Never thought I'd need mutex for the interpreter itself - the GIL is a completly new concept for me (and isn't even once mentioned in the whole tutorial).\nAfter reading the reference (for sure not the easiest part of it, although the PyGILState_* functions simplify the whole thing a lot), I added an \nvoid initPython(){\n PyEval_InitThreads();\n Py_Initialize();\n PyEval_ReleaseLock();\n}\n\nfunction to initialise the interpreter correctly. \nEvery thread creates its data structure, acquires the lock and releases it afterwards as shown in the reference.\nWorks as it should, but when calling Py_Finalize() before terminating the process I get a segfault.. any problems with just leaving it?\n"
] |
[
5,
1
] |
[] |
[] |
[
"c",
"c++",
"python"
] |
stackoverflow_0001417473_c_c++_python.txt
|
Q:
Are Python list comprehensions the same thing as map/grep in Perl?
I was having some trouble grokking the list comprehension syntax in Python, so I started thinking about how to achieve the same thing in Perl, which I'm more familiar with. I realized that the basic examples (taken from this page) can all be done in Perl with map or grep.
E.g.
(python) (perl)
S = [x**2 for x in range(10)] @S = map { $_**2 } ( 0..9 );
V = [2**i for i in range(13)] @V = map { 2**$_ } ( 0..12 );
M = [x for x in S if x % 2 == 0] @M = grep { $_ % 2 == 0 } @S;
So is "list comprehension" just a fancy term for "map and/or filter a list" or is there more to it?
A:
You are correct: a list comprehension is essentially just syntactic sugar for map and filter (terms from the functional programming world).
Hopefully this sample code demonstrates their equality:
>>> # Python 2
>>> [x**2 for x in range(10)] == map(lambda x: x**2, range(10))
True
>>> [2**i for i in range(13)] == map(lambda x: 2**x, range(13))
True
>>> S = [x**2 for x in range(10)]
>>> [x for x in S if x % 2 == 0] == filter(lambda x: x % 2 == 0, S)
True
Note that this is only valid in Python 2.X, as SilentGhost pointed out in the comment. To make this compatible with Python 3, you'll have to wrap the calls to map or filter in the list constructor, because map and filter have been updated to return iterators, not lists.
>>> # Python 3
>>> [x**2 for x in range(10)] == list(map(lambda x: x**2, range(10)))
True
>>> [2**i for i in range(13)] == list(map(lambda x: 2**x, range(13)))
True
>>> S = [x**2 for x in range(10)]
>>> [x for x in S if x % 2 == 0] == list(filter(lambda x: x % 2 == 0, S))
True
A:
Yes, they are basically the same.
In fact Python also has a map function:
S = map(lambda x: x**2, range(10))
is the same as your first examples above. However, the list comprehension syntax is strongly preferred in Python. I believe Guido has been quoted as saying he regrets introducing the functional syntax at all.
However, where it gets really interesting is in the next evolution of list comprehensions, which is generators. These allow you to return an iterator - rather than processing the whole list at once, it does a single iteration and then returns, so that you don't have to hold the whole list in memory at the same time. Very powerful.
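For example (Python 2 spelling, to match the question): the same comprehension with round brackets builds a generator that produces values lazily:
squares = (x**2 for x in xrange(10))   # no list is built up front
print sum(squares)                     # values are produced one at a time as sum() asks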
A:
They're the "pythonic" version for mapping and filtering sequences, but they allow you to do some other things, like flattening a (fixed-level) nested list, for example:
[j for i in nested_list for j in i]
Another thing that you cannot do with a regular map and a lambda expression is to structurally decompose the iterating values, for example:
[(x%y)*z for x,y,z in list_with_triplets_of_ints]
of course there are workarounds like:
aux = lambda x,y,z: (x%y)*z
map(lambda t: aux(*t), list_with_triplets_of_ints)
but when the transformation you need to apply is already defined, then usually it's just simpler to use a map, like in:
map(int, list_of_str_values)
rather than
[int(i) for i in list_of_str_values]
A:
List comprehensions also flatten out things:
For example:
[(x, y) for x in xrange(10) if x%2 == 0 for y in xrange(20) if x!=y]
If you used nested maps here, you'd have to use concat (summing the lists) too.
A:
List comprehensions are more powerful than map or filter as they allow you to abstractly play with lists.
It is also more convenient to use them when your maps are further nested with more maps and filter calls.
A:
Yes. The power of the Python syntax is that the same syntax (within round rather than square brackets) is also used to define generators, which produce sequences of values on demand.
|
Are Python list comprehensions the same thing as map/grep in Perl?
|
I was having some trouble grokking the list comprehension syntax in Python, so I started thinking about how to achieve the same thing in Perl, which I'm more familiar with. I realized that the basic examples (taken from this page) can all be done in Perl with map or grep.
E.g.
(python) (perl)
S = [x**2 for x in range(10)] @S = map { $_**2 } ( 0..9 );
V = [2**i for i in range(13)] @V = map { 2**$_ } ( 0..12 );
M = [x for x in S if x % 2 == 0] @M = grep { $_ % 2 == 0 } @S;
So is "list comprehension" just a fancy term for "map and/or filter a list" or is there more to it?
|
[
"You are correct: a list comprehension is essentially just syntactic sugar for map and filter (terms from the functional programming world).\nHopefully this sample code demonstrates their equality:\n>>> # Python 2\n>>> [x**2 for x in range(10)] == map(lambda x: x**2, range(10))\nTrue\n>>> [2**i for i in range(13)] == map(lambda x: 2**x, range(13))\nTrue\n>>> S = [x**2 for x in range(10)]\n>>> [x for x in S if x % 2 == 0] == filter(lambda x: x % 2 == 0, S)\nTrue\n\nNote that this is only valid in Python 2.X, as SilentGhost pointed out in the comment. To make this compatible with Python 3, you'll have to wrap the calls to map or filter in the list constructor, because map and filter have been updated to return iterators, not lists.\n>>> # Python 3\n>>> [x**2 for x in range(10)] == list(map(lambda x: x**2, range(10)))\nTrue\n>>> [2**i for i in range(13)] == list(map(lambda x: 2**x, range(13)))\nTrue\n>>> S = [x**2 for x in range(10)]\n>>> [x for x in S if x % 2 == 0] == list(filter(lambda x: x % 2 == 0, S))\nTrue\n\n",
"Yes, they are basically the same.\nIn fact Python also has a map function:\nS = map(lambda x: x**2, range(10))\n\nis the same as your first examples above. However, the list comprehension syntax is strongly preferred in Python. I believe Guido has been quoted as saying he regrets introducing the functional syntax at all.\nHowever, where it gets really interesting is in the next evolution of list comprehensions, which is generators. These allow you to return an iterator - rather than processing the whole list at once, it does a single iteration and then returns, so that you don't have to hold the whole list in memory at the same time. Very powerful.\n",
"They're the \"pythonic\" version for mapping and filtering sequences, but they allow to do some others things, like flattening a (fixed level) nested list, for example:\n[j for i in nested_list for j in i]\n\nAnother thing that you cannot do with a regular map and a lambda expression is to structurally decompose the iterating values, for example:\n[(x%y)*z for x,y,z in list_with_triplets_of_ints]\n\nof course there are workarounds like:\naux = lambda x,y,z: (x%y)*z\nmap(lambda t: aux(*t), list_with_triplets_of_ints)\n\nbut when the transformation you need to apply is already defined, then usually it's just simpler to use a map, like in:\nmap(int, list_of_str_values)\n\nrather than\n[int(i) for i in list_of_str_values]\n\n",
"List comprehensions also flatten out things:\nFor example:\n[(x, y) for x in xrange(10) if x%2 == 0 for y in xrange(20) if x!=y]\nIf you used nested maps here, you'd have to use concat (summing the lists) too.\n",
"List comprehensions are more powerful than map or filter as they allow you to abstractly play with lists.\nIt also more convenient to use them when your maps are further nested with more maps and filter calls.\n",
"Yes. The power of the Python syntax is that the same syntax (within round rather than square brackets) is also used to define generators, which produce sequences of values on demand.\n"
] |
[
15,
3,
3,
2,
0,
0
] |
[] |
[] |
[
"list",
"perl",
"python"
] |
stackoverflow_0001418912_list_perl_python.txt
|
Q:
Python memory leaks?
I am writing a python extension that seems to be leaking memory. I am trying to figure out the source of the problem using valgrind.
However, it seems that python itself is leaking memory according to valgrind. Using the following simple script:
hello.py
print "Hello World!"
and doing
> valgrind --tool=memcheck python ./hello.py
(...)
==7937== ERROR SUMMARY: 580 errors from 34 contexts (suppressed: 21 from 1)
==7937== malloc/free: in use at exit: 721,878 bytes in 190 blocks.
==7937== malloc/free: 2,436 allocs, 2,246 frees, 1,863,631 bytes allocated.
==7937== For counts of detected errors, rerun with: -v
==7937== Use --track-origins=yes to see where uninitialised values come from
==7937== searching for pointers to 190 not-freed blocks.
==7937== checked 965,952 bytes.
==7937==
==7937== LEAK SUMMARY:
==7937== definitely lost: 0 bytes in 0 blocks.
==7937== possibly lost: 4,612 bytes in 13 blocks.
==7937== still reachable: 717,266 bytes in 177 blocks.
==7937== suppressed: 0 bytes in 0 blocks.
==7937== Rerun with --leak-check=full to see details of leaked memory.
Does anybody have an explanation for this strange behavior? Is the python interpreter really leaking memory?
What tool do python developers use to debug their memory leaks?
A:
There's a whole README.valgrind in the Python sources that explains the various caveats trying to use Valgrind with Python:
http://svn.python.org/projects/python/trunk/Misc/README.valgrind
Python uses its own small-object allocation scheme on top of malloc,
called PyMalloc.
Valgrind may show some unexpected results when PyMalloc is used.
Starting with Python 2.3, PyMalloc is used by default. You can disable
PyMalloc when configuring python by adding the --without-pymalloc option.
If you disable PyMalloc, most of the information in this document and
the supplied suppressions file will not be useful. As discussed above,
disabling PyMalloc can catch more problems.
If you use valgrind on a default build of Python, you will see
many errors like:
==6399== Use of uninitialised value of size 4
==6399== at 0x4A9BDE7E: PyObject_Free (obmalloc.c:711)
==6399== by 0x4A9B8198: dictresize (dictobject.c:477)
These are expected and not a problem.
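In practice that means either configuring Python with --without-pymalloc or pointing valgrind at the suppressions file shipped in the CPython source tree, something like this (the path assumes you are sitting in a source checkout):
> valgrind --tool=memcheck --suppressions=Misc/valgrind-python.supp python ./hello.py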
A:
The leak is most likely coming from your own extension, not from Python. Large systems often exit with memory still allocated, simply because it isn't worth it to explicitly free it if the process is about to end anyway.
|
Python memory leaks?
|
I am writing a python extension that seems to be leaking memory. I am trying to figure out the source of the problem using valgrind.
However, it seems that python itself is leaking memory according to valgrind. Using the following simple script:
hello.py
print "Hello World!"
and doing
> valgrind --tool=memcheck python ./hello.py
(...)
==7937== ERROR SUMMARY: 580 errors from 34 contexts (suppressed: 21 from 1)
==7937== malloc/free: in use at exit: 721,878 bytes in 190 blocks.
==7937== malloc/free: 2,436 allocs, 2,246 frees, 1,863,631 bytes allocated.
==7937== For counts of detected errors, rerun with: -v
==7937== Use --track-origins=yes to see where uninitialised values come from
==7937== searching for pointers to 190 not-freed blocks.
==7937== checked 965,952 bytes.
==7937==
==7937== LEAK SUMMARY:
==7937== definitely lost: 0 bytes in 0 blocks.
==7937== possibly lost: 4,612 bytes in 13 blocks.
==7937== still reachable: 717,266 bytes in 177 blocks.
==7937== suppressed: 0 bytes in 0 blocks.
==7937== Rerun with --leak-check=full to see details of leaked memory.
Does anybody have an explanation for this strange behavior? Is the python interpreter really leaking memory?
What tool do python developers use to debug their memory leaks?
|
[
"There's a whole README.valgrind in the Python sources that explains the various caveats trying to use Valgrind with Python:\nhttp://svn.python.org/projects/python/trunk/Misc/README.valgrind\nPython uses its own small-object allocation scheme on top of malloc,\ncalled PyMalloc.\n\nValgrind may show some unexpected results when PyMalloc is used.\nStarting with Python 2.3, PyMalloc is used by default. You can disable\nPyMalloc when configuring python by adding the --without-pymalloc option.\nIf you disable PyMalloc, most of the information in this document and\nthe supplied suppressions file will not be useful. As discussed above,\ndisabling PyMalloc can catch more problems.\n\nIf you use valgrind on a default build of Python, you will see\nmany errors like:\n\n ==6399== Use of uninitialised value of size 4\n ==6399== at 0x4A9BDE7E: PyObject_Free (obmalloc.c:711)\n ==6399== by 0x4A9B8198: dictresize (dictobject.c:477)\n\nThese are expected and not a problem. \n\n",
"The leak is most likely coming from your own extension, not from Python. Large systems often exit with memory still allocated, simply because it isn't worth it to explicitly free it if the process is about to end anyway.\n"
] |
[
12,
2
] |
[] |
[] |
[
"memory",
"memory_management",
"python",
"valgrind"
] |
stackoverflow_0001419065_memory_memory_management_python_valgrind.txt
|
Q:
appengine select based on timestamp
I can't seem to select something based on timestamp. The behavior is a bit weird and the < and = symbols don't seem to mean what I expect them to.
""" A site message """
class Message( db.Model ) :
# from/to/ a few other fields
subject = db.StringProperty()
body = db.Text()
# this is the field i'm trying to use
sent = db.DateTimeProperty( auto_now_add=True )
When I write GQL queries like
select * from Message where sent = '2009-09-14 01:00:02.648000'
(there is a message with PRECISELY that time stamp in the datastore)
it gives me nothing back.
If I try
select * from Message where sent < '2009-09-14 01:00:02.648000'
It simply gives me all of them. When I try the > sign, it simply gives me none again.
What's going on here and how do I select based on timestamp?
A:
Do NOT use strings to "stand in" for datetimes! It just won't work...
See the docs: DateTimeProperty corresponds to a datetime.datetime value. import datetime, convert your beloved strings to datetime.datetime instances (e.g. by calling the strptime method thereof), use THOSE instances for comparisons -- and, of course, live happily ever after!-)
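As a minimal sketch of that advice (the Message model and its sent field come from the question; the GqlQuery call below is just one way to issue the comparison, not necessarily how the asker's code is structured):
import datetime
from google.appengine.ext import db

stamp = datetime.datetime(2009, 9, 14, 1, 0, 2, 648000)   # a real datetime, not a string
query = db.GqlQuery("SELECT * FROM Message WHERE sent <= :1", stamp)
for message in query:
    print message.sent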
|
appengine select based on timestamp
|
I can't seem to select something based on timestamp. The behavior is a bit weird and the < and = symbols don't seem to mean what I expect them to.
""" A site message """
class Message( db.Model ) :
# from/to/ a few other fields
subject = db.StringProperty()
body = db.Text()
# this is the field i'm trying to use
sent = db.DateTimeProperty( auto_now_add=True )
When I write GQL queries like
select * from Message where sent = '2009-09-14 01:00:02.648000'
(there is a message with PRECISELY that time stamp in the datastore)
it gives me nothing back.
If I try
select * from Message where sent < '2009-09-14 01:00:02.648000'
It simply gives me all of them. When I try the > sign, it simply gives me none again.
What's going on here and how do I select based on timestamp?
|
[
"Do NOT use strings to \"stand in\" for datetimes! It just won't work...\nSee the docs: DateTimeProperty corresponds to a datetime.datetime value. import datetime, convert your beloved strings to datetime.datetime instances (e.g. by calling the strptime method thereof), use THOSE instances for comparisons -- and, of course, live happily ever after!-)\n"
] |
[
5
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001419354_google_app_engine_python.txt
|
Q:
How to forward port to router using python
I am building a Python p2p application, like a p2p instant messenger. I am communicating with other peers using a TCP/IP connection. I do not want the client to have to do port forwarding.
When the application starts, it should check whether the port is forwarded on the router; if not, it should forward it.
Is it possible to programmatically forward the port on the router? Or how can I use port 80 for p2p communication, as it's used by browsers?
A:
You may find the post and files listed here helpful. This person implemented a Nat PMP library in Python.
http://blog.yimingliu.com/2008/01/07/nat-pmp-client-library-for-python/
If you want to use port 80 for p2p communication, you will simply need to write your own protocol over HTTP and connect on port 80.
|
How to forward port to router using python
|
I am building a Python p2p application, like a p2p instant messenger. I am communicating with other peers using a TCP/IP connection. I do not want the client to have to do port forwarding.
When the application starts, it should check whether the port is forwarded on the router; if not, it should forward it.
Is it possible to programmatically forward the port on the router? Or how can I use port 80 for p2p communication, as it's used by browsers?
|
[
"You may find the post and files listed here helpful. This person implemented a Nat PMP library in Python.\nhttp://blog.yimingliu.com/2008/01/07/nat-pmp-client-library-for-python/\nIf you want to use port 80 for p2p communication, you will simply just need to write your own protocol in HTTP and connect over port 80.\n"
] |
[
1
] |
[] |
[] |
[
"p2p",
"portforwarding",
"python"
] |
stackoverflow_0001419590_p2p_portforwarding_python.txt
|
Q:
How to extract and then refer to variables defined in a python module?
I'm trying to build a simple environment check script for my firm's test environment. My goal is to be able to ping each of the hosts defined for a given test environment instance. The hosts are defined in a file like this:
#!/usr/bin/env python
host_ip = '192.168.100.10'
router_ip = '192.168.100.254'
fs_ip = '192.168.200.10'
How can I obtain all of these values in a way that is iterable (i.e. I need to loop through and ping each ip address)?
I've looked at locals() and vars(), but trying to do something like this:
for key, value in vars():
print key, value
generates this error:
ValueError: too many values to unpack
I have been able to extract the names of all variables by checking dir(local_variables) for values that don't contain a '__' string, but then I have a list of strings, and I can't figure out how to get from the string to the value of the same-named variable.
A:
First off, I strongly recommend not doing it that way. Instead, do:
hosts = {
"host_ip": '192.168.100.10',
"router_ip": '192.168.100.254',
"fs_ip": '192.168.200.10',
}
Then you can simply import the module and reference it normally--this gives an ordinary, standard way to access this data from any Python code:
import config
for host, ip in config.hosts.iteritems():
...
If you do access variables directly, you're going to get a bunch of stuff you don't want: the builtins (__builtins__, __package__, etc); anything that was imported while setting up the other variables, etc.
You'll also want to make sure that the context you're running in is different from the one whose variables you're iterating over, or you'll be creating new variables in locals() (or vars(), or globals()) while you're iterating over it, and you'll get "RuntimeError: dictionary changed size during iteration".
A:
You need to do vars().iteritems(). (or .items())
Looping over a dictionary like vars() will only extract the keys in the dictionary.
Example.
>>> for key in vars(): print key
...
__builtins__
__name__
__doc__
key
__package__
>>> for key, value in vars().items(): print key, value
...
__builtins__ <module '__builtin__' (built-in)>
value None
__package__ None
key __doc__
__name__ __main__
__doc__ None
A:
Glenn Maynard makes a good point, using a dictionary is simpler and more standard. That being said, here are a couple of tricks I've used sometimes:
In the file hosts.py:
#!/usr/bin/env python
host_ip = '192.168.100.10'
router_ip = '192.168.100.254'
fs_ip = '192.168.200.10'
and in another file:
hosts_dict = {}
execfile('hosts.py', hosts_dict)
or
import hosts
hosts_dict = hosts.__dict__
But again, those are both rather hackish.
A:
If this really is all that your imported file contains, you could also read it as just a text file, and then parse the input lines using basic string methods.
iplistfile = open("iplist.py")
host_addr_map = {}
for line in iplistfile:
if not line or line[0] == '#':
continue
host, ipaddr = map(str.strip, line.split('='))
host_addr_map[host] = ipaddr
iplistfile.close()
Now you have a dict of your ip addresses, addressable by host name. To get them all, just use basic dict-style methods:
for hostname in host_addr_map:
print hostname, host_addr_map[hostname]
print host_addr_map.keys()
This also has the advantage that it removes any temptation any misguided person might have to add more elaborate Python logic to what you thought was just a configuration file.
A:
Here's a hack I use all the time. You got really close when you returned the list of string names, but you have to use the eval() function to return the actual object that bears the name represented by the string:
hosts = [eval('modulename.' + x) for x in dir(local_variables) if '_ip' in x]
If I'm not mistaken, this method also doesn't pose the same drawbacks as locals() and vars() explained by Glenn Maynard.
|
How to extract and then refer to variables defined in a python module?
|
I'm trying to build a simple environment check script for my firm's test environment. My goal is to be able to ping each of the hosts defined for a given test environment instance. The hosts are defined in a file like this:
#!/usr/bin/env python
host_ip = '192.168.100.10'
router_ip = '192.168.100.254'
fs_ip = '192.168.200.10'
How can I obtain all of these values in a way that is iterable (i.e. I need to loop through and ping each ip address)?
I've looked at locals() and vars(), but trying to do something like this:
for key, value in vars():
print key, value
generates this error:
ValueError: too many values to unpack
I have been able to extract the names of all variables by checking dir(local_variables) for values that don't contain a '__' string, but then I have a list of strings, and I can't figure out how to get from the string to the value of the same-named variable.
|
[
"First off, I strongly recommend not doing it that way. Instead, do:\nhosts = {\n \"host_ip\": '192.168.100.10',\n \"router_ip\": '192.168.100.254',\n \"fs_ip\": '192.168.200.10',\n}\n\nThen you can simply import the module and reference it normally--this gives an ordinary, standard way to access this data from any Python code:\nimport config\nfor host, ip in config.hosts.iteritems():\n ...\n\nIf you do access variables directly, you're going to get a bunch of stuff you don't want: the builtins (__builtins__, __package__, etc); anything that was imported while setting up the other variables, etc.\nYou'll also want to make sure that the context you're running in is different from the one whose variables you're iterating over, or you'll be creating new variables in locals() (or vars(), or globals()) while you're iterating over it, and you'll get \"RuntimeError: dictionary changed size during iteration\".\n",
"You need to do vars().iteritems(). (or .items())\nLooping over a dictionary like vars() will only extract the keys in the dictionary.\nExample.\n>>> for key in vars(): print key\n...\n__builtins__\n__name__\n__doc__\nkey\n__package__\n\n\n>>> for key, value in vars().items(): print key, value\n...\n__builtins__ <module '__builtin__' (built-in)>\nvalue None\n__package__ None\nkey __doc__\n__name__ __main__\n__doc__ None\n\n",
"Glenn Maynard makes a good point, using a dictionary is simpler and more standard. That being said, here are a couple of tricks I've used sometimes:\nIn the file hosts.py:\n#!/usr/bin/env python\nhost_ip = '192.168.100.10'\nrouter_ip = '192.168.100.254'\nfs_ip = '192.168.200.10'\n\nand in another file:\nhosts_dict = {}\nexecfile('hosts.py', hosts_dict)\n\nor\nimport hosts\nhosts_dict = hosts.__dict__\n\nBut again, those are both rather hackish.\n",
"If this really is all that your imported file contains, you could also read it as just a text file, and then parse the input lines using basic string methods.\niplistfile = open(\"iplist.py\")\n\nhost_addr_map = {}\nfor line in iplistfile:\n if not line or line[0] == '#':\n continue\n\n host, ipaddr = map(str.strip, line.split('='))\n host_addr_map[host] = ipaddr\n\niplistfile.close()\n\nNow you have a dict of your ip addresses, addressable by host name. To get them all, just use basic dict-style methods:\nfor hostname in host_addr_map:\n print hostname, host_addr_map[hostname]\n\nprint host_addr_map.keys()\n\nThis also has the advantage that it removes any temptation any misguided person might have to add more elaborate Python logic to what you thought was just a configuration file.\n",
"Here's a hack I use all the time. You got really close when you returned the list of string names, but you have to use the eval() function to return the actual object that bears the name represented by the string:\nhosts = [eval('modulename.' + x) for x in dir(local_variables) if '_ip' in x]\n\nIf I'm not mistaken, this method also doesn't pose the same drawbacks as locals() and vars() explained by Glen Maynard.\n"
] |
[
8,
2,
1,
0,
-2
] |
[] |
[] |
[
"import",
"python",
"variables"
] |
stackoverflow_0001419620_import_python_variables.txt
|
Q:
Parsing unstructured text in Python
I wanted to parse a text file that contains unstructured text. I need to get the address, date of birth, name, sex, and ID.
. 55 MORILLO ZONE VIII,
BARANGAY ZONE VIII
(POB.), LUISIANA, LAGROS
F
01/16/1952
ALOMO, TERESITA CABALLES
3412-00000-A1652TCA2
12
. 22 FABRICANTE ST. ZONE
VIII LUISIANA LAGROS,
BARANGAY ZONE VIII
(POB.), LUISIANA, LAGROS
M
10/14/1967
AMURAO, CALIXTO MANALO13
In the example above, the first 3 lines are the address, the line with just an "F" is the sex, the DOB would be the line after "F", name after the DOB, the ID after the name, and the no. 12 under the ID is the index/record no.
However, the format is not consistent. In the second group, the address is 4 lines instead of 3 and the index/record no. is appended after the name (if the person doesn't have an ID field).
I wanted to rewrite the text into the following format:
name, ID, address, sex, DOB
A:
Here is a first stab at a pyparsing solution (easy-to-copy code at the pyparsing pastebin). Walk through the separate parts, according to the interleaved comments.
data = """\
. 55 MORILLO ZONE VIII,
BARANGAY ZONE VIII
(POB.), LUISIANA, LAGROS
F
01/16/1952
ALOMO, TERESITA CABALLES
3412-00000-A1652TCA2
12
. 22 FABRICANTE ST. ZONE
VIII LUISIANA LAGROS,
BARANGAY ZONE VIII
(POB.), LUISIANA, LAGROS
M
10/14/1967
AMURAO, CALIXTO MANALO13
"""
from pyparsing import LineEnd, oneOf, Word, nums, Combine, restOfLine, \
alphanums, Suppress, empty, originalTextFor, OneOrMore, alphas, \
Group, ZeroOrMore
NL = LineEnd().suppress()
gender = oneOf("M F")
integer = Word(nums)
date = Combine(integer + '/' + integer + '/' + integer)
# define the simple line definitions
gender_line = gender("sex") + NL
dob_line = date("DOB") + NL
name_line = restOfLine("name") + NL
id_line = Word(alphanums+"-")("ID") + NL
recnum_line = integer("recnum") + NL
# define forms of address lines
first_addr_line = Suppress('.') + empty + restOfLine + NL
# a subsequent address line is any line that is not a gender definition
subsq_addr_line = ~(gender_line) + restOfLine + NL
# a line with a name and a recnum combined, if there is no ID
name_recnum_line = originalTextFor(OneOrMore(Word(alphas+',')))("name") + \
integer("recnum") + NL
# defining the form of an overall record, either with or without an ID
record = Group((first_addr_line + ZeroOrMore(subsq_addr_line))("address") +
gender_line +
dob_line +
((name_line +
id_line +
recnum_line) |
name_recnum_line))
# parse data
records = OneOrMore(record).parseString(data)
# output the desired results (note that address is actually a list of lines)
for rec in records:
if rec.ID:
print "%(name)s, %(ID)s, %(address)s, %(sex)s, %(DOB)s" % rec
else:
print "%(name)s, , %(address)s, %(sex)s, %(DOB)s" % rec
print
# how to access the individual fields of the parsed record
for rec in records:
print rec.dump()
print rec.name, 'is', rec.sex
print
Prints:
ALOMO, TERESITA CABALLES, 3412-00000-A1652TCA2, ['55 MORILLO ZONE VIII,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS'], F, 01/16/1952
AMURAO, CALIXTO MANALO, , ['22 FABRICANTE ST. ZONE', 'VIII LUISIANA LAGROS,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS'], M, 10/14/1967
['55 MORILLO ZONE VIII,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS', 'F', '01/16/1952', 'ALOMO, TERESITA CABALLES', '3412-00000-A1652TCA2', '12']
- DOB: 01/16/1952
- ID: 3412-00000-A1652TCA2
- address: ['55 MORILLO ZONE VIII,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS']
- name: ALOMO, TERESITA CABALLES
- recnum: 12
- sex: F
ALOMO, TERESITA CABALLES is F
['22 FABRICANTE ST. ZONE', 'VIII LUISIANA LAGROS,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS', 'M', '10/14/1967', 'AMURAO, CALIXTO MANALO', '13']
- DOB: 10/14/1967
- address: ['22 FABRICANTE ST. ZONE', 'VIII LUISIANA LAGROS,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS']
- name: AMURAO, CALIXTO MANALO
- recnum: 13
- sex: M
AMURAO, CALIXTO MANALO is M
A:
you have to exploit whatever regularity and structure the text does have.
I suggest you read one line at a time and match it to a regular expression to determine its type, fill in the appropriate field in a person object. writing out that object and starting a new one whenever you get a field that you already have filled in.
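A rough sketch of that approach for the sample data above (the regular expressions and the field-filling rules are illustrative assumptions, not a complete or robust parser):
import re

sex_re = re.compile(r'^[MF]$')
date_re = re.compile(r'^\d\d/\d\d/\d\d\d\d$')

records = []
person = {'address': []}
for line in open('data.txt'):
    line = line.strip()
    if not line:
        continue
    if line.startswith('.') and person['address']:
        records.append(person)                 # flush the previous person
        person = {'address': []}
    if sex_re.match(line):
        person['sex'] = line
    elif date_re.match(line):
        person['dob'] = line
    elif 'sex' not in person:
        person['address'].append(line.lstrip('. '))
    elif 'name' not in person:
        person['name'] = line                  # may still have a record number glued on
    elif 'id' not in person:
        person['id'] = line                    # lines after this (the record no.) are ignored
records.append(person)

for p in records:
    print p.get('name'), p.get('id'), ' '.join(p['address']), p.get('sex'), p.get('dob')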
A:
It may be overkill, but the leading edge machine learning algorithms for this type of problem are based on conditional random fields. For example, Accurate Information Extraction from Research Papers
using Conditional Random Fields.
There is software out there that makes training these models relatively easy. See Mallet or CRF++.
A:
You can probably do this with regular expressions without too much difficulty. If you have never used them before, check out the python documentation, then fire up redemo.py (on my computer, it's in c:\python26\Tools\scripts).
The first task is to split the flat file into a list of entities (one chunk of text per record). From the snippet of text you gave, you could split the file with a pattern matching the beginning of a line, where the first character is a dot:
import re
re_entity_splitter = re.compile(r'^\.')
entities = re_entity_splitter.split(open(textfile).read())
Note that the dot must be escaped (it's a wildcard character by default). Note also the r before the pattern. The r denotes 'raw string' format, which excuses you from having to escape the escape characters and so avoids the so-called 'backslash plague.'
Once you have the file split into individual people, picking out the gender and birthdate is a snap. Use these:
re_gender = re.compile(r'^[MF]')
re_birth_Date = re.compile(r'\d\d/\d\d/\d\d')
And away you go. You can paste the flat file into re demo GUI and experiment with creating patterns to match what you need. You'll have it parsed in no time. Once you get good at this, you can use symbolic group names (see docs) to pick out individual elements quickly and cleanly.
A:
Here's a quick hack job.
import re

f = open('data.txt')
def process(file):
address = ""
for line in file:
if line == '': raise StopIteration
line = line.rstrip() # to ignore \n
if line in ('M','F'):
sex = line
break
else:
address += line
DOB = file.readline().rstrip() # to ignore \n
name = file.readline().rstrip()
if name[-1].isdigit():
name = re.match(r'^([^\d]+)\d+', name).group(1)
ID = None
else:
ID = file.readline().rstrip()
file.readline() # ignore the record #
print (name, ID, address, sex, DOB)
while True:
process(f)
|
Parsing unstructured text in Python
|
I wanted to parse a text file that contains unstructured text. I need to get the address, date of birth, name, sex, and ID.
. 55 MORILLO ZONE VIII,
BARANGAY ZONE VIII
(POB.), LUISIANA, LAGROS
F
01/16/1952
ALOMO, TERESITA CABALLES
3412-00000-A1652TCA2
12
. 22 FABRICANTE ST. ZONE
VIII LUISIANA LAGROS,
BARANGAY ZONE VIII
(POB.), LUISIANA, LAGROS
M
10/14/1967
AMURAO, CALIXTO MANALO13
In the example above, the first 3 lines are the address, the line with just an "F" is the sex, the DOB would be the line after "F", name after the DOB, the ID after the name, and the no. 12 under the ID is the index/record no.
However, the format is not consistent. In the second group, the address is 4 lines instead of 3 and the index/record no. is appended after the name (if the person doesn't have an ID field).
I wanted to rewrite the text into the following format:
name, ID, address, sex, DOB
|
[
"Here is a first stab at a pyparsing solution (easy-to-copy code at the pyparsing pastebin). Walk through the separate parts, according to the interleaved comments.\ndata = \"\"\"\\\n. 55 MORILLO ZONE VIII,\nBARANGAY ZONE VIII\n(POB.), LUISIANA, LAGROS\nF\n01/16/1952\nALOMO, TERESITA CABALLES\n3412-00000-A1652TCA2\n12\n. 22 FABRICANTE ST. ZONE\nVIII LUISIANA LAGROS,\nBARANGAY ZONE VIII\n(POB.), LUISIANA, LAGROS\nM\n10/14/1967\nAMURAO, CALIXTO MANALO13\n\"\"\"\n\nfrom pyparsing import LineEnd, oneOf, Word, nums, Combine, restOfLine, \\\n alphanums, Suppress, empty, originalTextFor, OneOrMore, alphas, \\\n Group, ZeroOrMore\n\nNL = LineEnd().suppress()\ngender = oneOf(\"M F\")\ninteger = Word(nums)\ndate = Combine(integer + '/' + integer + '/' + integer)\n\n# define the simple line definitions\ngender_line = gender(\"sex\") + NL\ndob_line = date(\"DOB\") + NL\nname_line = restOfLine(\"name\") + NL\nid_line = Word(alphanums+\"-\")(\"ID\") + NL\nrecnum_line = integer(\"recnum\") + NL\n\n# define forms of address lines\nfirst_addr_line = Suppress('.') + empty + restOfLine + NL\n# a subsequent address line is any line that is not a gender definition\nsubsq_addr_line = ~(gender_line) + restOfLine + NL\n\n# a line with a name and a recnum combined, if there is no ID\nname_recnum_line = originalTextFor(OneOrMore(Word(alphas+',')))(\"name\") + \\\n integer(\"recnum\") + NL\n\n# defining the form of an overall record, either with or without an ID\nrecord = Group((first_addr_line + ZeroOrMore(subsq_addr_line))(\"address\") + \n gender_line + \n dob_line +\n ((name_line +\n id_line + \n recnum_line) |\n name_recnum_line))\n\n# parse data\nrecords = OneOrMore(record).parseString(data)\n\n# output the desired results (note that address is actually a list of lines)\nfor rec in records:\n if rec.ID:\n print \"%(name)s, %(ID)s, %(address)s, %(sex)s, %(DOB)s\" % rec\n else:\n print \"%(name)s, , %(address)s, %(sex)s, %(DOB)s\" % rec\nprint\n\n# how to access the individual fields of the parsed record\nfor rec in records:\n print rec.dump()\n print rec.name, 'is', rec.sex\n print\n\nPrints:\nALOMO, TERESITA CABALLES, 3412-00000-A1652TCA2, ['55 MORILLO ZONE VIII,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS'], F, 01/16/1952\nAMURAO, CALIXTO MANALO, , ['22 FABRICANTE ST. ZONE', 'VIII LUISIANA LAGROS,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS'], M, 10/14/1967\n\n['55 MORILLO ZONE VIII,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS', 'F', '01/16/1952', 'ALOMO, TERESITA CABALLES', '3412-00000-A1652TCA2', '12']\n- DOB: 01/16/1952\n- ID: 3412-00000-A1652TCA2\n- address: ['55 MORILLO ZONE VIII,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS']\n- name: ALOMO, TERESITA CABALLES\n- recnum: 12\n- sex: F\nALOMO, TERESITA CABALLES is F\n\n['22 FABRICANTE ST. ZONE', 'VIII LUISIANA LAGROS,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS', 'M', '10/14/1967', 'AMURAO, CALIXTO MANALO', '13']\n- DOB: 10/14/1967\n- address: ['22 FABRICANTE ST. ZONE', 'VIII LUISIANA LAGROS,', 'BARANGAY ZONE VIII', '(POB.), LUISIANA, LAGROS']\n- name: AMURAO, CALIXTO MANALO\n- recnum: 13\n- sex: M\nAMURAO, CALIXTO MANALO is M\n\n",
"you have to exploit whatever regularity and structure the text does have.\nI suggest you read one line at a time and match it to a regular expression to determine its type, fill in the appropriate field in a person object. writing out that object and starting a new one whenever you get a field that you already have filled in.\n",
"It may be overkill, but the leading edge machine learning algorithms for this type of problem are based on conditional random fields. For example, Accurate Information Extraction from Research Papers \nusing Conditional Random Fields. \nThere is software out there that makes training these models relatively easy. See Mallet or CRF++.\n",
"You can probably do this with regular expressions without too much difficulty. If you have never used them before, check out the python documentation, then fire up redemo.py (on my computer, it's in c:\\python26\\Tools\\scripts).\nThe first task is to split the flat file into a list of entities (one chunk of text per record). From the snippet of text you gave, you could split the file with a pattern matching the beginning of a line, where the first character is a dot:\nimport re\nre_entity_splitter = re.compile(r'^\\.')\n\nentities = re_entity_splitter.split(open(textfile).read())\n\nNote that the dot must be escaped (it's a wildcard character by default). Note also the r before the pattern. The r denotes 'raw string' format, which excuses you from having to escape the escape characters, resulting in so-called 'backslash plague.'\nOnce you have the file split into individual people, picking out the gender and birthdate is a snap. Use these:\nre_gender = re.compile(r'^[MF]')\nre_birth_Date = re.compile(r'\\d\\d/\\d\\d/\\d\\d')\n\nAnd away you go. You can paste the flat file into re demo GUI and experiment with creating patterns to match what you need. You'll have it parsed in no time. Once you get good at this, you can use symbolic group names (see docs) to pick out individual elements quickly and cleanly.\n",
"Here's a quick hack job.\nf = open('data.txt')\n\ndef process(file):\n address = \"\"\n\n for line in file:\n if line == '': raise StopIteration\n line = line.rstrip() # to ignore \\n\n if line in ('M','F'):\n sex = line\n break\n else:\n address += line\n\n DOB = file.readline().rstrip() # to ignore \\n\n name = file.readline().rstrip()\n\n if name[-1].isdigit():\n name = re.match(r'^([^\\d]+)\\d+', name).group(1)\n ID = None\n else:\n ID = file.readline().rstrip()\n file.readline() # ignore the record #\n\n print (name, ID, address, sex, DOB)\n\nwhile True:\n process(f)\n\n"
] |
[
16,
4,
3,
2,
1
] |
[] |
[] |
[
"parsing",
"python",
"text"
] |
stackoverflow_0001419653_parsing_python_text.txt
|
Q:
Cannot import file in Python/Django
I'm not sure what's going on, but on my own laptop, everything works okay. When I upload to my host with Python 2.3.5, my views.py can't find anything in my models.py. I have:
from dtms.models import User
from dtms.item_list import *
where my models, item_list, and views files are in /mysite/dtms/
It ends up telling me it can't find User. Any ideas?
Also, when I use the django shell, I can do "from dtms.models import *" and it works just fine.
Okay, after doing the suggestion below, I get a log file of:
syspath = ['/home/victor/django/django_projects', '/home/victor/django/django_projects/mysite']
DEBUG:root:something <module 'dtms' from '/home/victor/django/django_projects/mysite/dtms/__init__.pyc'>
DEBUG:root:/home/victor/django/django_projects/mysite/dtms/__init__.pyc
DEBUG:root:['/home/victor/django/django_projects/mysite/dtms']
I'm not entirely sure what this means - my file is in mysite/dtms/item_list.py. Does this mean it's being loaded? I see the dtms module is being loaded, but it still can't find dtms.models
A:
The fact that from X import * works does not guarantee that from X import Wowie will work too, you know (if you could wean yourself away from that import * addiction you'd be WAY happier on the long run, but, that's another issue;-).
My general advice in import problems is to bracket the problematic import with try/except:
try:
from blah import bluh
except ImportError, e:
import sys
print 'Import error:', e
print 'sys.path:', sys.path
blah = __import__('blah')
print 'blah is %r' % blah
try:
print 'blah is at %s (%s)' % (blah.__file__, blah.__path__)
except Exception, e:
print 'Cannot give details on blah (%s)' % e
and the like. That generally shows you pretty quickly that your sys.path isn't what you thought it would be, and/or blah is at some weird place or with weird path, and the like.
A:
To check your sys.path you can do what Alex said, but instead of using print you can use the logging module:
import logging
LOG_FILENAME = '/tmp/logging_example.out'
logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG,)
logging.debug('This message should go to the log file')
A:
Make sure your project (or the folder above your "dtms" app) is in python's module search path.
This is something you may need to set in your web server's configuration. The reason it works in the django shell is probably because you are in your project's folder when you run the shell.
This is explained here if you're using apache with mod_python.
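As a rough illustration of that advice (the path below is hypothetical; in a mod_python deployment you would normally set the search path in the Apache configuration rather than in code):
import sys

project_dir = '/path/to/django_projects/mysite'   # hypothetical: the folder above the "dtms" app
if project_dir not in sys.path:
    sys.path.insert(0, project_dir)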
A:
I could be way off with this, but did you set the DJANGO_SETTINGS_MODULE environment variable yet? It affects what you can import. Set it to "<project name>.settings". It's also something that gets set when you fire up manage.py, so things work there that won't work in other situations without setting the variable beforehand.
Here's what I do on my system:
export DJANGO_SETTINGS_MODULE=<project name>.settings
or
import os
os.environ['DJANGO_SETTINGS_MODULE']='<project name>.settings'
Sorry if this misses the point, but when I hear of problems importing models.py, I immediately think of environment variables. Also, the project directory has to be on PYTHONPATH, but you probably already know that.
|
Cannot import file in Python/Django
|
I'm not sure what's going on, but on my own laptop, everything works okay. When I upload to my host with Python 2.3.5, my views.py can't find anything in my models.py. I have:
from dtms.models import User
from dtms.item_list import *
where my models, item_list, and views files are in /mysite/dtms/
It ends up telling me it can't find User. Any ideas?
Also, when I use the django shell, I can do "from dtms.models import *" and it works just fine.
Okay, after doing the suggestion below, I get a log file of:
syspath = ['/home/victor/django/django_projects', '/home/victor/django/django_projects/mysite']
DEBUG:root:something <module 'dtms' from '/home/victor/django/django_projects/mysite/dtms/__init__.pyc'>
DEBUG:root:/home/victor/django/django_projects/mysite/dtms/__init__.pyc
DEBUG:root:['/home/victor/django/django_projects/mysite/dtms']
I'm not entirely sure what this means - my file is in mysite/dtms/item_list.py. Does this mean it's being loaded? I see the dtms module is being loaded, but it still can't find dtms.models
|
[
"The fact that from X import * works does not guarantee that from X import Wowie will work too, you know (if you could wean yourself away from that import * addiction you'd be WAY happier on the long run, but, that's another issue;-).\nMy general advice in import problems is to bracket the problematic import with try/except:\ntry:\n from blah import bluh\nexcept ImportError, e:\n import sys\n print 'Import error:', e\n print 'sys.path:', sys.path\n blah = __import__('blah')\n print 'blah is %r' % blah\n try:\n print 'blah is at %s (%s)' % (blah.__file__, blah.__path__)\n except Exception, e:\n print 'Cannot give details on blah (%s)' % e\n\nand the like. That generally shows you pretty quickly that your sys.path isn't what you thought it would be, and/or blah is at some weird place or with weird path, and the like.\n",
"To check your sys.path you can do what Alex said, but instead of using print you can use the logging module:\n\nimport logging\nLOG_FILENAME = '/tmp/logging_example.out'\nlogging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG,)\n\nlogging.debug('This message should go to the log file')\n\n",
"Make sure your project (or the folder above your \"dtms\" app) is in python's module search path.\nThis is something you may need to set in your web server's configuration. The reason it works in the django shell is probably because you are in your project's folder when you run the shell.\nThis is explained here if you're using apache with mod_python.\n",
"I could be way off with this, but did you set the DJANGO_SETTINGS_MODULE environment variable yet? It affects what you can import. Set it to \".settings\". It's also something that gets set when you fire up manage.py, so things work there that won't work in other situations with setting the variable beforehand.\nHere's what I do on my system:\nexport DJANGO_SETTINGS_MODULE=<project name>.settings\n\nor\nimport os\nos.environ['DJANGO_SETTINGS_MODULE']='<project name>.settings'\n\nSorry if this misses the point, but when I hear of problems importing models.py, I immediately thing of environment variables. Also, the project directory has to be on PYTHONPATH, but you probably already know that.\n"
] |
[
4,
2,
0,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001419224_django_python.txt
|
Q:
convert the key in MIME encoded form in python
I need to convert a key into MIME-encoded form; it presently comes in (ASCII-armored) radix-64 format. For that, I have to get this radix-64 data into its binary form and also need to remove its header and checksum before converting it to MIME format, but I didn't find any method which can do this conversion.
f = urllib.urlopen('http://pool.sks-keyservers.net:11371/pks/lookup?op=get&search= 0x58e9390daf8c5bf3') #Retrieve the public key from PKS
data = f.read()
decoded_bytes = base64.b64decode(data)
print decoded_bytes
I used the base64.b64decode method and it gives me the following error:
Traceback (most recent call last):
File "RetEnc.py", line 12, in ?
decoded_bytes = base64.b64decode(data)
File "/usr/lib/python2.4/base64.py", line 76, in b64decode
raise TypeError(msg)
TypeError: Incorrect padding
Why am I getting this TypeError: Incorrect padding error, and how can I fix it?
A:
For a start, when you do use a valid search value ("jay" returns an error stating "too many values"), you will receive an HTML page from which you need to extract the actual key. Trying a search value of "jaysh" I get the following response:
>>> print urllib.urlopen('http://pool.sks-keyservers.net:11371/pks/lookup?op=get&search=jaysh').read()
<html><head><title>Public Key Server -- Get ``jaysh ''</title></head>
<body><h1>Public Key Server -- Get ``jaysh ''</h1>
<pre>
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: SKS 1.1.1
mQGiBEpz7VIRBADAt9YpYfYHJeGA6d+G261FHW1uA0YXltCWa7TL6JnIsuxvh9vImUoyMJd6
1xEW4TuROTxGcMMiDemQq6HfV9tLi7ptVBLf/8nUEFoGhxS+DPJsy46WmlscKHRIEdIkTYhp
uAIMim0q5HWymEqqAfBLwJTOY9sR+nelh0NKepcCqwCgvenJ2R5UgmAh+sOhIBrh3OahZEED
/2sRGHi4xRWKePFpttXfb2hry2/jURPae/wYfuI6Xw3k5EO593veGS7Zyjnt+7mVY1N5V/ey
rfXaS3R6GsByG/eRVzRJGU2DSQvmF+q2NC6v2s4KSzr5CVKpn586SGUSg/aKvXY3EIrpvAGP
rHum1wt6P9m9kr/4X8SdVhj7Jti6A/0TA8C2KYhOn/hSYAMTmhisHan3g2Cm6yNzKeTiq6/0
ooG/ffcY81zC6+Kw236VGy2bLrMLkboXPuecvaRfz14gJA9SGyInIGQcd78BrX8KZDUpF1Ek
KxQqL97YRMQevYV89uQADKT1rDBJPNZ+o9f59WT04tClphk/quvMMuSVILQaamF5c2ggPGph
eXNocmVlQGdtYWlsLmNvbT6IZgQTEQIAJgUCSnPtUgIbAwUJAAFRgAYLCQgHAwIEFQIIAwQW
AgMBAh4BAheAAAoJEFjpOQ2vjFvzS0wAn3vf1A8npIY/DMIFFw0/eGf0FNekAKCBJnub9GVu
9OUY0nISQf7uZZVyI7kBDQRKc+1SEAQAm7Pink6S5+kfHeUoJVldb+VAlHdf7BdvKjVeiKAb
dFUa6vR9az+wn8V5asNy/npEAYnHG2nVFpR8DTlN0eO35p78qXkuWkkpNocLIB3bFwkOCbff
P3yaCZp27Vq+9182bAR2Ah10T1KShjWTS/wfRpSVECYUGUMSh4bJTnbDA2MAAwUEAIcRhF9N
OxAsOezkiZBm+tG4BgT0+uWchY7fItJdEqrdrROuCFqWkJLY2uTbhtZ5RMceFAW3s+IYDHLL
PwM1O+ZojhvAkGwLyC4F+6RCE62mscvDJQsdwS4L25CaG2Aw97HhY7+bG00TWqGLb9JibKie
X1Lk+W8Sde/4UK3Q8tpbiE8EGBECAA8FAkpz7VICGwwFCQABUYAACgkQWOk5Da+MW/MAAgCg
tfUKLOsrFjmyFu7biv7ZwVfejaMAn1QXEJw6hpvte60WZrL0CpS60A6Q
=tvYU
-----END PGP PUBLIC KEY BLOCK-----
</pre>
</body></html>
So you need to look only at the key which is wrapped by the <pre> HTML tags.
By the way, there are other issues that you will need to contend with such as multiple keys being returned because you are searching by "name", when you should be searching by keyID. For example, keyID 0x58E9390DAF8C5BF3 will return the public key for jaysh and only jaysh and the corresponding URL is http://pool.sks-keyservers.net:11371/pks/lookup?op=get&search=0x58E9390DAF8C5BF3.
This was mostly covered in my earlier answer to this question which I presume you also asked.
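A rough sketch of that extraction, assuming you just want the decoded key bytes (the armor headers are skipped and the trailing "=xxxx" CRC24 checksum line is simply dropped here, not verified):
import base64
import re
import urllib

url = ('http://pool.sks-keyservers.net:11371/pks/lookup'
       '?op=get&search=0x58E9390DAF8C5BF3')
page = urllib.urlopen(url).read()

# Keep only the armored block, not the surrounding HTML.
armor = re.search(r'-----BEGIN PGP PUBLIC KEY BLOCK-----(.*?)'
                  r'-----END PGP PUBLIC KEY BLOCK-----', page, re.S).group(1)

# Drop the "Version: ..." headers (everything before the first blank line)
# and the "=xxxx" checksum line, then decode the radix-64 body.
body = armor.split('\n\n', 1)[1]
b64 = ''.join(l for l in body.splitlines() if l and not l.startswith('='))
key_bytes = base64.b64decode(b64)
print '%d bytes of binary key material' % len(key_bytes)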
|
convert the key in MIME encoded form in python
|
I need to convert a key into MIME-encoded form; it presently comes in (ASCII-armored) radix-64 format. For that, I have to get this radix-64 data into its binary form and also need to remove its header and checksum before converting it to MIME format, but I didn't find any method which can do this conversion.
f = urllib.urlopen('http://pool.sks-keyservers.net:11371/pks/lookup?op=get&search= 0x58e9390daf8c5bf3') #Retrieve the public key from PKS
data = f.read()
decoded_bytes = base64.b64decode(data)
print decoded_bytes
I used the base64.b64decode method and it gives me the following error:
Traceback (most recent call last):
File "RetEnc.py", line 12, in ?
decoded_bytes = base64.b64decode(data)
File "/usr/lib/python2.4/base64.py", line 76, in b64decode
raise TypeError(msg)
TypeError: Incorrect padding
Why am I getting this TypeError: Incorrect padding error, and how can I fix it?
|
[
"For a start, when you do use a valid search value (\"jay\" returns an error stating \"too many values\"), you will receive an HTML page from which you need to extract the actual key. Trying a search value of \"jaysh\" I get the following response:\n>>> print urllib.urlopen('http://pool.sks-keyservers.net:11371/pks/lookup?op=get&search=jaysh').read()\n<html><head><title>Public Key Server -- Get ``jaysh ''</title></head>\n<body><h1>Public Key Server -- Get ``jaysh ''</h1>\n<pre>\n-----BEGIN PGP PUBLIC KEY BLOCK-----\nVersion: SKS 1.1.1\n\nmQGiBEpz7VIRBADAt9YpYfYHJeGA6d+G261FHW1uA0YXltCWa7TL6JnIsuxvh9vImUoyMJd6\n1xEW4TuROTxGcMMiDemQq6HfV9tLi7ptVBLf/8nUEFoGhxS+DPJsy46WmlscKHRIEdIkTYhp\nuAIMim0q5HWymEqqAfBLwJTOY9sR+nelh0NKepcCqwCgvenJ2R5UgmAh+sOhIBrh3OahZEED\n/2sRGHi4xRWKePFpttXfb2hry2/jURPae/wYfuI6Xw3k5EO593veGS7Zyjnt+7mVY1N5V/ey\nrfXaS3R6GsByG/eRVzRJGU2DSQvmF+q2NC6v2s4KSzr5CVKpn586SGUSg/aKvXY3EIrpvAGP\nrHum1wt6P9m9kr/4X8SdVhj7Jti6A/0TA8C2KYhOn/hSYAMTmhisHan3g2Cm6yNzKeTiq6/0\nooG/ffcY81zC6+Kw236VGy2bLrMLkboXPuecvaRfz14gJA9SGyInIGQcd78BrX8KZDUpF1Ek\nKxQqL97YRMQevYV89uQADKT1rDBJPNZ+o9f59WT04tClphk/quvMMuSVILQaamF5c2ggPGph\neXNocmVlQGdtYWlsLmNvbT6IZgQTEQIAJgUCSnPtUgIbAwUJAAFRgAYLCQgHAwIEFQIIAwQW\nAgMBAh4BAheAAAoJEFjpOQ2vjFvzS0wAn3vf1A8npIY/DMIFFw0/eGf0FNekAKCBJnub9GVu\n9OUY0nISQf7uZZVyI7kBDQRKc+1SEAQAm7Pink6S5+kfHeUoJVldb+VAlHdf7BdvKjVeiKAb\ndFUa6vR9az+wn8V5asNy/npEAYnHG2nVFpR8DTlN0eO35p78qXkuWkkpNocLIB3bFwkOCbff\nP3yaCZp27Vq+9182bAR2Ah10T1KShjWTS/wfRpSVECYUGUMSh4bJTnbDA2MAAwUEAIcRhF9N\nOxAsOezkiZBm+tG4BgT0+uWchY7fItJdEqrdrROuCFqWkJLY2uTbhtZ5RMceFAW3s+IYDHLL\nPwM1O+ZojhvAkGwLyC4F+6RCE62mscvDJQsdwS4L25CaG2Aw97HhY7+bG00TWqGLb9JibKie\nX1Lk+W8Sde/4UK3Q8tpbiE8EGBECAA8FAkpz7VICGwwFCQABUYAACgkQWOk5Da+MW/MAAgCg\ntfUKLOsrFjmyFu7biv7ZwVfejaMAn1QXEJw6hpvte60WZrL0CpS60A6Q\n=tvYU\n-----END PGP PUBLIC KEY BLOCK-----\n</pre>\n</body></html>\n\nSo you need to look only at the key which is wrapped by the <pre> HTML tags.\nBy the way, there are other issues that you will need to contend with such as multiple keys being returned because you are searching by \"name\", when you should be searching by keyID. For example, keyID 0x58E9390DAF8C5BF3 will return the public key for jaysh and only jaysh and the corresponsing URL is http://pool.sks-keyservers.net:11371/pks/lookup?op=get&search=0x58E9390DAF8C5BF3.\nThis was mostly covered in my earlier answer to this question which I presume you also asked.\n"
] |
[
0
] |
[] |
[] |
[
"encoding",
"python"
] |
stackoverflow_0001419686_encoding_python.txt
|
Q:
How can I generate a complete histogram with numpy?
I have a very long list in a numpy.array. I want to generate a histogram for it. However, Numpy's built in histogram requires a pre-defined number of bins. What's the best way to generate a full histogram with one bin for each value?
A:
If you have an array of integers and the max value isn't too large you can use numpy.bincount:
hist = dict((key,val) for key, val in enumerate(numpy.bincount(data)) if val)
Edit:
If you have float data, or data spread over a huge range you can convert it to integers by doing:
bins = numpy.unique(data)
bincounts = numpy.bincount(numpy.digitize(data, bins) - 1)
hist = dict(zip(bins, bincounts))
A:
A bin for every value sounds a bit strange but wouldn't
bins=a.max()-a.min()
give a similar result?
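As a quick illustration of that idea (note that covering every integer value actually needs a.max() - a.min() + 1 bins, plus an explicit range, so each bin is exactly one unit wide; the sample array is made up):
import numpy

a = numpy.array([1, 2, 2, 5, 5, 5, 9])
nbins = a.max() - a.min() + 1
counts, edges = numpy.histogram(a, bins=nbins, range=(a.min(), a.max() + 1))
# counts[k] is now the number of occurrences of the value a.min() + k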
|
How can I generate a complete histogram with numpy?
|
I have a very long list in a numpy.array. I want to generate a histogram for it. However, Numpy's built in histogram requires a pre-defined number of bins. What's the best way to generate a full histogram with one bin for each value?
|
[
"If you have an array of integers and the max value isn't too large you can use numpy.bincount:\nhist = dict((key,val) for key, val in enumerate(numpy.bincount(data)) if val)\n\nEdit:\nIf you have float data, or data spread over a huge range you can convert it to integers by doing:\nbins = numpy.unique(data)\nbincounts = numpy.bincount(numpy.digitize(data, bins) - 1)\nhist = dict(zip(bins, bincounts))\n\n",
"A bin for every value sounds a bit strange but wouldn't\nbins=a.max()-a.min()\n\ngive a similar result?\n"
] |
[
8,
0
] |
[] |
[] |
[
"histogram",
"numpy",
"python"
] |
stackoverflow_0001420235_histogram_numpy_python.txt
|
Q:
How to install distutils packages using distutils api or setuptools api
I'm working on a buildout script that needs to install a distutils package on remote server.
On PyPi there are 2 recipes for doing this
collective.recipe.distutils 0.1 and zerokspot.recipe.distutils 0.1.1.
The latter module is a derivative of the former, and is a little more convenient than the first, but both suffer from the same problem, which I will describe now.
When bootstrap.py is executed, it downloads zc.buildout package and puts it into buildout's eggs directory. This gives ./bin/buildout access to zc.buildout code, but /usr/local/python does not know anything about zc.buildout at this point.
Buildout attempts to install the package by running 'python setup.py install' inside of a subprocess. This produces an ImportError because zc.buildout is not installed for /usr/local/python.
So, I have several solutions.
Install zc.buildout using easy_install on the remote server. I don't like this option at all, it makes a special case for a module that is very insignificant.
Modify zerokspot.recipe.distutils to put try block around 'import zc.buildout' this way, it will install even if zc.buildout is not installed. It's an ok solution, but somewhat hackish.
Replace subprocess with code that will install the package using distutils api or setuptools api. This would be the best solution in my opinion.
The question is how would i do #3?
Thank you,
Taras
PS: I solved the problem by creating another package that does not have a dependency on zc.buildout. My package is called taras.recipe.distutils and it's available on pypi.
A:
You can call a command line program within your Python program using the subprocess module:
import subprocess
subprocess.call('python setup.py install')
However, how much control do you have over the environment in which this install will be run? If it is a package that you are distributing, you will likely have issues no matter what solution people propose. How will you handle cases of needing root access (e.g. sudo python setup.py install)?
You may consider looking into Paver since it provides an API that is in some ways an extension of setuptools.
A:
zerokspot.recipe.distutils is fundamentally broken in that it adds a dependency on zc.buildout in its setup.py, as follows:
setup.py imports get_version from zerokspot.recipe.distutils
All of zerokspot.recipe.distutils is defined in its __init__.py, including get_version
__init__.py in zerokspot.recipe.distutils imports zc.buildout
Why the author defines get_version is a mystery to me; best practice keeps a simple version string in setup.py itself and lets setuptools deal with dev versions (through setup.cfg), and distutils for version metadata extraction.
Generally it is not a good idea to import the whole package in setup.py as that would require all the package dependencies to be present at install time. Obviously the author of the package has zc.buildout installed as a site-wide package and didn't notice his oversight.
Your best bet is to fork the package on github, remove the get_version dependency, and propose the change to the original author while you use your fork instead.
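As a small illustration of the plain-version-string practice mentioned above (the package name and metadata here are hypothetical, not the real recipe's):
from setuptools import setup

setup(
    name='example.recipe.distutils',   # hypothetical name
    version='0.1.2',                   # plain string; nothing imported from the package itself
    packages=['example'],
    install_requires=['setuptools'],
)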
A:
Are you sure you don't want to just generate a bdist?
|
How to install distutils packages using distutils api or setuptools api
|
I'm working on a buildout script that needs to install a distutils package on remote server.
On PyPi there are 2 recipes for doing this
collective.recipe.distutils 0.1 and zerokspot.recipe.distutils 0.1.1.
The latter module is a derivative of the former, and is a little more convenient than the first, but both suffer from the same problem, which I will describe now.
When bootstrap.py is executed, it downloads zc.buildout package and puts it into buildout's eggs directory. This gives ./bin/buildout access to zc.buildout code, but /usr/local/python does not know anything about zc.buildout at this point.
Buildout attempts to install the package by running 'python setup.py install' inside of a subprocess. This produces an ImportError because zc.buildout is not installed for /usr/local/python.
So, I have several solutions.
Install zc.buildout using easy_install on the remote server. I don't like this option at all, it makes a special case for a module that is very insignificant.
Modify zerokspot.recipe.distutils to put try block around 'import zc.buildout' this way, it will install even if zc.buildout is not installed. It's an ok solution, but somewhat hackish.
Replace subprocess with code that will install the package using distutils api or setuptools api. This would be the best solution in my opinion.
The question is how would i do #3?
Thank you,
Taras
PS: I solved the problem by creating another package that does not have a dependency on zc.buildout. My package is called taras.recipe.distutils and it's available on pypi.
|
[
"You can call a command line program within your Python program using the subprocess module:\nimport subprocess\nsubprocess.call('python setup.py install')\n\nHowever, how much control do you have over the environment that this install will be run? If it is a package that you are distributing, you will likely have issues no matter what solution people propose. How will you handle cases of needing root access (e.g. sudo python setup.py install)?\nYou may consider looking into Paver since it provides an API that is in some ways an extension of setuptools.\n",
"zerokspot.recipe.distutils is fundamentally broken in that it adds a dependency on zc.buildout in it's setup.py, as follows:\n\nsetup.py imports get_version from zerokspot.recipe.distutils\nAll of zerokspot.recipe.distutils is defined in it's __init__.py, including get_version\n__init__.py in zerokspot.recipe.distutils imports zc.buildout\n\nWhy the author defines get_version is a mystery to me; best practice keeps a simple version string in setup.py itself and lets setuptools deal with dev versions (through setup.cfg), and distutils for version metadata extraction.\nGenerally it is not a good idea to import the whole package in setup.py as that would require all the package dependencies to be present at install time. Obviously the author of the package has zc.buildout installed as a site-wide package and didn't notice his oversight.\nYour best bet is to fork the package on github, remove the get_version dependency, and propose the change to the original author while you use your fork instead.\n",
"Are you sure you don't want to just generate a bdist?\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"buildout",
"distutils",
"python"
] |
stackoverflow_0001419379_buildout_distutils_python.txt
|
Q:
How do I loop through relationships in a list only once?
I have a list of users:
users = [1,2,3,4,5]
I want to compute a relationship between them:
score = compatibility( user[0], user[1] )
How do I loop over users so that the relationship between each pair of users is computed only once?
A:
If you care only about ordered relationship, you could do the following:
>>> for i, u in enumerate(users[1:]):
print(users[i], u) # or do something else
1 2
2 3
3 4
4 5
if you need all combinations you should use itertools.combinations:
>>> import itertools
>>> for i in itertools.combinations(users, 2):
print(*i)
1 2
1 3
1 4
1 5
2 3
2 4
2 5
3 4
3 5
4 5
A:
Use for loops or a list comprehension.
Here is a for loop example:
for u in users:
for su in users:
if su == u:
pass
else:
score = compatibility(u, su)
# do score whatever you want
list comprehension:
score = [compatibility(x, y) for i, x in enumerate(users) for y in users[i + 1:]]
A:
Something like the following should work (not tested):
users_range = range(len(users))
# Initialize a 2-dimensional array
scores = [[None for j in users_range] for i in users_range]
# Assign a compatibility to each pair of users.
for i in users_range:
for j in users_range:
scores[i][j] = compatibility(users[i], users[j])
A:
I managed to do what I wanted with this:
i = 0
for user1 in users:
i += 1
for user2 in users[i:]:
print compatibility( user1, user2 )
A:
If you mean that:
compatibility(user[0], user[1]) == compatibility(user[1], user[0])
you could use:
for i, user1 in enumerate(users):
for user2 in users[i:]:
score = compatibility(user1, user2)
this will also calculate the compatibility between the same users (maybe applicable)
A:
import itertools
def compatibility(u1, u2):
"just a stub for demonstration purposes"
return abs(u1 - u2)
def compatibility_map(users):
return dict(((u1, u2), compatibility(u1, u2))
for u1, u2 in itertools.combinations(users, 2))
> compat.compatiblity_map([1,2,3,4,5])
{(1, 2): 1, (1, 3): 2, (4, 5): 1, (1, 4): 3, (1, 5): 4,
(2, 3): 1, (2, 5): 3, (3, 4): 1, (2, 4): 2, (3, 5): 2}
Use itertools.permutations instead of itertools.combinations if compatibility(a,b) doesn't mean the same thing as compatibility(b,a).
|
How do I loop through relationships in a list only once?
|
I have a list of users:
users = [1,2,3,4,5]
I want to compute a relationship between them:
score = compatibility( user[0], user[1] )
How do I loop over users so that the relationship between each pair of users is computed only once?
|
[
"If you care only about ordered relationship, you could do the following:\n>>> for i, u in enumerate(users[1:]):\n print(users[i], u) # or do something else\n\n\n1 2\n2 3\n3 4\n4 5\n\nif you need all combinations you should use itertools.combinations:\n>>> import itertools\n>>> for i in itertools.combinations(users, 2):\n print(*i)\n\n1 2\n1 3\n1 4\n1 5\n2 3\n2 4\n2 5\n3 4\n3 5\n4 5\n\n",
"use for loops, or list comprehension.\nhere is for loop example:\nfor u in users:\n for su in users:\n if su == u:\n pass\n else:\n score = compatibility(u, su)\n # do score whatever you want\n\nlist comprehension:\nscore = [compatibility(x, y) for x in users for y in users if x!=y and compatibility(x,y) not in score]\n\n",
"Something like the following should work (not tested):\nusers_range = range(len(users))\n\n# Initialize a 2-dimensional array\nscores = [None for j in users_range for i in users_range]\n\n# Assign a compatibility to each pair of users.\nfor i in users_range:\n for j in users_range:\n scores[i][j] = compatibility(users[i], users[j])\n\n",
"I managed to do what I wanted with this:\ni = 0\nfor user1 in users: \n i += 1 \n for user2 in users[i:]:\n print compatibility( user1, user2 )\n\n",
"If you mean that:\ncompatibility(user[0], user[1]) == compatibility(user[1], user[0])\n\nyou could use:\nfor i, user1 in enumerate(users):\n for user2 in users[i:]:\n score = compatibility(user1, user2)\n\nthis will also calculate the compatibility between the same users (maybe applicable)\n",
"import itertools\n\ndef compatibility(u1, u2):\n \"just a stub for demonstration purposes\"\n return abs(u1 - u2)\n\ndef compatibility_map(users):\n return dict(((u1, u2), compatibility(u1, u2))\n for u1, u2 in itertools.combinations(users, 2))\n\n> compat.compatiblity_map([1,2,3,4,5])\n{(1, 2): 1, (1, 3): 2, (4, 5): 1, (1, 4): 3, (1, 5): 4,\n (2, 3): 1, (2, 5): 3, (3, 4): 1, (2, 4): 2, (3, 5): 2}\n\nUse itertools.permuations instead of itertools.combinations if compatibility(a,b) doesn't mean the same thing as compatibility(b,a).\n"
] |
[
11,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001420705_python.txt
|
Q:
setuptools / dpkg-buildpackage: Refuse to build if nosetests fail
I have a very simple python package that I build into debian packages using setuptools, cdbs and pycentral:
setup.py:
from setuptools import setup
setup(name='PHPSerialize',
version='1.0',
py_modules=['PHPSerialize'],
test_suite = 'nose.collector'
)
debian/rules:
#!/usr/bin/make -f
DEB_PYTHON_SYSTEM = pycentral
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/python-distutils.mk
Now, is there an easy way to make dpkg-buildpackage execute the unit tests and refuse to create the .deb if the test suite fails?
A:
Try
build/yourpackage::
nosetests
|
setuptools / dpkg-buildpackage: Refuse to build if nosetests fail
|
I have a very simple python package that I build into debian packages using setuptools, cdbs and pycentral:
setup.py:
from setuptools import setup
setup(name='PHPSerialize',
version='1.0',
py_modules=['PHPSerialize'],
test_suite = 'nose.collector'
)
debian/rules:
#!/usr/bin/make -f
DEB_PYTHON_SYSTEM = pycentral
include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/python-distutils.mk
Now, is there an easy way to make dpkg-buildpackage execute the unit tests and refuse to create the .deb if the test suite fails?
|
[
"Try\nbuild/yourpackage::\n nosetests\n\n"
] |
[
2
] |
[] |
[] |
[
"cdbs",
"debian",
"nose",
"python",
"setuptools"
] |
stackoverflow_0001231958_cdbs_debian_nose_python_setuptools.txt
|
Q:
Detecting the http request type (GET, HEAD, etc) from a python cgi
How can I find out which HTTP request method my Python CGI received? I need different behaviors for HEAD and GET.
Thanks!
A:
import os
if os.environ['REQUEST_METHOD'] == 'GET':
# blah
A:
Why do you need to distinguish between GET and HEAD?
Normally you shouldn't distinguish and should treat a HEAD request just like a GET. This is because a HEAD request is meant to return the exact same headers as a GET. The only difference is there will be no response content. Just because there is no response content though doesn't mean you no longer have to return a valid Content-Length header, or other headers, which are dependent on the response content.
In mod_wsgi, which various people are pointing you at, it will actually deliberately change the request method from HEAD to GET in certain cases to guard against people who wrongly treat HEAD differently. The specific case where this is done is where an Apache output filter is registered. The reason that it is done in this case is because the output filter may expect to see the response content and from that generate additional response headers. If you were to decide not to bother to generate any response content for a HEAD request, you will deprive the output filter of the content and the headers they add may then not agree with what would be returned from a GET request. The end result of this is that you can stuff up caches and the operation of the browser.
The same can apply equally for CGI scripts behind Apache as output filters can still be added in that case as well. For CGI scripts there is nothing in place though to protect against users being stupid and doing things differently for a HEAD request.
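For illustration, a minimal CGI sketch of that advice (the body string is made up): compute the full response for every request, emit the same headers, and only omit the body for HEAD:
import os
import sys

body = "<html><body>Hello</body></html>"

# identical headers for GET and HEAD, including Content-Length
sys.stdout.write("Content-Type: text/html\r\n")
sys.stdout.write("Content-Length: %d\r\n\r\n" % len(body))

if os.environ.get('REQUEST_METHOD', 'GET') != 'HEAD':
    sys.stdout.write(body)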
|
Detecting the http request type (GET, HEAD, etc) from a python cgi
|
How can I find out which HTTP request method my Python CGI received? I need different behaviors for HEAD and GET.
Thanks!
|
[
"import os\n\nif os.environ['REQUEST_METHOD'] == 'GET':\n # blah\n\n",
"Why do you need to distinguish between GET and HEAD?\nNormally you shouldn't distinguish and should treat a HEAD request just like a GET. This is because a HEAD request is meant to return the exact same headers as a GET. The only difference is there will be no response content. Just because there is no response content though doesn't mean you no longer have to return a valid Content-Length header, or other headers, which are dependent on the response content.\nIn mod_wsgi, which various people are pointing you at, it will actually deliberately change the request method from HEAD to GET in certain cases to guard against people who wrongly treat HEAD differently. The specific case where this is done is where an Apache output filter is registered. The reason that it is done in this case is because the output filter may expect to see the response content and from that generate additional response headers. If you were to decide not to bother to generate any response content for a HEAD request, you will deprive the output filter of the content and the headers they add may then not agree with what would be returned from a GET request. The end result of this is that you can stuff up caches and the operation of the browser.\nThe same can apply equally for CGI scripts behind Apache as output filters can still be added in that case as well. For CGI scripts there is nothing in place though to protect against users being stupid and doing things differently for a HEAD request.\n"
] |
[
17,
0
] |
[
"This is not a direct answer to your question. But your question stems from doing things the wrong way.\nDo not write Python CGI scripts.\nWrite a mod_wsgi application. Better still, use a Python web framework. There are dozens. Choose one like Werkzeug.\nThe WSGI standard (described in PEP 333) makes it much, much easier to find things in the web request. \nThe mod_wsgi implementation is faster and more secure than a CGI.\nA web framework is also simpler than writing your own CGI script or mod_wsgi application.\n"
] |
[
-1
] |
[
"cgi",
"http",
"httpwebrequest",
"python"
] |
stackoverflow_0001417715_cgi_http_httpwebrequest_python.txt
|
Q:
What variable name do you use for file descriptors?
A pretty silly trivial question. The canonical example is f = open('filename'), but
f is not very descriptive. After not looking at code in a while,
you can forget whether it means
"file" or "function f(x)" or "fourier
transform results" or something else. EIBTI.
In Python, file is already taken by a function.
What else do you use?
A:
data_file
settings_file
results_file
.... etc
A:
You can prepend a descriptive tag to the name, Hungarian-style, like "file_fft".
However, I would try to close file descriptors as soon as possible, and I recommend using the with statement like this so you don't have to worry about closing it, and it makes it easier to not lose track of it.
with open("x.txt") as f:
data = f.read()
do something with data
A:
I'm happy to use f (for either a function OR a file;-) if that identifier's scope is constrained to a pretty small compass (such as with open('zap') as f: would normally portend, say). In general, identifiers with large lexical scopes should be longer and more explicit, ones with lexically small/short scopes/lifespans can be shorter and less explicit, and this applies to open file object just about as much as to any other kind of object!-)
A:
Generally if the scope of a file object is only a few lines, f is perfectly readable - the variable name for the filename in the open call is probably descriptive enough. Otherwise something_file is probably a good idea.
A:
generally I'll use "fp" for a short-lifetime file pointer.
for a longer-lived descriptor, I'll be more descriptive. "fpDebugLog", for example.
A:
I rather use one of: f, fp, fd.
Sometimes inf / outf for input and output file.
|
What variable name do you use for file descriptors?
|
A pretty silly trivial question. The canonical example is f = open('filename'), but
f is not very descriptive. After not looking at code in a while,
you can forget whether it means
"file" or "function f(x)" or "fourier
transform results" or something else. EIBTI.
In Python, file is already taken by a function.
What else do you use?
|
[
" data_file\n settings_file\n results_file\n .... etc\n\n",
"You can append it to the beginning, Hungarian-like \"file_fft\".\nHowever, I would try to close file descriptors as soon as possible, and I recommend using the with statement like this so you don't have to worry about closing it, and it makes it easier to not lose track of it.\nwith open(\"x.txt\") as f:\n data = f.read()\n do something with data\n\n",
"I'm happy to use f (for either a function OR a file;-) if that identifier's scope is constrained to a pretty small compass (such as with open('zap') as f: would normally portend, say). In general, identifiers with large lexical scopes should be longer and more explicit, ones with lexically small/short scopes/lifespans can be shorter and less explicit, and this applies to open file object just about as much as to any other kind of object!-)\n",
"Generally if the scope of a file object is only a few lines, f is perfectly readable - the variable name for the filename in the open call is probably descriptive enough. otherwise something_file is probably a good idea.\n",
"generally I'll use \"fp\" for a short-lifetime file pointer.\nfor a longer-lived descriptor, I'll be more descriptive. \"fpDebugLog\", for example.\n",
"I rather use one of: f, fp, fd.\nSometimes inf / outf for input and output file.\n"
] |
[
8,
6,
4,
3,
2,
0
] |
[] |
[] |
[
"explicit",
"file",
"naming_conventions",
"python",
"variable_names"
] |
stackoverflow_0001419039_explicit_file_naming_conventions_python_variable_names.txt
|
Q:
How can I add a decorator to an existing object method?
If I'm using a module/class I have no control over, how would I decorate one of the methods?
I understand I can: my_decorate_method(target_method)() but I'm looking to have this happen wherever target_method is called without having to do a search/replace.
Is it even possible?
A:
Don't do this.
Use inheritance.
import some_module
class MyVersionOfAClass( some_module.AClass ):
def someMethod( self, *args, **kwargs ):
# do your "decoration" here.
super( MyVersionOfAClass, self ). someMethod( *args, **kwargs )
# you can also do "decoration" here.
Now, fix your main program to use MyVersionOfAClass instead of some_module.AClass.
A:
Yes, it's possible, but there are several problems.
First, when you get a method from a class in the obvious way, you get a wrapper object rather than the function itself.
class X(object):
def m(self,x):
print x
print X.m #>>> <unbound method X.m>
print vars(X)['m'] #>>> <function m at 0x9e17e64>
def increase_decorator(function):
return lambda self,x: function(self,x+1)
Second, I don't know if setting a new method will always work:
x = X()
x.m(1) #>>> 1
X.m = increase_decorator( vars(X)['m'] )
x.m(1) #>>> 2
|
How can I add a decorator to an existing object method?
|
If I'm using a module/class I have no control over, how would I decorate one of the methods?
I understand I can: my_decorate_method(target_method)() but I'm looking to have this happen wherever target_method is called without having to do a search/replace.
Is it even possible?
|
[
"Don't do this.\nUse inheritance.\nimport some_module\n\nclass MyVersionOfAClass( some_module.AClass ):\n def someMethod( self, *args, **kwargs ):\n # do your \"decoration\" here.\n super( MyVersionOfAClass, self ). someMethod( *args, **kwargs )\n # you can also do \"decoration\" here.\n\nNow, fix you main program to use MyVersionOfAClass instead of some_module.AClass.\n",
"Yes it's possible, but there are several problems.\nFirst whet you get method from class in obvious way you get a warper object but not the function itself.\nclass X(object):\n def m(self,x):\n print x\n\nprint X.m #>>> <unbound method X.m>\nprint vars(X)['m'] #>>> <function m at 0x9e17e64>\n\ndef increase_decorator(function):\n return lambda self,x: function(self,x+1)\n\nSecond I don't know if settings new method will always work:\nx = X()\nx.m(1) #>>> 1\nX.m = increase_decorator( vars(X)['m'] )\nx.m(1) #>>> 2\n\n"
] |
[
7,
3
] |
[] |
[] |
[
"decorator",
"python"
] |
stackoverflow_0001420484_decorator_python.txt
|
Q:
How do I pass a string into subprocess.Popen in Python 2?
I would like to run a process from Python (2.4/2.5/2.6) using Popen, and I
would like to give it a string as its standard input.
I'll write an example where the process does a "head -n 1" on its input.
The following works, but I would like to solve it in a nicer way, without using
echo:
>>> from subprocess import *
>>> p1 = Popen(["echo", "first line\nsecond line"], stdout=PIPE)
>>> Popen(["head", "-n", "1"], stdin=p1.stdout)
first line
I tried to use StringIO, but it does not work:
>>> from StringIO import StringIO
>>> Popen(["head", "-n", "1"], stdin=StringIO("first line\nsecond line"))
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib/python2.4/subprocess.py", line 533, in __init__
(p2cread, p2cwrite,
File "/usr/lib/python2.4/subprocess.py", line 830, in _get_handles
p2cread = stdin.fileno()
AttributeError: StringIO instance has no attribute 'fileno'
I guess I could make a temporary file and write the string there -- but that's not very nice either.
A:
Have you tried feeding your string to communicate as the input argument?
Popen.communicate(input=my_input)
It works like this:
p = subprocess.Popen(["head", "-n", "1"], stdin=subprocess.PIPE)
p.communicate('first\nsecond')
output:
first
I forgot to set stdin to subprocess.PIPE when I tried it at first.
A:
Use os.pipe:
>>> from subprocess import Popen
>>> import os, sys
>>> read, write = os.pipe()
>>> p = Popen(["head", "-n", "1"], stdin=read, stdout=sys.stdout)
>>> byteswritten = os.write(write, "foo bar\n")
foo bar
>>>
|
How do I pass a string into subprocess.Popen in Python 2?
|
I would like to run a process from Python (2.4/2.5/2.6) using Popen, and I
would like to give it a string as its standard input.
I'll write an example where the process does a "head -n 1" on its input.
The following works, but I would like to solve it in a nicer way, without using
echo:
>>> from subprocess import *
>>> p1 = Popen(["echo", "first line\nsecond line"], stdout=PIPE)
>>> Popen(["head", "-n", "1"], stdin=p1.stdout)
first line
I tried to use StringIO, but it does not work:
>>> from StringIO import StringIO
>>> Popen(["head", "-n", "1"], stdin=StringIO("first line\nsecond line"))
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib/python2.4/subprocess.py", line 533, in __init__
(p2cread, p2cwrite,
File "/usr/lib/python2.4/subprocess.py", line 830, in _get_handles
p2cread = stdin.fileno()
AttributeError: StringIO instance has no attribute 'fileno'
I guess I could make a temporary file and write the string there -- but that's not very nice either.
|
[
"Have you tried to feed your string to communicate as a string?\nPopen.communicate(input=my_input)\n\nIt works like this:\np = subprocess.Popen([\"head\", \"-n\", \"1\"], stdin=subprocess.PIPE)\np.communicate('first\\nsecond')\n\noutput:\nfirst\n\nI forgot to set stdin to subprocess.PIPE when I tried it at first.\n",
"Use os.pipe:\n>>> from subprocess import Popen\n>>> import os, sys\n>>> read, write = os.pipe()\n>>> p = Popen([\"head\", \"-n\", \"1\"], stdin=read, stdout=sys.stdout)\n>>> byteswritten = os.write(write, \"foo bar\\n\")\nfoo bar\n>>>\n\n"
] |
[
8,
5
] |
[] |
[] |
[
"popen",
"python"
] |
stackoverflow_0001421311_popen_python.txt
|
Q:
Python regular expression for HTML parsing (BeautifulSoup)
I want to grab the value of a hidden input field in HTML.
<input type="hidden" name="fooId" value="12-3456789-1111111111" />
I want to write a regular expression in Python that will return the value of fooId, given that I know the line in the HTML follows the format
<input type="hidden" name="fooId" value="**[id is here]**" />
Can someone provide an example in Python to parse the HTML for the value?
A:
For this particular case, BeautifulSoup is harder to write than a regex, but it is much more robust... I'm just contributing with the BeautifulSoup example, given that you already know which regexp to use :-)
from BeautifulSoup import BeautifulSoup
#Or retrieve it from the web, etc.
html_data = open('/yourwebsite/page.html','r').read()
#Create the soup object from the HTML data
soup = BeautifulSoup(html_data)
fooId = soup.find('input',name='fooId',type='hidden') #Find the proper tag
value = fooId.attrs[2][1] #The value of the third attribute of the desired tag
#or index it directly via fooId['value']
A:
I agree with Vinko BeautifulSoup is the way to go. However I suggest using fooId['value'] to get the attribute rather than relying on value being the third attribute.
from BeautifulSoup import BeautifulSoup
#Or retrieve it from the web, etc.
html_data = open('/yourwebsite/page.html','r').read()
#Create the soup object from the HTML data
soup = BeautifulSoup(html_data)
fooId = soup.find('input',name='fooId',type='hidden') #Find the proper tag
value = fooId['value'] #The value attribute
A:
import re
reg = re.compile('<input type="hidden" name="fooId" value="([^"]*)" />')
value = reg.search(inputHTML).group(1)
print 'Value is', value
A:
Parsing is one of those areas where you really don't want to roll your own if you can avoid it, as you'll be chasing down the edge cases and bugs for years to come.
I'd recommend using BeautifulSoup. It has a very good reputation and looks from the docs like it's pretty easy to use.
A:
Pyparsing is a good interim step between BeautifulSoup and regex. It is more robust than just regexes, since its HTML tag parsing comprehends variations in case, whitespace, attribute presence/absence/order, but simpler to do this kind of basic tag extraction than using BS.
Your example is especially simple, since everything you are looking for is in the attributes of the opening "input" tag. Here is a pyparsing example showing several variations on your input tag that would give regexes fits, and also shows how NOT to match a tag if it is within a comment:
html = """<html><body>
<input type="hidden" name="fooId" value="**[id is here]**" />
<blah>
<input name="fooId" type="hidden" value="**[id is here too]**" />
<input NAME="fooId" type="hidden" value="**[id is HERE too]**" />
<INPUT NAME="fooId" type="hidden" value="**[and id is even here TOO]**" />
<!--
<input type="hidden" name="fooId" value="**[don't report this id]**" />
-->
<foo>
</body></html>"""
from pyparsing import makeHTMLTags, withAttribute, htmlComment
# use makeHTMLTags to create tag expression - makeHTMLTags returns expressions for
# opening and closing tags, we're only interested in the opening tag
inputTag = makeHTMLTags("input")[0]
# only want input tags with special attributes
inputTag.setParseAction(withAttribute(type="hidden", name="fooId"))
# don't report tags that are commented out
inputTag.ignore(htmlComment)
# use searchString to skip through the input
foundTags = inputTag.searchString(html)
# dump out first result to show all returned tags and attributes
print foundTags[0].dump()
print
# print out the value attribute for all matched tags
for inpTag in foundTags:
print inpTag.value
Prints:
['input', ['type', 'hidden'], ['name', 'fooId'], ['value', '**[id is here]**'], True]
- empty: True
- name: fooId
- startInput: ['input', ['type', 'hidden'], ['name', 'fooId'], ['value', '**[id is here]**'], True]
- empty: True
- name: fooId
- type: hidden
- value: **[id is here]**
- type: hidden
- value: **[id is here]**
**[id is here]**
**[id is here too]**
**[id is HERE too]**
**[and id is even here TOO]**
You can see that not only does pyparsing match these unpredictable variations, it returns the data in an object that makes it easy to read out the individual tag attributes and their values.
A:
/<input type="hidden" name="fooId" value="([\d-]+)" \/>/
A:
/<input\s+type="hidden"\s+name="([A-Za-z0-9_]+)"\s+value="([A-Za-z0-9_\-]*)"\s*/>/
>>> import re
>>> s = '<input type="hidden" name="fooId" value="12-3456789-1111111111" />'
>>> re.match('<input\s+type="hidden"\s+name="([A-Za-z0-9_]+)"\s+value="([A-Za-z0-9_\-]*)"\s*/>', s).groups()
('fooId', '12-3456789-1111111111')
|
Python regular expression for HTML parsing (BeautifulSoup)
|
I want to grab the value of a hidden input field in HTML.
<input type="hidden" name="fooId" value="12-3456789-1111111111" />
I want to write a regular expression in Python that will return the value of fooId, given that I know the line in the HTML follows the format
<input type="hidden" name="fooId" value="**[id is here]**" />
Can someone provide an example in Python to parse the HTML for the value?
|
[
"For this particular case, BeautifulSoup is harder to write than a regex, but it is much more robust... I'm just contributing with the BeautifulSoup example, given that you already know which regexp to use :-)\nfrom BeautifulSoup import BeautifulSoup\n\n#Or retrieve it from the web, etc. \nhtml_data = open('/yourwebsite/page.html','r').read()\n\n#Create the soup object from the HTML data\nsoup = BeautifulSoup(html_data)\nfooId = soup.find('input',name='fooId',type='hidden') #Find the proper tag\nvalue = fooId.attrs[2][1] #The value of the third attribute of the desired tag \n #or index it directly via fooId['value']\n\n",
"I agree with Vinko BeautifulSoup is the way to go. However I suggest using fooId['value'] to get the attribute rather than relying on value being the third attribute.\nfrom BeautifulSoup import BeautifulSoup\n#Or retrieve it from the web, etc.\nhtml_data = open('/yourwebsite/page.html','r').read()\n#Create the soup object from the HTML data\nsoup = BeautifulSoup(html_data)\nfooId = soup.find('input',name='fooId',type='hidden') #Find the proper tag\nvalue = fooId['value'] #The value attribute\n\n",
"import re\nreg = re.compile('<input type=\"hidden\" name=\"([^\"]*)\" value=\"<id>\" />')\nvalue = reg.search(inputHTML).group(1)\nprint 'Value is', value\n\n",
"Parsing is one of those areas where you really don't want to roll your own if you can avoid it, as you'll be chasing down the edge-cases and bugs for years go come\nI'd recommend using BeautifulSoup. It has a very good reputation and looks from the docs like it's pretty easy to use.\n",
"Pyparsing is a good interim step between BeautifulSoup and regex. It is more robust than just regexes, since its HTML tag parsing comprehends variations in case, whitespace, attribute presence/absence/order, but simpler to do this kind of basic tag extraction than using BS.\nYour example is especially simple, since everything you are looking for is in the attributes of the opening \"input\" tag. Here is a pyparsing example showing several variations on your input tag that would give regexes fits, and also shows how NOT to match a tag if it is within a comment:\nhtml = \"\"\"<html><body>\n<input type=\"hidden\" name=\"fooId\" value=\"**[id is here]**\" />\n<blah>\n<input name=\"fooId\" type=\"hidden\" value=\"**[id is here too]**\" />\n<input NAME=\"fooId\" type=\"hidden\" value=\"**[id is HERE too]**\" />\n<INPUT NAME=\"fooId\" type=\"hidden\" value=\"**[and id is even here TOO]**\" />\n<!--\n<input type=\"hidden\" name=\"fooId\" value=\"**[don't report this id]**\" />\n-->\n<foo>\n</body></html>\"\"\"\n\nfrom pyparsing import makeHTMLTags, withAttribute, htmlComment\n\n# use makeHTMLTags to create tag expression - makeHTMLTags returns expressions for\n# opening and closing tags, we're only interested in the opening tag\ninputTag = makeHTMLTags(\"input\")[0]\n\n# only want input tags with special attributes\ninputTag.setParseAction(withAttribute(type=\"hidden\", name=\"fooId\"))\n\n# don't report tags that are commented out\ninputTag.ignore(htmlComment)\n\n# use searchString to skip through the input \nfoundTags = inputTag.searchString(html)\n\n# dump out first result to show all returned tags and attributes\nprint foundTags[0].dump()\nprint\n\n# print out the value attribute for all matched tags\nfor inpTag in foundTags:\n print inpTag.value\n\nPrints:\n['input', ['type', 'hidden'], ['name', 'fooId'], ['value', '**[id is here]**'], True]\n- empty: True\n- name: fooId\n- startInput: ['input', ['type', 'hidden'], ['name', 'fooId'], ['value', '**[id is here]**'], True]\n - empty: True\n - name: fooId\n - type: hidden\n - value: **[id is here]**\n- type: hidden\n- value: **[id is here]**\n\n**[id is here]**\n**[id is here too]**\n**[id is HERE too]**\n**[and id is even here TOO]**\n\nYou can see that not only does pyparsing match these unpredictable variations, it returns the data in an object that makes it easy to read out the individual tag attributes and their values.\n",
"/<input type=\"hidden\" name=\"fooId\" value=\"([\\d-]+)\" \\/>/\n\n",
"/<input\\s+type=\"hidden\"\\s+name=\"([A-Za-z0-9_]+)\"\\s+value=\"([A-Za-z0-9_\\-]*)\"\\s*/>/\n\n>>> import re\n>>> s = '<input type=\"hidden\" name=\"fooId\" value=\"12-3456789-1111111111\" />'\n>>> re.match('<input\\s+type=\"hidden\"\\s+name=\"([A-Za-z0-9_]+)\"\\s+value=\"([A-Za-z0-9_\\-]*)\"\\s*/>', s).groups()\n('fooId', '12-3456789-1111111111')\n\n"
] |
[
27,
18,
8,
5,
1,
0,
0
] |
[] |
[] |
[
"python",
"regex",
"screen_scraping"
] |
stackoverflow_0000055391_python_regex_screen_scraping.txt
|
Q:
Set files to ownership of current directory in Python
I'm working on a Python script that creates text files containing size/space information about the directories that the script is run on. The script needs to be run as root, and as a result, it sets the text files that it creates to root's ownership.
I know I can change the ownership with os.fchown, but how do I pass fchown the uid and gid of the directory that the script is running on?
A:
Use
import os, stat
info = os.stat(dirpath)
uid, gid = info[stat.ST_UID], info[stat.ST_GID]
|
Set files to ownership of current directory in Python
|
I'm working on a Python script that creates text files containing size/space information about the directories that the script is run on. The script needs to be run as root, and as a result, it sets the text files that it creates to root's ownership.
I know I can change the ownership with os.fchown, but how do I pass fchown the uid and gid of the directory that the script is running on?
|
[
"Use\nimport os, stat\ninfo = os.stat(dirpath)\nuid, gid = info[stat.ST_UID], info[stat.ST_GID]\n\n"
] |
[
0
] |
[] |
[] |
[
"permissions",
"python"
] |
stackoverflow_0001421807_permissions_python.txt
|
Q:
Debugging Ruby/Python/Groovy
I'm rephrasing this question because it was either too uninteresting or too incomprehensible. :)
The original question came about because I'm making the transition from Java to Groovy, but the example could apply equally when transitioning to any of the higher-level languages (Ruby, Python, Groovy).
Java is easy to debug because there is a clear relationship between lines of code, and fairly fine-grained behaviour, e.g. manipulate an array using a for loop:
for ( int i=0; i < array1.size(); i++ )
{
if ( meetsSomeCriterion(array1.elementAt(i) )
{
array2.add( array1.elementAt(i) );
}
}
so you can set a breakpoint on the test in the loop and see what happens next. (I know there are better ways to write this; it's just to illustrate the point.)
In languages like Ruby the idiomatic style seems to favour higher-level one-liner coding, e.g. from http://rubyquiz.com/quiz113.html
quiz.to_s.reverse.scan(/(?:\d*\.)?\d{1,3}-?/).join(',').reverse
I'm wondering if you can suggest any effective techniques for debugging this, for example if you changed the regular expression ... would you still use the traditional debugger, and step into/over the chained methods? Or is there a better way?
Thanks!
A:
If I were to debug your example, the first thing I would do is break it down into multiple steps. I don't care if it's "pythonic" or "the ruby way" or "tclish" or whatever, code like that can be difficult to debug.
That's not to say I don't write code like that. Once it's been debugged it is sometimes OK to join it all into a single line but I find myself leaning more toward readability and maintainability and less toward writing concise code. If the one-liner approach is genuinely more readable I'll go with it, but if it's not, I don't.
A:
Combining multiple actions into a single line is all well and good, when you can still look at the line in question and know that it's going to do exactly what you want it to do. The minute you get to the point where you can't look at the code and go "yeah, ok, it does xyz, there's no way it couldn't" is when you should consider breaking it into individual pieces.
I give the same advice to people with long procs/methods. If you can't look at the code and know exactly what it's doing in all situations, then break it up. You can break up each of the "non-obvious" bits of code into its own method and write tests for that piece alone. Then, you can use that method in your original method and know it's going to work... plus your original method is now easier to understand.
Along the same lines, you can break your "scan(/(?:\d*.)?\d{1,3}-?/)" code off into another method and test that by itself. The original code can then use that method, and it should be much easier to understand and know it's working.
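As a rough Python sketch of the same idea (the function name is invented, and the regex is just the one from the quiz), each step gets its own line so you can print or inspect it:
import re

def group_digits(quiz):
    reversed_digits = str(quiz)[::-1]
    groups = re.findall(r'(?:\d*\.)?\d{1,3}-?', reversed_digits)
    joined = ','.join(groups)
    return joined[::-1]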
A:
If I have to debug a line like the one you posted, I find that nothing helps as much as breaking it into stand-alone statements. That way you can see what each method receives as a parameter, and what it returns.
Such statements make code hard to maintain.
|
Debugging Ruby/Python/Groovy
|
I'm rephrasing this question because it was either too uninteresting or too incomprehensible. :)
The original question came about because I'm making the transition from Java to Groovy, but the example could apply equally when transitioning to any of the higher-level languages (Ruby, Python, Groovy).
Java is easy to debug because there is a clear relationship between lines of code, and fairly fine-grained behaviour, e.g. manipulate an array using a for loop:
for ( int i=0; i < array1.size(); i++ )
{
if ( meetsSomeCriterion(array1.elementAt(i) )
{
array2.add( array1.elementAt(i) );
}
}
so you can set a breakpoint on the test in the loop and see what happens next. (I know there are better ways to write this; it's just to illustrate the point.)
In languages like Ruby the idiomatic style seems to favour higher-level one-liner coding, e.g. from http://rubyquiz.com/quiz113.html
quiz.to_s.reverse.scan(/(?:\d*\.)?\d{1,3}-?/).join(',').reverse
I'm wondering if you can suggest any effective techniques for debugging this, for example if you changed the regular expression ... would you still use the traditional debugger, and step into/over the chained methods? Or is there a better way?
Thanks!
|
[
"If I were to debug your example, the first thing I would do is break it down into multiple steps. I don't care if it's \"pythonic\" or \"the ruby way\" or \"tclish\" or whatever, code like that can be difficult to debug. \nThat's not to say I don't write code like that. Once it's been debugged it is sometimes OK to join it all into a single line but I find myself leaning more toward readability and maintainability and less toward writing concise code. If the one-liner approach is genuinely more readable I'll go with it, but if it's not, I don't.\n",
"Combining multiple actions into a single line is all well and good, when you can still look at the line in question and know that it's going to do exactly what you want it to do. The minute you get to the point where you can't look at the code and go \"yeah, ok, it does xyz, there's no way it couldn't\" is when you should consider breaking it into individual pieces.\nI give the same advice to people with long procs/methods. If you can't look at the code and know exactly what it's doing in all situations, then break it up. You can break up each of the \"non-obvious\" bits of code into it's own method and write tests for that piece alone. Then, you can use that method in your original method and know it's going to work... plus your original method is now easier to understand.\nAlong the same lines, you can break your \"scan(/(?:\\d*.)?\\d{1,3}-?/)\" code off into another method and test that by itself. The original code can then use that method, and it should be much easier to understand and know it's working.\n",
"If I have to debug such a line as the one you posted I find that nothing helps as much as breaking it into stand-alone statements. That way you can see what each method receives as a parameter , and what it returns. \nSuch statements make code hard to maintain.\n"
] |
[
3,
2,
0
] |
[] |
[] |
[
"debugging",
"dynamic_languages",
"groovy",
"python",
"ruby"
] |
stackoverflow_0001205343_debugging_dynamic_languages_groovy_python_ruby.txt
|
Q:
Switching databases in TG2 during runtime
I am doing an application which will use multiple sqlite3 databases, prepopulated with data from an external application. Each database will have the exact same tables, but with different data.
I want to be able to switch between these databases according to user input. What is the most elegant way to do that in TurboGears 2?
A:
If ALL databases have the same schema then you should be able to create several Sessions using the same model to the different DBs.
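A minimal SQLAlchemy sketch of that idea (the engine URLs and the selection logic are made up for illustration):
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine_a = create_engine('sqlite:///data_a.db')
engine_b = create_engine('sqlite:///data_b.db')

Session = sessionmaker()

def get_session(choice):
    # pick the database according to user input
    engine = engine_a if choice == 'a' else engine_b
    return Session(bind=engine)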
A:
Dzhelil,
I wrote a blog post a while back about using multiple databases in TG2. You could combine this method with Jorge's suggestion of multiple DBSessions and I think you could do this easily.
How to use multiple databases in TurboGears 2.0
Hope this helps,
Seth
A:
I am using two databases for a read-only application. The second database is a cache in case the primary database is down. I use two objects to hold the connection, metadata and compatible Table instances. The top of the view function assigns db = primary or db = secondary and the rest is just queries against db.tableA.join(db.tableB). I am not using the ORM.
The schemata are not strictly identical. The primary database needs a schema. prefix (Table(...schema='schema')) and the cache database does not. To get around this, I create my table objects in a function that takes the schema name as an argument. By calling the function once for each database, I wind up with compatible prefixed and non-prefixed Table objects.
At least in Pylons, the SQLAlchemy meta.Session is a ScopedSession. The application's BaseController in appname/lib/base.py calls Session.remove() after each request. It's probably better to have a single Session that talks to both databases, but if you don't you may need to modify your BaseController to call .remove() on each Session.
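A sketch of that table-factory approach (the table and column names are invented):
from sqlalchemy import MetaData, Table, Column, Integer, String

def build_tables(metadata, schema=None):
    # schema=None omits the prefix for the cache database
    table_a = Table('tableA', metadata,
                    Column('id', Integer, primary_key=True),
                    Column('name', String(50)),
                    schema=schema)
    table_b = Table('tableB', metadata,
                    Column('id', Integer, primary_key=True),
                    Column('a_id', Integer),
                    schema=schema)
    return table_a, table_b

primary_tables = build_tables(MetaData(), schema='schema')
cache_tables = build_tables(MetaData())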
|
Switching databases in TG2 during runtime
|
I am doing an application which will use multiple sqlite3 databases, prepopulated with data from an external application. Each database will have the exact same tables, but with different data.
I want to be able to switch between these databases according to user input. What is the most elegant way to do that in TurboGears 2?
|
[
"If ALL databases have the same schema then you should be able to create several Sessions using the same model to the different DBs.\n",
"Dzhelil,\nI wrote a blog post a while back about using multiple databases in TG2. You could combine this method with Jorge's suggestion of multiple DBSessions and I think you could do this easily.\nHow to use multiple databases in TurboGears 2.0\nHope this helps, \nSeth\n",
"I am using two databases for a read-only application. The second database is a cache in case the primary database is down. I use two objects to hold the connection, metadata and compatible Table instances. The top of the view function assigns db = primary or db = secondary and the rest is just queries against db.tableA.join(db.tableB). I am not using the ORM.\nThe schemata are not strictly identical. The primary database needs a schema. prefix (Table(...schema='schema')) and the cache database does not. To get around this, I create my table objects in a function that takes the schema name as an argument. By calling the function once for each database, I wind up with compatible prefixed and non-prefixed Table objects.\nAt least in Pylons, the SQLAlchemy meta.Session is a ScopedSession. The application's BaseController in appname/lib/base.py calls Session.remove() after each request. It's probably better to have a single Session that talks to both databases, but if you don't you may need to modify your BaseController to call .remove() on each Session.\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"python",
"sqlite",
"turbogears",
"turbogears2"
] |
stackoverflow_0001093589_python_sqlite_turbogears_turbogears2.txt
|
Q:
pytz: Why is normalize needed when converting between timezones?
I'm reading the not-so-complete pytz documentation and I'm stuck on understanding one part of it.
Converting between timezones also needs special attention. This also needs to use the normalize method to ensure the conversion is correct.
>>> utc_dt = utc.localize(datetime.utcfromtimestamp(1143408899))
>>> utc_dt.strftime(fmt)
'2006-03-26 21:34:59 UTC+0000'
>>> au_tz = timezone('Australia/Sydney')
>>> au_dt = au_tz.normalize(utc_dt.astimezone(au_tz))
>>> au_dt.strftime(fmt)
'2006-03-27 08:34:59 EST+1100'
>>> utc_dt2 = utc.normalize(au_dt.astimezone(utc))
>>> utc_dt2.strftime(fmt)
'2006-03-26 21:34:59 UTC+0000'
I tried this very example without using normalize and it turned out just the same. In my opinion this example doesn't really explain why we have to use normalize when converting between datetime objects in different timezones.
Would someone please give me an example (like the one above) where the result differs when not using normalize.
Thanks
A:
From the pytz documentation:
In addition, if you perform date arithmetic on local times that cross DST boundaries, the results may be in an incorrect timezone (ie. subtract 1 minute from 2002-10-27 1:00 EST and you get 2002-10-27 0:59 EST instead of the correct 2002-10-27 1:59 EDT). A normalize() method is provided to correct this. Unfortunately these issues cannot be resolved without modifying the Python datetime implementation.
A:
The docs say that normalize is used as a workaround for DST issues:
In addition, if you perform date arithmetic on local times that cross DST boundaries, the results may be in an incorrect timezone (ie. subtract 1 minute from 2002-10-27 1:00 EST and you get 2002-10-27 0:59 EST instead of the correct 2002-10-27 1:59 EDT). A normalize() method is provided to correct this.
So it's used to correct some edge cases involving DST. If you're not using DST timezones (e.g. UTC) then it's not necessary to use normalize.
If you don't use it your conversion could potentially be one hour off under certain circumstances.
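For example, a small sketch of the DST edge case the docs describe (US/Eastern, where DST ended at 2:00 on 2002-10-27):
from datetime import datetime, timedelta
import pytz

eastern = pytz.timezone('US/Eastern')
loc_dt = eastern.localize(datetime(2002, 10, 27, 1, 0), is_dst=False)   # 1:00 EST

earlier = loc_dt - timedelta(minutes=1)
print earlier.strftime('%Y-%m-%d %H:%M %Z')                     # 2002-10-27 00:59 EST (wrong)
print eastern.normalize(earlier).strftime('%Y-%m-%d %H:%M %Z')  # 2002-10-27 01:59 EDT (correct)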
|
pytz: Why is normalize needed when converting between timezones?
|
I'm reading the not-so-complete pytz documentation and I'm stuck on understanding one part of it.
Converting between timezones also needs special attention. This also needs to use the normalize method to ensure the conversion is correct.
>>> utc_dt = utc.localize(datetime.utcfromtimestamp(1143408899))
>>> utc_dt.strftime(fmt)
'2006-03-26 21:34:59 UTC+0000'
>>> au_tz = timezone('Australia/Sydney')
>>> au_dt = au_tz.normalize(utc_dt.astimezone(au_tz))
>>> au_dt.strftime(fmt)
'2006-03-27 08:34:59 EST+1100'
>>> utc_dt2 = utc.normalize(au_dt.astimezone(utc))
>>> utc_dt2.strftime(fmt)
'2006-03-26 21:34:59 UTC+0000'
I tried this very example without using normalize and it turned out just the same. In my opinion this example doesn't really explain why we have to use normalize when converting between datetime objects in different timezones.
Would someone please give me an example (like the one above) where the result differs when not using normalize.
Thanks
|
[
"From the pytz documentation:\n\nIn addition, if you perform date arithmetic on local times that cross DST boundaries, the results may be in an incorrect timezone (ie. subtract 1 minute from 2002-10-27 1:00 EST and you get 2002-10-27 0:59 EST instead of the correct 2002-10-27 1:59 EDT). A normalize() method is provided to correct this. Unfortunately these issues cannot be resolved without modifying the Python datetime implementation.\n\n",
"The docs say that normalize is used as a workaround for DST issues:\n\nIn addition, if you perform date arithmetic on local times that cross DST boundaries, the results may be in an incorrect timezone (ie. subtract 1 minute from 2002-10-27 1:00 EST and you get 2002-10-27 0:59 EST instead of the correct 2002-10-27 1:59 EDT). A normalize() method is provided to correct this.\n\nSo it's used to correct some edge cases involving DST. If you're not using DST timezones (e.g. UTC) then it's not necessary to use normalize.\nIf you don't use it your conversion could potentially be one hour off under certain circumstances.\n"
] |
[
12,
6
] |
[] |
[] |
[
"python",
"pytz",
"timezone"
] |
stackoverflow_0001422880_python_pytz_timezone.txt
|
Q:
Python and the built-in heap
At the moment, I am trying to write a priority queue in Python using the built-in heapq library. However, I am stuck trying to get a handle on what Python does with tie-breaking: I want a specific condition where I can dictate what happens on a tie, instead of the heapq library seemingly picking something off the queue at random. Does anybody know of a way of rewriting the tie-breaking condition, or would it be easier to build the priority queue from the ground up?
A:
heapq uses the intrinsic comparisons on queue items (__le__ and friends). The general way to work around this limit is the good old approach known as "decorate/undecorate" -- that's what we used to do in sorting, before the key= parameter was introduced there.
To put it simply, you enqueue and dequeue, not just the items you care about ("payload"), but rather the items "decorated" into tuples that start with the "keys" you need heapq to consider. So for example it would be normal to enqueue a tuple like (foo.x, time.time(), foo) if you want prioritization by the x attribute with ties broken by time of insertion in the queue -- of course when you de-queue you "undecorate" by taking the [-1]th item of the tuple you get by de-queueing.
So, just put the "secondary keys" you need to be considered for "tie-breaking" in the "decorated" tuple you enqueue, AFTER the ones whose "ties" you want to break that way.
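A minimal sketch of that decorate/undecorate pattern (the priorities and payloads are made up):
import heapq
import time

heap = []

def push(priority, item):
    # decorate: primary key first, then the tie-breaker, then the payload
    heapq.heappush(heap, (priority, time.time(), item))

def pop():
    # undecorate: the payload is the last element of the tuple
    return heapq.heappop(heap)[-1]

push(1, 'first')
push(1, 'second')   # same priority; the earlier timestamp wins the tie
push(0, 'urgent')
print pop(), pop(), pop()   # urgent first second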
|
Python and the built-in heap
|
At the moment, I am trying to write a priority queue in Python using the built-in heapq library. However, I am stuck trying to get a handle on what Python does with tie-breaking: I want a specific condition where I can dictate what happens on a tie, instead of the heapq library seemingly picking something off the queue at random. Does anybody know of a way of rewriting the tie-breaking condition, or would it be easier to build the priority queue from the ground up?
|
[
"heapq uses the intrinsic comparisons on queue items (__le__ and friends). The general way to work around this limit is the good old approach known as \"decorate/undecorate\" -- that's what we used to do in sorting, before the key= parameter was introduced there.\nTo put it simply, you enqueue and dequeue, not just the items you care about (\"payload\"), but rather the items \"decorated\" into tuples that start with the \"keys\" you need heapq to consider. So for example it would be normal to enqueue a tuple like (foo.x, time.time(), foo) if you want prioritization by the x attribute with ties broken by time of insertion in the queue -- of course when you de-queue you \"undecorate\", there by taking the [-1]th item of the tuple you get by de-queueing.\nSo, just put the \"secondary keys\" you need to be considered for \"tie-breaking\" in the \"decorated\" tuple you enqueue, AFTER the ones whose \"ties\" you want to break that way.\n"
] |
[
10
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001422969_python.txt
|
Q:
Negative lookahead after newline?
I have a CSV-like text file that has about 1000 lines. Between each record in the file is a long series of dashes. The records generally end with a \n, but sometimes there is an extra \n before the end of the record. Simplified example:
"1x", "1y", "Hi there"
-------------------------------
"2x", "2y", "Hello - I'm lost"
-------------------------------
"3x", "3y", "How ya
doing?"
-------------------------------
I want to replace the extra \n's with spaces, i.e. concatenate the lines between the dashes. I thought I would be able to do this (Python 2.5):
text = open("thefile.txt", "r").read()
better_text = re.sub(r'\n(?!\-)', ' ', text)
but that seems to replace every \n, not just the ones that are not followed by a dash. What am I doing wrong?
I am asking this question in an attempt to improve my own regex skills and understand the mistakes that I made. The end goal is to generate a text file in a format that is usable by a specific VBA for Word macro that generates a styled Word document which will then be digested by a Word-friendly CMS.
A:
This is a good place to use a generator function to skip the lines of ----'s and yield something that the csv module can read.
def readCleanLines( someFile ):
for line in someFile:
if line.strip() == len(line.strip())*'-':
continue
yield line
reader= csv.reader( readCleanLines( someFile ) )
for row in reader:
print row
This should handle the line breaks inside quotes seamlessly and silently.
If you want to do other things with this file, for example, save a copy with the ---- lines removed, you can do this.
with open( "source", "r" ) as someFile:
with open( "destination", "w" ) as anotherFile:
for line in readCleanLines( someFile ):
anotherFile.write( line )
That will make a copy with the ---- lines removed. This isn't really worth the effort, since reading and skipping the lines is very, very fast and doesn't require any additional storage.
A:
You need to exclude the line breaks at the end of the separating lines. Try this:
\n(?<!-\n)(?!-)
This regular expression uses a negative look-behind assertion to exclude a \n that’s preceded by a -.
A:
re.sub(r'(?<!-)\n(?!-)', ' ', text)
(Hyphen doesn't need escaping outside of a character class.)
A:
A RegEx isn't always the best tool for the job. How about running it through something like "Split" or "Tokenize" first? (I'm sure python has an equivalent) Then you have your records and can assume newlines are just continuations.
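In Python that could look roughly like this (a sketch only, not a full CSV parser): split on the dashed separator lines first, then treat any newline left inside a record as a continuation:
import re

text = open("thefile.txt", "r").read()
records = [chunk.replace('\n', ' ').strip()
           for chunk in re.split(r'\n?-{2,}\n?', text)
           if chunk.strip()]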
|
Negative lookahead after newline?
|
I have a CSV-like text file that has about 1000 lines. Between each record in the file is a long series of dashes. The records generally end with a \n, but sometimes there is an extra \n before the end of the record. Simplified example:
"1x", "1y", "Hi there"
-------------------------------
"2x", "2y", "Hello - I'm lost"
-------------------------------
"3x", "3y", "How ya
doing?"
-------------------------------
I want to replace the extra \n's with spaces, i.e. concatenate the lines between the dashes. I thought I would be able to do this (Python 2.5):
text = open("thefile.txt", "r").read()
better_text = re.sub(r'\n(?!\-)', ' ', text)
but that seems to replace every \n, not just the ones that are not followed by a dash. What am I doing wrong?
I am asking this question in an attempt to improve my own regex skills and understand the mistakes that I made. The end goal is to generate a text file in a format that is usable by a specific VBA for Word macro that generates a styled Word document which will then be digested by a Word-friendly CMS.
|
[
"This is a good place to use a generator function to skip the lines of ----'s and yield something that the csv module can read.\ndef readCleanLines( someFile ):\n for line in someFile:\n if line.strip() == len(line.strip())*'-':\n continue\n yield line\n\nreader= csv.reader( readCleanLines( someFile ) )\nfor row in reader:\n print row\n\nThis should handle the line breaks inside quotes seamlessly and silently.\n\nIf you want to do other things with this file, for example, save a copy with the ---- lines removed, you can do this.\nwith open( \"source\", \"r\" ) as someFile:\n with open( \"destination\", \"w\" ) as anotherFile:\n for line in readCleanLines( someFile ):\n anotherFile.write( line )\n\nThat will make a copy with the ---- lines removed. This isn't really worth the effort, since reading and skipping the lines is very, very fast and doesn't require any additional storage.\n",
"You need to exclude the line breaks at the end of the separating lines. Try this:\n\\n(?<!-\\n)(?!-)\n\nThis regular expression uses a negative look-behind assertion to exclude \\n that’s preceeded by an -.\n",
"re.sub(r'(?<!-)\\n(?!-)', ' ', text)\n\n(Hyphen doesn't need escaping outside of a character class.)\n",
"A RegEx isn't always the best tool for the job. How about running it through something like \"Split\" or \"Tokenize\" first? (I'm sure python has an equivalent) Then you have your records and can assume newlines are just continuations.\n"
] |
[
7,
5,
1,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001423260_python_regex.txt
|
Q:
Can I run a Python script as a service?
Is it possible to run a Python script as a background service on a webserver? I want to do this for socket communication.
A:
You can make it a daemon. There is a PEP for a more complete solution, but I have found that this works well.
import os, sys
def become_daemon(our_home_dir='.', out_log='/dev/null', err_log='/dev/null', pidfile='/var/tmp/daemon.pid'):
""" Make the current process a daemon. """
try:
# First fork
try:
if os.fork() > 0:
sys.exit(0)
except OSError, e:
sys.stderr.write('fork #1 failed: (%d) %s\n' % (e.errno, e.strerror))
sys.exit(1)
os.setsid()
os.chdir(our_home_dir)
os.umask(0)
# Second fork
try:
pid = os.fork()
if pid > 0:
# You must write the pid file here. After the exit()
# the pid variable is gone.
fpid = open(pidfile, 'wb')
fpid.write(str(pid))
fpid.close()
sys.exit(0)
except OSError, e:
sys.stderr.write('fork #2 failed: (%d) %s\n' % (e.errno, e.strerror))
sys.exit(1)
si = open('/dev/null', 'r')
so = open(out_log, 'a+', 0)
se = open(err_log, 'a+', 0)
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
except Exception, e:
sys.stderr.write(str(e))
A:
You might want to check out Twisted.
A:
On XP and later you can use the sc.exe program to run any .exe as a service:
>sc create
Creates a service entry in the registry and Service Database.
SYNTAX:
sc create [service name] [binPath= ] <option1> <option2>...
CREATE OPTIONS:
NOTE: The option name includes the equal sign.
type= <own|share|interact|kernel|filesys|rec>
(default = own)
start= <boot|system|auto|demand|disabled>
(default = demand)
error= <normal|severe|critical|ignore>
(default = normal)
binPath= <BinaryPathName>
group= <LoadOrderGroup>
tag= <yes|no>
depend= <Dependencies(separated by / (forward slash))>
obj= <AccountName|ObjectName>
(default = LocalSystem)
DisplayName= <display name>
password= <password>
You can start your Python script by starting the Python interpreter with your script as the argument:
python.exe myscript.py
A:
There is the very helpful Pypi package which is the basis for my daemons written in Python.
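If the package in question is python-daemon (the PEP 3143 reference implementation), a minimal usage sketch looks like this; the run() body is just a placeholder:
import daemon

def run():
    # long-running socket/server loop goes here
    pass

with daemon.DaemonContext():
    run()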
A:
Assuming this is for Windows, see this recipe based on srvany
A:
If you are talking about linux, it is as easy as doing something like ./myscript.py &
|
Can I run a Python script as a service?
|
Is it possible to run a Python script as a background service on a webserver? I want to do this for socket communication.
|
[
"You can make it a daemon. There is a PEP for a more complete solution, but I have found that this works well.\nimport os, sys\n\ndef become_daemon(our_home_dir='.', out_log='/dev/null', err_log='/dev/null', pidfile='/var/tmp/daemon.pid'):\n \"\"\" Make the current process a daemon. \"\"\"\n\n try:\n # First fork\n try:\n if os.fork() > 0:\n sys.exit(0)\n except OSError, e:\n sys.stderr.write('fork #1 failed\" (%d) %s\\n' % (e.errno, e.strerror))\n sys.exit(1)\n\n os.setsid()\n os.chdir(our_home_dir)\n os.umask(0)\n\n # Second fork\n try:\n pid = os.fork()\n if pid > 0:\n # You must write the pid file here. After the exit()\n # the pid variable is gone.\n fpid = open(pidfile, 'wb')\n fpid.write(str(pid))\n fpid.close()\n sys.exit(0)\n except OSError, e:\n sys.stderr.write('fork #2 failed\" (%d) %s\\n' % (e.errno, e.strerror))\n sys.exit(1)\n\n si = open('/dev/null', 'r')\n so = open(out_log, 'a+', 0)\n se = open(err_log, 'a+', 0)\n os.dup2(si.fileno(), sys.stdin.fileno())\n os.dup2(so.fileno(), sys.stdout.fileno())\n os.dup2(se.fileno(), sys.stderr.fileno())\n except Exception, e:\n sys.stderr.write(str(e))\n\n",
"You might want to check out Twisted.\n",
"on XP and later you can use the sc.exe program to use any .exe as service:\n>sc create\nCreates a service entry in the registry and Service Database.\nSYNTAX:\nsc create [service name] [binPath= ] <option1> <option2>...\nCREATE OPTIONS:\nNOTE: The option name includes the equal sign.\n type= <own|share|interact|kernel|filesys|rec>\n (default = own)\n start= <boot|system|auto|demand|disabled>\n (default = demand)\n error= <normal|severe|critical|ignore>\n (default = normal)\n binPath= <BinaryPathName>\n group= <LoadOrderGroup>\n tag= <yes|no>\n depend= <Dependencies(separated by / (forward slash))>\n obj= <AccountName|ObjectName>\n (default = LocalSystem)\n DisplayName= <display name>\n password= <password>\n\nYou can start your pythonscript by starting the python interpreter with your script as argument:\npython.exe myscript.py\n\n",
"There is the very helpful Pypi package which is the basis for my daemons written in Python.\n",
"Assuming this is for Windows, see this recipe based on srvany\n",
"If you are talking about linux, it is as easy as doing something like ./myscript.py &\n"
] |
[
10,
7,
2,
2,
0,
0
] |
[] |
[] |
[
"python",
"sockets",
"web_services",
"webserver"
] |
stackoverflow_0001423345_python_sockets_web_services_webserver.txt
|
Q:
What is wrong with this python function from "Programming Collective Intelligence"?
This is the function in question. It calculates the Pearson correlation coefficient for p1 and p2, which is supposed to be a number between -1 and 1.
When I use this with real user data, it sometimes returns a number greater than 1, like in this example:
def sim_pearson(prefs,p1,p2):
si={}
for item in prefs[p1]:
if item in prefs[p2]: si[item]=1
if len(si)==0: return 0
n=len(si)
sum1=sum([prefs[p1][it] for it in si])
sum2=sum([prefs[p2][it] for it in si])
sum1Sq=sum([pow(prefs[p1][it],2) for it in si])
sum2Sq=sum([pow(prefs[p2][it],2) for it in si])
pSum=sum([prefs[p1][it]*prefs[p2][it] for it in si])
num=pSum-(sum1*sum2/n)
den=sqrt((sum1Sq-pow(sum1,2)/n)*(sum2Sq-pow(sum2,2)/n))
if den==0: return 0
r=num/den
return r
critics = {
'user1':{
'item1': 3,
'item2': 5,
'item3': 5,
},
'user2':{
'item1': 4,
'item2': 5,
'item3': 5,
}
}
print sim_pearson(critics, 'user1', 'user2', )
1.15470053838
A:
It looks like you may be unexpectedly using integer division. I made the following change and your function returned 1.0:
num=pSum-(1.0*sum1*sum2/n)
den=sqrt((sum1Sq-1.0*pow(sum1,2)/n)*(sum2Sq-1.0*pow(sum2,2)/n))
See PEP 238 for more information on the division operator in Python. An alternate way of fixing your above code is:
from __future__ import division
A:
Well, it took me a minute to read over the code, but it seems that if you change your input data to floats it will work.
A:
Integer division is confusing it. It works if you make n a float:
n=float(len(si))
A:
Well, I wasn't exactly able to find what's wrong with the logic in your function, so I just reimplemented it using the definition of Pearson coefficient:
from math import sqrt
def sim_pearson(p1,p2):
keys = set(p1) | set(p2)
n = len(keys)
a1 = sum(p1[it] for it in keys) / n
a2 = sum(p2[it] for it in keys) / n
# print(a1, a2)
sum1Sq = sum((p1[it] - a1) ** 2 for it in keys)
sum2Sq = sum((p2[it] - a2) ** 2 for it in keys)
num = sum((p1[it] - a1) * (p2[it] - a2) for it in keys)
den = sqrt(sum1Sq * sum2Sq)
# print(sum1Sq, sum2Sq, num, den)
return num / den
critics = {
'user1':{
'item1': 3,
'item2': 5,
'item3': 5,
},
'user2':{
'item1': 4,
'item2': 5,
'item3': 5,
}
}
assert 0.999 < sim_pearson(critics['user1'], critics['user1']) < 1.0001
print('Your example:', sim_pearson(critics['user1'], critics['user2']))
print('Another example:', sim_pearson({1: 1, 2: 2, 3: 3}, {1: 4, 2: 0, 3: 1}))
Note that in your example the Pearson coefficient is just 1.0 since vectors (-4/3, 2/3, 2/3) and (-2/3, 1/3, 1/3) are parallel.
|
What is wrong with this python function from "Programming Collective Intelligence"?
|
This is the function in question. It calculates the Pearson correlation coefficient for p1 and p2, which is supposed to be a number between -1 and 1.
When I use this with real user data, it sometimes returns a number greater than 1, like in this example:
def sim_pearson(prefs,p1,p2):
si={}
for item in prefs[p1]:
if item in prefs[p2]: si[item]=1
if len(si)==0: return 0
n=len(si)
sum1=sum([prefs[p1][it] for it in si])
sum2=sum([prefs[p2][it] for it in si])
sum1Sq=sum([pow(prefs[p1][it],2) for it in si])
sum2Sq=sum([pow(prefs[p2][it],2) for it in si])
pSum=sum([prefs[p1][it]*prefs[p2][it] for it in si])
num=pSum-(sum1*sum2/n)
den=sqrt((sum1Sq-pow(sum1,2)/n)*(sum2Sq-pow(sum2,2)/n))
if den==0: return 0
r=num/den
return r
critics = {
'user1':{
'item1': 3,
'item2': 5,
'item3': 5,
},
'user2':{
'item1': 4,
'item2': 5,
'item3': 5,
}
}
print sim_pearson(critics, 'user1', 'user2', )
1.15470053838
|
[
"It looks like you may be unexpectedly using integer division. I made the following change and your function returned 1.0:\nnum=pSum-(1.0*sum1*sum2/n)\nden=sqrt((sum1Sq-1.0*pow(sum1,2)/n)*(sum2Sq-1.0*pow(sum2,2)/n))\n\nSee PEP 238 for more information on the division operator in Python. An alternate way of fixing your above code is:\nfrom __future__ import division\n\n",
"Well it took me a minute to read over the code but it seems if you change your input data to floats it will work\n",
"Integer division is confusing it. It works if you make n a float:\nn=float(len(si))\n\n",
"Well, I wasn't exactly able to find what's wrong with the logic in your function, so I just reimplemented it using the definition of Pearson coefficient:\nfrom math import sqrt\n\ndef sim_pearson(p1,p2):\n keys = set(p1) | set(p2)\n n = len(keys)\n\n a1 = sum(p1[it] for it in keys) / n\n a2 = sum(p2[it] for it in keys) / n\n\n# print(a1, a2)\n\n sum1Sq = sum((p1[it] - a1) ** 2 for it in keys)\n sum2Sq = sum((p2[it] - a2) ** 2 for it in keys) \n\n num = sum((p1[it] - a1) * (p2[it] - a2) for it in keys)\n den = sqrt(sum1Sq * sum2Sq)\n\n# print(sum1Sq, sum2Sq, num, den)\n return num / den\n\ncritics = {\n 'user1':{\n 'item1': 3,\n 'item2': 5,\n 'item3': 5,\n },\n\n 'user2':{\n 'item1': 4,\n 'item2': 5,\n 'item3': 5,\n }\n}\n\nassert 0.999 < sim_pearson(critics['user1'], critics['user1']) < 1.0001\n\nprint('Your example:', sim_pearson(critics['user1'], critics['user2']))\nprint('Another example:', sim_pearson({1: 1, 2: 2, 3: 3}, {1: 4, 2: 0, 3: 1}))\n\nNote that in your example the Pearson coefficient is just 1.0 since vectors (-4/3, 2/3, 2/3) and (-2/3, 1/3, 1/3) are parallel. \n"
] |
[
8,
2,
2,
1
] |
[] |
[] |
[
"algorithm",
"pearson",
"python"
] |
stackoverflow_0001423525_algorithm_pearson_python.txt
|
Q:
adding the same object twice to a ManyToManyField
I have two django model classes:
class A(models.Model):
name = models.CharField(max_length = 128) #irrelevant
class B(models.Model):
a = models.ManyToManyField(A)
name = models.CharField(max_length = 128) #irrelevant
What I want to do is the following:
a1 = A()
a2 = A()
b = B()
b.a.add(a1)
b.a.add(a1) #I want to have a1 twice
b.a.add(a2)
assert len(b.a.all()) == 3 #this fails; the length of all() is 2
I am guessing that add() uses set semantics, but how can I circumvent that? I tried looking into custom managers, but I am not sure if this is the right way (it seems complicated)...
Thanks in advance!
A:
I think what you want is to use an intermediary model to form the M2M relationship using the through keyword argument in the ManyToManyField. Sort of like the first answer above, but more "Django-y".
class A(models.Model):
name = models.CharField(max_length=200)
class B(models.Model):
a = models.ManyToManyField(A, through='C')
...
class C(models.Model):
a = models.ForeignKey(A)
b = models.ForeignKey(B)
When using the through keyword, the usual M2M manipulation methods are no longer available (that means add, create, remove, and assignment with the = operator). Instead you must create instances of the intermediary model yourself, like so:
>>> C.objects.create(a=a1, b=b)
However, you will still be able to use the usual querying operations on the model containing the ManyToManyField. In other words the following will still work:
>>> b.a.filter(pk=a1.pk)
But maybe a better example is something like this:
>>> B.objects.filter(a__name='Test')
As long as the FK fields on the intermediary model are not designated as unique you will be able to create multiple instances with the same FKs. You can also attach additional information about the relationship by adding any other fields you like to C.
Intermediary models are documented in the Django documentation under "Extra fields on many-to-many relationships".
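To make the "any other fields you like" point concrete, here is a rough sketch that continues the A/B/C models above; the note field and its values are invented for illustration and are not part of the original answer:
class C(models.Model):
    a = models.ForeignKey(A)
    b = models.ForeignKey(B)
    note = models.CharField(max_length=100, blank=True)  # hypothetical extra data about the link

>>> C.objects.create(a=a1, b=b, note='first copy')
>>> C.objects.create(a=a1, b=b, note='second copy')

The same A can now be attached to the same B any number of times, with each link carrying its own extra data.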
A:
Django uses a relational DB for its underlying storage, and that intrinsically DOES have "set semantics": no way to circumvent THAT. So if you want a "multi-set" of anything, you have to represent it with a numeric field that counts how many times each item "occurs". ManyToManyField doesn't do that -- so, instead, you'll need a separate Model subclass which explicitly indicates the A and the B it's relating, AND has an integer property to "count how many times".
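A minimal sketch of that counting idea, assuming the model name ABCount and the count field below (both invented here, not given in the answer):
class ABCount(models.Model):
    a = models.ForeignKey(A)
    b = models.ForeignKey(B)
    count = models.PositiveIntegerField(default=1)  # how many times this A occurs in this B

"Adding a1 twice" then becomes bumping the counter:
>>> link, created = ABCount.objects.get_or_create(a=a1, b=b)
>>> if not created:
...     link.count += 1
...     link.save()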
A:
One way to do it:
class A(models.Model):
...
class B(models.Model):
...
class C(models.Model):
a = models.ForeignKey(A)
b = models.ForeignKey(B)
Then:
>>> a1 = A.objects.create()
>>> a2 = A.objects.create()
>>> b = B.objects.create()
>>> c1 = C.objects.create(a=a1, b=b)
>>> c2 = C.objects.create(a=a2, b=b)
>>> c3 = C.objects.create(a=a1, b=b)
So, we simply have:
>>> assert C.objects.filter(b=b).count() == 3
>>> for c in C.objects.filter(b=b):
... # do something with c.a
|
adding the same object twice to a ManyToManyField
|
I have two django model classes:
class A(models.Model):
name = models.CharField(max_length = 128) #irrelevant
class B(models.Model):
a = models.ManyToManyField(A)
name = models.CharField(max_length = 128) #irrelevant
What I want to do is the following:
a1 = A()
a2 = A()
b = B()
b.a.add(a1)
b.a.add(a1) #I want to have a1 twice
b.a.add(a2)
assert len(b.a.all()) == 3 #this fails; the length of all() is 2
I am guessing that add() uses set semantics, but how can I circumvent that? I tried looking into custom managers, but I am not sure if this is the right way (it seems complicated)...
Thanks in advance!
|
[
"I think what you want is to use an intermediary model to form the M2M relationship using the through keyword argument in the ManyToManyField. Sort of like the first answer above, but more \"Django-y\".\nclass A(models.Model):\n name = models.CharField(max_length=200)\n\nclass B(models.Model):\n a = models.ManyToManyField(A, through='C')\n ...\n\nclass C(models.Model):\n a = models.ForeignKey(A)\n b = models.ForeignKey(B)\n\nWhen using the through keyword, the usual M2M manipulation methods are no longer available (this means add, create, remove, or assignment with = operator). Instead you must create the intermediary model itself, like so:\n >>> C.objects.create(a=a1, b=b)\n\nHowever, you will still be able to use the usual querying operations on the model containing the ManyToManyField. In other words the following will still work:\n >>> b.a.filter(a=a1)\n\nBut maybe a better example is something like this:\n>>> B.objects.filter(a__name='Test')\n\nAs long as the FK fields on the intermediary model are not designated as unique you will be able to create multiple instances with the same FKs. You can also attach additional information about the relationship by adding any other fields you like to C. \nIntermediary models are documented here. \n",
"Django uses a relational DB for its underlying storage, and that intrinsically DOES have \"set semantics\": no way to circumvent THAT. So if you want a \"multi-set\" of anything, you have to represent it with a numeric field that counts how many times each item \"occurs\". ManyToManyField doesn't do that -- so, instead, you'll need a separate Model subclass which explicitly indicates the A and the B it's relating, AND has an integer property to \"count how many times\".\n",
"One way to do it:\nclass A(models.Model):\n ...\n\nclass B(models.Model):\n ...\n\nclass C(models.Model):\n a = models.ForeignKey(A)\n b = models.ForeignKey(B)\n\nThen:\n>>> a1 = A()\n>>> a2 = A()\n>>> b = B()\n>>> c1 = C(a=a1, b=b)\n>>> c2 = C(a=a2, b=b)\n>>> c3 = C(a=a1, b=b)\n\nSo, we simply have:\n>>> assert C.objects.filter(b=b).count == 3\n>>> for c in C.objects.filter(b=b):\n... # do something with c.a\n\n"
] |
[
8,
4,
2
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0001417825_django_django_models_python.txt
|
Q:
hashlib / md5. Compatibility with python 2.4
Python 2.6 reports that the md5 module is deprecated and that hashlib should be used. If I change import md5 to import hashlib, that solves it for Python 2.5 and 2.6, but not for Python 2.4, which has no hashlib module (leading to an ImportError, which I can catch).
Now, to fix it, I could do a try/except and define a getMd5() function so that the proper one gets defined according to the result of the try block. Is this solution OK?
How would you solve this in the more general case: you have two different libraries with the same purpose but different interfaces, and you want to use one, but fall back to the other if the first one is not found?
A:
In general the following construct is just fine:
try:
import module
except ImportError:
    pass  # Do something else.
In your particular case, perhaps:
try:
from hashlib import md5
except ImportError:
from md5 import md5
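Either way the name md5 ends up bound to a callable with the same basic interface, so the rest of the code does not have to care which library provided it. A quick usage sketch (Python 2 syntax, to match the question):
>>> digest = md5('some data').hexdigest()  # works with hashlib.md5 and the old md5.md5 alike
>>> len(digest)  # hexdigest() always returns a 32-character hex string
32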
A:
In the case where the modules have the same interface, as they do here, the solution you described is fine. You could also isolate the import into its own module like this:
hash.py
----
try:
    from hashlib import md5 as md5mod
except ImportError:
    from md5 import md5 as md5mod
-----
prog.py
-----
from hash import md5mod
....
In the case where they have different interfaces you would need to write an adaptor to align the interfaces as you have specified.
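As a sketch of such an adaptor (the make_md5 wrapper name is invented here; each branch uses only that library's documented constructor):
try:
    from hashlib import md5 as _md5
    def make_md5(data=''):
        return _md5(data)
except ImportError:
    import md5 as _md5mod
    def make_md5(data=''):
        return _md5mod.new(data)

Callers only ever see make_md5(), whichever library is installed:
>>> make_md5('some data').hexdigest()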
|
hashlib / md5. Compatibility with python 2.4
|
Python 2.6 reports that the md5 module is deprecated and that hashlib should be used. If I change import md5 to import hashlib, that solves it for Python 2.5 and 2.6, but not for Python 2.4, which has no hashlib module (leading to an ImportError, which I can catch).
Now, to fix it, I could do a try/except and define a getMd5() function so that the proper one gets defined according to the result of the try block. Is this solution OK?
How would you solve this in the more general case: you have two different libraries with the same purpose but different interfaces, and you want to use one, but fall back to the other if the first one is not found?
|
[
"In general the following construct is just fine:\ntry:\n import module\nexcept ImportError: \n # Do something else.\n\nIn your particular case, perhaps:\ntry: \n from hashlib import md5\nexcept ImportError:\n from md5 import md5\n\n",
"In the case where the modules have the same interface, as they do here, the solution you described is fine. You could also isolate the import into its own module like this:\nhash.py\n----\ntry:\n import hashlib.md5 as md5mod\nexcept ImportError:\n import md5 as md5mod\n\n-----\nprog.py\n-----\nfrom hash import md5mod\n....\n\nIn the case where they have different interfaces you would need to write an adaptor to align the interfaces as you have specified.\n"
] |
[
18,
2
] |
[] |
[] |
[
"backwards_compatibility",
"hashlib",
"import",
"md5",
"python"
] |
stackoverflow_0001423861_backwards_compatibility_hashlib_import_md5_python.txt
|