Q:
Python: undo write to file
What is the best way to undo the writing to a file? If I'm going through a loop and writing one line at a time, and I want to undo the previous write and replace it with something else, how do I go about doing that? Any ideas?
Thanks in advance!
A:
As others have noted, this doesn't make much sense; it's far better not to write until you have to. In your case, you can keep the 'writing pointer' one line behind your processing.
pseudocode:
previousItem = INVALID
for each item I:
    is I same as previousItem?
    then update previousItem with I
    else
        write previousItem to file
        previousItem = I
write previousItem to file
As you can see, previousItem is the only item kept in memory, and it's updated to 'accumulate' as needed. It's only written to file when the next one isn't "the same as" it.
Of course, you could really roll back the file cursor: just keep track of the byte offset where the last line started and then fseek() to there before rewriting. At first it would seem simpler to code, but it's a total nightmare to debug.
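For concreteness, here is a minimal runnable Python version of the buffered approach sketched above (the INVALID sentinel and the plain equality test are assumptions you'd adapt to your data):
INVALID = object()   # sentinel that can't compare equal to any real item

def write_deferred(items, f):
    previous = INVALID
    for item in items:
        if item == previous:
            previous = item              # update/accumulate, don't write yet
        else:
            if previous is not INVALID:
                f.write(str(previous) + '\n')
            previous = item
    if previous is not INVALID:
        f.write(str(previous) + '\n')    # flush the last pending item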
A:
Try to write to your files lazily: Don't write until you are finally certain you need to do it.
A:
As mentioned, you're best off not trying to undo writes. If you really want to do it, though, it's easy enough:
import os
f = open("test.txt", "w+")
f.write("testing 1\n")
f.write("testing 2\n")
pos = f.tell()
f.write("testing 3\n")
f.seek(pos, os.SEEK_SET)
f.truncate(pos)
f.write("foo\n")
Just record the file position to rewind to, seek back to it, and truncate the file to that position.
The major problem with doing this is that it doesn't work on streams. You can't do this to stdout, or to a pipe or TCP stream; only to a real file.
A:
If you keep track of the line numbers you can use something like this:
from itertools import islice

def seek_to_line(f, n):
    for ignored_line in islice(f, n - 1):
        pass # skip n-1 lines

f = open('foo')
seek_to_line(f, 9000) # seek to line 9000

# print lines 9000 and later
for line in f:
    print line
A:
Perhaps a better thing to do would be to modify your program so that it will only write a line if you are sure that you want to write it. To do that your code would look something like:
to_write = ""
for item in alist:
#Check to make sure that I want to write
f.write(to_write)
to_write = ""
#Compute what you want to write.
to_write = something
#We're finished looping so write the last part out
f.write(to_write)
Q:
How to build from the source?
I cannot use sqlite3 (the bundled Python package). The reason is a missing _sqlite3.so. I found that other people had the same problem and they resolved it here.
The solution is given in one sentence:
By building from source and moving the
library to
/usr/lib/python2.5/lib-dynload/ I
resolved the issue.
However, I don't understand the terminology. What does "building from the source" mean? What should be built from the source? A new version of Python? SQLite? And how does one actually build from the source? Which steps should be done?
A:
Download the SQLite source here: SQLite Download Page
Extract the tarball somewhere on your machine.
Navigate to the expanded directory.
Run:
./configure
make
make install (sudo make install if you have permission issues)
Copy the newly compiled files to your Python directory.
Those directions are the simplest possible. You may run into dependency issues, in which case you'll need to download and build those as well.
Q:
How to bind a TextField to an IBOutlet()?
I'm trying to figure out how to update an NSTextField programmatically.
I've figured out how to get the current value of the Text Field from Python:
myVar = objc.IBOutlet()
....
self.myVar.stringValue()
How do I set the value of myVar from the Python side and have the GUI update? I'd like some sort of two way binding (like {myVAR} in Flex).
Thanks!
A:
How do I set the value of myVar from the Python side and have the GUI update?
Why would you want to? The nib loader set it to a control; if you set the variable, you would lose the control.
To set the value of the control, send it a setStringValue_ (or similar) message.
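For example, a hedged one-liner (myVar being the outlet from the question):
self.myVar.setStringValue_(u"new text")   # push a value into the NSTextField
current = self.myVar.stringValue()        # and read it back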
Q:
How to find which view is resolved from url in presence of decorators
For debugging purposes, I'd like a quick way (e.g. in manage.py shell) of looking up which view will be called as a result of a specific URL being requested.
I know this is what django.core.urlresolvers.resolve does, but when there is a decorator on the view function it will return that decorator.
Example:
>>>django.core.urlresolvers.resolve('/edit_settings/')
(Allow, (), {})
...where Allow is the decorator, not the view it's decorating.
How can I find the view without manually inspecting the urls.py files?
A:
This isn't my area of expertise, but it might help.
You might be able to introspect Allow to find out which object it's decorating.
>>>from django.core.urlresolvers import resolve
>>>func, args, kwargs=resolve('/edit_settings/')
>>>func
Allow
You could try
>>>func.func_name
but it might not return the view function you want.
Here's what I found when I was experimenting with basic decorator functions:
>>>def decorator(func):
...    def wrapped(*args, **kwargs):
...        return func(*args, **kwargs)
...    wrapped.__doc__ = "Wrapped function: %s" % func.__name__
...    return wrapped

>>>def add(a, b):
...    return (a, b)

>>>decorated_add = decorator(add)
In this case, when I tried decorated_add.func_name it returned wrapped. However, I wanted to find a way to return add. Because I added the doc string to wrapped, I could determine the original function name:
>>>decorated_add.func_name
wrapped
>>>decorated_add.__doc__
'Wrapped function: add'
Hopefully, you can find out how to introspect Allow to find out the name of the view function, possibly by modifying the decorator function.
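A hedged side note, since the decorator above loses the original name: if you control the decorator, functools.wraps (in the standard library since Python 2.5) copies the wrapped function's metadata onto the wrapper, so introspection reports the view itself:
import functools

def decorator(func):
    @functools.wraps(func)   # copies __name__, __doc__, etc. onto wrapped
    def wrapped(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapped

def add(a, b):
    return (a, b)

decorated_add = decorator(add)
print decorated_add.__name__   # prints 'add', not 'wrapped'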
Q:
Python: Grab the A record of any URI?
I basically want to implement something where you can type in any URI ( I probably will only deal with http ) and I want to return the A record of the domain in the URI, I want the server's IP address.
I know there's the ping command which most people use to look an ip address up, but I also know there's 'host' and 'dig' which are more specific.
Are there any native functions I can use that will do this for me? And if so, how lenient is that function in terms of what URI string it accepts and the structure it's in? I want to throw it:
http://foo.com
www.foo.com
http://foo.com/baz.ext
And have basically anything return an IP address. If need be I can take care of the URI parsing ( so it's a consistent format when looked up ) myself, that's an extra plus though.
A:
py> import urlparse,socket
py> p = urlparse.urlparse("http://stackoverflow.com/questions/1480183")
py> p
('http', 'stackoverflow.com', '/questions/1480183', '', '', '')
py> host=p[1]
py> ai=socket.gethostbyname(host)
py> socket.gethostbyname(host)
'69.59.196.211'
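One caveat worth noting: urlparse only fills the host slot when the string carries a scheme, so a bare www.foo.com would land in the path component instead. A small hedged helper (prepending http:// is just an assumption to make parsing uniform):
import urlparse, socket

def host_ip(uri):
    if '://' not in uri:
        uri = 'http://' + uri            # bare hosts need a scheme for urlparse
    host = urlparse.urlparse(uri)[1]
    return socket.gethostbyname(host)

print host_ip('www.foo.com')
print host_ip('http://foo.com/baz.ext')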
Q:
Is there any way to use TestCase.assertEqual() outside of a TestCase?
I have a utility class that stores methods that are useful for some unit test cases. I want these helper methods to be able to do asserts/fails/etc., but it seems I can't use those methods because they expect TestCase as their first argument...
I want to be able to store the common methods outside of the testcase code and continue to use asserts in them, is that possible at all? They are ultimately used in testcase code.
I have something like:
unittest_foo.py:
import unittest
from common_methods import *
class TestPayments(unittest.TestCase):
    def test_0(self):
        common_method1("foo")
common_methods.py:
from unittest import TestCase
def common_method1():
    blah=stuff
    TestCase.failUnless(len(blah) > 5)
...
...
When the suite is run:
TypeError: unbound method failUnless() must be called with TestCase instance as first argument (got bool instance instead)
A:
This is often accomplished with multiple inheritance:
common_methods.py:
class CommonMethods:
    def common_method1(self, stuff):
        blah=stuff
        self.failUnless(len(blah) > 5)
        ...
...
unittest_foo.py:
import unittest
from common_methods import CommonMethods
class TestPayments(unittest.TestCase, CommonMethods):
    def test_0(self):
        self.common_method1("foo")
A:
Sounds like you want this, from the error at least ...
unittest_foo.py:
import unittest
from common_methods import *
class TestPayments(unittest.TestCase):
    def test_0(self):
        common_method1(self, "foo")
common_methods.py:
def common_method1( self, stuff ):
    blah=stuff
    self.failUnless(len(blah) > 5)
Q:
Django autoreload for development on every request?
Can a Django app be reloaded on every request?
This is very useful for development. Ruby on Rails does just this.
runserver reloads, but it reloads slowly, and still sometimes one has to stop and start it again for some changes to show up. (For example, changes in admin.)
mod_wsgi can autoreload on Linux by touching *.wsgi files. On Windows one has to use an observer/reloader script, which again is slow.
I have not tried mod_python or fastcgi; can they do this?
The reason behind this is that when changing scripts one would like changes to show up immediately.
A:
Of course it is going to be slow to reload, it has to load all the application code again and not just the one file. Django is not PHP, so don't expect it to work the same.
If you really want Django to reload on every request regardless, then use CGI and a CGI/WSGI bridge. It is still going to be slow though as CGI itself adds additional overhead.
The Apache/mod_wsgi method of using a code monitor, which works for daemon mode on UNIX and when using Windows, is the best compromise. That is, it checks once a second for any code file which is part of the application being changed, and only then restarts the process. As I recall, the runserver itself also uses this one-second polling method.
Using this polling approach does introduce a one-second window where you may make a request before the code-reload requirement has been detected. Most people aren't quick enough going from saving a file to reloading in the browser, though, and so wouldn't notice.
In Apache/mod_wsgi 3.0 there are mechanisms which would allow one to implement an alternative code reloader that eliminates that window by being able to schedule a check for modified code at the start of a request, but this is then going to impact the performance of every request. For the polling method it runs in the background and so doesn't normally cause any performance impact on requests.
Even in Apache/mod_wsgi with current versions you could do the same by using embedded mode and setting Apache MaxRequestsPerChild to 1, but this is also going to affect performance of serving static files as well.
In short, trying to force a reload on every request is just not the best way of going about it and certainly will not eliminate the load delays resulting from using a fat Python web application such as Django.
Q:
What is Python's Fabric equivalent in other languages?
Can someone tell me what's the equivalent of Python's Fabric in Python itself, other languages or third-party tools? I am still a bit fuzzy on what it is trying to accomplish and its usage.
A:
These tools are for performing common remote administration tasks usually as part of automated builds - a Ruby equivalent might be Capistrano, JSch in Java.
A:
It helps you to run commands on a lot of remote machines via SSH from your box, so you don't have to log in to each one and copy-paste the output of some machine back to your desktop.
A:
The Ruby community uses a tool called Capistrano for the same purpose.
A:
Looking at the fabric example, the first thing I thought about was mpiexec. Given the right coding, I believe fabric can be used to run bot-nets or parallel processing clusters, depending on your inclination.
Q:
How can I, on some global keystroke, paste some text to current active application in linux with Python or C++
I want to write an app which will work like a daemon and, on some global keystroke, paste some text into the currently active application (text editor, browser, jabber client). I think I will need to use some low-level X server API. How can I do this with Python or C++?
A:
Probably you want to hack xmon...
AFAIK there is no easy way to hook the X protocol. You will need to do "deep packet inspection", which would be fairly easy in the application event loop but not so easy, as you want, "like a daemon", or on "global keystroke[s]".
So, I know this is really brute force and ignorance, but I think you will have to wrap the X server by starting it on a non-standard port or publishing an environment variable, just as if you were using something like an SSH tunnel to forward an X server connection.
There is an X protocol monitor called Xmon for which source is available. It might be a good starting point.
A:
You can use the xmacroplay utility from xmacro to do this under X Windows, I think. Either use it directly (send it commands to standard input using the subprocess module) or read the source code and find out how it does it! I don't think there are Python bindings for it.
From the xmacroplay website
xmacroplay:
Reads lines from the standard input. It can understand the following lines:

Delay [sec] - delays the program with [sec] secundums
ButtonPress [n] - sends a ButtonPress event with button [n]
    this emulates the pressing of the mouse button [n]
ButtonRelease [n] - sends a ButtonRelease event with button [n]
    this emulates the releasing of the mouse button [n]
... snip lots more ...
This is probably the command you are interested in
String [max. 1024 long string]
    - Sends the string as single characters converted to
      KeyPress and KeyRelease events based on a
      character table in chartbl.h (currently only
      Latin1 is used...)
There is also Xnee which does a similar thing.
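As a rough, untested sketch of the subprocess idea (the command name and the display argument are assumptions about how xmacroplay is invoked on your machine):
import subprocess

# drive xmacroplay through its standard input
p = subprocess.Popen(['xmacroplay', ':0'], stdin=subprocess.PIPE)
p.stdin.write('String hello from the daemon\n')
p.stdin.close()
p.wait()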
Q:
Python code seems to be getting executed out of order
At work I have a programming language encoded in a database record. I'm trying to write a print function in python to display what the record contains.
This is the code I'm having trouble with:
# Un-indent the block if necessary.
if func_option[row.FRML_FUNC_OPTN] in ['Endif', 'Else']:
    self.indent = self.indent - 1

# if this is a new line, indent it.
if len(self.formulatext) <> 0 and self.formulatext[len(self.formulatext) - 1] == '\n':
    for i in range(1, self.indent):
        rowtext = ' ' + rowtext

# increase indent for 'then', 'else'
if func_option[row.FRML_FUNC_OPTN] in ['Then', 'Else']:
    self.indent = self.indent + 1
When row.FRML_FUNC_OPTN equals 'Else', I expect it to first un-indent, then indent again, so that the 'Else' is printed at a lower level of indentation and the rest of the code sits inside it. Instead this is the type of indentation I get:
IfThen
IfThen
Else
EndifComment
IfThen
Endif
IfThen
Else
Endif
Else
Endif
As you can see the 'Else' is still indented higher than the If / Endif. Any idea why this could be happening?
I did try sprinkling the code with debug statements the result of which is:
row: Else
row.FRML_FUNC_OPTN is : Elsedecrementing indent
row.FRML_FUNC_OPTN is : Elseincrementing indent
which means that the indent altering if's are indeed being entered...
A:
Just because it is a "script language" doesn't mean you have to live without a full debugger with breakpoints!
Install eric3
Load your code
Press "debug" ;)
Also, you seem new to Python, so here are a few hints :
you can multiply strings, much faster than a loop
read how array access works, use [-1] for last element
read on string methods, use .endswith()
use tuples for static immutable data, faster
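Two of those hints in action, as a quick illustration:
print '    ' * 3 + 'indented'          # string repetition instead of a loop
print 'some line\n'.endswith('\n')     # clearer than indexing the last char

Applied to your snippet, the rewrite looks like: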
# Un-indent the block if necessary.
op = func_option[row.FRML_FUNC_OPTN]
if op in ('Endif', 'Else'):
    self.indent -= 1

# if this is a new line, indent it.
if self.formulatext.endswith('\n'):
    rowtext = ("\t" * self.indent) + rowtext

# increase indent for 'then', 'else'
if op in ('Then', 'Else'):
    self.indent += 1
A:
From your debug log:
row: Else
row.FRML_FUNC_OPTN is : Elsedecrementing indent
row.FRML_FUNC_OPTN is : Elseincrementing indent
I suspect you already have indentation before "Else" when you enter the code fragment supplied.
Try adding:
rowtext = rowtext.strip()
just before the first if
Or if rowtext is blank, and you're adding it to something else later on, try calling strip on that.
Q:
Would python be an appropriate choice for a video library for home use software
I am thinking of creating video library software which keeps track of all my videos, including which ones I haven't watched yet, and stats like that. The stats will be specific to each user using the software.
My question is: is Python appropriate for creating this software, or do I need something like C++?
A:
Python is perfectly appropriate for such tasks - indeed the most popular video site, YouTube, is essentially programmed in Python (using, of course, lower-level components called from Python for such tasks as web serving, relational db, video transcoding -- there are plenty of such reusable opensource components for all these kinds of tasks, but your application's logic flow and all application-level logic can perfectly well be in Python).
Just yesterday evening, at the local Python interest group meeting in Mountain View, we had new members who just moved to Silicon Valley exactly to take Python-based jobs in the video industry, and they were saying that professional-level video handling in the industry is also veering more and more towards Python -- stalwarts like Pixar and ILM had been using Python forever, but in the last year or two it's been a flood of Python adoption in the industry.
A:
Yes. Python is much easier to use than c++ for something like this. You may want to use it as a front-end to a DB such as sqlite3
A:
Maybe you should take a look at this project:
Moovida
It's a complete media center, open source, written in python that is easy to extend. I don't know if it will do exactly what you want out of the box but you can probably easily add the features you want.
A:
Of course you can use almost any programming language for almost any task. But after noting that, it's also obvious that different languages are also differently well adapted for different tasks.
C/C++ are languages that are very "hardware friendly". Basically, the languages are just one abstraction level above assembler, with C's use of pointers etc. C++ is almost like a (semi-)portable object oriented assembler, if one wants to be funny. :) This makes C/C++ fast and good at talking to hardware.
But those same features become mis-features in other cases. The pointers make it possible to walk all over the memory and unless you are careful you will leak memory all over the place. So I would say (and now C people will get angry) that C/C++ in fact are directly inappropriate for what you want to do.
You want a language that is higher-level and does more things, like memory management, automatically and invisibly. There are many to choose from there, but without a doubt Python is eminently suited for this. Python has in the last couple of years emerged as The New Cool Language to write these kinds of software in, and much multimedia software, such as Freevo and the already mentioned Moovida, is written in Python.
A:
If you want your code to be REAL FAST, use C++ (or parallel fortran).
However in your application, 99% of the runtime isn't going to be in YOUR code, it's going to be in GUI libraries, OS calls, waiting for user interaction, calling libraries (written in C) to open video files and make thumbnails, that kind of stuff.
So using C++ will make your code 100 times faster, and your application will, as a result, be 1% faster, which is utterly useless. And if you write it in C++ you'll need months, whereas using Python you'll be finished much faster and have lots more fun.
Using C++ could even make it a lot slower, because in Python you can very easily build more scalable algorithms by using super powerful primitives like hashes, sets, generators, etc, try several algorithms in 5 minutes to see which one is the best, import a library which already does 90% of the work, etc.
Write it in Python.
Q:
how to configure apache on XP for python 2.6.2 and what do you prefer, python with framework/without?
I am starting python today. It will be my pleasure to have your help.
A:
About a framework - choose the one you'll like. You can find most of them on Python wiki.
About Apache - if you choose a framework, it'll probably have some kind of development web server built-in, with better debugging capabilities than Apache installation. If you really want Apache, then you could install and configure mod_wsgi.
A:
If you are starting Python today, why tackle web development immediately? I'd start off with "hello world" in the console.
Q:
Python - Same line of code only works the second time it's called?
Sorry I couldn't really describe my problem much better in the title.
I am trying to learn Python, and came across this strange behavior and was hoping someone could explain this to me.
I am running Ubuntu 8.10 and python 2.5.2
First I import xml.dom
Then I create an instance of a minidom (using its fully qualified name xml.dom.minidom)
This fails, but then if I run that same line again, it works!
See below:
$> python
Python 2.5.2 (r252:60911, Oct 5 2008, 19:29:17)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import xml.dom
>>> xml.dom.minidom.parseString("<xml><item/></xml>")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'minidom'
>>> xml.dom.minidom.parseString("<xml><item/></xml>")
<xml.dom.minidom.Document instance at 0x7fd914e42fc8>
I tried on another machine, and it consistently fails.
A:
The problem is in apport_python_hook.apport_excepthook(): as a side effect, it imports xml.dom.minidom.
Without apport_except_hook:
>>> import sys
>>> sys.excepthook = sys.__excepthook__
>>> import xml.dom
>>> xml.dom.minidom
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'minidom'
>>> xml.dom.minidom
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'minidom'
>>>
With apport_except_hook:
>>> import apport_python_hook
>>> apport_python_hook.install()
>>> xml.dom.minidom
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'minidom'
>>> xml.dom.minidom
<module 'xml.dom.minidom' from '../lib/python2.6/xml/dom/minidom.pyc'>
A:
minidom is a module, so you need:
import xml.dom.minidom
xml.dom.minidom.parseString("<xml><item/></xml>")
I don't know how you got the second parseString to work; it fails on my Python, as on your other machine.
A:
I couldn't get your code to work even on the second try (using Python 2.6.1 on Snow Leopard). :-) However, here's one version that does work for me:
>>> from xml.dom.minidom import parseString
>>> parseString("<xml><item/></xml>")
<xml.dom.minidom.Document instance at 0x100539830>
Personally, I prefer this style of import. It tends to make for much less verbose code.
A:
I can replicate your behaviour on Ubuntu 9.04 (python 2.6.2). If you do python -v you can see the first error causes lots of extra imports. Since it doesn't happen for everybody, I can only assume the Ubuntu/Debian have added something to python to auto load modules.
Still the recommended action is to import xml.dom.minidom.
Q:
Generating random sentences from custom text in Python's NLTK?
I'm having trouble with the NLTK under Python, specifically the .generate() method.
generate(self, length=100)
Print random text, generated using a trigram language model.
Parameters:
* length (int) - The length of text to generate (default=100)
Here is a simplified version of what I am attempting.
import nltk
words = 'The quick brown fox jumps over the lazy dog'
tokens = nltk.word_tokenize(words)
text = nltk.Text(tokens)
print text.generate(3)
This will always generate
Building ngram index...
The quick brown
None
As opposed to building a random phrase out of the words.
Here is my output when I do
print text.generate()
Building ngram index...
The quick brown fox jumps over the lazy dog fox jumps over the lazy
dog dog The quick brown fox jumps over the lazy dog dog brown fox
jumps over the lazy dog over the lazy dog The quick brown fox jumps
over the lazy dog fox jumps over the lazy dog lazy dog The quick brown
fox jumps over the lazy dog the lazy dog The quick brown fox jumps
over the lazy dog jumps over the lazy dog over the lazy dog brown fox
jumps over the lazy dog quick brown fox jumps over the lazy dog The
None
Again starting out with the same text, but then varying it. I've also tried using the first chapter from Orwell's 1984. Again that always starts with the first 3 tokens (one of which is a space in this case) and then goes on to randomly generate text.
What am I doing wrong here?
A:
To generate random text, you need to use Markov chains.
Code to do that (from here):
import random

class Markov(object):

    def __init__(self, open_file):
        self.cache = {}
        self.open_file = open_file
        self.words = self.file_to_words()
        self.word_size = len(self.words)
        self.database()

    def file_to_words(self):
        self.open_file.seek(0)
        data = self.open_file.read()
        words = data.split()
        return words

    def triples(self):
        """ Generates triples from the given data string. So if our string were
            "What a lovely day", we'd generate (What, a, lovely) and then
            (a, lovely, day).
        """
        if len(self.words) < 3:
            return

        for i in range(len(self.words) - 2):
            yield (self.words[i], self.words[i+1], self.words[i+2])

    def database(self):
        for w1, w2, w3 in self.triples():
            key = (w1, w2)
            if key in self.cache:
                self.cache[key].append(w3)
            else:
                self.cache[key] = [w3]

    def generate_markov_text(self, size=25):
        seed = random.randint(0, self.word_size-3)
        seed_word, next_word = self.words[seed], self.words[seed+1]
        w1, w2 = seed_word, next_word
        gen_words = []
        for i in xrange(size):
            gen_words.append(w1)
            w1, w2 = w2, random.choice(self.cache[(w1, w2)])
        gen_words.append(w2)
        return ' '.join(gen_words)
Explanation:
Generating pseudo random text with Markov chains using Python
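Hypothetical usage of the class above (the corpus filename is made up):
markov = Markov(open('1984.txt'))
print markov.generate_markov_text(30)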
A:
You should be "training" the Markov model with multiple sequences, so that you accurately sample the starting state probabilities as well (called "pi" in Markov-speak). If you use a single sequence then you will always start in the same state.
In the case of Orwell's 1984 you would want to use sentence tokenization first (NLTK is very good at it), then word tokenization (yielding a list of lists of tokens, not just a single list of tokens) and then feed each sentence separately to the Markov model. This will allow it to properly model sequence starts, instead of being stuck on a single way to start every sequence.
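A hedged sketch of that preprocessing (the filename is an assumption; sent_tokenize and word_tokenize are NLTK's standard convenience functions):
import nltk

raw = open('1984.txt').read()
sentences = nltk.sent_tokenize(raw)                    # sentence-split first
tokenized = [nltk.word_tokenize(s) for s in sentences]
# feed each inner list to the Markov model as a separate training sequence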
A:
Your sample corpus is most likely too small. I don't know how exactly nltk builds its trigram model, but it is common practice that beginnings and ends of sentences are handled somehow. Since there is only one beginning of a sentence in your corpus, this might be the reason why every sentence has the same beginning.
A:
Are you sure that using word_tokenize is the right approach?
This Google groups page has the example:
>>> import nltk
>>> text = nltk.Text(nltk.corpus.brown.words()) # Get text from brown
>>> text.generate()
But I've never used nltk, so I can't say whether that works the way you want.
Q:
set URL when enter site - pylons
My problem is that when a user enters my website like: www.mywebsite.com
I use pylonshq
I want the URL to be changed to /#home if it's possible via map.connect. I have no idea how to fix it via Python, so a guide or maybe some samples would be a help.
Right now it looks like this:
map.connect('/', controller='home',action='index')
A:
Simplest solution is to add js, something like this:
location.href += '#home';
Or issue redirect with anchor included (but this won't work in IE).
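If you go the server-side route, here is a minimal sketch (untested; it assumes a standard Pylons controller with the redirect helper -- named redirect_to in older Pylons versions -- and a hypothetical HomeController):
from pylons.controllers.util import redirect   # redirect_to in Pylons < 1.0

class HomeController(BaseController):          # BaseController from your lib/base.py
    def index(self):
        # the #home fragment is interpreted client-side,
        # which is why IE may drop it on a redirect
        redirect('/#home')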
|
set URL when enter site - pylons
|
My problem is that when a user enters my website like: www.mywebsite.com
I use pylonshq
I want the URL to be changed to /#home if it's possible via map.connect. I have no idea how to fix it via Python, so a guide or maybe some samples would be a help.
Right now it looks like this:
map.connect('/', controller='home',action='index')
|
[
"Simplest solution is to add js, something like this:\nlocation.url += '#home'.\nOr issue redirect with anchor included (but this won't work in IE).\n"
] |
[
1
] |
[] |
[] |
[
"pylons",
"python"
] |
stackoverflow_0001481363_pylons_python.txt
|
Q:
Efficient method to determine location on a grid(array)
I am representing a grid with a 2D list in Python. I would like to pick a point (x,y) in the list and determine its location...right edge, top left corner, somewhere in the middle...
Currently I am checking like so:
# left column, not a corner
if x == 0 and y != 0 and y != self.dim_y - 1:
pass
# right column, not a corner
elif x == self.dim_x - 1 and y != 0 and y != self.dim_y - 1:
pass
# top row, not a corner
elif y == 0 and x != 0 and x != self.dim_x - 1:
pass
# bottom row, not a corner
elif y == self.dim_y - 1 and x != 0 and x != self.dim_x - 1:
pass
# top left corner
elif x == 0 and y == 0:
pass
# top right corner
elif x == self.dim_x - 1 and y == 0:
pass
# bottom left corner
elif x == 0 and y == self.dim_y - 1:
pass
# bottom right corner
elif x == self.dim_x - 1 and y == self.dim_y - 1:
pass
# somewhere in middle; not an edge
else:
pass
Where I have some function do something after the location is determined
dim_x and dim_y are the dimensions of the list.
Is there a better way of doing this without so many if-else statements? Something efficient would be good since this part of the logic is being called a couple million times...it's for simulated annealing.
Thanks in advance. Also, what would be a better way of wording the title?
A:
def location(x,y,dim_x,dim_y):
index = 1*(y==0) + 2*(y==dim_y-1) + 3*(x==0) + 6*(x==dim_x-1)
return ["interior","top","bottom","left","top-left",
"bottom-left","right","top-right","bottom-right"][index]
A:
# initially:
method_list = [
bottom_left, bottom, bottom_right,
left, middle, right,
top_left, top, top_right,
]
# each time:
keyx = 0 if not x else (2 if x == self.dim_x - 1 else 1)
keyy = 0 if not y else (2 if y == self.dim_y - 1 else 1)
key = keyy * 3 + keyx
method_list[key](self, x, y, other_args)
Untested ... but the general idea should shine through.
Update after the goal posts were drastically relocated by "Something efficient would be good since this part of the logic is being called a couple million times...it's for simulated annealing":
Originally you didn't like the chain of tests, and said you were calling a function to handle each of the 8 cases. If you want fast (in Python): retain the chain of tests, and do the handling of each case inline instead of calling a function.
Can you use psyco? Also, consider using Cython.
A:
If I understand correctly, you have a collection of coordinates (x,y) living in a grid, and you would like to know, given any coordinate, whether it is inside the grid or on an edge.
The approach I would take is to normalize the grid before making the comparison, so that its origin is (0,0) and its top right corner is (1,1), then I would only have to know the value of the coordinate to determine its location. Let me explain.
0) Let _max represent the maximum value and _min, for instance, x_min is the minimum value of the coordinate x; let _new represent the normalized value.
1) Given (x,y), compute: x_new = (x_max-x)/(x_max-x_min) and y_new=(y_max-y)/(y_max-y_min).
2) [this is pseudo code]
switch y_new:
case y_new==0: pos_y='bottom'
case y_new==1: pos_y='top'
otherwise: pos_y='%2.2f \% on y', 100*y_new
switch x_new:
case x_new==0: pos_x='left'
case x_new==1: pos_x='right'
otherwise: pos_x='%2.2f \% on x', 100*x_new
print pos_y, pos_x
It would print stuff like "bottom left" or "top right" or "32.58% on y 15.43% on x"
Hope that helps.
A:
I guess if you really want to treat all these cases completely differently, your solution is okay, as it is very explicit. A compact solution might look more elegant, but will probably be harder to maintain. It really depends on what happens inside the if-blocks.
As soon as there is a common handling of, say, the corners, one might prefer to catch those cases with one clever if-statement.
A:
Something like this might be more readable / maintainable. It will probably be a lot faster than your nested if statements since it only tests each condition once and dispatches through a dictionary which is nice and fast.
class LocationThing:
def __init__(self, x, y):
self.dim_x = x
self.dim_y = y
def interior(self):
print "interior"
def left(self):
print "left"
def right(self):
print "right"
def top(self):
print "top"
def bottom(self):
print "bottom"
def top_left(self):
print "top_left"
def top_right(self):
print "top_right"
def bottom_left(self):
print "bottom_left"
def bottom_right(self):
print "bottom_right"
location_map = {
# (left, right, top, bottom)
( False, False, False, False ) : interior,
( True, False, False, False ) : left,
( False, True, False, False ) : right,
( False, False, True, False ) : top,
( False, False, False, True ) : bottom,
( True, False, True, False ) : top_left,
( False, True, True, False ) : top_right,
( True, False, False, True ) : bottom_left,
( False, True, False, True ) : bottom_right,
}
def location(self, x,y):
method = self.location_map[(x==0, x==self.dim_x-1, y==0, y==self.dim_y-1)]
return method(self)
l = LocationThing(10,10)
l.location(0,0)
l.location(0,1)
l.location(1,1)
l.location(9,9)
l.location(9,1)
l.location(1,9)
l.location(0,9)
l.location(9,0)
When you run the above it prints
top_left
left
interior
bottom_right
right
bottom
bottom_left
top_right
A:
For a fast inner-loop function, you can just bite the bullet and do the ugly: nested if else statements with repeated terms, so that each comparison is only done once, and it runs about twice as fast as an example cleaner answer (by mobrule):
import timeit
def f0(x, y, x_dim, y_dim):
if x!=0:
if x!=x_dim: # in the x interior
if y!=0:
if y!=y_dim: # y interior
return "interior"
else: # y==y_dim edge 'top'
return "interior-top"
else:
return "interior-bottom"
else: # x = x_dim, "right"
if y!=0:
if y!=y_dim: #
return "right-interior"
else: # y==y_dim edge 'top'
return "right-top"
else:
return "right-bottom"
else: # x=0 'left'
if y!=0:
if y!=y_dim: # y interior
return "left-interior"
else: # y==y_dim edge 'top'
return "left-top"
else:
return "left-bottom"
r_list = ["interior","top","bottom","left","top-left",
"bottom-left","right","top-right","bottom-right"]
def f1(x,y,dim_x,dim_y):
index = 1*(y==0) + 2*(y==dim_y-1) + 3*(x==0) + 6*(x==dim_x-1)
return r_list[index]
for x, y, x_dim, y_dim in [(4, 4, 5, 6), (0, 0, 5, 6)]:
t = timeit.Timer("f0(x, y, x_dim, y_dim)", "from __main__ import f0, f1, x, y, x_dim, y_dim, r_list")
print "f0", t.timeit(number=1000000)
t = timeit.Timer("f1(x, y, x_dim, y_dim)", "from __main__ import f0, f1, x, y, x_dim, y_dim, r_list")
print "f1", t.timeit(number=1000000)
Which gives:
f0 0.729887008667 # nested if-else for interior point (no "else"s)
f1 1.4765329361
f0 0.622623920441 # nested if-else for left-bottom (all "else"s)
f1 1.49259114265
So it's a bit better than twice as fast as mobrule's answer, which was the fastest looking code that I knew would work when I posted this. (Also, I moved mobrule's string list out of the function as that sped up the result by 50%.) Speed over beauty?
If instead you want a concise and easy to read solution, I suggest:
def f1(x, y, x_dim, y_dim):
d_x = {0:"left", x_dim:"right"}
d_y = {0:"bottom", y_dim:"top"}
return d_x.get(x, "interior")+"-"+d_y.get(y, "interior")
which is as fast as the others by my timing.
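For example (using the dict-based f1 above, where x_dim and y_dim are the maximum indices):
>>> f1(0, 5, 9, 9)
'left-interior'
>>> f1(9, 9, 9, 9)
'right-top'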
|
Efficient method to determine location on a grid(array)
|
I am representing a grid with a 2D list in Python. I would like to pick a point (x,y) in the list and determine its location...right edge, top left corner, somewhere in the middle...
Currently I am checking like so:
# left column, not a corner
if x == 0 and y != 0 and y != self.dim_y - 1:
pass
# right column, not a corner
elif x == self.dim_x - 1 and y != 0 and y != self.dim_y - 1:
pass
# top row, not a corner
elif y == 0 and x != 0 and x != self.dim_x - 1:
pass
# bottom row, not a corner
elif y == self.dim_y - 1 and x != 0 and x != self.dim_x - 1:
pass
# top left corner
elif x == 0 and y == 0:
pass
# top right corner
elif x == self.dim_x - 1 and y == 0:
pass
# bottom left corner
elif x == 0 and y == self.dim_y - 1:
pass
# bottom right corner
elif x == self.dim_x - 1 and y == self.dim_y - 1:
pass
# somewhere in middle; not an edge
else:
pass
Where I have some function do something after the location is determined
dim_x and dim_y are the dimensions of the list.
Is there a better way of doing this without so many if-else statements? Something efficient would be good since this part of the logic is being called a couple million times...it's for simulated annealing.
Thanks in advance. Also, what would be a better way of wording the title?
|
[
"def location(x,y,dim_x,dim_y):\n index = 1*(y==0) + 2*(y==dim_y-1) + 3*(x==0) + 6*(x==dim_x-1)\n return [\"interior\",\"top\",\"bottom\",\"left\",\"top-left\",\n \"bottom-left\",\"right\",\"top-right\",\"bottom-right\"][index]\n\n",
"# initially:\nmethod_list = [\n bottom_left, bottom, bottom_right,\n left, middle, right,\n top_left, top, top_right,\n ]\n\n# each time:\nkeyx = 0 if not x else (2 if x == self.dim_x - 1 else 1)\nkeyy = 0 if not y else (2 if y == self.dim_y - 1 else 1)\nkey = keyy * 3 + keyx\nmethod_list[key](self, x, y, other_args)\n\nUntested ... but the general idea should shine through.\nUpdate after the goal posts were drastically relocated by \"Something efficient would be good since this part of the logic is being called a couple million times...it's for simulated annealing\":\nOriginally you didn't like the chain of tests, and said you were calling a function to handle each of the 8 cases. If you want fast (in Python): retain the chain of tests, and do the handling of each case inline instead of calling a function.\nCan you use psyco? Also, consider using Cython.\n",
"If I understand correctly, you have a collection of coordinates (x,y) living in a grid, and you would like to know, given any coordinate, whether it is inside the grid or on an edge.\nThe approach I would take is to normalize the grid before making the comparison, so that its origin is (0,0) and its top right corner is (1,1), then I would only have to know the value of the coordinate to determine its location. Let me explain.\n0) Let _max represent the maximum value and _min, for instance, x_min is the minimum value of the coordinate x; let _new represent the normalized value.\n1) Given (x,y), compute: x_new = (x_max-x)/(x_max-x_min) and y_new=(y_max-y)/(y_max-y_min).\n\n2) [this is pseudo code]\nswitch y_new:\n case y_new==0: pos_y='bottom'\n case y_new==1: pos_y='top'\n otherwise: pos_y='%2.2f \\% on y', 100*y_new\nswitch x_new:\n case x_new==0: pos_x='left'\n case x_new==1: pos_x='right'\n otherwise: pos_x='%2.2f \\% on x', 100*x_new\n\nprint pos_y, pos_x\n\nIt would print stuff like \"bottom left\" or \"top right\" or \"32.58% on y 15.43% on x\"\n\nHope that helps.\n\n",
"I guess if you really want to treat all these cases completely differently, your solution is okay, as it is very explicit. A compact solution might look more elegant, but will probably be harder to maintain. It really depends on what happens inside the if-blocks.\nAs soon as there is a common handling of, say, the corners, one might prefer to catch those cases with one clever if-statement.\n",
"Something like this might be more readable / maintainable. It will probably be a lot faster than your nested if statements since it only tests each condition once and dispatches through a dictionary which is nice and fast.\nclass LocationThing:\n\n def __init__(self, x, y):\n self.dim_x = x\n self.dim_y = y\n\n def interior(self):\n print \"interior\"\n def left(self):\n print \"left\"\n def right(self):\n print \"right\"\n def top(self):\n print \"top\"\n def bottom(self):\n print \"bottom\"\n def top_left(self):\n print \"top_left\"\n def top_right(self):\n print \"top_right\"\n def bottom_left(self):\n print \"bottom_left\"\n def bottom_right(self):\n print \"bottom_right\"\n\n location_map = {\n # (left, right, top, bottom)\n ( False, False, False, False ) : interior,\n ( True, False, False, False ) : left,\n ( False, True, False, False ) : right,\n ( False, False, True, False ) : top,\n ( False, False, False, True ) : bottom,\n ( True, False, True, False ) : top_left,\n ( False, True, True, False ) : top_right,\n ( True, False, False, True ) : bottom_left,\n ( False, True, False, True ) : bottom_right,\n }\n\n\n def location(self, x,y):\n method = self.location_map[(x==0, x==self.dim_x-1, y==0, y==self.dim_y-1)]\n return method(self)\n\nl = LocationThing(10,10)\nl.location(0,0)\nl.location(0,1)\nl.location(1,1)\nl.location(9,9)\nl.location(9,1)\nl.location(1,9)\nl.location(0,9)\nl.location(9,0)\n\nWhen you run the above it prints\ntop_left\nleft\ninterior\nbottom_right\nright\nbottom\nbottom_left\ntop_right\n\n",
"For a fast inner-loop function, you can just bite the bullet and do the ugly: nested if else statements with repeated terms, so that each comparison is only done once, and it runs about twice as fast as an example cleaner answer (by mobrule):\nimport timeit\n\ndef f0(x, y, x_dim, y_dim):\n if x!=0:\n if x!=x_dim: # in the x interior\n if y!=0:\n if y!=y_dim: # y interior\n return \"interior\"\n else: # y==y_dim edge 'top'\n return \"interior-top\"\n else:\n return \"interior-bottom\"\n else: # x = x_dim, \"right\"\n if y!=0:\n if y!=y_dim: # \n return \"right-interior\"\n else: # y==y_dim edge 'top'\n return \"right-top\"\n else:\n return \"right-bottom\"\n else: # x=0 'left'\n if y!=0:\n if y!=y_dim: # y interior\n return \"left-interior\"\n else: # y==y_dim edge 'top'\n return \"left-top\"\n else:\n return \"left-bottom\"\n\nr_list = [\"interior\",\"top\",\"bottom\",\"left\",\"top-left\",\n \"bottom-left\",\"right\",\"top-right\",\"bottom-right\"] \ndef f1(x,y,dim_x,dim_y):\n index = 1*(y==0) + 2*(y==dim_y-1) + 3*(x==0) + 6*(x==dim_x-1)\n return r_list[index]\n\nfor x, y, x_dim, y_dim in [(4, 4, 5, 6), (0, 0, 5, 6)]:\n t = timeit.Timer(\"f0(x, y, x_dim, y_dim)\", \"from __main__ import f0, f1, x, y, x_dim, y_dim, r_list\")\n print \"f0\", t.timeit(number=1000000)\n t = timeit.Timer(\"f1(x, y, x_dim, y_dim)\", \"from __main__ import f0, f1, x, y, x_dim, y_dim, r_list\")\n print \"f1\", t.timeit(number=1000000)\n\nWhich gives:\nf0 0.729887008667 # nested if-else for interior point (no \"else\"s)\nf1 1.4765329361\nf0 0.622623920441 # nested if-else for left-bottom (all \"else\"s)\nf1 1.49259114265\n\nSo it's a bit better than twice as fast as mobrule's answer, which was the fastest looking code that I knew would work when I posted this. (Also, I moved mobrule's string list out of the function as that sped up the result by 50%.) Speed over beauty?\nIf instead you want a concise and easy to read solution, I suggest:\ndef f1(x, y, x_dim, y_dim):\n d_x = {0:\"left\", x_dim:\"right\"}\n d_y = {0:\"bottom\", y_dim:\"top\"}\n return d_x.get(x, \"interior\")+\"-\"+d_y.get(y, \"interior\")\n\nwhich is as fast as the others by my timing.\n"
] |
[
7,
3,
1,
0,
0,
0
] |
[] |
[] |
[
"arrays",
"list",
"performance",
"python"
] |
stackoverflow_0001480406_arrays_list_performance_python.txt
|
Q:
Python: This should be impossible, shouldn't it?
This is part of my Django application which is saving a user's profile in a special way.
class SomeUser:
def __init__(self, request):
self.logged_in = True
self.profile = request.user.get_profile()
self.favorites = self.profile.favorites.all().values_list('pk', flat=True)
def save(self, resp):
print "1: " + str(self.favorites)
self.profile.favorites = self.favorites
print "2: " + str(self.favorites)
self.profile.save()
return resp
Output:
1: [68, 56]
2: []
How is this even possible? I'm not fiddling with self.favorites at all! How can its value change?
EDIT: Updated the question with more info.
A:
I'm guessing self.favorites is some kind of iterator, maybe a django QuerySet.
The first str() runs the iterator and empties it out
The second str() runs the iterator again and it is empty
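A minimal sketch of the usual fix -- materialize the queryset once with list(), so later reads see a plain, stable list instead of re-running the query:
self.favorites = list(self.profile.favorites.all().values_list('pk', flat=True))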
A:
With just this snippet of code, it can't happen (assuming self.favorites and self.hello aren't properties). My guess would be that something somewhere else is mutating the value of self.favorites or self.hello. Is there another thread that could be doing this somewhere? Or could this perhaps be happening in a different request?
|
Python: This should be impossible, shouldn't it?
|
This is part of my Django application which is saving a user's profile in a special way.
class SomeUser:
def __init__(self, request):
self.logged_in = True
self.profile = request.user.get_profile()
self.favorites = self.profile.favorites.all().values_list('pk', flat=True)
def save(self, resp):
print "1: " + str(self.favorites)
self.profile.favorites = self.favorites
print "2: " + str(self.favorites)
self.profile.save()
return resp
Output:
1: [68, 56]
2: []
How is this even possible? I'm not fiddling with self.favorites at all! How can its value change?
EDIT: Updated the question with more info.
|
[
"I'm guessing self.favorites is some kind of iterator, maybe a django QuerySet.\nThe first str() runs the iterator and empties it out\nThe second str() runs the iterator again and it is empty\n",
"With just this snippet of code, it can't happen (assuming self.favorites and self.hello aren't properties). My guess would be that something somewhere else is mutating the value of self.favorites or self.hello. Is there another thread that could be doing this somewhere? Or could this perhaps be happening in a different request?\n"
] |
[
4,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001481545_django_python.txt
|
Q:
Making a string the name of an instance/object
I've been struggling for a couple of days now with the following...
I'm trying to find a way to instantiate a number of objects which I can name via a raw_input call, and then, when I need to, look at their attributes via the 'print VARIABLE NAME' command in conjunction with the str() method.
So, to give an example. Let's say I want to create a zoo of 10 animals...
class Zoo(object):
def __init__(self, species, legs, stomachs):
self.species = species
self.legs = legs
self.stomachs = stomachs
for i in range(9):
species = raw_input("Enter species name: ")
legs = input("How many legs does this species have? ")
stomachs = input("...and how many stomachs? ")
species = Zoo(species, legs, stomachs)
The idea is that the 'species' variable (first line of the for loop) e.g. species = Bear becomes the object 'Bear' (last line of loop), which in conjunction with a str() method and the 'print Bear' command would give me the bear's attributes.
Like I say, I've struggled for a while with this but despite looking at other posts on similar themes still can't figure out a way. Some say use dictionaries, others say use setattr() but I can't see how this would work in my example.
A:
If you just want to introduce new named variables in a module namespace, then setattr may well be the easiest way to go:
import sys
class Species:
def __init__(self, name, legs, stomachs):
self.name = name
self.legs = legs
self.stomachs = stomachs
def create_species():
name = raw_input('Enter species name: ')
legs = input('How many legs? ')
stomachs = input('How many stomachs? ')
species = Species(name, legs, stomachs)
setattr(sys.modules[Species.__module__], name, species)
if __name__ == '__main__':
for i in range(5):
create_species()
If you save this code to a file named zoo.py, and then import it from another module, you could use it as follows:
import zoo
zoo.create_species() # => enter "Bear" as species name when prompted
animal = zoo.Bear # <= this object will be an instance of the Species class
Generally, though, using a dictionary is a more "Pythonic" way to maintain a collection of named values. Dynamically binding new variables has a number of issues, including the fact that most people will expect module variables to remain fairly steady between runs of a program. Also, the rules for naming of Python variables are much stricter than the possible set of animal names -- you can't include spaces in a variable name, for example, so while setattr will happily store the value, you'd have to use getattr to retrieve it.
A:
It's really, truly, SERIOUSLY a bad idea to create barenamed variables on the fly -- I earnestly implore you to give up the requirement to be able to just print FOOBAR for a FOOBAR barename that never existed in the code, much as I'd implore a fellow human being who's keen to commit suicide to give up their crazy desires and give life a chance. Use dictionaries, use a function that takes 'FOOBAR' as the argument and looks it up, etc.
But if my fellow human being is adamant about wanting to put an end to their days, I might switch to suggestions about how to do that with the least collateral damage to themselves and others. The equivalent here would be...:
class Zoo(object):
def __init__(self, species, legs, stomachs):
self.species = species
self.legs = legs
self.stomachs = stomachs
import __builtin__
setattr(__builtin__, species, self)
By explicitly using the __builtin__ module, you ensure you can "print thespeciesname" from ANY module -- not just the one defining Zoo, nor just the one instantiating it.
It's STILL a terrible idea, but this is the least-horror way to implement it.
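For contrast, the dictionary approach recommended above needs no trickery at all (a sketch reusing the question's original three-attribute Zoo class):
zoo = {}   # species name -> Zoo instance

for i in range(9):
    species = raw_input("Enter species name: ")
    legs = input("How many legs does this species have? ")
    stomachs = input("...and how many stomachs? ")
    zoo[species] = Zoo(species, legs, stomachs)

print zoo['Bear']   # works for any name, even ones with spaces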
A:
class Zoo(object):
def __init__(self, name):
self.name = name
self.animals = []
def __str__(self):
return ("The %s Zoo houses:\n" % self.name) + "\n".join(str(an) for an in self.animals)
class Animal( object ):
species = None
def __init__(self, legs, stomach):
self.legs = legs
self.stomach = stomach
def __str__(self):
return "A %s with %d legs and %d stomachs" % ( self.species, self.legs, self.stomach )
class Bear( Animal ):
species = "Bear"
class Karp( Animal ):
species = "Karp"
## this is the point ... you can look up Classes by their names here
## if you wonder how to automate the generation of this dict ... don't.
## ( or learn a lot Python, then add a metaclass to Animal ;-p )
species = dict( bear = Bear,
karp = Karp )
zoo = Zoo( "Strange" )
while len(zoo.animals) < 9:
name = raw_input("Enter species name: ").lower()
if name in species:
legs = input("How many legs does this species have? ")
stomachs = input("...and how many stomachs? ")
zoo.animals.append( species[name]( legs, stomachs ) )
else:
print "I have no idea what a", name, "is."
print "Try again"
print zoo
A:
>>> class Bear():
... pass
...
>>> class Dog():
... pass
...
>>>
>>> types = {'bear': Bear, 'dog': Dog}
>>>
>>> types['dog']()
<__main__.Dog instance at 0x75c10>
|
Making a string the name of an instance/object
|
I've been struggling for a couple of days now with the following...
I'm trying to find a way to instantiate a number of objects which I can name via a raw_input call, and then, when I need to, look at their attributes via the 'print VARIABLE NAME' command in conjunction with the str() method.
So, to give an example. Let's say I want to create a zoo of 10 animals...
class Zoo(object):
def __init__(self, species, legs, stomachs):
self.species = species
self.legs = legs
self.stomachs = stomachs
for i in range(9):
species = raw_input("Enter species name: ")
legs = input("How many legs does this species have? ")
stomachs = input("...and how many stomachs? ")
species = Zoo(species, legs, stomachs)
The idea is that the 'species' variable (first line of the for loop) e.g. species = Bear becomes the object 'Bear' (last line of loop), which in conjunction with a str() method and the 'print Bear' command would give me the bear's attributes.
Like I say, I've struggled for a while with this but despite looking at other posts on similar themes still can't figure out a way. Some say use dictionaries, others say use setattr() but I can't see how this would work in my example.
|
[
"If you just want to introduce new named variables in a module namespace, then setattr may well be the easiest way to go:\nimport sys\n\nclass Species:\n def __init__(self, name, legs, stomachs):\n self.name = name\n self.legs = legs\n self.stomachs = stomachs\n\ndef create_species():\n name = raw_input('Enter species name: ')\n legs = input('How many legs? ')\n stomachs = input('How many stomachs? ')\n species = Species(name, legs, stomachs)\n setattr(sys.modules[Species.__module__], name, species)\n\nif __name__ == '__main__':\n for i in range(5):\n create_species()\n\nIf you save this code to a file named zoo.py, and then import it from another module, you could use it as follows:\nimport zoo\nzoo.create_species() # => enter \"Bear\" as species name when prompted\nanimal = zoo.Bear # <= this object will be an instance of the Species class\n\nGenerally, though, using a dictionary is a more \"Pythonic\" way to maintain a collection of named values. Dynamically binding new variables has a number of issues, including the fact that most people will expect module variables to remain fairly steady between runs of a program. Also, the rules for naming of Python variables are much stricter than the possible set of animal names -- you can't include spaces in a variable name, for example, so while setattr will happily store the value, you'd have to use getattr to retrieve it.\n",
"It's really, truly, SERIOUSLY a bad idea to create barenamed variables on the fly -- I earnestly implore you to give up the requirement to be able to just print FOOBAR for a FOOBAR barename that never existed in the code, much as I'd implore a fellow human being who's keen to commit suicide to give up their crazy desires and give life a chance. Use dictionaries, use a function that takes 'FOOBAR' as the argument and looks it up, etc.\nBut if my fellow human being is adamant about wanting to put an end to their days, I might switch to suggestions about how to do that with the least collateral damage to themselves and others. The equivalent here would be...:\nclass Zoo(object): \n def __init__(self, species, legs, stomachs):\n self.species = species\n self.legs = legs\n self.stomachs = stomachs\n import __builtin__\n setattr(__builtin__, species, self)\n\nBy explicitly using the __builtin__ module, you ensure you can \"print thespeciesname\" from ANY module -- not just the one defining Zoo, nor just the one instantiating it.\nIt's STILL a terrible idea, but this is the least-horror way to implement it.\n",
"class Zoo(object):\n def __init__(self, name):\n self.name = name\n self.animals = []\n\n def __str__(self):\n return (\"The %s Zoo houses:\\n\" % self.name) + \"\\n\".join(str(an) for an in self.animals)\n\nclass Animal( object ):\n species = None\n def __init__(self, legs, stomach):\n self.legs = legs\n self.stomach = stomach\n\n def __str__(self):\n return \"A %s with %d legs and %d stomachs\" % ( self.species, self.legs, self.stomach )\n\n\nclass Bear( Animal ):\n species = \"Bear\"\n\nclass Karp( Animal ):\n species = \"Karp\"\n\n\n## this is the point ... you can look up Classes by their names here\n## if you wonder show to automate the generation of this dict ... don't.\n## ( or learn a lot Python, then add a metaclass to Animal ;-p )\nspecies = dict( bear = Bear,\n karp = Karp )\n\nzoo = Zoo( \"Strange\" )\nwhile len(zoo.animals) < 9:\n name = raw_input(\"Enter species name: \").lower()\n if name in species:\n legs = input(\"How many legs does this species have? \")\n stomachs = input(\"...and how many stomachs? \")\n zoo.animals.append( species[name]( legs, stomachs ) )\n else:\n print \"I have no idea what a\", name, \"is.\"\n print \"Try again\" \n\nprint zoo\n\n",
">>> class Bear():\n... pass\n... \n>>> class Dog():\n... pass\n... \n>>> \n>>> types = {'bear': Bear, 'dog': Dog}\n>>> \n>>> types['dog']()\n<__main__.Dog instance at 0x75c10>\n\n"
] |
[
6,
1,
0,
0
] |
[] |
[] |
[
"object",
"python",
"string",
"variables"
] |
stackoverflow_0001479490_object_python_string_variables.txt
|
Q:
Socket program Python vs C++ (Winsock)
I have a Python program which works perfectly for internet chatting. But a program built on similar sockets in C++ does not work over the internet.
Python program
import thread
import socket
class p2p:
def __init__(self):
socket.setdefaulttimeout(50)
self.port = 3000
#Destination IP HERE
self.peerId = '59.95.18.156'
#declaring sender socket
self.socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM )
self.socket.bind(('', self.port))
self.socket.settimeout(50)
#starting thread for reception
thread.start_new_thread(self.receiveData, ())
while 1:
data=raw_input('>')
#print 'sending...'+data
self.sendData(data)
def receiveData(self):
while 1:
data,address=self.socket.recvfrom(1024)
print data
def sendData(self,data):
self.socket.sendto(data, (self.peerId,self.port))
if __name__=='__main__':
print 'Started......'
p2p()
I want to build similar functionality in C++. I took server and client programs from MSDN. But they are working only on localhost, not over the internet.
they are as follows...
Sender
#include <stdio.h>
#include "winsock2.h"
void main() {
WSADATA wsaData;
SOCKET SendSocket;
sockaddr_in RecvAddr;
int Port = 3000;
char SendBuf[3]={'a','2','\0'};
int BufLen = 3;
//---------------------------------------------
// Initialize Winsock
WSAStartup(MAKEWORD(2,2), &wsaData);
//---------------------------------------------
// Create a socket for sending data
SendSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
//---------------------------------------------
// Set up the RecvAddr structure with the IP address of
// the receiver (in this example case "123.456.789.1")
// and the specified port number.
RecvAddr.sin_family = AF_INET;
RecvAddr.sin_port = htons(Port);
RecvAddr.sin_addr.s_addr = inet_addr("59.95.18.156");
//---------------------------------------------
// Send a datagram to the receiver
printf("Sending a datagram to the receiver...\n");
sendto(SendSocket,
SendBuf,
BufLen,
0,
(SOCKADDR *) &RecvAddr,
sizeof(RecvAddr));
//---------------------------------------------
// When the application is finished sending, close the socket.
printf("Finished sending. Closing socket.\n");
closesocket(SendSocket);
//---------------------------------------------
// Clean up and quit.
printf("Exiting.\n");
WSACleanup();
return;
}
Receiver
#include <stdio.h>
#include "winsock2.h"
#include<iostream>
using namespace std;
void main() {
WSADATA wsaData;
SOCKET RecvSocket;
sockaddr_in RecvAddr;
int Port = 3000;
char RecvBuf[3];
int BufLen = 3;
sockaddr_in SenderAddr;
int SenderAddrSize = sizeof(SenderAddr);
//-----------------------------------------------
// Initialize Winsock
WSAStartup(MAKEWORD(2,2), &wsaData);
//-----------------------------------------------
// Create a receiver socket to receive datagrams
RecvSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
//-----------------------------------------------
// Bind the socket to any address and the specified port.
RecvAddr.sin_family = AF_INET;
RecvAddr.sin_port = htons(Port);
RecvAddr.sin_addr.s_addr = htonl(INADDR_ANY);
bind(RecvSocket, (SOCKADDR *) &RecvAddr, sizeof(RecvAddr));
//-----------------------------------------------
// Call the recvfrom function to receive datagrams
// on the bound socket.
printf("Receiving datagrams...\n");
while(true){
recvfrom(RecvSocket,
RecvBuf,
BufLen,
0,
(SOCKADDR *)&SenderAddr,
&SenderAddrSize);
cout<<RecvBuf;
}
//-----------------------------------------------
// Close the socket when finished receiving datagrams
printf("Finished receiving. Closing socket.\n");
closesocket(RecvSocket);
//-----------------------------------------------
// Clean up and exit.
printf("Exiting.\n");
WSACleanup();
return;
}
Thank you very much for any help ..
Sorry for too much code in the question.
A:
Per the docs, sendto returns a number that's >0 (number of bytes sent) for success, <0 for failure, and in the latter case you use WSAGetLastError for more information. So try saving the sendto result, printing it (as well as the size of the data you're trying to send), and in case of error print the last-error code too. What do you see then?
|
Socket program Python vs C++ (Winsock)
|
I have a Python program which works perfectly for internet chatting. But a program built on similar sockets in C++ does not work over the internet.
Python program
import thread
import socket
class p2p:
def __init__(self):
socket.setdefaulttimeout(50)
self.port = 3000
#Destination IP HERE
self.peerId = '59.95.18.156'
#declaring sender socket
self.socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM )
self.socket.bind(('', self.port))
self.socket.settimeout(50)
#starting thread for reception
thread.start_new_thread(self.receiveData, ())
while 1:
data=raw_input('>')
#print 'sending...'+data
self.sendData(data)
def receiveData(self):
while 1:
data,address=self.socket.recvfrom(1024)
print data
def sendData(self,data):
self.socket.sendto(data, (self.peerId,self.port))
if __name__=='__main__':
print 'Started......'
p2p()
I want to build similar functionality in C++. I took server and client programs from MSDN. But they are working only on localhost, not over the internet.
they are as follows...
Sender
#include <stdio.h>
#include "winsock2.h"
void main() {
WSADATA wsaData;
SOCKET SendSocket;
sockaddr_in RecvAddr;
int Port = 3000;
char SendBuf[3]={'a','2','\0'};
int BufLen = 3;
//---------------------------------------------
// Initialize Winsock
WSAStartup(MAKEWORD(2,2), &wsaData);
//---------------------------------------------
// Create a socket for sending data
SendSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
//---------------------------------------------
// Set up the RecvAddr structure with the IP address of
// the receiver (in this example case "123.456.789.1")
// and the specified port number.
RecvAddr.sin_family = AF_INET;
RecvAddr.sin_port = htons(Port);
RecvAddr.sin_addr.s_addr = inet_addr("59.95.18.156");
//---------------------------------------------
// Send a datagram to the receiver
printf("Sending a datagram to the receiver...\n");
sendto(SendSocket,
SendBuf,
BufLen,
0,
(SOCKADDR *) &RecvAddr,
sizeof(RecvAddr));
//---------------------------------------------
// When the application is finished sending, close the socket.
printf("Finished sending. Closing socket.\n");
closesocket(SendSocket);
//---------------------------------------------
// Clean up and quit.
printf("Exiting.\n");
WSACleanup();
return;
}
Receiver
#include <stdio.h>
#include "winsock2.h"
#include<iostream>
using namespace std;
void main() {
WSADATA wsaData;
SOCKET RecvSocket;
sockaddr_in RecvAddr;
int Port = 3000;
char RecvBuf[3];
int BufLen = 3;
sockaddr_in SenderAddr;
int SenderAddrSize = sizeof(SenderAddr);
//-----------------------------------------------
// Initialize Winsock
WSAStartup(MAKEWORD(2,2), &wsaData);
//-----------------------------------------------
// Create a receiver socket to receive datagrams
RecvSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
//-----------------------------------------------
// Bind the socket to any address and the specified port.
RecvAddr.sin_family = AF_INET;
RecvAddr.sin_port = htons(Port);
RecvAddr.sin_addr.s_addr = htonl(INADDR_ANY);
bind(RecvSocket, (SOCKADDR *) &RecvAddr, sizeof(RecvAddr));
//-----------------------------------------------
// Call the recvfrom function to receive datagrams
// on the bound socket.
printf("Receiving datagrams...\n");
while(true){
recvfrom(RecvSocket,
RecvBuf,
BufLen,
0,
(SOCKADDR *)&SenderAddr,
&SenderAddrSize);
cout<<RecvBuf;
}
//-----------------------------------------------
// Close the socket when finished receiving datagrams
printf("Finished receiving. Closing socket.\n");
closesocket(RecvSocket);
//-----------------------------------------------
// Clean up and exit.
printf("Exiting.\n");
WSACleanup();
return;
}
Thank you very much for any help ..
Sorry for too much code in the question.
|
[
"Per the docs, sendto returns a number that's >0 (number of bytes sent) for success, <0 for failure, and in the latter case you use WSAGetLastError for more information. So try saving the sendto result, printing it (as well as the size of the data you're trying to send), and in case of error print the last-error code too. What do you see then?\n"
] |
[
1
] |
[] |
[] |
[
"c++",
"python",
"sockets",
"winsock"
] |
stackoverflow_0001481103_c++_python_sockets_winsock.txt
|
Q:
How to add header while making soap request using soappy
I have a WSDL file; using that I wanted to make a SOAP request which will look exactly like this --
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<AuthSoapHd xmlns="http://foobar.org/">
<strUserName>string</strUserName>
<strPassword>string</strPassword>
</AuthSoapHd>
</soap:Header>
<soap:Body>
<SearchQuotes xmlns="http://foobar.org/">
<searchtxt>string</searchtxt>
</SearchQuotes>
</soap:Body>
</soap:Envelope>
To solve this, I did this
>> from SOAPpy import WSDL
>> WSDLFILE = '/path/foo.wsdl'
>> server = WSDL.Proxy(WSDLFILE)
>> server.SearchQuotes('rel')
I get this error
faultType: <Fault soap:Server: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.NullReferenceException: Object reference not set to an instance of an object.
Then I debugged it and got this
*** Outgoing SOAP ******************************************************
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsd="http://www.w3.org/1999/XMLSchema"
>
<SOAP-ENV:Body>
<SearchQuotes SOAP-ENC:root="1">
<v1 xsi:type="xsd:string">rel</v1>
</SearchQuotes>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
We can see it doesn't contain any header. I think that WSDL file has some bug.
Now, can anyone suggest me how to add header to this outgoing soap request.
Any sort of help will be appreciated. Thanks in advance
A:
Not tested, but I believe you can use the method the docs suggest to add soap headers, i.e., make and prep a SOAPpy.Header instance, then use server = server._hd(hd) to get a proxy equipped with it (though in your case that does seem to be a workaround attempt for broken WSDL, as you say -- might it be better to fix the WSDL instead?).
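An untested sketch of that approach (headerType and structType come from SOAPpy's Types module per its docs; the http://foobar.org/ namespace handling is omitted, so treat every name here as an assumption):
import SOAPpy
from SOAPpy import WSDL

server = WSDL.Proxy('/path/foo.wsdl')

hd = SOAPpy.Types.headerType()
auth = SOAPpy.Types.structType(name='AuthSoapHd')
auth.strUserName = 'myuser'        # placeholder credentials
auth.strPassword = 'mypassword'
hd.AuthSoapHd = auth

server = server._hd(hd)            # proxy that sends the header with each call
print server.SearchQuotes('rel')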
|
How to add header while making soap request using soappy
|
I have a WSDL file; using that I wanted to make a SOAP request which will look exactly like this --
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<AuthSoapHd xmlns="http://foobar.org/">
<strUserName>string</strUserName>
<strPassword>string</strPassword>
</AuthSoapHd>
</soap:Header>
<soap:Body>
<SearchQuotes xmlns="http://foobar.org/">
<searchtxt>string</searchtxt>
</SearchQuotes>
</soap:Body>
</soap:Envelope>
To solve this, I did this
>> from SOAPpy import WSDL
>> WSDLFILE = '/path/foo.wsdl'
>> server = WSDL.Proxy(WSDLFILE)
>> server.SearchQuotes('rel')
I get this error
faultType: <Fault soap:Server: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.NullReferenceException: Object reference not set to an instance of an object.
Then I debugged it and got this
*** Outgoing SOAP ******************************************************
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsd="http://www.w3.org/1999/XMLSchema"
>
<SOAP-ENV:Body>
<SearchQuotes SOAP-ENC:root="1">
<v1 xsi:type="xsd:string">rel</v1>
</SearchQuotes>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
We can see it doesn't contain any header. I think that WSDL file has some bug.
Now, can anyone suggest me how to add header to this outgoing soap request.
Any sort of help will be appreciated. Thanks in advance
|
[
"Not tested, but I believe you can use the method the docs suggest to add soap headers, i.e., make and prep a SOAPpy.Header instance, then use server = server._hd (hd) to get a proxy equipped with it (though in your case that does seem to be a workaround attempt to broken WSDL, as you say -- might it be better to fix the WSDL instead?).\n"
] |
[
0
] |
[] |
[] |
[
"python",
"soap",
"soappy"
] |
stackoverflow_0001481313_python_soap_soappy.txt
|
Q:
pythonic format for indices
I am after a string format to efficiently represent a set of indices.
For example "1-3,6,8-10,16" would produce [1,2,3,6,8,9,10,16]
Ideally I would also be able to represent infinite sequences.
Is there an existing standard way of doing this? Or a good library? Or can you propose your own format?
thanks!
Edit: Wow! - thanks for all the well-considered responses. I agree I should use ':' instead. Any ideas about infinite lists? I was thinking of using "1.." to represent all positive numbers.
The use case is for a shopping cart. For some products I need to restrict product sales to multiples of X, for others any positive number. So I am after a string format to represent this in the database.
A:
You don't need a string for that. This is as simple as it can get:
from types import SliceType
class sequence(object):
def __getitem__(self, item):
for a in item:
if isinstance(a, SliceType):
i = a.start
step = a.step if a.step else 1
while True:
if a.stop and i > a.stop:
break
yield i
i += step
else:
yield a
print list(sequence()[1:3,6,8:10,16])
Output:
[1, 2, 3, 6, 8, 9, 10, 16]
I'm using Python slice type power to express the sequence ranges. I'm also using generators to be memory efficient.
Please note that I'm adding 1 to the slice stop, otherwise the ranges will be different because the stop in slices is not included.
It supports steps:
>>> list(sequence()[1:3,6,8:20:2])
[1, 2, 3, 6, 8, 10, 12, 14, 16, 18, 20]
And infinite sequences:
sequence()[1:3,6,8:]
1, 2, 3, 6, 8, 9, 10, ...
If you have to give it a string then you can combine @ilya n. parser with this solution. I'll extend @ilya n. parser to support indexes as well as ranges:
def parser(input):
ranges = [a.split('-') for a in input.split(',')]
return [slice(*map(int, a)) if len(a) > 1 else int(a[0]) for a in ranges]
Now you can use it like this:
>>> print list(sequence()[parser('1-3,6,8-10,16')])
[1, 2, 3, 6, 8, 9, 10, 16]
A:
If you're into something Pythonic, I think 1:3,6,8:10,16 would be a better choice, as x:y is a standard notation for index range and the syntax allows you to use this notation on objects. Note that the call
z[1:3,6,8:10,16]
gets translated into
z.__getitem__((slice(1, 3, None), 6, slice(8, 10, None), 16))
Even though this is a TypeError if z is a built-in container, you're free to create the class that will return something reasonable, e.g. as NumPy's arrays.
You might also say that by convention 5: and :5 represent infinite index ranges (this is a bit stretched as Python has no built-in types with negative or infinitely large positive indexes).
And here's the parser (a beautiful one-liner that suffers from slice(16, None, None) glitch described below):
def parse(s):
return [slice(*map(int, x.split(':'))) for x in s.split(',')]
There's one pitfall, however: 8:10 by definition includes only indices 8 and 9 -- without upper bound. If that's unacceptable for your purposes, you certainly need a different format and 1-3,6,8-10,16 looks good to me. The parser then would be
def myslice(start, stop=None, step=None):
return slice(start, (stop if stop is not None else start) + 1, step)
def parse(s):
return [myslice(*map(int, x.split('-'))) for x in s.split(',')]
Update: here's the full parser for a combined format:
from sys import maxsize as INF
def indices(s: 'string with indices list') -> 'indices generator':
for x in s.split(','):
splitter = ':' if (':' in x) or (x[0] == '-') else '-'
ix = x.split(splitter)
start = int(ix[0]) if ix[0] is not '' else -INF
if len(ix) == 1:
stop = start + 1
else:
stop = int(ix[1]) if ix[1] is not '' else INF
step = int(ix[2]) if len(ix) > 2 else 1
for y in range(start, stop + (splitter == '-'), step):
yield y
This handles negative numbers as well, so
print(list(indices('-5, 1:3, 6, 8:15:2, 20-25, 18')))
prints
[-5, 1, 2, 6, 7, 8, 10, 12, 14, 20, 21, 22, 23, 24, 25, 18, 19]
Yet another alternative is to use ... (which Python recognizes as the built-in constant Ellipsis so you can call z[...] if you want) but I think 1,...,3,6, 8,...,10,16 is less readable.
A:
This is probably about as lazily as it can be done, meaning it will be okay for even very large lists:
def makerange(s):
for nums in s.split(","): # whole list comma-delimited
range_ = nums.split("-") # number might have a dash - if not, no big deal
start = int(range_[0])
for i in xrange(start, start + 1 if len(range_) == 1 else int(range_[1]) + 1):
yield i
s = "1-3,6,8-10,16"
print list(makerange(s))
output:
[1, 2, 3, 6, 8, 9, 10, 16]
A:
This looked like a fun puzzle to go with my coffee this morning. If you settle on your given syntax (which looks okay to me, with some notes at the end), here is a pyparsing converter that will take your input string and return a list of integers:
from pyparsing import *
integer = Word(nums).setParseAction(lambda t : int(t[0]))
intrange = integer("start") + '-' + integer("end")
def validateRange(tokens):
if tokens.from_ > tokens.to:
raise Exception("invalid range, start must be <= end")
intrange.setParseAction(validateRange)
intrange.addParseAction(lambda t: list(range(t.start, t.end+1)))
indices = delimitedList(intrange | integer)
def mergeRanges(tokens):
ret = set()
for item in tokens:
if isinstance(item,int):
ret.add(item)
else:
ret += set(item)
return sorted(ret)
indices.setParseAction(mergeRanges)
test = "1-3,6,8-10,16"
print indices.parseString(test)
This also takes care of any overlapping or duplicate entries, such "3-8,4,6,3,4", and returns a list of just the unique integers.
The parser takes care of validating that ranges like "10-3" are not allowed. If you really wanted to allow this, and have something like "1,5-3,7" return 1,5,4,3,7, then you could tweak the intrange and mergeRanges parse actions to get this simpler result (and discard the validateRange parse action altogether).
You are very likely to get whitespace in your expressions, I assume that this is not significant. "1, 2, 3-6" would be handled the same as "1,2,3-6". Pyparsing does this by default, so you don't see any special whitespace handling in the code above (but it's there...)
This parser does not handle negative indices, but if that were needed too, just change the definition of integer to:
integer = Combine(Optional('-') + Word(nums)).setParseAction(lambda t : int(t[0]))
Your example didn't list any negatives, so I left it out for now.
Python uses ':' for a ranging delimiter, so your original string could have looked like "1:3,6,8:10,16", and Pascal used '..' for array ranges, giving "1..3,6,8..10,16" - meh, dashes are just as good as far as I'm concerned.
A:
import sys
class Sequencer(object):
def __getitem__(self, items):
if not isinstance(items, (tuple, list)):
items = [items]
for item in items:
if isinstance(item, slice):
for i in xrange(*item.indices(sys.maxint)):
yield i
else:
yield item
>>> s = Sequencer()
>>> print list(s[1:3,6,8:10,16])
[1, 2, 6, 8, 9, 16]
Note that I am using the xrange builtin to generate the sequence. That seems awkward at first because it doesn't include the upper number of sequences by default, however it proves to be very convenient. You can do things like:
>>> print list(s[1:10:3,5,5,16,13:5:-1])
[1, 4, 7, 5, 5, 16, 13, 12, 11, 10, 9, 8, 7, 6]
Which means you can use the step part of xrange.
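On the question's "1.." idea for open-ended sequences, here is a minimal generator sketch (it assumes the dash format plus a trailing '..' for an infinite tail; negative numbers are not handled):
from itertools import count, islice

def expand(spec):
    for part in spec.split(','):
        if part.endswith('..'):            # "16.." means 16, 17, 18, ...
            for i in count(int(part[:-2])):
                yield i
        elif '-' in part:                  # closed range, inclusive
            lo, hi = map(int, part.split('-'))
            for i in xrange(lo, hi + 1):
                yield i
        else:                              # single index
            yield int(part)

print list(islice(expand('1-3,6,8..'), 8))   # [1, 2, 3, 6, 8, 9, 10, 11]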
|
pythonic format for indices
|
I am after a string format to efficiently represent a set of indices.
For example "1-3,6,8-10,16" would produce [1,2,3,6,8,9,10,16]
Ideally I would also be able to represent infinite sequences.
Is there an existing standard way of doing this? Or a good library? Or can you propose your own format?
thanks!
Edit: Wow! - thanks for all the well-considered responses. I agree I should use ':' instead. Any ideas about infinite lists? I was thinking of using "1.." to represent all positive numbers.
The use case is for a shopping cart. For some products I need to restrict product sales to multiples of X, for others any positive number. So I am after a string format to represent this in the database.
|
[
"You don't need a string for that, This is as simple as it can get:\nfrom types import SliceType\n\nclass sequence(object):\n def __getitem__(self, item):\n for a in item:\n if isinstance(a, SliceType):\n i = a.start\n step = a.step if a.step else 1\n while True:\n if a.stop and i > a.stop:\n break\n yield i\n i += step\n else:\n yield a\n\nprint list(sequence()[1:3,6,8:10,16])\n\nOutput:\n[1, 2, 3, 6, 8, 9, 10, 16]\n\nI'm using Python slice type power to express the sequence ranges. I'm also using generators to be memory efficient.\nPlease note that I'm adding 1 to the slice stop, otherwise the ranges will be different because the stop in slices is not included.\nIt supports steps:\n>>> list(sequence()[1:3,6,8:20:2])\n[1, 2, 3, 6, 8, 10, 12, 14, 16, 18, 20]\n\nAnd infinite sequences:\nsequence()[1:3,6,8:]\n1, 2, 3, 6, 8, 9, 10, ...\n\n\nIf you have to give it a string then you can combine @ilya n. parser with this solution. I'll extend @ilya n. parser to support indexes as well as ranges:\ndef parser(input):\n ranges = [a.split('-') for a in input.split(',')]\n return [slice(*map(int, a)) if len(a) > 1 else int(a[0]) for a in ranges]\n\nNow you can use it like this:\n>>> print list(sequence()[parser('1-3,6,8-10,16')])\n[1, 2, 3, 6, 8, 9, 10, 16]\n\n",
"If you're into something Pythonic, I think 1:3,6,8:10,16 would be a better choice, as x:y is a standard notation for index range and the syntax allows you to use this notation on objects. Note that the call\nz[1:3,6,8:10,16]\n\ngets translated into\nz.__getitem__((slice(1, 3, None), 6, slice(8, 10, None), 16))\n\nEven though this is a TypeError if z is a built-in container, you're free to create the class that will return something reasonable, e.g. as NumPy's arrays.\nYou might also say that by convention 5: and :5 represent infinite index ranges (this is a bit stretched as Python has no built-in types with negative or infinitely large positive indexes).\nAnd here's the parser (a beautiful one-liner that suffers from slice(16, None, None) glitch described below):\ndef parse(s):\n return [slice(*map(int, x.split(':'))) for x in s.split(',')]\n\nThere's one pitfall, however: 8:10 by definition includes only indices 8 and 9 -- without upper bound. If that's unacceptable for your purposes, you certainly need a different format and 1-3,6,8-10,16 looks good to me. The parser then would be \ndef myslice(start, stop=None, step=None):\n return slice(start, (stop if stop is not None else start) + 1, step)\n\ndef parse(s):\n return [myslice(*map(int, x.split('-'))) for x in s.split(',')]\n\n\nUpdate: here's the full parser for a combined format:\nfrom sys import maxsize as INF\n\ndef indices(s: 'string with indices list') -> 'indices generator':\n for x in s.split(','):\n splitter = ':' if (':' in x) or (x[0] == '-') else '-'\n ix = x.split(splitter)\n start = int(ix[0]) if ix[0] is not '' else -INF\n if len(ix) == 1:\n stop = start + 1\n else:\n stop = int(ix[1]) if ix[1] is not '' else INF\n step = int(ix[2]) if len(ix) > 2 else 1\n for y in range(start, stop + (splitter == '-'), step):\n yield y\n\nThis handles negative numbers as well, so\n print(list(indices('-5, 1:3, 6, 8:15:2, 20-25, 18')))\n\nprints\n[-5, 1, 2, 6, 7, 8, 10, 12, 14, 20, 21, 22, 23, 24, 25, 18, 19]\n\n\nYet another alternative is to use ... (which Python recognizes as the built-in constant Ellipsis so you can call z[...] if you want) but I think 1,...,3,6, 8,...,10,16 is less readable.\n",
"This is probably about as lazily as it can be done, meaning it will be okay for even very large lists:\ndef makerange(s):\n for nums in s.split(\",\"): # whole list comma-delimited\n range_ = nums.split(\"-\") # number might have a dash - if not, no big deal\n start = int(range_[0])\n for i in xrange(start, start + 1 if len(range_) == 1 else int(range_[1]) + 1):\n yield i\n\ns = \"1-3,6,8-10,16\"\nprint list(makerange(s))\n\noutput:\n[1, 2, 3, 6, 8, 9, 10, 16]\n\n",
"This looked like a fun puzzle to go with my coffee this morning. If you settle on your given syntax (which looks okay to me, with some notes at the end), here is a pyparsing converter that will take your input string and return a list of integers:\nfrom pyparsing import *\n\ninteger = Word(nums).setParseAction(lambda t : int(t[0]))\nintrange = integer(\"start\") + '-' + integer(\"end\")\ndef validateRange(tokens):\n if tokens.from_ > tokens.to:\n raise Exception(\"invalid range, start must be <= end\")\nintrange.setParseAction(validateRange)\nintrange.addParseAction(lambda t: list(range(t.start, t.end+1)))\n\nindices = delimitedList(intrange | integer)\n\ndef mergeRanges(tokens):\n ret = set()\n for item in tokens:\n if isinstance(item,int):\n ret.add(item)\n else:\n ret += set(item)\n return sorted(ret)\n\nindices.setParseAction(mergeRanges)\n\ntest = \"1-3,6,8-10,16\"\nprint indices.parseString(test)\n\nThis also takes care of any overlapping or duplicate entries, such \"3-8,4,6,3,4\", and returns a list of just the unique integers.\nThe parser takes care of validating that ranges like \"10-3\" are not allowed. If you really wanted to allow this, and have something like \"1,5-3,7\" return 1,5,4,3,7, then you could tweak the intrange and mergeRanges parse actions to get this simpler result (and discard the validateRange parse action altogether).\nYou are very likely to get whitespace in your expressions, I assume that this is not significant. \"1, 2, 3-6\" would be handled the same as \"1,2,3-6\". Pyparsing does this by default, so you don't see any special whitespace handling in the code above (but it's there...)\nThis parser does not handle negative indices, but if that were needed too, just change the definition of integer to:\ninteger = Combine(Optional('-') + Word(nums)).setParseAction(lambda t : int(t[0]))\n\nYour example didn't list any negatives, so I left it out for now.\nPython uses ':' for a ranging delimiter, so your original string could have looked like \"1:3,6,8:10,16\", and Pascal used '..' for array ranges, giving \"1..3,6,8..10,16\" - meh, dashes are just as good as far as I'm concerned.\n",
"import sys\n\nclass Sequencer(object):\n def __getitem__(self, items):\n if not isinstance(items, (tuple, list)):\n items = [items]\n for item in items:\n if isinstance(item, slice):\n for i in xrange(*item.indices(sys.maxint)):\n yield i\n else:\n yield item\n\n\n>>> s = Sequencer()\n>>> print list(s[1:3,6,8:10,16])\n[1, 2, 6, 8, 9, 16]\n\nNote that I am using the xrange builtin to generate the sequence. That seems awkward at first because it doesn't include the upper number of sequences by default, however it proves to be very convenient. You can do things like:\n>>> print list(s[1:10:3,5,5,16,13:5:-1])\n[1, 4, 7, 5, 5, 16, 13, 12, 11, 10, 9, 8, 7, 6]\n\nWhich means you can use the step part of xrange.\n"
] |
[
7,
3,
2,
1,
1
] |
[] |
[] |
[
"indexing",
"python",
"sequence",
"set"
] |
stackoverflow_0001481192_indexing_python_sequence_set.txt
|
Q:
Why I cannot build a chain of methods? (method1.method2.method3)
If I have the following code:
import sqlite
sqlite.connect('tmp.db').cursor().close()
I get the following error message:
Traceback (most recent call last):
File "searchengine2.py", line 13, in ?
sqlite.connect('tmp.db').cursor().close()
File "/usr/lib64/python2.4/site-packages/sqlite/main.py", line 280, in close
if self.con and self.con.closed:
ReferenceError: weakly-referenced object no longer exists
However, if I modify the code in the following way:
import sqlite
x1 = sqlite.connect('tmp.db')
x2 = x1.cursor()
x3 = x2.close()
everything is fine. Why?
A:
Apparently the cursor keeps a weak reference to the connection (self.con). Because you chain the functions, the connection you've instantiated is out of scope as soon as you instantiate the cursor -- nothing holds a strong reference to the connection anymore, and the connection is eligible for garbage collection.
Therefore, by the time you try to close the cursor (which in turn tries to close the connection), the connection is already out of scope and may have been garbage collected -- if it was, then the connection is already closed.
There's no way to get around this without modifying the cursor's source to make it hold a strong reference to the connection, and there's no easy way to tell how many problems that might introduce if you do it that way. (Good) designers don't arbitrarily make weak references without a good reason.
(Hopefully you understand weak references from your last question.)
A:
It looks like the cursor returned by cursor() keeps only a weak reference to the connection, so once the strong reference to your connection is off the call stack, your connection (the result of connect()) is left without any strong references. By the time close() is called, your connection has already been destroyed.
The second form avoids this by keeping a strong reference to your connection around the whole time.
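For completeness, here is a minimal sketch of the unchained version with explicit cleanup (Python 2.4-era style, matching the traceback above); the only real requirement is that a strong reference to the connection stays alive for as long as the cursor needs it:
import sqlite

con = sqlite.connect('tmp.db')   # strong reference kept for the whole block
try:
    cur = con.cursor()
    try:
        pass  # use cur here
    finally:
        cur.close()
finally:
    con.close()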
|
Why can't I build a chain of methods? (method1.method2.method3)
|
If I have the following code:
import sqlite
sqlite.connect('tmp.db').cursor().close()
I get the following error message:
Traceback (most recent call last):
File "searchengine2.py", line 13, in ?
sqlite.connect('tmp.db').cursor().close()
File "/usr/lib64/python2.4/site-packages/sqlite/main.py", line 280, in close
if self.con and self.con.closed:
ReferenceError: weakly-referenced object no longer exists
However, if I modify the code in the following way:
import sqlite
x1 = sqlite.connect('tmp.db')
x2 = x1.cursor()
x3 = x2.close()
everything is fine. Why?
|
[
"Apparently the cursor keeps a weak reference to the connection (self.con). Because you chain the functions, the connection you've instantiated is out of scope as soon as you instantiate the cursor -- nothing holds a strong reference to the connection anymore, and the connection is eligible for garbage collection.\nTherefore, by the time you try to close the cursor (which in turn tries to close the connection), the connection is already out of scope and may have been garbage collected -- if it was, then the connection is already closed.\nThere's no way to get around this without modifying the cursor's source to make it hold a strong reference to the connection, and there's no easy way to tell how many problems that might introduce if you do it that way. (Good) designers don't arbitrarily make weak references without a good reason.\n(Hopefully you understand weak references from your last question.)\n",
"It looks like cursor() returns (and keeps) a weak reference to the connection, so that then, when the strong reference to your connection is off the call stack, your connection (the result of connect()) is left without any strong references. So by the time close() is called, your connection has been destructed.\nThe second form avoids this by keeping a strong reference to your connection around the whole time.\n"
] |
[
3,
1
] |
[] |
[] |
[
"methods",
"python",
"sqlite"
] |
stackoverflow_0001482270_methods_python_sqlite.txt
|
Q:
Understanding this class in Python: the % operator and formatting a float
class FormatFloat(FormatFormatStr):
def __init__(self, precision=4, scale=1.):
FormatFormatStr.__init__(self, '%%1.%df'%precision)
self.precision = precision
self.scale = scale
def toval(self, x):
if x is not None:
x = x * self.scale
return x
def fromstr(self, s):
return float(s)/self.scale
The part that confuses me is this part
FormatFormatStr.__init__(self, '%%1.%df'%precision)
does this mean that the precision gets entered twice before the 1 and once before df? Does df stand for anything that you know of? I don't see it elsewhere even in its ancestors as can be seen here:
class FormatFormatStr(FormatObj):
def __init__(self, fmt):
self.fmt = fmt
def tostr(self, x):
if x is None: return 'None'
return self.fmt%self.toval(x)
class FormatObj:
def tostr(self, x):
return self.toval(x)
def toval(self, x):
return str(x)
def fromstr(self, s):
return s
Also, I put this into IPython and get this:
In [53]: x = FormatFloat(.234324234325435)
In [54]: x
Out[54]: <matplotlib.mlab.FormatFloat instance at 0x939d4ec>
I figured that it would reduce precision to 4 and scale to 1. But instead it gets stored somewhere in my memory. Can I retrieve it to see what it does to the number?
Thanks everyone you're very helpful!
A:
>>> precision=4
>>> '%%1.%df'%precision
'%1.4f'
%% gets translated to %
1 is printed as is
%d prints precision as a decimal number
f is printed literally
A:
In ('%%1.%df' % precision), the first %% yields a literal %, %d is substituted with precision, and f is inserted literally. Here's an example of how it might turn out:
>>> '%%1.%df' % 4
'%1.4f'
More about string formatting in Python
In order to use the FormatFloat class you might try something like this:
formatter = FormatFloat(precision = 4)
print formatter.tostr(0.2345678)
A:
In python format strings, "%%" means "insert a literal percent sign" -- the first % 'escapes' the second, in the jargon. The string in question, "%%1.%df" % precision is using a format string to generate a format string, and the only thing that gets substituted is the "%d". Try it at the interactive prompt:
>>> print "%%1.%df" % 5
'%1.5f'
The class FormatFloat doesn't define __repr__, __str__, or __unicode__ (the "magic" methods used for type coercion) so when you just print the value in an interactive console, you get the standard representation of instances. To get the string value, you would call the tostr() method (defined on the parent class):
>>> ff = FormatFloat()  # default precision=4
>>> ff.tostr(0.234324234325435)
'0.2343'
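As a minimal sketch of that last point (the subclass name here is made up for illustration, and the classes from the question are assumed), giving the class a __repr__ makes the interactive echo informative instead of the bare instance address:
class FormatFloatWithRepr(FormatFloat):
    def __repr__(self):
        # what the interactive prompt will display for this object
        return 'FormatFloat(precision=%d, scale=%s)' % (self.precision,
                                                        self.scale)

>>> FormatFloatWithRepr(precision=4)
FormatFloat(precision=4, scale=1.0)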
|
Understanding this class in Python: the % operator and formatting a float
|
class FormatFloat(FormatFormatStr):
def __init__(self, precision=4, scale=1.):
FormatFormatStr.__init__(self, '%%1.%df'%precision)
self.precision = precision
self.scale = scale
def toval(self, x):
if x is not None:
x = x * self.scale
return x
def fromstr(self, s):
return float(s)/self.scale
The part that confuses me is this part
FormatFormatStr.__init__(self, '%%1.%df'%precision)
does this mean that the precision gets entered twice before the 1 and once before df? Does df stand for anything that you know of? I don't see it elsewhere even in its ancestors as can be seen here:
class FormatFormatStr(FormatObj):
def __init__(self, fmt):
self.fmt = fmt
def tostr(self, x):
if x is None: return 'None'
return self.fmt%self.toval(x)
class FormatObj:
def tostr(self, x):
return self.toval(x)
def toval(self, x):
return str(x)
def fromstr(self, s):
return s
Also, I put this into IPython and get this:
In [53]: x = FormatFloat(.234324234325435)
In [54]: x
Out[54]: <matplotlib.mlab.FormatFloat instance at 0x939d4ec>
I figured that it would reduce precision to 4 and scale to 1. But instead it gets stored somewhere in my memory. Can I retrieve it to see what it does to the number?
Thanks everyone you're very helpful!
|
[
">>> precision=4\n>>> '%%1.%df'%precision\n'%1.4f'\n\n%% gets translated to %\n1 is printed as is\n%d prints precision as a decimal number\nf is printed literally\n",
"In ('%%1.%df' % precision), the first %% yields a literal %, %d is substituted with precision, and f is inserted literally. Here's an example of how it might turn out:\n>>> '%%1.%df' % 4\n'%1.4f'\n\nMore about string formatting in Python\nIn order to use the FormatFloat class you might try something like this:\nformatter = FormatFloat(precision = 4)\nprint formatter.tostr(0.2345678)\n\n",
"In python format strings, \"%%\" means \"insert a literal percent sign\" -- the first % 'escapes' the second, in the jargon. The string in question, \"%%1.%df\" % precision is using a format string to generate a format string, and the only thing that gets substituted is the \"%d\". Try it at the interactive prompt:\n>>> print \"%%1.%df\" % 5\n'%1.5f'\n\nThe class FormatFloat doesn't define __repr__, __str__, or __unicode__ (the \"magic\" methods used for type coercion) so when you just print the value in an interactive console, you get the standard representation of instances. To get the string value, you would call the tostr() method (defined on the parent class):\n>>> ff = FormatFloat(.234324234325435)\n>>> ff.tostr()\n0.234\n\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"class",
"operators",
"python"
] |
stackoverflow_0001482383_class_operators_python.txt
|
Q:
Google App Engine - ReferenceProperty() gives error - Generic reference - Polymodel
Given a PolyModel in Google App Engine, like so:
from google.appengine.ext import db
from google.appengine.ext.db import polymodel
class Base(polymodel.PolyModel):
def add_to_referer(self):
Referer(target=self).put()
class Referer(db.Model):
target = db.ReferenceProperty()
@classmethod
def who_referred(cls):
for referer in Referer.all():
obj = referer.target
This last line is giving an error, like so:
No implementation for kind 'Base'
The traceback is like so:
>>> object = referer.target
/usr/local/google_appengine/google/appengine/ext/db/__init__.py in __get__:2804
/usr/local/google_appengine/google/appengine/ext/db/__init__.py in get:1179
/usr/local/google_appengine/google/appengine/ext/db/__init__.py in class_for_kind:220
Does anyone have any idea what's going on here? The expected behavior would be, obviously, that no error be thrown.
It may be relevant that Base and Referer are in separate files (and not imported).
This problem may be somewhat related to Python decorate a class to change parent object type, which is a question which still lurks in the back of my mind.
Thank you for reading.
A:
Ah. I answered this immediately after I posted:
The file with Referer needs to import Base.
Perhaps someone else will happen upon this quirk, so I'll leave this question open.
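A minimal sketch of the fix, with hypothetical module names (base.py defining Base, referer.py defining Referer); the import is what registers the 'Base' kind so the datastore can resolve it when dereferencing target:
# referer.py -- hypothetical module layout
from google.appengine.ext import db
from base import Base  # side effect: registers the polymodel kind 'Base'

class Referer(db.Model):
    target = db.ReferenceProperty()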
|
Google App Engine - ReferenceProperty() gives error - Generic reference - Polymodel
|
Given a PolyModel in Google App Engine, like so:
from google.appengine.ext import db
from google.appengine.ext.db import polymodel
class Base(polymodel.PolyModel):
def add_to_referer(self):
Referer(target=self).put()
class Referer(db.Model):
target = db.ReferenceProperty()
@classmethod
def who_referred(cls):
for referer in Referer.all():
obj = referer.target
This last line is giving an error, like so:
No implementation for kind 'Base'
The traceback is like so:
>>> object = referer.target
/usr/local/google_appengine/google/appengine/ext/db/__init__.py in __get__:2804
/usr/local/google_appengine/google/appengine/ext/db/__init__.py in get:1179
/usr/local/google_appengine/google/appengine/ext/db/__init__.py in class_for_kind:220
Does anyone have any idea what's going on here? The expected behavior would be, obviously, that no error be thrown.
It may be relevant that Base and Referer are in separate files (and not imported).
This problem may be somewhat related to Python decorate a class to change parent object type, which is a question which still lurks in the back of my mind.
Thank you for reading.
|
[
"Ah. I answered this immediately after I posted:\nThe file with Referer needs to import Base.\nPerhaps someone else will happen upon this quirk, so I'll leave this question open.\n"
] |
[
0
] |
[] |
[] |
[
"google_app_engine",
"python",
"referenceproperty"
] |
stackoverflow_0001482435_google_app_engine_python_referenceproperty.txt
|
Q:
What does it mean "weakly-referenced object no longer exists"?
I am running a Python code and I get the following error message:
Exception exceptions.ReferenceError: 'weakly-referenced object no longer exists' in <bound method crawler.__del__ of <searchengine.crawler instance at 0x2b8c1f99ef80>> ignored
Does anybody know what it means?
P.S.
This is the code which produces the error:
import sqlite
class crawler:
def __init__(self,dbname):
tmp = sqlite.connect(dbname)
self.con = tmp.cursor()
def __del__(self):
self.con.close()
crawler = crawler('searchindex.db')
A:
A normal AKA strong reference is one that keeps the referred-to object alive: in CPython, each object keeps the number of (normal) references to it that exists (known as its "reference count" or RC) and goes away as soon as the RC reaches zero (occasional generational mark and sweep passes also garbage-collect "reference loops" once in a while).
When you don't want an object to stay alive just because another one refers to it, then you use a "weak reference", a special variety of reference that doesn't increment the RC; see the docs for details. Of course, since the referred-to object CAN go away if not otherwise referred to (the whole purpose of the weak ref rather than a normal one!-), the referring-to object needs to be warned if it tries to use an object that's gone away -- and that alert is given exactly by the exception you're seeing.
In your code...:
def __init__(self,dbname):
tmp = sqlite.connect(dbname)
self.con = tmp.cursor()
def __del__(self):
self.con.close()
tmp is a normal reference to the connection... but it's a local variable, so it goes away at the end of __init__. The (peculiarly named) cursor self.con stays, BUT it's internally implemented to only hold a WEAK ref to the connection, so the connection goes away when tmp does. So in __del__ the call to .close fails (since the cursor needs to use the connection in order to close itself).
Simplest solution is the following tiny change:
def __init__(self,dbname):
self.con = sqlite.connect(dbname)
self.cur = self.con.cursor()
def __del__(self):
self.cur.close()
self.con.close()
I've also taken the opportunity to use con for connection and cur for cursor, but Python won't mind if you're keen to swap those (you'll just leave readers perplexed).
A:
The code is referring to an instance which has already been garbage collected.
To avoid circular references you can use a weak reference which isn't enough to prevent garbage collection. In this case there is a weakref.proxy (http://docs.python.org/library/weakref.html#weakref.proxy) to a searchengine.crawler object.
A:
Weak references are a form of reference that does not prevent the garbage collector from disposing the referenced object. If you want to guarantee that the object will continue to exist, you should use a strong (normal) reference.
Otherwise, there is no guarantee that the object will or will not exist after all the normal references have gone out of scope.
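Here is a minimal, self-contained sketch that reproduces the exact error without sqlite:
import weakref

class Thing(object):
    pass

t = Thing()
p = weakref.proxy(t)  # p does not keep t alive
del t                 # last strong reference gone; t is collected
p.anything            # ReferenceError: weakly-referenced object no longer exists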
|
What does it mean "weakly-referenced object no longer exists"?
|
I am running a Python code and I get the following error message:
Exception exceptions.ReferenceError: 'weakly-referenced object no longer exists' in <bound method crawler.__del__ of <searchengine.crawler instance at 0x2b8c1f99ef80>> ignored
Does anybody know what it means?
P.S.
This is the code which produces the error:
import sqlite
class crawler:
def __init__(self,dbname):
tmp = sqlite.connect(dbname)
self.con = tmp.cursor()
def __del__(self):
self.con.close()
crawler = crawler('searchindex.db')
|
[
"A normal AKA strong reference is one that keeps the referred-to object alive: in CPython, each object keeps the number of (normal) references to it that exists (known as its \"reference count\" or RC) and goes away as soon as the RC reaches zero (occasional generational mark and sweep passes also garbage-collect \"reference loops\" once in a while).\nWhen you don't want an object to stay alive just because another one refers to it, then you use a \"weak reference\", a special variety of reference that doesn't increment the RC; see the docs for details. Of course, since the referred-to object CAN go away if not otherwise referred to (the whole purpose of the weak ref rather than a normal one!-), the referring-to object needs to be warned if it tries to use an object that's gone away -- and that alert is given exactly by the exception you're seeing.\nIn your code...:\ndef __init__(self,dbname):\n tmp = sqlite.connect(dbname)\n self.con = tmp.cursor()\n\ndef __del__(self):\n self.con.close()\n\ntmp is a normal reference to the connection... but it's a local variable, so it goes away at the end of __init__. The (peculiarly named) cursor self.con stays, BUT it's internally implemented to only hold a WEAK ref to the connection, so the connection goes away when tmp does. So in __del__ the call to .close fails (since the cursor needs to use the connection in order to close itself).\nSimplest solution is the following tiny change:\ndef __init__(self,dbname):\n self.con = sqlite.connect(dbname)\n self.cur = self.con.cursor()\n\ndef __del__(self):\n self.cur.close()\n self.con.close()\n\nI've also taken the opportunity to use con for connection and cur for cursor, but Python won't mind if you're keen to swap those (you'll just leave readers perplexed).\n",
"The code is referring to an instance which has already been garbage collected.\nTo avoid circular references you can use a weak reference which isn't enough to prevent garbage collection. In this case there is a weakref.proxy (http://docs.python.org/library/weakref.html#weakref.proxy) to a searchengine.crawler object.\n",
"Weak references are a form of reference that does not prevent the garbage collector from disposing the referenced object. If you want to guarantee that the object will continue to exist, you should use a strong (normal) reference.\nOtherwise, there is no guarantee that the object will or will not exist after all the normal references have gone out of scope.\n"
] |
[
55,
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001482141_python.txt
|
Q:
In Python interpreter, return without " ' "
In Python, how do you return a variable like:
def function(x):
return x
Without the 'x' (') being around the x?
A:
In the Python interactive prompt, if you return a string, it will be displayed with quotes around it, mainly so that you know it's a string.
If you just print the string, it will not be shown with quotes (unless the string has quotes in it).
>>> 1 # just a number, so no quotes
1
>>> "hi" # just a string, displayed with quotes
'hi'
>>> print("hi") # being *printed* to the screen, so do not show quotes
hi
>>> "'hello'" # string with embedded single quotes
"'hello'"
>>> print("'hello'") # *printing* a string with embedded single quotes
'hello'
If you actually do need to remove leading/trailing quotation marks, use the .strip method of the string to remove single and/or double quotes:
>>> print("""'"hello"'""")
'"hello"'
>>> print("""'"hello"'""".strip('"\''))
hello
A:
Here's one way that will remove all the single quotes in a string.
def remove(x):
return x.replace("'", "")
Here's another alternative that will remove the first and last character.
def remove2(x):
return x[1:-1]
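Underlying both answers is the repr()/str() distinction: the interactive prompt echoes repr(value), while print shows str(value). A quick sketch:
>>> s = "hi"
>>> str(s)   # what print would show (the echo adds the outer quotes)
'hi'
>>> repr(s)  # what the bare prompt shows for s
"'hi'"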
|
In Python interpreter, return without " ' "
|
In Python, how do you return a variable like:
def function(x):
return x
Without the 'x' (') being around the x?
|
[
"In the Python interactive prompt, if you return a string, it will be displayed with quotes around it, mainly so that you know it's a string.\nIf you just print the string, it will not be shown with quotes (unless the string has quotes in it).\n>>> 1 # just a number, so no quotes\n1\n>>> \"hi\" # just a string, displayed with quotes\n'hi'\n>>> print(\"hi\") # being *printed* to the screen, so do not show quotes\nhi\n>>> \"'hello'\" # string with embedded single quotes\n\"'hello'\"\n>>> print(\"'hello'\") # *printing* a string with embedded single quotes\n'hello'\n\nIf you actually do need to remove leading/trailing quotation marks, use the .strip method of the string to remove single and/or double quotes:\n>>> print(\"\"\"'\"hello\"'\"\"\")\n'\"hello\"'\n>>> print(\"\"\"'\"hello\"'\"\"\".strip('\"\\''))\nhello\n\n",
"Here's one way that will remove all the single quotes in a string.\ndef remove(x):\n return x.replace(\"'\", \"\")\n\nHere's another alternative that will remove the first and last character.\ndef remove2(x):\n return x[1:-1]\n\n"
] |
[
45,
2
] |
[] |
[] |
[
"interpreter",
"python",
"read_eval_print_loop"
] |
stackoverflow_0001482649_interpreter_python_read_eval_print_loop.txt
|
Q:
`xrange(2**100)` -> OverflowError: long int too large to convert to int
The xrange function doesn't work for large integers:
>>> N = 10**100
>>> xrange(N)
Traceback (most recent call last):
...
OverflowError: long int too large to convert to int
>>> xrange(N, N+10)
Traceback (most recent call last):
...
OverflowError: long int too large to convert to int
Python 3.x:
>>> N = 10**100
>>> r = range(N)
>>> r = range(N, N+10)
>>> len(r)
10
Is there a backport of py3k builtin range() function for Python 2.x?
Edit
I'm looking for a complete implementation of "lazy" range(), not just a partial implementation of some of its functionality.
A:
I believe there is no backport (Py 3's completely removed the int/long distinction, after all, but in 2.* it's here to stay;-) but it's not hard to hack your own, e.g....:
import operator
def wowrange(start, stop, step=1):
if step == 0:
raise ValueError('step must be != 0')
elif step < 0:
proceed = operator.gt
else:
proceed = operator.lt
while proceed(start, stop):
yield start
start += step
Edit: it appears the OP doesn't just want looping (the normal purpose of xrange, and
range in Py3), but also len and the in operator (the latter does work on the above generator, but slowly -- optimizations are possible). For such richness a class
is better...:
import operator
class wowrange(object):
def __init__(self, start, stop=None, step=1):
if step == 0: raise ValueError('step must be != 0')
if stop is None: start, stop = 0, start
if step < 0:
self.proceed = operator.gt
self.l = (stop-start+step+1)//step
else:
self.proceed = operator.lt
self.l = (stop-start+step-1)//step
self.lo = min(start, stop)
self.start, self.stop, self.step = start, stop, step
def __iter__(self):
start = self.start
while self.proceed(start, self.stop):
yield start
start += self.step
def __len__(self):
return self.l
def __contains__(self, x):
if x == self.stop:
return False
if self.proceed(x, self.start):
return False
if self.proceed(self.stop, x):
return False
return (x-self.lo) % self.step == 0
I wouldn't be surprised if there's an off-by-one or similar glitch lurking here, but, I hope this helps!
Edit again: I see indexing is ALSO required. Is it just too hard to write your own __getitem__? I guess it is, so here it, too, is, served on a silver plate...:
def __getitem__(self, i):
if i < 0:
i += self.l
if i < 0: raise IndexError
        elif i >= self.l:
raise IndexError
return self.start + i * self.step
I don't know if 3.0 range supports slicing (xrange in recent 2.* releases doesn't -- it used to, but that was removed because the complication was ridiculous and prone to bugs), but I guess I do have to draw a line in the sand somewhere, so I'm not going to add it;-).
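A quick smoke test of the wowrange class above (a sketch, Python 2.x; mind the possible off-by-one mentioned earlier):
N = 10**100
r = wowrange(N, N + 10)
print len(r)        # 10
print (N + 3) in r  # True
print r[-1]         # 10**100 + 9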
A:
Okay, here's a go at a fuller reimplementation.
class MyXRange(object):
def __init__(self, a1, a2=None, step=1):
if step == 0:
raise ValueError("arg 3 must not be 0")
if a2 is None:
a1, a2 = 0, a1
if (a2 - a1) % step != 0:
a2 += step - (a2 - a1) % step
if cmp(a1, a2) != cmp(0, step):
a2 = a1
self.start, self.stop, self.step = a1, a2, step
def __iter__(self):
n = self.start
while cmp(n, self.stop) == cmp(0, self.step):
yield n
n += self.step
def __repr__(self):
return "MyXRange(%d,%d,%d)" % (self.start, self.stop, self.step)
# NB: len(self) will convert this to an int, and may fail
def __len__(self):
return (self.stop - self.start)//(self.step)
def __getitem__(self, key):
if key < 0:
key = self.__len__() + key
if key < 0:
raise IndexError("list index out of range")
return self[key]
n = self.start + self.step*key
if cmp(n, self.stop) != cmp(0, self.step):
raise IndexError("list index out of range")
return n
def __reversed__(self):
return MyXRange(self.stop-self.step, self.start-self.step, -self.step)
def __contains__(self, val):
if val == self.start: return cmp(0, self.step) == cmp(self.start, self.stop)
if cmp(self.start, val) != cmp(0, self.step): return False
if cmp(val, self.stop) != cmp(0, self.step): return False
return (val - self.start) % self.step == 0
And some testing:
def testMyXRange(testsize=10):
def normexcept(f,args):
try:
r = [f(args)]
except Exception, e:
r = type(e)
return r
for i in range(-testsize,testsize+1):
for j in range(-testsize,testsize+1):
print i, j
for k in range(-9, 10, 2):
r, mr = range(i,j,k), MyXRange(i,j,k)
if r != list(mr):
print "iter fail: %d, %d, %d" % (i,j,k)
if list(reversed(r)) != list(reversed(mr)):
print "reversed fail: %d, %d, %d" % (i,j,k)
if len(r) != len(mr):
print "len fail: %d, %d, %d" % (i,j,k)
z = [m for m in range(-testsize*2,testsize*2+1)
if (m in r) != (m in mr)]
if z != []:
print "contains fail: %d, %d, %d, %s" % (i,j,k,(z+["..."])[:10])
z = [m for m in range(-testsize*2, testsize*2+1)
if normexcept(r.__getitem__, m) != normexcept(mr.__getitem__, m)]
if z != []:
print "getitem fail: %d, %d, %d, %s" % (i,j,k,(z+["..."])[:10])
A:
From the docs:
Note
xrange() is intended to be simple and fast. Implementations may impose restrictions to achieve this. The C implementation of Python restricts all arguments to native C longs (“short” Python integers), and also requires that the number of elements fit in a native C long. If a larger range is needed, an alternate version can be crafted using the itertools module: islice(count(start, step), (stop-start+step-1)//step).
Alternatively reimplement xrange using generators:
def myxrange(a1, a2=None, step=1):
if a2 is None:
start, last = 0, a1
else:
start, last = a1, a2
while cmp(start, last) == cmp(0, step):
yield start
start += step
and
N = 10**100
len(list(myxrange(N, N+10)))
A:
Edit
Issue 1546078: "xrange that supports longs, etc." on the Python issue tracker contains a C patch and a pure Python implementation of an unlimited xrange written by Neal Norwitz (nnorwitz). See xrange.py
Edit
The latest version of irange (renamed as lrange) is at github.
Implementation based on py3k's rangeobject.c
irange.py
"""Define `irange.irange` class
`xrange`, py3k's `range` analog for large integers
See help(irange.irange)
>>> r = irange(2**100, 2**101, 2**100)
>>> len(r)
1
>>> for i in r:
... print i,
1267650600228229401496703205376
>>> for i in r:
... print i,
1267650600228229401496703205376
>>> 2**100 in r
True
>>> r[0], r[-1]
(1267650600228229401496703205376L, 1267650600228229401496703205376L)
>>> L = list(r)
>>> L2 = [1, 2, 3]
>>> L2[:] = r
>>> L == L2 == [2**100]
True
"""
def toindex(arg):
"""Convert `arg` to integer type that could be used as an index.
"""
if not any(isinstance(arg, cls) for cls in (long, int, bool)):
raise TypeError("'%s' object cannot be interpreted as an integer" % (
type(arg).__name__,))
return int(arg)
class irange(object):
"""irange([start,] stop[, step]) -> irange object
Return an iterator that generates the numbers in the range on demand.
Return `xrange` for small integers
Pure Python implementation of py3k's `range()`.
(I.e. it supports large integers)
If `xrange` and py3k `range()` differ then prefer `xrange`'s behaviour
Based on `[1]`_
.. [1] http://svn.python.org/view/python/branches/py3k/Objects/rangeobject.c?view=markup
>>> # on Python 2.6
>>> N = 10**80
>>> len(range(N, N+3))
3
>>> len(xrange(N, N+3))
Traceback (most recent call last):
...
OverflowError: long int too large to convert to int
>>> len(irange(N, N+3))
3
>>> xrange(N)
Traceback (most recent call last):
...
OverflowError: long int too large to convert to int
>>> irange(N).length() == N
True
"""
def __new__(cls, *args):
try: return xrange(*args) # use `xrange` for small integers
except OverflowError: pass
nargs = len(args)
if nargs == 1:
stop = toindex(args[0])
start = 0
step = 1
elif nargs in (2, 3):
start = toindex(args[0])
stop = toindex(args[1])
if nargs == 3:
step = args[2]
if step is None:
step = 1
step = toindex(step)
if step == 0:
raise ValueError("irange() arg 3 must not be zero")
else:
step = 1
else:
raise ValueError("irange(): wrong number of arguments," +
" got %s" % args)
r = super(irange, cls).__new__(cls)
r._start, r._stop, r._step = start, stop, step
return r
def length(self):
"""len(self) might throw OverflowError, this method shouldn't."""
if self._step > 0:
lo, hi = self._start, self._stop
step = self._step
else:
hi, lo = self._start, self._stop
step = -self._step
assert step
if lo >= hi:
return 0
else:
return (hi - lo - 1) // step + 1
__len__ = length
def __getitem__(self, i): # for L[:] = irange(..)
if i < 0:
i = i + self.length()
if i < 0 or i >= self.length():
raise IndexError("irange object index out of range")
return self._start + i * self._step
def __repr__(self):
if self._step == 1:
return "irange(%r, %r)" % (self._start, self._stop)
else:
return "irange(%r, %r, %r)" % (
self._start, self._stop, self._step)
def __contains__(self, ob):
if type(ob) not in (int, long, bool): # mimic py3k
# perform iterative search
return any(i == ob for i in self)
# if long or bool
if self._step > 0:
inrange = self._start <= ob < self._stop
else:
assert self._step
inrange = self._stop < ob <= self._start
if not inrange:
return False
else:
return ((ob - self._start) % self._step) == 0
def __iter__(self):
len_ = self.length()
i = 0
while i < len_:
yield self._start + i * self._step
i += 1
def __reversed__(self):
len_ = self.length()
new_start = self._start + (len_ - 1) * self._step
new_stop = self._start
if self._step > 0:
new_stop -= 1
else:
new_stop += 1
return irange(new_start, new_stop, -self._step)
test_irange.py
"""Unit-tests for irange.irange class.
Usage:
$ python -W error test_irange.py --with-doctest --doctest-tests
"""
import sys
from nose.tools import raises
from irange import irange
def eq_irange(a, b):
"""Assert that `a` equals `b`.
Where `a`, `b` are `irange` objects
"""
try:
assert a.length() == b.length()
assert a._start == b._start
assert a._stop == b._stop
assert a._step == b._step
if a.length() < 100:
assert list(a) == list(b)
try:
assert list(a) == range(a._start, a._stop, a._step)
except OverflowError:
pass
except AttributeError:
if type(a) == xrange:
assert len(a) == len(b)
if len(a) == 0: # empty xrange
return
if len(a) > 0:
assert a[0] == b[0]
if len(a) > 1:
a = irange(a[0], a[-1], a[1] - a[0])
b = irange(b[0], b[-1], b[1] - b[0])
eq_irange(a, b)
else:
raise
def _get_short_iranges_args():
# perl -E'local $,= q/ /; $n=100; for (1..20)
# > { say map {int(-$n + 2*$n*rand)} 0..int(3*rand) }'
input_args = """\
67
-11
51
-36
-15 38 19
43 -58 79
-91 -71
-56
3 51
-23 -63
-80 13 -30
24
-14 49
10 73
31
38 66
-22 20 -81
79 5 84
44
40 49
"""
return [[int(arg) for arg in line.split()]
for line in input_args.splitlines() if line.strip()]
def _get_iranges_args():
N = 2**100
return [(start, stop, step)
for start in range(-2*N, 2*N, N//2+1)
for stop in range(-4*N, 10*N, N+1)
for step in range(-N//2, N, N//8+1)]
def _get_short_iranges():
return [irange(*args) for args in _get_short_iranges_args()]
def _get_iranges():
return (_get_short_iranges() +
[irange(*args) for args in _get_iranges_args()])
@raises(TypeError)
def test_kwarg():
irange(stop=10)
@raises(TypeError, DeprecationWarning)
def test_float_stop():
irange(1.0)
@raises(TypeError, DeprecationWarning)
def test_float_step2():
irange(-1, 2, 1.0)
@raises(TypeError, DeprecationWarning)
def test_float_start():
irange(1.0, 2)
@raises(TypeError, DeprecationWarning)
def test_float_step():
irange(1, 2, 1.0)
@raises(TypeError)
def test_empty_args():
irange()
def test_empty_range():
for args in (
"-3",
"1 3 -1",
"1 1",
"1 1 1",
"-3 -4",
"-3 -2 -1",
"-3 -3 -1",
"-3 -3",
):
r = irange(*[int(a) for a in args.split()])
assert len(r) == 0
L = list(r)
assert len(L) == 0
def test_small_ints():
for args in _get_short_iranges_args():
ir, r = irange(*args), xrange(*args)
assert len(ir) == len(r)
assert list(ir) == list(r)
def test_big_ints():
N = 10**100
for args, len_ in [
[(N,), N],
[(N, N+10), 10],
[(N, N-10, -2), 5],
]:
try:
xrange(*args)
assert 0
except OverflowError:
pass
ir = irange(*args)
assert ir.length() == len_
try:
assert ir.length() == len(ir)
except OverflowError:
pass
#
ir[ir.length()-1]
#
if len(args) >= 2:
r = range(*args)
assert list(ir) == r
assert ir[ir.length()-1] == r[-1]
assert list(reversed(ir)) == list(reversed(r))
#
def test_negative_index():
assert irange(10)[-1] == 9
assert irange(2**100+1)[-1] == 2**100
def test_reversed():
for r in _get_iranges():
if type(r) == xrange: continue # known not to work for xrange
if r.length() > 1000: continue # skip long
assert list(reversed(reversed(r))) == list(r)
assert list(r) == range(r._start, r._stop, r._step)
def test_pickle():
import pickle
for r in _get_iranges():
rp = pickle.loads(pickle.dumps(r))
eq_irange(rp, r)
def test_equility():
for args in _get_iranges_args():
a, b = irange(*args), irange(*args)
assert a is not b
assert a != b
eq_irange(a, b)
def test_contains():
class IntSubclass(int):
pass
r10 = irange(10)
for i in range(10):
assert i in r10
assert IntSubclass(i) in r10
assert 10 not in r10
assert -1 not in r10
assert IntSubclass(10) not in r10
assert IntSubclass(-1) not in r10
def test_repr():
for r in _get_iranges():
eq_irange(eval(repr(r)), r)
def test_new():
assert repr(irange(True)) == repr(irange(1))
def test_overflow():
lo, hi = sys.maxint-2, sys.maxint+3
assert list(irange(lo, hi)) == list(range(lo, hi))
def test_getitem():
r = irange(sys.maxint-2, sys.maxint+3)
L = []
L[:] = r
assert len(L) == len(r)
assert L == list(r)
if __name__ == "__main__":
import nose
nose.main()
A:
Even if there was a backport, it would probably have to be modified. The underlying problem here is that in Python 2.x int and long are separate data types, even though ints get automatically upcast to longs as necessary. However, this doesn't necessarily happen in functions written in C, depending on how they're written.
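A two-line illustration of that automatic upcast at the 2.x prompt:
>>> import sys
>>> type(sys.maxint), type(sys.maxint + 1)
(<type 'int'>, <type 'long'>)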
|
`xrange(2**100)` -> OverflowError: long int too large to convert to int
|
The xrange function doesn't work for large integers:
>>> N = 10**100
>>> xrange(N)
Traceback (most recent call last):
...
OverflowError: long int too large to convert to int
>>> xrange(N, N+10)
Traceback (most recent call last):
...
OverflowError: long int too large to convert to int
Python 3.x:
>>> N = 10**100
>>> r = range(N)
>>> r = range(N, N+10)
>>> len(r)
10
Is there a backport of py3k builtin range() function for Python 2.x?
Edit
I'm looking for a complete implementation of "lazy" range(), not just a partial implementation of some of its functionality.
|
[
"I believe there is no backport (Py 3's completely removed the int/long distinction, after all, but in 2.* it's here to stay;-) but it's not hard to hack your own, e.g....:\nimport operator\n\ndef wowrange(start, stop, step=1):\n if step == 0:\n raise ValueError('step must be != 0')\n elif step < 0:\n proceed = operator.gt\n else:\n proceed = operator.lt\n while proceed(start, stop):\n yield start\n start += step\n\nEdit it appears the OP doesn't just want looping (the normal purpose of xrange, and \nrange in Py3), but also len and the in operator (the latter does work on the above generator, but slowly -- optimizations are possible). For such richness a class \nis better...:\nimport operator\n\nclass wowrange(object):\n def __init__(self, start, stop=None, step=1):\n if step == 0: raise ValueError('step must be != 0')\n if stop is None: start, stop = 0, start\n if step < 0:\n self.proceed = operator.gt\n self.l = (stop-start+step+1)//step\n else:\n self.proceed = operator.lt\n self.l = (stop-start+step-1)//step\n self.lo = min(start, stop)\n self.start, self.stop, self.step = start, stop, step\n def __iter__(self):\n start = self.start\n while self.proceed(start, self.stop):\n yield start\n start += self.step\n def __len__(self):\n return self.l\n def __contains__(self, x):\n if x == self.stop:\n return False\n if self.proceed(x, self.start):\n return False\n if self.proceed(self.stop, x):\n return False\n return (x-self.lo) % self.step == 0\n\nI wouldn't be surprised if there's an off-by-one or similar glitch lurking here, but, I hope this helps!\nEdit again: I see indexing is ALSO required. Is it just too hard to write your own __getitem__? I guess it is, so here it, too, is, served on a silver plate...:\n def __getitem__(self, i):\n if i < 0:\n i += self.l\n if i < 0: raise IndexError\n elif if i >= self.l:\n raise IndexError\n return self.start + i * self.step\n\nI don't know if 3.0 range supports slicing (xrange in recent 2.* releases doesn't -- it used to, but that was removed because the complication was ridiculous and prone to bugs), but I guess I do have to draw a line in the sand somewhere, so I'm not going to add it;-).\n",
"Okay, here's a go at a fuller reimplementation.\nclass MyXRange(object):\n def __init__(self, a1, a2=None, step=1):\n if step == 0:\n raise ValueError(\"arg 3 must not be 0\")\n if a2 is None:\n a1, a2 = 0, a1\n if (a2 - a1) % step != 0:\n a2 += step - (a2 - a1) % step\n if cmp(a1, a2) != cmp(0, step):\n a2 = a1\n self.start, self.stop, self.step = a1, a2, step\n\n def __iter__(self):\n n = self.start\n while cmp(n, self.stop) == cmp(0, self.step):\n yield n\n n += self.step\n\n def __repr__(self):\n return \"MyXRange(%d,%d,%d)\" % (self.start, self.stop, self.step)\n\n # NB: len(self) will convert this to an int, and may fail\n def __len__(self):\n return (self.stop - self.start)//(self.step)\n\n def __getitem__(self, key):\n if key < 0:\n key = self.__len__() + key\n if key < 0:\n raise IndexError(\"list index out of range\")\n return self[key]\n n = self.start + self.step*key\n if cmp(n, self.stop) != cmp(0, self.step):\n raise IndexError(\"list index out of range\")\n return n\n\n def __reversed__(self):\n return MyXRange(self.stop-self.step, self.start-self.step, -self.step)\n\n def __contains__(self, val):\n if val == self.start: return cmp(0, self.step) == cmp(self.start, self.stop)\n if cmp(self.start, val) != cmp(0, self.step): return False\n if cmp(val, self.stop) != cmp(0, self.step): return False\n return (val - self.start) % self.step == 0\n\nAnd some testing:\ndef testMyXRange(testsize=10):\n def normexcept(f,args):\n try:\n r = [f(args)]\n except Exception, e:\n r = type(e)\n return r\n\n for i in range(-testsize,testsize+1):\n for j in range(-testsize,testsize+1):\n print i, j\n for k in range(-9, 10, 2):\n r, mr = range(i,j,k), MyXRange(i,j,k)\n\n if r != list(mr):\n print \"iter fail: %d, %d, %d\" % (i,j,k)\n\n if list(reversed(r)) != list(reversed(mr)):\n print \"reversed fail: %d, %d, %d\" % (i,j,k)\n\n if len(r) != len(mr):\n print \"len fail: %d, %d, %d\" % (i,j,k)\n\n z = [m for m in range(-testsize*2,testsize*2+1)\n if (m in r) != (m in mr)]\n if z != []:\n print \"contains fail: %d, %d, %d, %s\" % (i,j,k,(z+[\"...\"])[:10])\n\n z = [m for m in range(-testsize*2, testsize*2+1) \n if normexcept(r.__getitem__, m) != normexcept(mr.__getitem__, m)]\n if z != []:\n print \"getitem fail: %d, %d, %d, %s\" % (i,j,k,(z+[\"...\"])[:10])\n\n",
"From the docs:\n\nNote\nxrange() is intended to be simple and fast. Implementations may impose restrictions to achieve this. The C implementation of Python restricts all arguments to native C longs (“short” Python integers), and also requires that the number of elements fit in a native C long. If a larger range is needed, an alternate version can be crafted using the itertools module: islice(count(start, step), (stop-start+step-1)//step).\n\nAlternatively reimplement xrange using generators:\ndef myxrange(a1, a2=None, step=1):\n if a2 is None:\n start, last = 0, a1\n else:\n start, last = a1, a2\n while cmp(start, last) == cmp(0, step):\n yield start\n start += step\n\nand\nN = 10**100\nlen(list(myxrange(N, N+10)))\n\n",
"Edit\nIssue 1546078: \"xrange that supports longs, etc\" on the Python issue tracker contains C patch and pure Python implementation of unlimited xrange written by Neal Norwitz (nnorwitz). See xrange.py\nEdit\nThe latest version of irange (renamed as lrange) is at github.\n\nImplementation based on py3k's rangeobject.c\nirange.py\n\"\"\"Define `irange.irange` class\n\n`xrange`, py3k's `range` analog for large integers\n\nSee help(irange.irange)\n\n>>> r = irange(2**100, 2**101, 2**100)\n>>> len(r)\n1\n>>> for i in r:\n... print i,\n1267650600228229401496703205376\n>>> for i in r:\n... print i,\n1267650600228229401496703205376\n>>> 2**100 in r\nTrue\n>>> r[0], r[-1]\n(1267650600228229401496703205376L, 1267650600228229401496703205376L)\n>>> L = list(r)\n>>> L2 = [1, 2, 3]\n>>> L2[:] = r\n>>> L == L2 == [2**100]\nTrue\n\"\"\"\n\n\ndef toindex(arg): \n \"\"\"Convert `arg` to integer type that could be used as an index.\n\n \"\"\"\n if not any(isinstance(arg, cls) for cls in (long, int, bool)):\n raise TypeError(\"'%s' object cannot be interpreted as an integer\" % (\n type(arg).__name__,))\n return int(arg)\n\n\nclass irange(object):\n \"\"\"irange([start,] stop[, step]) -> irange object\n\n Return an iterator that generates the numbers in the range on demand.\n Return `xrange` for small integers \n\n Pure Python implementation of py3k's `range()`.\n\n (I.e. it supports large integers)\n\n If `xrange` and py3k `range()` differ then prefer `xrange`'s behaviour\n\n Based on `[1]`_\n\n .. [1] http://svn.python.org/view/python/branches/py3k/Objects/rangeobject.c?view=markup\n\n >>> # on Python 2.6\n >>> N = 10**80\n >>> len(range(N, N+3))\n 3\n >>> len(xrange(N, N+3))\n Traceback (most recent call last):\n ...\n OverflowError: long int too large to convert to int\n >>> len(irange(N, N+3))\n 3\n >>> xrange(N)\n Traceback (most recent call last):\n ...\n OverflowError: long int too large to convert to int\n >>> irange(N).length() == N\n True\n \"\"\"\n def __new__(cls, *args):\n try: return xrange(*args) # use `xrange` for small integers\n except OverflowError: pass\n \n nargs = len(args)\n if nargs == 1:\n stop = toindex(args[0])\n start = 0\n step = 1\n elif nargs in (2, 3):\n start = toindex(args[0]) \n stop = toindex(args[1])\n if nargs == 3:\n step = args[2]\n if step is None: \n step = 1\n\n step = toindex(step)\n if step == 0:\n raise ValueError(\"irange() arg 3 must not be zero\")\n else:\n step = 1\n else:\n raise ValueError(\"irange(): wrong number of arguments,\" +\n \" got %s\" % args)\n \n r = super(irange, cls).__new__(cls)\n r._start, r._stop, r._step = start, stop, step\n return r\n\n def length(self):\n \"\"\"len(self) might throw OverflowError, this method shouldn't.\"\"\"\n if self._step > 0:\n lo, hi = self._start, self._stop\n step = self._step\n else:\n hi, lo = self._start, self._stop\n step = -self._step\n assert step\n\n if lo >= hi:\n return 0\n else:\n return (hi - lo - 1) // step + 1\n\n __len__ = length\n\n def __getitem__(self, i): # for L[:] = irange(..)\n if i < 0:\n i = i + self.length() \n if i < 0 or i >= self.length():\n raise IndexError(\"irange object index out of range\")\n\n return self._start + i * self._step\n\n def __repr__(self):\n if self._step == 1:\n return \"irange(%r, %r)\" % (self._start, self._stop)\n else:\n \n return \"irange(%r, %r, %r)\" % (\n self._start, self._stop, self._step)\n\n def __contains__(self, ob):\n if type(ob) not in (int, long, bool): # mimic py3k\n # perform iterative search\n return any(i == ob for i in self)\n\n # if long or 
bool\n if self._step > 0:\n inrange = self._start <= ob < self._stop\n else:\n assert self._step\n inrange = self._stop < ob <= self._start\n\n if not inrange:\n return False\n else:\n return ((ob - self._start) % self._step) == 0\n\n def __iter__(self):\n len_ = self.length()\n i = 0\n while i < len_:\n yield self._start + i * self._step\n i += 1\n\n def __reversed__(self):\n len_ = self.length()\n new_start = self._start + (len_ - 1) * self._step\n new_stop = self._start\n if self._step > 0:\n new_stop -= 1\n else:\n new_stop += 1\n return irange(new_start, new_stop, -self._step) \n\ntest_irange.py\n\"\"\"Unit-tests for irange.irange class.\n\nUsage:\n\n $ python -W error test_irange.py --with-doctest --doctest-tests\n\"\"\"\nimport sys\n\nfrom nose.tools import raises\n\nfrom irange import irange\n\n\ndef eq_irange(a, b):\n \"\"\"Assert that `a` equals `b`.\n\n Where `a`, `b` are `irange` objects\n \"\"\"\n try:\n assert a.length() == b.length()\n assert a._start == b._start\n assert a._stop == b._stop\n assert a._step == b._step\n if a.length() < 100:\n assert list(a) == list(b)\n try:\n assert list(a) == range(a._start, a._stop, a._step)\n except OverflowError:\n pass\n except AttributeError:\n if type(a) == xrange:\n assert len(a) == len(b)\n if len(a) == 0: # empty xrange\n return\n if len(a) > 0:\n assert a[0] == b[0]\n if len(a) > 1:\n a = irange(a[0], a[-1], a[1] - a[0])\n b = irange(b[0], b[-1], b[1] - b[0])\n eq_irange(a, b)\n else:\n raise\n\n\ndef _get_short_iranges_args():\n # perl -E'local $,= q/ /; $n=100; for (1..20)\n # > { say map {int(-$n + 2*$n*rand)} 0..int(3*rand) }'\n input_args = \"\"\"\\\n 67\n -11\n 51\n -36\n -15 38 19\n 43 -58 79\n -91 -71\n -56\n 3 51\n -23 -63\n -80 13 -30\n 24\n -14 49\n 10 73\n 31\n 38 66\n -22 20 -81\n 79 5 84\n 44\n 40 49\n \"\"\"\n return [[int(arg) for arg in line.split()]\n for line in input_args.splitlines() if line.strip()]\n\n\ndef _get_iranges_args():\n N = 2**100\n return [(start, stop, step)\n for start in range(-2*N, 2*N, N//2+1)\n for stop in range(-4*N, 10*N, N+1)\n for step in range(-N//2, N, N//8+1)]\n\n\n\ndef _get_short_iranges():\n return [irange(*args) for args in _get_short_iranges_args()]\n\n\ndef _get_iranges():\n return (_get_short_iranges() +\n [irange(*args) for args in _get_iranges_args()])\n\n\n@raises(TypeError)\ndef test_kwarg():\n irange(stop=10)\n\n\n@raises(TypeError, DeprecationWarning)\ndef test_float_stop():\n irange(1.0)\n\n\n@raises(TypeError, DeprecationWarning)\ndef test_float_step2():\n irange(-1, 2, 1.0)\n\n\n@raises(TypeError, DeprecationWarning)\ndef test_float_start():\n irange(1.0, 2)\n\n\n@raises(TypeError, DeprecationWarning)\ndef test_float_step():\n irange(1, 2, 1.0)\n\n\n@raises(TypeError)\ndef test_empty_args():\n irange()\n\n\ndef test_empty_range():\n for args in (\n \"-3\",\n \"1 3 -1\",\n \"1 1\",\n \"1 1 1\",\n \"-3 -4\",\n \"-3 -2 -1\",\n \"-3 -3 -1\",\n \"-3 -3\",\n ):\n r = irange(*[int(a) for a in args.split()])\n assert len(r) == 0\n L = list(r)\n assert len(L) == 0\n\n\ndef test_small_ints():\n for args in _get_short_iranges_args():\n ir, r = irange(*args), xrange(*args)\n assert len(ir) == len(r)\n assert list(ir) == list(r)\n\n\ndef test_big_ints():\n N = 10**100\n for args, len_ in [\n [(N,), N],\n [(N, N+10), 10],\n [(N, N-10, -2), 5],\n ]:\n try:\n xrange(*args)\n assert 0\n except OverflowError:\n pass\n\n ir = irange(*args)\n assert ir.length() == len_\n try:\n assert ir.length() == len(ir)\n except OverflowError:\n pass\n #\n ir[ir.length()-1]\n #\n if len(args) >= 2:\n 
r = range(*args)\n assert list(ir) == r\n assert ir[ir.length()-1] == r[-1]\n assert list(reversed(ir)) == list(reversed(r))\n #\n\n\ndef test_negative_index():\n assert irange(10)[-1] == 9\n assert irange(2**100+1)[-1] == 2**100\n\n\ndef test_reversed():\n for r in _get_iranges():\n if type(r) == xrange: continue # known not to work for xrange\n if r.length() > 1000: continue # skip long\n assert list(reversed(reversed(r))) == list(r)\n assert list(r) == range(r._start, r._stop, r._step)\n\n\ndef test_pickle():\n import pickle\n for r in _get_iranges():\n rp = pickle.loads(pickle.dumps(r))\n eq_irange(rp, r)\n\n\ndef test_equility():\n for args in _get_iranges_args():\n a, b = irange(*args), irange(*args)\n assert a is not b\n assert a != b \n eq_irange(a, b)\n\n\ndef test_contains():\n class IntSubclass(int):\n pass\n\n r10 = irange(10)\n for i in range(10):\n assert i in r10\n assert IntSubclass(i) in r10\n\n assert 10 not in r10\n assert -1 not in r10\n assert IntSubclass(10) not in r10\n assert IntSubclass(-1) not in r10\n\n\ndef test_repr():\n for r in _get_iranges():\n eq_irange(eval(repr(r)), r)\n\n\ndef test_new():\n assert repr(irange(True)) == repr(irange(1))\n\n\ndef test_overflow():\n lo, hi = sys.maxint-2, sys.maxint+3\n assert list(irange(lo, hi)) == list(range(lo, hi))\n\n\ndef test_getitem():\n r = irange(sys.maxint-2, sys.maxint+3)\n L = []\n L[:] = r\n assert len(L) == len(r)\n assert L == list(r)\n\n\nif __name__ == \"__main__\":\n import nose \n nose.main() \n\n",
"Even if there was a backport, it would probably have to be modified. The underlying problem here is that in Python 2.x int and long are separate data types, even though ints get automatically upcast to longs as necessary. However, this doesn't necessarily happen in functions written in C, depending on how they're written.\n"
] |
[
19,
11,
9,
3,
1
] |
[] |
[] |
[
"biginteger",
"python",
"python_3.x",
"range",
"xrange"
] |
stackoverflow_0001482480_biginteger_python_python_3.x_range_xrange.txt
|
Q:
Auto create next Key in python dictionary
Is there an easy way to create a dictionary like PHP's associative arrays? In PHP I can do:
> $x = 'a';
> while ($x < 'd') {
>     $arr[]['Letter'] = $x;
>     $x++;
> }
The interpreter automatically puts the next numeric key into the empty brackets "[]",
so I can access letter b via $arr[1]['Letter'], etc.
Is there a way to do the same with Python?
A:
Edit: I'm stupid. Of course there is a way of doing this in Python. Like so:
result = [{'Letter': chr(i+97)} for i in range(26)]
That will give you a list, which can be indexed with a number.
So result[1]['Letter'] will give you 'b'.
A:
Empty-brackets indexing is syntactically invalid in Python, but you could conceivably program a class that takes e.g. [None] as a signal to add one more key with a dict as its value and an incremented integer as the key. Before I make the substantial effort to program such a class, I'd love to understand what real problem you're trying to solve that, e.g. a collections.defaultdict wouldn't address.
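To make the collections.defaultdict suggestion concrete, here is a minimal sketch (Python 2.5+) that uses an explicit counter as the auto-generated key:
from collections import defaultdict

arr = defaultdict(dict)
for i, letter in enumerate('abc'):
    arr[i]['Letter'] = letter  # arr[i] springs into existence as {}

print arr[1]['Letter']  # b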
A:
In Python, lists and dictionaries are distinct types; PHP has one type to rule them all: the associative array.
I think what you want to do above translates into a list of dictionaries in Python, like this
x = [ dict(Letter=chr(i)) for i in range(ord('a'),ord('f')) ]
If you try this in the interpreter
>>> x = [ dict(Letter=chr(i)) for i in range(ord('a'),ord('f')) ]
>>> x
[{'Letter': 'a'}, {'Letter': 'b'}, {'Letter': 'c'}, {'Letter': 'd'}, {'Letter': 'e'}]
>>> x[0]
{'Letter': 'a'}
>>> x[1]
{'Letter': 'b'}
>>> x[1]['Letter']
'b'
>>>
Or if you prefer it written out in full without a list comprehension
x = []
for c in range(ord('a'),ord('f')):
d = { 'Letter': chr(c) }
x.append(d)
A:
As already mentioned, Python doesn't support this by default.
You can achieve that effect by using the length of the dictionary, like this:
>>> arr = {'letter':{1:'a'}}
>>> arr
{'letter': {1: 'a'}}
>>> arr['letter'][len(arr['letter'])+1] = 'b'
>>> arr['letter'][len(arr['letter'])+1] = 'c'
>>> arr['letter'][len(arr['letter'])+1] = 'd'
>>> arr
{'letter': {1: 'a', 2: 'b', 3: 'c', 4: 'd'}}
>>>
|
Auto create next Key in python dictionary
|
Is there an easy way to create a dictionary like PHP's associative arrays? In PHP I can do:
> $x = 'a';
> while ($x < 'd') {
>     $arr[]['Letter'] = $x;
>     $x++;
> }
The interpreter automatically puts the next numeric key into the empty brackets "[]",
so I can access letter b via $arr[1]['Letter'], etc.
Is there a way to do the same with Python?
|
[
"Edit: I'm stupid. Of course there is a way of doing this in Python. Like so:\nresult = [{'Letter': chr(i+97)} for i in range(26)]\n\nThat will give you a List. Which can be indexed with a number.\nSo result[1]['Letter'] will give you 'b'.\n",
"Empty-brackets indexing is syntactically invalid in Python, but you could conceivably program a class that takes e.g. [None] as a signal to add one more key with a dict as its value and an incremented integer as the key. Before I make the substantial effort to program such a class, I'd love to understand what real problem you're trying to solve that, e.g. a collections.defaultdict wouldn't address.\n",
"In python lists and dictionaries are distinct types. PHP has the one type to rule them all the associative array.\nI think what you want to do above translates into a list of dictionaries in python, like this\nx = [ dict(Letter=chr(i)) for i in range(ord('a'),ord('f')) ]\n\nIf you try this in the interpreter\n>>> x = [ dict(Letter=chr(i)) for i in range(ord('a'),ord('f')) ]\n>>> x\n[{'Letter': 'a'}, {'Letter': 'b'}, {'Letter': 'c'}, {'Letter': 'd'}, {'Letter': 'e'}]\n>>> x[0]\n{'Letter': 'a'}\n>>> x[1]\n{'Letter': 'b'}\n>>> x[1]['Letter']\n'b'\n>>>\n\nOr if you prefer it written out in full without a list comprehension\nx = []\nfor c in range(ord('a'),ord('f')):\n d = { 'Letter': chr(c) }\n x.append(d)\n\n",
"Already mentioned that Python doesn't support it by default. \nYou can achieve that effect by using the length of the dictionary, like this:\n>>> arr = {'letter':{1:'a'}}\n>>> arr\n{'letter': {1: 'a'}}\n>>> arr['letter'][len(arr['letter'])+1] = 'b'\n>>> arr['letter'][len(arr['letter'])+1] = 'c'\n>>> arr['letter'][len(arr['letter'])+1] = 'd'\n>>> arr\n{'letter': {1: 'a', 2: 'b', 3: 'c', 4: 'd'}}\n>>> \n\n"
] |
[
3,
3,
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001483058_python.txt
|
Q:
decipher with me this obfuscated MultiplierFactory
This week on comp.lang.python, an "interesting" piece of code was posted by Steven D'Aprano as a joke answer to a homework question. Here it is:
class MultiplierFactory(object):
def __init__(self, factor=1):
self.__factor = factor
@property
def factor(self):
return getattr(self, '_%s__factor' % self.__class__.__name__)
def __call__(self, factor=None):
if not factor is not None is True:
factor = self.factor
class Multiplier(object):
def __init__(self, factor=None):
self.__factor = factor
@property
def factor(self):
return getattr(self,
'_%s__factor' % self.__class__.__name__)
def __call__(self, n):
return self.factor*n
Multiplier.__init__.im_func.func_defaults = (factor,)
return Multiplier(factor)
twice = MultiplierFactory(2)()
We know that twice is an equivalent to the answer:
def twice(x):
return 2*x
From the names Multiplier and MultiplierFactory we get an idea of what the code is doing, but we're not sure of the exact internals. Let's simplify it first.
Logic
if not factor is not None is True:
factor = self.factor
not factor is not None is True is actually a comparison chain: it reads as not ((factor is not None) and (None is True)), and since None is True is always False, the test as written is always True. Because __call__ is only ever invoked here with factor=None, though, the net effect is the same as the far more readable factor is None. Result:
if factor is None:
factor = self.factor
Until now, that was easy :)
Attribute access
Another interesting point is the curious factor accessor.
def factor(self):
return getattr(self, '_%s__factor' % self.__class__.__name__)
During initialization of MultiplierFactory, self.__factor is set. But later on, the code accesses self.factor.
It then seems that:
getattr(self, '_%s__factor' % self.__class__.__name__)
Is doing exactly "self.__factor".
Can we always access attributes in this fashion?
def mygetattr(self, attr):
return getattr(self, '_%s%s' % (self.__class__.__name__, attr))
Dynamically changing function signatures
Anyway, at this point, here is the simplified code:
class MultiplierFactory(object):
def __init__(self, factor=1):
self.factor = factor
def __call__(self, factor=None):
if factor is None:
factor = self.factor
class Multiplier(object):
def __init__(self, factor=None):
self.factor = factor
def __call__(self, n):
return self.factor*n
Multiplier.__init__.im_func.func_defaults = (factor,)
return Multiplier(factor)
twice = MultiplierFactory(2)()
Code is almost clean now. The only puzzling line, maybe, would be:
Multiplier.__init__.im_func.func_defaults = (factor,)
What's in there? I looked at the datamodel doc, and found that func_defaults was "A tuple containing default argument values for those arguments that have defaults, or None if no arguments have a default value". Are we just changing the default value for factor argument in __init__ here? Resulting code would then be:
class MultiplierFactory(object):
def __init__(self, factor=1):
self.factor = factor
def __call__(self, factor=None):
if factor is None:
factor = self.factor
class Multiplier(object):
def __init__(self, innerfactor=factor):
self.factor = innerfactor
def __call__(self, n):
return self.factor*n
return Multiplier(factor)
twice = MultiplierFactory(2)()
Which means that dynamically setting the default value was just useless noise, since Multiplier is never instantiated without an explicit factor argument, right?
And we could probably simplify it to:
class MultiplierFactory(object):
def __init__(self, factor=1):
self.factor = factor
def __call__(self, factor=None):
if factor is None:
factor = self.factor
def my_multiplier(n):
return factor*n
return my_multiplier
twice = MultiplierFactory(2)() # similar to MultiplierFactory()(2)
Correct?
And for those hurrying to "this is not a real question"... read again, my questions are in bold+italic
A:
Q1. Can we always access attributes in this fashion?
A: No. It's only those attributes that start with double underscores. They get mangled in that way to prevent accidental access/overriding from outside the class.
Q2: Are we just changing the default value for factor argument in __init__ here?
A: Yes.
Q3: right?
Right.
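A quick interactive sketch of the name mangling Q1 refers to (the class and attribute names here are made up):
>>> class Spam(object):
...     def __init__(self):
...         self.__secret = 42        # stored as _Spam__secret
...
>>> s = Spam()
>>> s._Spam__secret                   # the mangled name works from outside
42
>>> s.__secret                        # mangling only happens inside class bodies
Traceback (most recent call last):
  ...
AttributeError: 'Spam' object has no attribute '__secret'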
|
decipher with me that obfuscated MultiplierFactory
|
This week on comp.lang.python, an "interesting" piece of code was posted by Steven D'Aprano as a joke answer to a homework question. Here it is:
class MultiplierFactory(object):
def __init__(self, factor=1):
self.__factor = factor
@property
def factor(self):
return getattr(self, '_%s__factor' % self.__class__.__name__)
def __call__(self, factor=None):
if not factor is not None is True:
factor = self.factor
class Multiplier(object):
def __init__(self, factor=None):
self.__factor = factor
@property
def factor(self):
return getattr(self,
'_%s__factor' % self.__class__.__name__)
def __call__(self, n):
return self.factor*n
Multiplier.__init__.im_func.func_defaults = (factor,)
return Multiplier(factor)
twice = MultiplierFactory(2)()
We know that twice is an equivalent to the answer:
def twice(x):
return 2*x
From the names Multiplier and MultiplierFactory we get an idea of what the code is doing, but we're not sure of the exact internals. Let's simplify it first.
Logic
if not factor is not None is True:
factor = self.factor
not factor is not None is True is equivalent to not factor is not None, which is also factor is None. Result:
if factor is None:
factor = self.factor
Until now, that was easy :)
Attribute access
Another interesting point is the curious factor accessor.
def factor(self):
return getattr(self, '_%s__factor' % self.__class__.__name__)
During initialization of MultiplierFactory, self.__factor is set. But later on, the code accesses self.factor.
It then seems that:
getattr(self, '_%s__factor' % self.__class__.__name__)
Is doing exactly "self.__factor".
Can we always access attributes in this fashion?
def mygetattr(self, attr):
return getattr(self, '_%s%s' % (self.__class__.__name__, attr))
Dynamically changing function signatures
Anyway, at this point, here is the simplified code:
class MultiplierFactory(object):
def __init__(self, factor=1):
self.factor = factor
def __call__(self, factor=None):
if factor is None:
factor = self.factor
class Multiplier(object):
def __init__(self, factor=None):
self.factor = factor
def __call__(self, n):
return self.factor*n
Multiplier.__init__.im_func.func_defaults = (factor,)
return Multiplier(factor)
twice = MultiplierFactory(2)()
Code is almost clean now. The only puzzling line, maybe, would be:
Multiplier.__init__.im_func.func_defaults = (factor,)
What's in there? I looked at the datamodel doc, and found that func_defaults was "A tuple containing default argument values for those arguments that have defaults, or None if no arguments have a default value". Are we just changing the default value for factor argument in __init__ here? Resulting code would then be:
class MultiplierFactory(object):
def __init__(self, factor=1):
self.factor = factor
def __call__(self, factor=None):
if factor is None:
factor = self.factor
class Multiplier(object):
def __init__(self, innerfactor=factor):
self.factor = innerfactor
def __call__(self, n):
return self.factor*n
return Multiplier(factor)
twice = MultiplierFactory(2)()
Which means that dynamically setting the default value was just useless noise, since Multiplier is never called without a default parameter, right?
And we could probably simplify it to:
class MultiplierFactory(object):
def __init__(self, factor=1):
self.factor = factor
def __call__(self, factor=None):
if factor is None:
factor = self.factor
def my_multiplier(n):
return factor*n
return my_multiplier
twice = MultiplierFactory(2)() # similar to MultiplierFactory()(2)
Correct?
And for those hurrying to "this is not a real question"... read again, my questions are in bold+italic
|
[
"Q1. Can we always access attributes in this fashion?\nA: No. It's only those attributes who start with double underscores. They get obfuscated in that way, to prevent accidental access/overriding from outside the class.\nQ2: Are we just changing the default value for factor argument in __init__ here?\nA: Yes.\nQ2: right?\nRight.\n"
] |
[
1
] |
[] |
[] |
[
"obfuscation",
"python",
"python_datamodel"
] |
stackoverflow_0001483085_obfuscation_python_python_datamodel.txt
|
Q:
regex for character appearing at most once
I want to check that a string contains the period, ".", at most once in Python.
A:
[^.]*\.?[^.]*$
And be sure to match, don't search
>>> dot = re.compile("[^.]*\.?[^.]*$")
>>> dot.match("fooooooooooooo.bar")
<_sre.SRE_Match object at 0xb7651838>
>>> dot.match("fooooooooooooo.bar.sad") is None
True
>>>
Edit:
If you consider only integers and decimals, it's even easier:
def valid(s):
return re.match('[0-9]+(\.[0-9]*)?$', s) is not None
assert valid("42")
assert valid("13.37")
assert valid("1.")
assert not valid("1.2.3.4")
assert not valid("abcd")
A:
No regexp is needed, see str.count():
str.count(sub[, start[, end]])
Return the number of non-overlapping occurrences of substring sub in the range [start, end]. Optional arguments start and end are interpreted as in slice notation.
>>> "A.B.C.D".count(".")
3
>>> "A/B.C/D".count(".")
1
>>> "A/B.C/D".count(".") == 1
True
>>>
A:
You can use:
re.search('^[^.]*\.?[^.]*$', 'this.is') != None
>>> re.search('^[^.]*\.?[^.]*$', 'thisis') != None
True
>>> re.search('^[^.]*\.?[^.]*$', 'this.is') != None
True
>>> re.search('^[^.]*\.?[^.]*$', 'this..is') != None
False
(Matches period zero or one times.)
A:
Since the period is a special char, it must be escaped. So "\.+" should work.
EDIT:
Use '?' instead of '+' to match one or zero repetitions.
Have a look at: re — Regular expression operations
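To make the escaping point concrete, a small sketch (an unescaped dot is a wildcard, an escaped one is a literal period):
import re
re.findall('.', 'a.b')      # ['a', '.', 'b'] -- '.' matches any character
re.findall(r'\.', 'a.b')    # ['.']           -- '\.' matches only the period
re.match(r'[^.]*\.?[^.]*$', 'a.b') is not None    # True: at most one period
re.match(r'[^.]*\.?[^.]*$', 'a.b.c') is not None  # False: two periods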
A:
If the period should exist only once in the entire string, then use the ? operator:
^[^.]*\.?[^.]*$
Breaking this down:
^ matches the beginning of the string
[^.] matches zero or more characters that are not periods
\.? matches the period character (must be escaped with \ as it's a reserved char) exactly 0 or 1 times
[^.]* is the same pattern used in 2 above
$ matches the end of the string
As an aside, personally I wouldn't use a regular expression for this (unless I was checking other aspects of the string for validity too). I would just use the count function.
A:
Why do you need to check? If you have a number in a string, I now guess you will want to handle it as a number soon. Perhaps you can do this without Looking Before You Leap:
try:
value = float(input_str)
except ValueError:
...
else:
...
|
regex for character appearing at most once
|
I want to check that a string contains the period, ".", at most once in Python.
|
[
"[^.]*\\.?[^.]*$\n\nAnd be sure to match, don't search\n>>> dot = re.compile(\"[^.]*\\.[^.]*$\")\n>>> dot.match(\"fooooooooooooo.bar\")\n<_sre.SRE_Match object at 0xb7651838>\n>>> dot.match(\"fooooooooooooo.bar.sad\") is None\nTrue\n>>>\n\nEdit:\nIf you consider only integers and decimals, it's even easier:\ndef valid(s):\n return re.match('[0-9]+(\\.[0-9]*)?$', s) is not None\n\nassert valid(\"42\")\nassert valid(\"13.37\")\nassert valid(\"1.\")\nassert not valid(\"1.2.3.4\")\nassert not valid(\"abcd\")\n\n",
"No regexp is needed, see str.count():\n\nstr.count(sub[, start[, end]])\nReturn the number of non-overlapping occurrences of substring sub in the range [start, end]. Optional arguments start and end are interpreted as in slice notation.\n\n>>> \"A.B.C.D\".count(\".\")\n3\n>>> \"A/B.C/D\".count(\".\")\n1\n>>> \"A/B.C/D\".count(\".\") == 1\nTrue\n>>> \n\n",
"You can use:\nre.search('^[^.]*\\.?[^.]*$', 'this.is') != None\n\n>>> re.search('^[^.]*\\.?[^.]*$', 'thisis') != None\nTrue\n>>> re.search('^[^.]*\\.?[^.]*$', 'this.is') != None\nTrue\n>>> re.search('^[^.]*\\.?[^.]*$', 'this..is') != None\nFalse\n\n(Matches period zero or one times.)\n",
"While period is special char it must be escaped. So \"\\.+\" should work.\nEDIT:\nUse '?' instead of '+' to match one or zero repetitions.\nHave a look at: re — Regular expression operations\n",
"If the period should exist only once in the entire string, then use the ? operator:\n^[^.]*\\.?[^.]*$\n\nBreaking this down:\n\n^ matches the beginning of the string\n[^.] matches zero or more characters that are not periods\n\\.? matches the period character (must be escaped with \\ as it's a reserved char) exactly 0 or 1 times\n[^.]* is the same pattern used in 2 above\n$ matches the end of the string\n\nAs an aside, personally I wouldn't use a regular expression for this (unless I was checking other aspects of the string for validity too). I would just use the count function.\n",
"Why do you need to check? If you have a number in a string, I now guess you will want to handle it as a number soon. Perhaps you can do this without Looking Before You Leap:\ntry:\n value = float(input_str)\nexcept ValueError:\n ...\nelse:\n ...\n\n"
] |
[
10,
5,
2,
0,
0,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001483108_python_regex.txt
|
Q:
How to split data into equal size packets having variable header size..
I am building a peer-to-peer application in Python. It's going to work over UDP. I have a function called getHeader(packetNo, totalPackets) which returns the header for that packet. Depending on the size of the header, I chop the data, attach it to the header, and end up with the same packet size.
The header size is not fixed, because different numbers of digits consume different lengths, e.g. I write the header for packetNo=1 as PACKET_NO=1, and its length will be different for packetNo 10, 100, etc.
I currently don't include the number of packets in the header, only the packet number. I want to include it, but how can I know the number of packets prior to computing the header size, given that the header should now contain the number of packets and NO_OF_PACKETS=--- can be of any length?
I can pass it through some function which will compute the number of packets, but that will be something like brute force and will consume unnecessary time and processing power. Is there an intelligent way to do it?
A:
Don't use plain-text. Make the packet's header two packed 4-byte (or 8-byte, depending on how many packets you expect) integers, e.g.
import struct
header = struct.pack('!II', packetNo, totalPackets)
Here's documentation for struct module.
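A round-trip sketch of that idea (the '!' prefix selects network byte order, so both ends agree on the layout):
import struct

header = struct.pack('!II', 7, 42)   # always exactly 8 bytes, for any values
packetNo, totalPackets = struct.unpack('!II', header)
# packetNo == 7, totalPackets == 42, and the payload starts at offset 8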
A:
Why not zero-pad your number of packets, so that the header becomes fixed. Say you want to support 1 billion packets in a message:
PACKET_NO=0000000001
is the same length as:
PACKET_NO=1000000000
Of course, this will create an upper bound on the possible number of packets, but there has to be some upper limit, no?
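For example, with ordinary string formatting (the width of 10 here is an arbitrary choice matching the 1-billion example above):
packet_no = 1
header = 'PACKET_NO=%010d' % packet_no   # 'PACKET_NO=0000000001'
# every header now has the same length, whatever the packet number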
|
How to split data into equal size packets having variable header size..
|
I am building a peer-to-peer application in Python. It's going to work over UDP. I have a function called getHeader(packetNo, totalPackets) which returns the header for that packet. Depending on the size of the header, I chop the data, attach it to the header, and end up with the same packet size.
The header size is not fixed, because different numbers of digits consume different lengths, e.g. I write the header for packetNo=1 as PACKET_NO=1, and its length will be different for packetNo 10, 100, etc.
I currently don't include the number of packets in the header, only the packet number. I want to include it, but how can I know the number of packets prior to computing the header size, given that the header should now contain the number of packets and NO_OF_PACKETS=--- can be of any length?
I can pass it through some function which will compute the number of packets, but that will be something like brute force and will consume unnecessary time and processing power. Is there an intelligent way to do it?
|
[
"Don't use plain-text. Make packet's header a two packed 4-byte (or 8-byte, depending on how many packets you expect) integers, e.g.\nimport struct\nheader = struct.pack('!II', packetNo, totalPackets)\n\nHere's documentation for struct module.\n",
"Why not zero-pad your number of packets, so that the header becomes fixed. Say you want to support 1 billion packets in a message:\nPACKET_NO=0000000001\n\nis the same length as:\nPACKET_NO=1000000000\n\nOf course, this will create an upper bound on the possible number of packets, but there has to be some upper limit, no?\n"
] |
[
2,
0
] |
[] |
[] |
[
"packets",
"python",
"sockets",
"udp"
] |
stackoverflow_0001483243_packets_python_sockets_udp.txt
|
Q:
How to except SyntaxError?
I would like to except the error the following code produces, but I don't know how.
from datetime import datetime
try:
date = datetime(2009, 12a, 31)
except:
print "error"
The code above is not printing "error". That's what I would like to be able to do.
edit: The reason I would like to check for syntax errors, is because 12a is a command line parameter.
Thanks.
A:
command-line "parameters" are strings. if your code is:
datetime(2009, '12a', 31)
it won't produce SyntaxError. It raises TypeError.
All command-line parameters need to be cleaned up first, before use in your code. For example, like this:
month = '12'
try:
month = int(month)
except ValueError:
print('bad argument for month')
raise
else:
if not 1<= month <= 12:
raise ValueError('month should be between 1 to 12')
A:
You can't catch syntax errors because the source must be valid before it can be executed. I am not quite sure why you can't simply fix the syntax error, but if the line with the datetime is generated automatically from user input (?) and you must be able to catch those errors, you might try:
try:
date = eval('datetime(2009, 12a, 31)')
except SyntaxError:
print 'error'
But that's still a horrible solution. Maybe you can tell us why you have to catch such syntax errors.
A:
If you want to check command-line parameters, you could also use argparse or optparse, they will handle the syntax check for you.
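For instance, a minimal optparse sketch (the option name is made up) that does the integer check for you:
from optparse import OptionParser

parser = OptionParser()
parser.add_option('-m', '--month', type='int', dest='month')
options, args = parser.parse_args()
# passing --month=12a now exits with "invalid integer value" instead of
# blowing up somewhere deep inside your code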
|
How to except SyntaxError?
|
I would like to except the error the following code produces, but I don't know how.
from datetime import datetime
try:
date = datetime(2009, 12a, 31)
except:
print "error"
The code above is not printing "error". That's what I would like to be able to do.
edit: The reason I would like to check for syntax errors, is because 12a is a command line parameter.
Thanks.
|
[
"command-line \"parameters\" are strings. if your code is:\ndatetime(2009, '12a', 31)\n\nit won't produce SyntaxError. It raises TypeError.\nAll command-line parameters are needed to be cleaned up first, before use in your code. for example like this:\nmonth = '12'\ntry:\n month = int(month)\nexcept ValueError:\n print('bad argument for month')\n raise\nelse:\n if not 1<= month <= 12:\n raise ValueError('month should be between 1 to 12')\n\n",
"You can't catch syntax errors because the source must be valid before it can be executed. I am not quite sure, why you can't simple fix the syntax error, but if the line with the datetime is generated automatically from user input (?) and you must be able to catch those errors, you might try:\ntry:\n date = eval('datetime(2009, 12a, 31)')\nexcept SyntaxError:\n print 'error'\n\nBut that's still a horrible solution. Maybe you can tell us why you have to catch such syntax errors.\n",
"If you want to check command-line parameters, you could also use argparse or optparse, they will handle the syntax check for you.\n"
] |
[
10,
4,
3
] |
[] |
[] |
[
"exception_handling",
"python"
] |
stackoverflow_0001483343_exception_handling_python.txt
|
Q:
Export QT Menu to text
I was wondering if there is a way to convert my QT (version 4.50) menu and all its submenus into a text document in something similar to the following format:
Menu 1
Sub Menu 2
Menu 2
sub menu 3
sub menu 4
sub menu 1
A:
You might want to look at QObject::findChildren, and use it like this:
submenus = mainwindow.menuBar().findChildren(QAction)
Depending on how you construct your menus you might have to use QMenu as a parameter as well.
Also depending on your usage some sub menus could have no parent (depending on how you constructed them or added them to the parent menu) and then findChildren won't find them.
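An alternative that sidesteps the parenting caveat is to walk the menu tree explicitly through QMenu.actions() -- a rough sketch (mainwindow stands for your QMainWindow; the indentation scheme is an arbitrary choice):
def dump_menu(menu, indent=0):
    lines = []
    for action in menu.actions():
        if action.isSeparator():
            continue
        lines.append('    ' * indent + unicode(action.text()))
        if action.menu():                 # this action opens a submenu
            lines.extend(dump_menu(action.menu(), indent + 1))
    return lines

print '\n'.join(dump_menu(mainwindow.menuBar()))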
|
Export QT Menu to text
|
I was wondering if there is a way to convert my QT (version 4.50) menu and all its submenus into a text document in something similar to the following format:
Menu 1
Sub Menu 2
Menu 2
sub menu 3
sub menu 4
sub menu 1
|
[
"You might want to look at QObject::findChildren, and use it like this:\nsubmenus = mainwindow.menuBar().findChildren(QAction)\n\nDepending on how you construct your menus you might have to use QMenu as a parameter aswell.\nAlso depending on your usage some sub menus could have no parent (depending on how you constructed them or added them to the parent menu) and then findChildren won't find them. \n"
] |
[
1
] |
[] |
[] |
[
"pyqt",
"python",
"qt"
] |
stackoverflow_0001481999_pyqt_python_qt.txt
|
Q:
Why can't I crop this image in Python PIL? (simple syntax problem?)
from PIL import Image
im = Image.open(f) #the size is 500x350
box = (0,0,100,100)
kay = im.crop(box)
It seems like there's nothing wrong with this, right?
That last line will result in an error and won't continue, but I don't know what the error is because it's AJAX and I can't debug ATM.
A:
If your controller is dealing with strings because the crop data is coming in via an ajax GET, it might be worth trying to make them into integers before applying the crop. Example from my terminal...
Trinity:~ kelvin$ python
Python 2.5.2 (r252:60911, Feb 22 2008, 07:57:53)
[GCC 4.0.1 (Apple Computer, Inc. build 5363)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from PIL import Image
>>> f = open("happy.jpg")
>>> im = Image.open(f)
>>> box = (0,0,100,100)
>>> kay = im.crop(box)
>>> kay
<PIL.Image._ImageCrop instance at 0xb1ea80>
>>> bad_box = ("0","0","100","100")
>>> nkay = im.crop(bad_box)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/PIL/Image.py", line 742, in crop
return _ImageCrop(self, box)
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/PIL/Image.py", line 1657, in __init__
self.size = x1-x0, y1-y0
TypeError: unsupported operand type(s) for -: 'str' and 'str'
>>>
A:
When you pick up the co-ordinates from the AJAX GET request they are strings; you have to parse them to int for the crop to succeed.
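For example, assuming the four values arrive as strings from the GET request:
box = tuple(int(v) for v in ('0', '0', '100', '100'))  # e.g. values from the request
kay = im.crop(box)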
A:
Try using integer coordinates instead of strings:
from PIL import Image
im = Image.open(f) #the size is 500x350
box = (0,0,100,100)
kay = im.crop(box)
|
Why can't I crop this image in Python PIL? (simple syntax problem?)
|
from PIL import Image
im = Image.open(f) #the size is 500x350
box = (0,0,100,100)
kay = im.crop(box)
It seems like there's nothing wrong with this, right?
That last line will result in an error and won't continue, but I don't know what the error is because it's AJAX and I can't debug ATM.
|
[
"If your controller is dealing with strings because the crop data is coming in via an ajax GET, it might be worth trying to make them into integers before applying the crop. Example from my terminal...\nTrinity:~ kelvin$ python\nPython 2.5.2 (r252:60911, Feb 22 2008, 07:57:53) \n[GCC 4.0.1 (Apple Computer, Inc. build 5363)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from PIL import Image\n>>> f = open(\"happy.jpg\")\n>>> im = Image.open(f)\n>>> box = (0,0,100,100)\n>>> kay = im.crop(box)\n>>> kay\n<PIL.Image._ImageCrop instance at 0xb1ea80>\n>>> bad_box = (\"0\",\"0\",\"100\",\"100\")\n>>> nkay = im.crop(bad_box)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/PIL/Image.py\", line 742, in crop\n return _ImageCrop(self, box)\n File \"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/PIL/Image.py\", line 1657, in __init__\n self.size = x1-x0, y1-y0\nTypeError: unsupported operand type(s) for -: 'str' and 'str'\n>>> \n\n",
"When you pick up the co-ordinates from the AJAX get request they are strings, you have to parse them to Int for the crop to succeed.\n",
"Try using integer coordinates instead of strings:\nfrom PIL import Image\n\nim = Image.open(f) #the size is 500x350 \nbox = (0,0,100,100) \nkay = im.crop(box)\n\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"image",
"python",
"python_imaging_library"
] |
stackoverflow_0001483393_image_python_python_imaging_library.txt
|
Q:
Grab an image via the web and save it with Python
I want to be able to download an image (to my computer or to a web server) resize it, and upload it to S3. The piece concerned here is:
What would be a recommended way to do the downloading portion within Python (i.e., don't want to use external tools, bash, etc). I want it to be stored into memory until it's done with (versus downloading the image to a local drive, and then working with it). Any help is much appreciated.
A:
urllib (simple but a bit rough) and urllib2 (powerful but a bit more complicated) are the recommended standard library modules for grabbing data from a URL (either to memory or to disk). For simple-enough needs, x=urllib.urlopen(theurl) will give you an object that lets you access the response headers (e.g. to find out the image's content-type) and data (as x.read()); urllib2 works similarly but lets you control proxying, user agent, cookies, https, authentication, etc, etc, much more than simple urllib does.
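A minimal in-memory sketch along those lines (theurl stands for whatever image URL you have):
import urllib

x = urllib.urlopen(theurl)
content_type = x.info().gettype()   # e.g. 'image/png'
data = x.read()                     # the raw image bytes, held in memory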
A:
Pycurl, urllib, and urllib2 are all options. Pycurl is a Python interface for libcurl, and urllib and urllib2 are both part of Python's standard library. urllib is simple, urllib2 is more powerful but also more complicated.
urllib example:
import urllib
image = urllib.URLopener()
image.urlretrieve("http://sstatic.net/so/img/logo.png")
In this case, the file is not stored in local memory, but rather as a temp file with a generated name.
A:
Consider:
import urllib
f = urllib.urlopen(url_of_image)
image = f.read()
http://docs.python.org/library/urllib.html
A:
You could always have a look at hand.
If I remember correctly, it was written to grab cartoons from sites that don't have feeds
This project seems to have died, so it's no longer an option. Anyway, using urllib seems to be what you're looking for.
|
Grab an image via the web and save it with Python
|
I want to be able to download an image (to my computer or to a web server) resize it, and upload it to S3. The piece concerned here is:
What would be a recommended way to do the downloading portion within Python (i.e., don't want to use external tools, bash, etc). I want it to be stored into memory until it's done with (versus downloading the image to a local drive, and then working with it). Any help is much appreciated.
|
[
"urllib (simple but a bit rough) and urllib2 (powerful but a bit more complicated) are the recommended standard library modules for grabbing data from a URL (either to memory or to disk). For simple-enough needs, x=urllib.urlopen(theurl) will give you an object that lets you access the response headers (e.g. to find out the image's content-type) and data (as x.read()); urllib2 works similarly but lets you control proxying, user agent, coockies, https, authentication, etc, etc, much more than simple urllib does.\n",
"Pycurl, urllib, and urllib2 are all options. Pycurl is a Python interface for libcurl, and urllib and urllib2 are both part of Python's standard library. urllib is simple, urllib2 is more powerful but also more complicated.\nurllib example:\nimport urllib\nimage = urllib.URLopener()\nimage.urlretrieve(\"http://sstatic.net/so/img/logo.png\")\n\nIn this case, the file is not stored in local memory, but rather as a temp file with a generated name.\n",
"Consider:\nimport urllib\nf = urllib.urlopen(url_of_image)\nimage = f.read()\n\nhttp://docs.python.org/library/urllib.html\n",
"You could always have a look at hand.\nIf I remember correctly, it was written to grab cartoons from sites that don't have feeds\nThis project seems to have died, so it's no longer an option. Anyway, using urllib seems to be what you're looking for.\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"image",
"python"
] |
stackoverflow_0001482600_image_python.txt
|
Q:
python + django, is it compiled code?
Just looking into python from a .net background.
Is python compiled like .net?
If yes, can it be obfuscated and is it more or less secure than .net compiled code that is obfuscated?
does pretty much every web host (unix) support django and python?
A:
There are many implementations of the Python language; the three that are certainly solid, mature and complete enough for production use are CPython, IronPython, and Jython. All of them are typically compiled to some form of bytecode, also known as intermediate code. The compilation from source to bytecode may take place as and when needed, but you can also do it in advance if you prefer; however Google App Engine, which lets you run small Python web apps, including Django, for free, as one of its limitations requires you to upload source and not compiled code (I know of no other host imposing the same limitation, but then I know of none giving you so many resources for free in exchange;-).
You might be most at home with IronPython, which is a Microsoft product (albeit, I believe, the first Microsoft product to be entirely open-source): in that case you can be certain that it is "compiled like .net", because it is (part of) .net (more precisely, .net and silverlight). Therefore it can be neither more nor less obfuscated and/or secure than .net (meaning, any other .net language).
Jython works on JVM, the Java Virtual Machine, much like IronPython works on Microsoft's Common Language Runtime, aka CLR. CPython has its own dedicated virtual machine.
For completeness, other implementations (not yet recommended for production use) include pypy, a highly flexible implementation that supports many possible back-ends (including but not limited to .net); unladen swallow, focused on evolving CPython to make it faster; pynie, a Python compiler to the Parrot virtual machine; wpython, a reimplementation based on "wordcode" instead of "bytecode"; and no doubt many, many others.
CPython, IronPython, Jython and pypy can all run Django (other implementations might also be complete enough for that, but I'm not certain).
A:
I don't know about the security part but.
Python is interpreted. Like PHP. (It's turned into bytecode which CPython reads)
Django is just a framework on top of Python.
Python can be compiled.
And no, not all hosts support python + django.
A:
You shouldn't have to worry about obfuscating your code, especially since it's going to run on your server.
You are not supposed to put your code in a public directory anyway. The right thing to do with django (as oposed to PHP) is to make the code accessible by the webserver, but not by the public.
And if your server's security has been breached, then you have other things to worry about...
A:
Obfuscation is false security. And the only thing worse than no security is false security. Why would you obfuscate a web app anyways?
Python is compiled to bytecode and run on a virtual machine, but usually distributed as source code.
Unless you really plan to run your webapp on "pretty much every web host" that question doesn't matter. There are many good hosts that support python and django.
A:
Code obfuscation in .NET is mostly a question of changing variable names to make it harder to understand the disassembled code. Yes, you can do those techniques with CPython too.
Now, why ever you would want to is another question completely. It doesn't actually provide you with any security, and does not prevent anybody from stealing your software.
|
python + django, is it compiled code?
|
Just looking into python from a .net background.
Is python compiled like .net?
If yes, can it be obfuscated and is it more or less secure than .net compiled code that is obfuscated?
does pretty much every web host (unix) support django and python?
|
[
"There are many implementations of the Python language; the three that are certainly solid, mature and complete enough for production use are CPython, IronPython, and Jython. All of them are typically compiled to some form of bytecode, also known as intermediate code. The compilation from source to bytecode may take place as and when needed, but you can also do it in advance if you prefer; however Google App Engine, which lets you run small Python web apps, including Django, for free, as one of its limitations requires you to upload source and not compiled code (I know of no other host imposing the same limitation, but then I know of none giving you so many resources for free in exchange;-).\nYou might be most at home with IronPython, which is a Microsoft product (albeit, I believe, the first Microsoft product to be entirely open-source): in that case you can be certain that it is \"compiled like .net\", because it is (part of) .net (more precisely, .net and silverlight). Therefore it cannot be neither more nor less obfuscated and/or secure than .net (meaning, any other .net language).\nJython works on JVM, the Java Virtual Machine, much like IronPython works on Microsoft's Common Language Runtime, aka CLR. CPython has its own dedicated virtual machine.\nFor completeness, other implementations (not yet recommended for production use) include pypy, a highly flexible implementation that supports many possible back-ends (including but not limited to .net); unladen swallow, focused on evolving CPython to make it faster; pynie, a Python compiler to the Parrot virtual machine; wpython, a reimplementation based on \"wordcode\" instead of \"bytecode\"; and no doubt many, many others.\nCPython, IronPython, Jython and pypy can all run Django (other implementations might also be complete enough for that, but I'm not certain).\n",
"I don't know about the security part but.\n\nPython is interpreted. Like PHP. (It's turned into bytecode which CPython reads)\nDjango is just a framework on top of Python.\nPython can be compiled.\nAnd no not all hosts support python + django.\n\n",
"You shouldn't have to worry about obfuscating your code, specially since it's going to run on your server.\nYou are not supposed to put your code in a public directory anyway. The right thing to do with django (as oposed to PHP) is to make the code accessible by the webserver, but not by the public.\nAnd if your server's security has been breached, then you have other things to worry about...\n",
"Obfuscation is false security. And the only thing worse than no security is false security. Why would you obfuscate a web app anyways?\nPython is compiled to bytecode and run on a virtual machine, but usually distributed as source code.\nUnless you really plan to run your webapp on \"pretty much every web host\" that question doesn't matter. There are many good hosts that support python and django.\n",
"Code obfuscation in .NET are mostly a question of changing variable names to make it harder to understand the disassembled code. Yes, you can do those techniques with CPython too.\nNow, why ever you would want to is another question completely. It doesn't actually provide you with any security, and does not prevent anybody from stealing your software.\n"
] |
[
11,
2,
2,
1,
0
] |
[
"The Python is interpreted language. But you can compile the python program into a Unix executable using Freeze.\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0001483685_python.txt
|
Q:
List of Lists in python?
I need a good function to do this in python.
def foo(n):
    # do something
return list_of_lists
>> foo(6)
[[1],
[2,3],
[4,5,6]]
>> foo(10)
[[1],
[2,3],
[4,5,6],
[7,8,9,10]]
A:
def foo(n):
lol = [ [] ]
i = 1
    for x in range(1, n + 1):
if len(lol[-1]) >= i:
i += 1
lol.append([])
lol[-1].append(x)
return lol
A:
def foo(n):
i = 1
while i <= n:
last = int(i * 1.5 + 1)
yield range(i, last)
i = last
list(foo(3))
What behavior do you expect when you use a number for n that doesn't work, like 9?
A:
Adapted from gs's answer but without the mysterious "1.5".
def foo(n):
i = c = 1
while i <= n:
yield range(i, i + c)
i += c
c += 1
list(foo(10))
A:
This is probably not a case where list comprehensions are appropriate, but I don't care!
from math import ceil, sqrt
def tri(n):
return n*(n+1) // 2
def irt(x):
return int(ceil((-1 + sqrt(1 + 8*x)) / 2))
def foo(n):
return [list(range(tri(i)+1, min(tri(i+1)+1, n+1))) for i in range(irt(n))]
A:
One more, just for fun:
def lol(n):
entries = range(1,n+1)
i, out = 1, []
while len(entries) > i:
out.append( [entries.pop(0) for x in xrange(i)] )
i += 1
return out + [entries]
(This doesn't rely on the underlying list having the numbers 1..n)
A:
Here's my python golf entry:
>>> def foo(n):
... def lower(i): return 1 + (i*(i-1)) // 2
... def upper(i): return i + lower(i)
... import math
... x = (math.sqrt(1 + 8*n) - 1) // 2
... return [list(range(lower(i), upper(i))) for i in range(1, x+1)]
...
>>>
>>> for i in [1,3,6,10,15]:
... print i, foo(i)
...
1 [[1]]
3 [[1], [2, 3]]
6 [[1], [2, 3], [4, 5, 6]]
10 [[1], [2, 3], [4, 5, 6], [7, 8, 9, 10]]
15 [[1], [2, 3], [4, 5, 6], [7, 8, 9, 10], [11, 12, 13, 14, 15]]
>>>
The calculation of x relies on solution of the quadratic equation with positive roots for
0 = y*y + y - 2*n
|
List of Lists in python?
|
I need a good function to do this in python.
def foo(n):
    # do something
return list_of_lists
>> foo(6)
[[1],
[2,3],
[4,5,6]]
>> foo(10)
[[1],
[2,3],
[4,5,6],
[7,8,9,10]]
|
[
"def foo(n):\n lol = [ [] ]\n i = 1\n for x in range(n):\n if len(lol[-1]) >= i:\n i += 1\n lol.append([])\n lol[-1].append(x)\n return lol\n\n",
"def foo(n):\n i = 1\n while i <= n:\n last = int(i * 1.5 + 1)\n yield range(i, last)\n i = last\n\nlist(foo(3))\n\nWhat behavior do you expect when you use a number for n that doesn't work, like 9?\n",
"Adapted from gs's answer but without the mysterious \"1.5\".\ndef foo(n):\n i = c = 1\n while i <= n:\n yield range(i, i + c)\n i += c\n c += 1\n\nlist(foo(10))\n\n",
"This is probably not a case where list comprehensions are appropriate, but I don't care!\nfrom math import ceil, sqrt, max\n\ndef tri(n):\n return n*(n+1) // 2\n\ndef irt(x):\n return int(ceil((-1 + sqrt(1 + 8*x)) / 2))\n\ndef foo(n):\n return [list(range(tri(i)+1, min(tri(i+1)+1, n+1))) for i in range(irt(n))]\n\n",
"One more, just for fun:\ndef lol(n):\n entries = range(1,n+1)\n i, out = 1, []\n while len(entries) > i:\n out.append( [entries.pop(0) for x in xrange(i)] )\n i += 1\n return out + [entries]\n\n(This doesn't rely on the underlying list having the numbers 1..n)\n",
"Here's my python golf entry:\n>>> def foo(n):\n... def lower(i): return 1 + (i*(i-1)) // 2\n... def upper(i): return i + lower(i)\n... import math\n... x = (math.sqrt(1 + 8*n) - 1) // 2\n... return [list(range(lower(i), upper(i))) for i in range(1, x+1)]\n...\n>>>\n>>> for i in [1,3,6,10,15]:\n... print i, foo(i)\n...\n1 [[1]]\n3 [[1], [2, 3]]\n6 [[1], [2, 3], [4, 5, 6]]\n10 [[1], [2, 3], [4, 5, 6], [7, 8, 9, 10]]\n15 [[1], [2, 3], [4, 5, 6], [7, 8, 9, 10], [11, 12, 13, 14, 15]]\n>>>\n\nThe calculation of x relies on solution of the quadratic equation with positive roots for\n0 = y*y + y - 2*n\n\n"
] |
[
9,
8,
5,
3,
1,
1
] |
[] |
[] |
[
"list",
"list_comprehension",
"python"
] |
stackoverflow_0001482967_list_list_comprehension_python.txt
|
Q:
Python Script to find instances of a set of strings in a set of files
I have a file which I use to centralize all strings used in my application. Let's call it Strings.txt:
TITLE="Title"
T_AND_C="Accept my terms and conditions please"
START_BUTTON="Start"
BACK_BUTTON="Back"
...
This helps me with I18n; the issue is that my application is now a lot larger and has evolved, so a lot of these strings are probably not used anymore. I want to eliminate the ones that have gone and tidy up the file.
I want to write a python script; using regular expressions I can get all of the string aliases, but how can I search all files in a Java package hierarchy for an instance of a string? If there is a reason to use perl or bash then let me know, as I can, but I'd prefer to stick to one scripting language.
Please ask for clarification if this doesn't make sense, hopefully this is straightforward, I just haven't used python much.
Thanks in advance,
Gav
A:
Assuming the files are of reasonable size (as source files will be) so you can easily read them in memory, and that you're looking for the parts in quotes right of the = signs:
import collections
files_by_str = collections.defaultdict(list)
thestrings = []
with open('Strings.txt') as f:
for line in f:
text = line.split('=', 1)[1]
text = text.strip().replace('"', '')
thestrings.append(text)
import os
for root, dirs, files in os.walk('/top/dir/of/interest'):
for name in files:
path = os.path.join(root, name)
with open(path) as f:
data = f.read()
for text in thestrings:
if text in data:
files_by_str[text].append(path)
break
This gives you a dict with the texts (those that are present in 1+ files, only), as keys, and lists of the paths to the files containing them as values. If you care only about a yes/no answer to the question "is this text present somewhere", and don't care where, you can save some memory by keeping only a set instead of the defaultdict; but I think that often knowing what files contained each text will be useful, so I suggest this more complete version.
A:
You might consider using ack.
% ack --java 'search_string'
This will search under the current directory.
A:
to parse your strings.txt you don't need regular expressions:
all_strings = [i.partition('=')[0] for i in open('strings.txt')]
to parse your source you could use the dumbest regex:
re.search(r'\bTITLE\b', source) # for each string in all_strings
to walk the source directory you could use os.walk.
Successful re.search would mean that you need to remove that string from the all_strings: you'll be left with strings that need to be removed from strings.txt.
A:
You should consider using YAML: easy to use, human readable.
A:
You are re-inventing gettext, the standard for translating programs in the Free Software sphere (even outside python).
Gettext works with, in principle, large files with strings like these :-). Helper programs exist to merge in new marked strings from the source into all translated versions, marking unused strings etc etc. Perhaps you should take a look at it.
|
Python Script to find instances of a set of strings in a set of files
|
I have a file which I use to centralize all strings used in my application. Let's call it Strings.txt:
TITLE="Title"
T_AND_C="Accept my terms and conditions please"
START_BUTTON="Start"
BACK_BUTTON="Back"
...
This helps me with I18n; the issue is that my application is now a lot larger and has evolved, so a lot of these strings are probably not used anymore. I want to eliminate the ones that have gone and tidy up the file.
I want to write a python script; using regular expressions I can get all of the string aliases, but how can I search all files in a Java package hierarchy for an instance of a string? If there is a reason to use perl or bash then let me know, as I can, but I'd prefer to stick to one scripting language.
Please ask for clarification if this doesn't make sense, hopefully this is straightforward, I just haven't used python much.
Thanks in advance,
Gav
|
[
"Assuming the files are of reasonable size (as source files will be) so you can easily read them in memory, and that you're looking for the parts in quotes right of the = signs:\nimport collections\nfiles_by_str = collections.defaultdict(list)\n\nthestrings = []\nwith open('Strings.txt') as f:\n for line in f:\n text = line.split('=', 1)[1]\n text = text.strip().replace('\"', '')\n thestrings.append(text)\n\nimport os\n\nfor root, dirs, files in os.walk('/top/dir/of/interest'):\n for name in files:\n path = os.path.join(root, name)\n with open(path) as f:\n data = f.read()\n for text in thestrings:\n if text in data:\n files_by_str[text].append(path)\n break\n\nThis gives you a dict with the texts (those that are present in 1+ files, only), as keys, and lists of the paths to the files containing them as values. If you care only about a yes/no answer to the question \"is this text present somewhere\", and don't care where, you can save some memory by keeping only a set instead of the defaultdict; but I think that often knowing what files contained each text will be useful, so I suggest this more complete version.\n",
"You might consider using ack.\n% ack --java 'search_string'\n\nThis will search under the current directory.\n",
"to parse your strings.txt you don't need regular expressions:\nall_strings = [i.partition('=')[0] for i in open('strings.txt')]\n\nto parse your source you could use the dumbest regex:\nre.search('\\bTITLE\\b', source) # for each string in all_strings\n\nto walk the source directory you could use os.walk.\nSuccessful re.search would mean that you need to remove that string from the all_strings: you'll be left with strings that needs to be removed from strings.txt.\n",
"You should consider using YAML: easy to use, human readable.\n",
"You are re-inventing gettext, the standard for translating programs in the Free Software sphere (even outside python).\nGettext works with, in principle, large files with strings like these :-). Helper programs exist to merge in new marked strings from the source into all translated versions, marking unused strings etc etc. Perhaps you should take a look at it.\n"
] |
[
4,
0,
0,
0,
0
] |
[] |
[] |
[
"find",
"internationalization",
"python"
] |
stackoverflow_0001483830_find_internationalization_python.txt
|
Q:
Get offset of current buffer in vim (in particular, via python scripting)
I want to get the offset of
the current cursor position
the current selection range
in vim, beginning from the start of the file. I do this in python, so hints on how to do it with vim's python scripting would be very helpful.
I have used vim.current.. before for doing scripting, but it uses lines and columns rather than a general offset.
Would I have to calculate the sum of all preceding line lengths + the current row, or is there a simpler method?
A:
If your vim is compiled with the +byte_offset option, then in a Python script after the usual import vim, you can use, e.g.:
vim.eval('line2byte(line("."))+col(".")')
to get the byte offset from start of file of the cursor position, and similarly for other marks. More generally, if you have a line/column pair this (assuming +byte_offset is how your vim was compiled with) is the way to get a byte offset (there's also a byte2line function to go the other way).
While the vim module does make a lot of functionality available directly to Python scripts in vim, I've found that vim.eval and vim.command are often the handiest (and sometimes the only;-) way to get in just as deep as needed;-). Oh, and I always try to have a vim compiled with +justabouteverything whenever I can;-).
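For the selection range, the same expression works with the visual marks '< and '> in place of the cursor (again assuming +byte_offset):
import vim
start = int(vim.eval("line2byte(line(\"'<\")) + col(\"'<\")"))
end = int(vim.eval("line2byte(line(\"'>\")) + col(\"'>\")"))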
A:
You may also want to look at the statusline setting. This will add the byte offset to the statusline:
set statusline+=%o
See :h statusline
Just be careful because the default statusline is blank, and by appending the %o to it, you lose all the defaults.
|
Get offset of current buffer in vim (in particular, via python scripting)
|
I want to get the offset of
the current cursor position
the current selection range
in vim, beginning from the start of the file. I do this in python, so hints on how to do it with vim's python scripting would be very helpful.
I have used vim.current.. before for doing scripting, but it uses lines and columns rather than a general offset.
Would I have to calculate the sum of all preceding line lengths + the current row, or is there a simpler method?
|
[
"If your vim is compiled with the +byte_offset option, then in a Python script after the usual import vim, you can use, e.g.:\nvim.eval('line2byte(line(\".\"))+col(\".\")')\n\nto get the byte offset from start of file of the cursor position, and similarly for other marks. More generally, if you have a line/column pair this (assuming +byte_offset is how your vim was compiled with) is the way to get a byte offset (there's also a byte2line function to go the other way).\nWhile the vim module does make a lot of functionality available directly to Python scripts in vim, I've found that vim.eval and vim.command are often the handiest (and sometimes the only;-) way to get in just as deep as needed;-). Oh, and I always try to have a vim compiled with +justabouteverything whenever I can;-).\n",
"You may also want to look at the statusline setting. This will add the bye offset to the statusline:\nset statusline+=%o\n\nSee :h statusline\nJust be careful because the default statusline is blank, and by appending the %o to it, you loose all the defaults.\n"
] |
[
16,
13
] |
[] |
[] |
[
"offset",
"python",
"vim"
] |
stackoverflow_0001483796_offset_python_vim.txt
|
Q:
python PIL - background displayed opaque instead of transparent
I want to generate 32x32 sized thumbnails from uploaded images (actually avatars).
To prevent a thumbnail from being smaller than that size, I want to create a transparent 32x32 background and paste the thumbnail on it.
The code below tries to do so. However, the avatar is displayed on a black and opaque background; I lose transparency information somewhere through the process. Where am I going wrong?
def handle_image(img):
size = SMALL_AVATAR_IMAGE_SIZE
img.thumbnail(size, Image.ANTIALIAS)
img = img.convert('RGBA')
background = Image.new('RGBA', size, (255, 255, 255, 0))
background.paste(img, (0, (size[1] - img.size[1]) / 2), img)
img = background
processed_image_small = ContentFile(img.tostring('jpeg', img.mode))
targetpath = str(self.user.id) + '_S' + '.jpg'
self.img_small.save(targetpath, processed_image_small,save=False)
A:
You're generating a JPG image. JPEGs don't support background transparency. You need to generate a PNG image to support transparencies.
A:
That is because JPEG cannot save the transparency information contained in an RGBA image. You may want to save the avatar to a format like PNG, which is able to keep this information.
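A sketch of how the handler from the question could be adapted, keeping its variable names (the image is saved into an in-memory buffer to get PNG bytes):
import StringIO

buf = StringIO.StringIO()
img.save(buf, 'PNG')                       # PNG keeps the alpha channel
processed_image_small = ContentFile(buf.getvalue())
targetpath = str(self.user.id) + '_S' + '.png'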
|
python PIL - background displayed opaque instead of transparent
|
I want to generate 32x32 sized thumbnails from uploaded images (actually avatars).
To prevent a thumbnail from being smaller than that size, I want to create a transparent 32x32 background and paste the thumbnail on it.
The code below tries to do so. However, the avatar is displayed on a black and opaque background; I lose transparency information somewhere through the process. Where am I going wrong?
def handle_image(img):
size = SMALL_AVATAR_IMAGE_SIZE
img.thumbnail(size, Image.ANTIALIAS)
img = img.convert('RGBA')
background = Image.new('RGBA', size, (255, 255, 255, 0))
background.paste(img, (0, (size[1] - img.size[1]) / 2), img)
img = background
processed_image_small = ContentFile(img.tostring('jpeg', img.mode))
targetpath = str(self.user.id) + '_S' + '.jpg'
self.img_small.save(targetpath, processed_image_small,save=False)
|
[
"You're generating a JPG image. JPEGs don't support background transparency. You need to generate a PNG image to support transparencies.\n",
"That is because JPEG cannot save transparency informations which are contained in a RGBA image. You may want to save the avatar to a format like PNG which is able to keep these informations.\n"
] |
[
5,
5
] |
[] |
[] |
[
"django",
"image",
"python"
] |
stackoverflow_0001484101_django_image_python.txt
|
Q:
Replace/delete field using sqlalchemy
Using postgres in python,
How do I replace all fields from the same column that match a specified value? For example, let's say I want to replace any fields that match "green" with "red" in the "Color" column.
How to delete all fields from the same column that match a specified value? For example, I'm trying to delete all fields that match "green" in the Color column.
A:
Ad1. You need something like this:
session.query(Foo).filter_by(color = 'green').update({ 'color': 'red' })
session.commit()
Ad2. Similarly:
session.query(Foo).filter_by(color = 'green').delete()
session.commit()
You can find the querying documentation here and here.
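For completeness, a sketch of the kind of mapped class and session the snippets above assume (all names are illustrative):
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    color = Column(String)

engine = create_engine('postgresql://user:password@localhost/mydb')
session = sessionmaker(bind=engine)()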
|
Replace/delete field using sqlalchemy
|
Using postgres in python,
How do I replace all fields from the same column that match a specified value? For example, let's say I want to replace any fields that match "green" with "red" in the "Color" column.
How to delete all fields from the same column that match a specified value? For example, I'm trying to delete all fields that match "green" in the Color column.
|
[
"Ad1. You need something like this:\nsession.query(Foo).filter_by(color = 'green').update({ 'color': 'red' })\nsession.commit()\n\nAd2. Similarly:\nsession.query(Foo).filter_by(color = 'green').delete()\nsession.commit()\n\nYou can find the querying documentation here and here.\n"
] |
[
9
] |
[] |
[] |
[
"postgresql",
"python",
"replace",
"sqlalchemy"
] |
stackoverflow_0001484235_postgresql_python_replace_sqlalchemy.txt
|
Q:
how % applies to this method in Python?
From my studying of python, I've found two uses for %. It can be used as what's called a modulo, meaning it will divide the value on its left by the value on its right and give back the remainder.
The other use is a string formatter. So I can do something like 'Hi there %s' % name, where name is a list of names.
Also, if you see %% in a string formatting, that means a literal % will be entered.
Here is my question, I found this:
class FormatFormatStr(FormatObj):
def __init__(self, fmt):
self.fmt = fmt
def tostr(self, x):
if x is None: return 'None'
return self.fmt%self.toval(x)
What does return self.fmt%self.toval(x) mean? It can't be a modulo because toval will give me a string. It's not really a string formatter because there isn't another percent sign.
also, related to this:
def csvformat_factory(format):
format = copy.deepcopy(format)
if isinstance(format, FormatFloat):
format.scale = 1. # override scaling for storage
format.fmt = '%r'
return format
What does the percent mean in format.fmt = '%r'? Does this mean to insert a string a la repr()? Or does it mean to insert what the variable r represents? r in this overall program also refers to a recarray.
Thanks everyone. Hope this makes sense =)
A:
The string % operator is simpler than you are imagining. It takes a string on the left side, and a variety of things on the right side. The left side doesn't have to be a literal string, it can be a variable, or the result of another computation. Any expression that results in a string is valid for the left side of the %.
In your first example, self.fmt is a string. In order to be useful in this context, it should have a percent sign in it.
In your second example, format.fmt is being set to a string that would be useful as the left side of the %. In this case, "%r" means, insert the repr() of the value into the string, as you have said.
A:
In
return self.fmt%self.toval(x)
self.fmt is a string, and that string presumably has a percent-sign placeholder in it.
%r in a format string is like %s but it prints the repr() of the string, so it'll have quotes and backslashes and all that.
% is just an operator which is just a method, and like any other method you can either pass in a literal value or a variable containing a value. In your examples they use a variable containing the format string.
A:
def tostr(self, x):
if x is None: return 'None'
return self.fmt%self.toval(x)
The % in this is a string formatter, definitely. Pass the tostr method a formatter, e.g. "%s" or "%r", to see what happens.
I think the '%r' in csvformat_factory is also a string formatter. '%r' means take the repr(), which is a reasonable way to display something to a user. I imagine that format.fmt is used elsewhere as format.fmt % somevalue.
A:
The code:
return self.fmt % self.toval(x)
Is the "string formatting" use of the % operator, just like you suspected.
The class is handed format, which is a string containing the formatting, and when tostr(x) is called, it will return the string % x.
This is just like using % directly, only with saving the format string for later. In other words, instead of doing:
"I want to print the number: %n" % 20
What's happening is:
format_str = "I want to print the number: %n"
x = 20
print format_str % x
Which is exactly the same thing.
A:
% has more than one use in string formatting. One use is in %s, %d, etc.
Another use is to separate 'string in which we use %d and %s' from int-value and string-value.
For example
'string in which we use %d and %s' % (17, 'blue')
would result in
'string in which we use 17 and blue'
we could store 'string in which we use %d and %s' in a variable,
a = 'string in which we use %d and %s'
then
a % (17, 'blue')
results in
'string in which we use 17 and blue'
In your example
self.fmt%self.toval(x)
self.fmt is similar to a above and self.toval(x) is (17, 'blue')
|
how % applies to this method in Python?
|
From my studying of python, I've found two uses for %. It can be used as what's called a modulo, meaning it will divide the value on its left by the value on its right and give back the remainder.
The other use is a string formatter. So I can do something like 'Hi there %s' % name, where name is a list of names.
Also, if you see %% in a string formatting, that means a literal % will be entered.
Here is my question, I found this:
class FormatFormatStr(FormatObj):
def __init__(self, fmt):
self.fmt = fmt
def tostr(self, x):
if x is None: return 'None'
return self.fmt%self.toval(x)
What does return self.fmt%self.toval(x) mean? It can't be a modulo because toval will give me a string. It's not really a string formatter because there isn't another percent sign.
also, related to this:
def csvformat_factory(format):
format = copy.deepcopy(format)
if isinstance(format, FormatFloat):
format.scale = 1. # override scaling for storage
format.fmt = '%r'
return format
What does the percent mean in format.fmt = '%r'? Does this mean to insert a string a la repr()? Or does it mean to insert what the variable r represents? r in this overall program also refers to a recarray.
Thanks everyone. Hope this makes sense =)
|
[
"The string % operator is simpler than you are imagining. It takes a string on the left side, and a variety of things on the right side. The left side doesn't have to be a literal string, it can be a variable, or the result of another computation. Any expression that results in a string is valid for the left side of the %.\nIn your first example, self.fmt is a string. In order to be useful in this context, it should have a percent sign in it.\nIn your second example, format.fmt is being set to a string that would be useful as the left side of the %. In this case, \"%r\" means, insert the repr() of the value into the string, as you have said.\n",
"In\nreturn self.fmt%self.toval(x)\n\nself.fmt is a string, and that string presumably has a percent-sign placeholder in it.\n%r in a format string is like %s but it prints the repr() of the string, so it'll have quotes and backslashes and all that.\n% is just an operator which is just a method, and like any other method you can either pass in a literal value or a variable containing a value. In your examples they use a variable containing the format string.\n",
"def tostr(self, x):\n if x is None: return 'None'\n return self.fmt%self.toval(x)\n\nThe % in this is a string formatter, definitely. Pass the tostr method a formatter, eg \"%s\" or \"%r\" to see what happens\nI think the '%r' in csvformat_factory is also a string formatter. '%r' means take the repr() which is a reasonable way to display something to a user. I imagine that format.fmt is used elsewhere format.fmt % somevalue.\n",
"The code:\n return self.fmt % self.toval(x)\nIs the \"string formatting\" use of the % operator, just like you suspected.\nThe class is handed format, which is a string containing the formatting, and when tostr(x) is called, it will return the string % x.\nThis is just like using % directly, only with saving the format string for later. In other words, instead of doing:\n\"I want to print the number: %n\" % 20\n\nWhat's happening is:\nformat_str = \"I want to print the number: %n\"\nx = 20\nprint format_str % x\n\nWhich is exactly the same thing.\n",
"% has more than one use in string formatting. One use is in %s, %d, etc.\nAnother use is to separate 'string in which we use %d and %s' from int-value and string-value.\nFor example \n'string in which we use %d and %s' % (17, 'blue')\n\nwould result in \n'string in which we use 17 and blue'\n\nwe could store 'string in which we use %d and %s' in a variable,\na = 'string in which we use %d and %s'\n\nthen\na % (17, 'blue')\n\nresults in \n'string in which we use 17 and blue'\n\nIn your example\n self.fmt%self.toval(x)\nself.fmt is similar to a above and self.toval(x) is (17, 'blue')\n"
] |
[
4,
3,
0,
0,
0
] |
[] |
[] |
[
"class",
"python",
"string_formatting"
] |
stackoverflow_0001484375_class_python_string_formatting.txt
|
Q:
How do I use PyMock and Nose with Django models?
I'm trying to do TDD with PyMock, but I keep getting an error when I use Nose and execute core.py from the command line:
"ERROR: Failure: ImportError (Settings cannot be imported, because environment variable DJA
NGO_SETTINGS_MODULE is undefined.)"
If I remove "from cms.models import Entry" from the unit test module I created, everything works fine, but I need to mock functionality in the Django module cms.models.Entry that I created.
What am I doing wrong? Can this be done?
A:
You do need DJANGO_SETTINGS_MODULE defined in order to run core.py -- why don't you just export DJANGO_SETTINGS_MODULE=whatever in your bash session before starting nose?
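For example -- a minimal sketch, where 'myproject.settings' is a placeholder for your actual settings module -- you can set the variable at the top of the test module before anything imports Django models, instead of exporting it in the shell:
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # placeholder path

from cms.models import Entry  # safe to import once the variable is set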
|
How do I use PyMock and Nose with Django models?
|
I'm trying to do TDD with PyMock, but I keep getting an error when I use Nose and execute core.py from the command line:
"ERROR: Failure: ImportError (Settings cannot be imported, because environment variable DJA
NGO_SETTINGS_MODULE is undefined.)"
If I remove "from cms.models import Entry" from the unit test module I created, everything works fine, but I need to mock functionality in the Django module cms.models.Entry that I created.
What am I doing wrong? Can this be done?
|
[
"You do need DJANGO_SETTINGS_MODULE defined in order to run core.py -- why don't you just export DJANGO_SETTINGS_MODULE=whatever in your bash session before starting nose?\n"
] |
[
4
] |
[] |
[] |
[
"django",
"nose",
"python",
"unit_testing"
] |
stackoverflow_0001484293_django_nose_python_unit_testing.txt
|
Q:
Ñ not displayed in Google App Engine website
I'm using Google App Engine to build a website and I'm having problems with special characters. I think I've reduced the problem to these two code samples:
request = urlfetch.fetch(
    url=self.WWW_INFO,
    payload=urllib.urlencode(inputs),
    method=urlfetch.POST,
    headers={'Content-Type': 'application/x-www-form-urlencoded'})
print request.content
The previous code displays the content just fine, showing the special characters. But, the correct way to use the framework to display something is using:
request = urlfetch.fetch(
    url=self.WWW_INFO,
    payload=urllib.urlencode(inputs),
    method=urlfetch.POST,
    headers={'Content-Type': 'application/x-www-form-urlencoded'})
self.response.out.write(request.content)
Which doesn't display the special characters, and instead just prints �. What should I do so it displays correctly?
I know I'm missing something, but I can't seem to grasp what it is. The website sets the <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">, and I've tried with charset=UTF-8 with no success.
I'll appreciate any advice that can point me in the right direction.
A:
You need to get the charset from the content-type header in the fetch's result, use it to decode the bytes into Unicode, then, on the response, set the header with your favorite encoding (I do suggest utf-8 -- no good reason to do otherwise) and emit the encoding of the Unicode text via that codec. The pass through unicode is not strictly needed (when you're doing nothing at all with the contents, just bouncing it right back to the response, you might use identical content-type and charset to what you received) but it's recommended on general grounds (use encoded byte strings only on input/output, always keep all text "within" your app as unicode).
IOW, your problem seems to be mostly that you're not setting headers correctly on the response.
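A minimal sketch of that flow (the charset parsing is deliberately naive, and the ISO-8859-1 fallback is just an assumption based on the page's meta tag):
result = urlfetch.fetch(
    url=self.WWW_INFO,
    payload=urllib.urlencode(inputs),
    method=urlfetch.POST,
    headers={'Content-Type': 'application/x-www-form-urlencoded'})
ctype = result.headers.get('content-type', '')
charset = 'ISO-8859-1'  # assumed fallback
if 'charset=' in ctype:
    charset = ctype.split('charset=')[-1].strip()
text = result.content.decode(charset)          # bytes -> unicode on input
self.response.headers['Content-Type'] = 'text/html; charset=utf-8'
self.response.out.write(text.encode('utf-8'))  # unicode -> bytes on output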
|
Ñ not displayed in Google App Engine website
|
I'm using Google App Engine to build a website and I'm having problems with special characters. I think I've reduced the problem to these two code samples:
request = urlfetch.fetch(
    url=self.WWW_INFO,
    payload=urllib.urlencode(inputs),
    method=urlfetch.POST,
    headers={'Content-Type': 'application/x-www-form-urlencoded'})
print request.content
The previous code displays the content just fine, showing the special characters. But, the correct way to use the framework to display something is using:
request = urlfetch.fetch(
    url=self.WWW_INFO,
    payload=urllib.urlencode(inputs),
    method=urlfetch.POST,
    headers={'Content-Type': 'application/x-www-form-urlencoded'})
self.response.out.write(request.content)
Which doesn't display the special characters, and instead just prints �. What should I do so it displays correctly?
I know I'm missing something, but I can't seem to grasp what it is. The website sets the <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">, and I've tried with charset=UTF-8 with no success.
I'll appreciate any advice that can point me in the right direction.
|
[
"You need to get the charset from the content-type header in the fetch's result, use it to decode the bytes into Unicode, then, on the response, set the header with your favorite encoding (I do suggest utf-8 -- no good reason to do otherwise) and emit the encoding of the Unicode text via that codec. The pass through unicode is not strictly needed (when you're doing nothing at all with the contents, just bouncing it right back to the response, you might use identical content-type and charset to what you received) but it's recommended on general grounds (use encoded byte strings only on input/output, always keep all text \"within\" your app as unicode).\nIOW, your problem seems to be mostly that you're not setting headers correctly on the response.\n"
] |
[
1
] |
[] |
[] |
[
"google_app_engine",
"python",
"unicode",
"utf_8"
] |
stackoverflow_0001484427_google_app_engine_python_unicode_utf_8.txt
|
Q:
Python process pool and scope
I am trying to run autogenerated code (which might potentially not terminate) in a loop, for genetic programming. I'm trying to use a multiprocessing pool for this, since I don't want the big performance overhead of creating a new process each time, and I can terminate the pool process if it runs too long (which I can't do with threads).
Essentially, my program is
if __name__ == '__main__':
    pool = Pool(processes=1)
    while ...:
        source = generate() #autogenerate code
        exec(source)
        print meth() # just a test, prints a result, since meth was defined in source
        result = pool.apply_async(meth)
        try:
            print result.get(timeout=3)
        except:
            pool.terminate()
This is the code that should work, but doesn't; instead I get
AttributeError: 'module' object has no attribute 'meth'
It seems that Pool only sees the meth method if it is defined at the very top level. Any suggestions on how to get it to run a dynamically created method?
Edit:
the problem is exactly the same with Process, i.e.
source = generated()
exec(source)

if __name__ == '__main__':
    p = Process(target = meth)
    p.start()
works, while
if __name__ == '__main__':
    source = generated()
    exec(source)
    p = Process(target = meth)
    p.start()
doesn't, and fails with an AttributeError
A:
Did you read the programming guidelines? There is lots of stuff in there about global variables. There are even more limitations under Windows. You don't say which platform you are running on, but this could be the problem if you are running under Windows. From the above link
Global variables
Bear in mind that if code run in a child process tries to access a global variable, then the value it sees (if any) may not be the same as the value in the parent process at the time that Process.start() was called.
However, global variables which are just module level constants cause no problems.
A:
Process (via pool or otherwise) won't have a __name__ of '__main__', so it will not execute anything that depends on that condition -- including the exec statements that you depend on in order to find your meth, of course.
Why are you so keen on having that exec guarded by a condition that, by design, IS going to be false in your sub-process, yet have that sub-process depend (contradictorily!) on the execution of that exec...?! It's really boggling my mind...
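A minimal sketch of the fix along those lines (assuming generate() from the question; note that on Windows the child process re-imports the module and so re-runs the exec, so the generated source must be stable for the child to see the same meth):
from multiprocessing import Pool

source = generate()  # generate() as in the question
exec(source)         # defines meth at module scope, visible to child processes

if __name__ == '__main__':
    pool = Pool(processes=1)
    result = pool.apply_async(meth)
    print result.get(timeout=3)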
A:
As I commented above, all your examples work as you expect on my Linux box (Debian Lenny, Python 2.5, processing 0.52; see test code below).
There seem to be many restrictions on Windows on the objects you can transmit from one process to another. Reading the doc pointed out by Nick, it seems that on Windows, where the OS lacks fork, it will start a brand-new Python interpreter, import the modules, and pickle/unpickle the objects that must be passed around. If they can't be pickled, I expect you'll get the kind of problem that occurred for you.
Hence a complete (non-)working example may be useful for diagnosis. The answer may be in the things you've hidden as irrelevant.
from processing import Pool
import os

def generated():
    return (
"""
def meth():
    import time
    starttime = time.time()
    pid = os.getpid()
    while 1:
        if time.time() - starttime > 1:
            print "process %s" % pid
            starttime = starttime + 1

""")


if __name__ == '__main__':
    pid = os.getpid()
    print "main pid=%s" % pid
    for n in range(5):
        source = generated() #autogenerate code
        exec(source)
        pool = Pool(processes=1)
        result = pool.apply_async(meth)
        try:
            print result.get(timeout=3)
        except:
            pool.terminate()
Another suggestion would be to use threads. Yes, you can, even if you don't know whether your generated code will stop, or if your generated code has differently nested loops. Loops are no restriction at all; that's precisely a point for using generators (extracting control flow). I do not see why it couldn't apply to what you are doing. [Agreed, it is probably more of a change than independent processes.] See the example below.
import time

class P(object):
    def __init__(self, name):
        self.name = name
        self.starttime = time.time()
        self.lastexecutiontime = self.starttime
        self.gen = self.run()

    def toolong(self):
        if time.time() - self.starttime > 10:
            print "process %s too long" % self.name
            return True
        return False

class P1(P):
    def run(self):
        for x in xrange(1000):
            for y in xrange(1000):
                for z in xrange(1000):
                    if time.time() - self.lastexecutiontime > 1:
                        print "process %s" % self.name
                        self.lastexecutiontime = self.lastexecutiontime + 1
                    yield
        self.result = self.name.upper()  # str has upper(), not uppercase()

class P2(P):
    def run(self):
        for x in range(10000000):
            if time.time() - self.lastexecutiontime > 1:
                print "process %s" % self.name
                self.lastexecutiontime = self.lastexecutiontime + 1
            yield
        self.result = self.name.capitalize()

pool = [P1('one'), P1('two'), P2('three')]
while len(pool) > 0:
    current = pool.pop()
    try:
        current.gen.next()
    except StopIteration:
        print "Thread %s ended. Result '%s'" % (current.name, current.result)
    else:
        if current.toolong():
            print "Forced end for thread %s" % current.name
        else:
            pool.insert(0, current)
|
Python process pool and scope
|
I am trying to run autogenerated code (which might potentially not terminate) in a loop, for genetic programming. I'm trying to use a multiprocessing pool for this, since I don't want the big performance overhead of creating a new process each time, and I can terminate the pool process if it runs too long (which I can't do with threads).
Essentially, my program is
if __name__ == '__main__':
    pool = Pool(processes=1)
    while ...:
        source = generate() #autogenerate code
        exec(source)
        print meth() # just a test, prints a result, since meth was defined in source
        result = pool.apply_async(meth)
        try:
            print result.get(timeout=3)
        except:
            pool.terminate()
This is the code that should work, but doesn't; instead I get
AttributeError: 'module' object has no attribute 'meth'
It seems that Pool only sees the meth method if it is defined at the very top level. Any suggestions on how to get it to run a dynamically created method?
Edit:
the problem is exactly the same with Process, i.e.
source = generated()
exec(source)

if __name__ == '__main__':
    p = Process(target = meth)
    p.start()
works, while
if __name__ == '__main__':
    source = generated()
    exec(source)
    p = Process(target = meth)
    p.start()
doesn't, and fails with an AttributeError
|
[
"Did you read the programming guidelines? There is lots of stuff in there about global variables. There are even more limitations under Windows. You don't say which platform you are running on, but this could be the problem if you are running under Windows. From the above link\n\nGlobal variables\nBear in mind that if code run in a child process tries to access a global variable, then the value it sees (if any) may not be the same as the value in the parent process at the time that Process.start() was called.\nHowever, global variables which are just module level constants cause no problems.\n\n",
"Process (via pool or otherwise) won't have a __name__ of '__main__', so it will not execute anything that depends on that condition -- including the exec statements that you depend on in order to find your meth, of course.\nWhy are you so keen on having that exec guarded by a condition that, by design, IS going to be false in your sub-process, yet have that sub-process depend (contradictorily!) on the execution of that exec...?! It's really boggling my mind...\n",
"As I commented above, all your examples are working as you expect on my Linux box (Debian Lenny, Python2.5, processing 0.52, see test code below).\nThere seems to be many restrictions on windows on objects you can transmit from one process to another. Reading the doc pointed out by Nick it seems that on window the os missing fork it will run a brand new python interpreter import modules and pickle/unpickle objects that should be passed around. If they can't be pickled I expect that you'll get the kind of problem that occured to you.\nHence a complete (not) working example may be usefull for diagnosis. The answer may be in the things you've hidden as irrelevant.\nfrom processing import Pool\nimport os\n\ndef generated():\n return (\n\"\"\"\ndef meth():\n import time\n starttime = time.time()\n pid = os.getpid()\n while 1:\n if time.time() - starttime > 1:\n print \"process %s\" % pid\n starttime = starttime + 1\n\n\"\"\")\n\n\nif __name__ == '__main__':\n pid = os.getpid()\n print \"main pid=%s\" % pid\n for n in range(5):\n source = generated() #autogenerate code\n exec(source)\n pool = Pool(processes=1) \n result = pool.apply_async(meth)\n try:\n print result.get(timeout=3) \n except:\n pool.terminate()\n\nAnother suggestion would be to use threads. yes you can even if you don't know if your generated code will stop or not or if your generated code have differently nested loops. Loops are no restriction at all, that's precisely a point for using generators (extracting control flow). I do not see why it couldn't apply to what you are doing. [Agreed it is probably more change that independent processes] See example below.\nimport time\n\nclass P(object):\n def __init__(self, name):\n self.name = name\n self.starttime = time.time()\n self.lastexecutiontime = self.starttime\n self.gen = self.run()\n\n def toolong(self):\n if time.time() - self.starttime > 10:\n print \"process %s too long\" % self.name\n return True\n return False\n\nclass P1(P):\n def run(self):\n for x in xrange(1000):\n for y in xrange(1000):\n for z in xrange(1000):\n if time.time() - self.lastexecutiontime > 1:\n print \"process %s\" % self.name\n self.lastexecutiontime = self.lastexecutiontime + 1\n yield\n self.result = self.name.uppercase()\n\nclass P2(P):\n def run(self):\n for x in range(10000000):\n if time.time() - self.lastexecutiontime > 1:\n print \"process %s\" % self.name\n self.lastexecutiontime = self.lastexecutiontime + 1\n yield\n self.result = self.name.capitalize()\n\npool = [P1('one'), P1('two'), P2('three')]\nwhile len(pool) > 0:\n current = pool.pop()\n try:\n current.gen.next()\n except StopIteration:\n print \"Thread %s ended. Result '%s'\" % (current.name, current.result) \n else:\n if current.toolong():\n print \"Forced end for thread %s\" % current.name \n else:\n pool.insert(0, current)\n\n"
] |
[
3,
2,
0
] |
[] |
[] |
[
"pool",
"python"
] |
stackoverflow_0001484310_pool_python.txt
|
Q:
how are these two variables unpacked?
Through tutorials I had learned that you can define two variables in the same statement, e.g.:
In [15]: a, b = 'hello', 'hi!'
In [16]: a
Out[16]: 'hello'
In [17]: b
Out[17]: 'hi!'
Well, how does that apply here?
fh, opened = cbook.to_filehandle(fname, 'w', return_opened = True)
I prodded further:
In [18]: fh
Out[18]: <open file 'attempt.csv', mode 'w' at 0xaac89d0>
In [19]: opened
Out[19]: True
my issue really comes with 'opened'. Normally, if two variables are being defined, there would be a comma and then whatever follows would define 'opened'. This is not the case here. Even with that issue looming, 'opened' is equal to True, which I assume is because of 'return_opened = True'. That's weird, because I don't remember any tutorial saying you could just add 'return_' before a variable to affect that variable.
I play with it some more and I change the True to False and I get this:
In [10]: fh, opened = cbook.to_filehandle(fname, 'w', return_opened = False)
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
/home/blahblahblah/Documents/Programming/EXERCISES/piece.py in <module>()
----> 1
2
3
4
5
IOError: [Errno 9] Bad file descriptor
so I guess it only works with True.
I guess I'd like if someone could explain what is going on here.
Gracias amigos!
here's the entire code:
if missingd is None:
    missingd = dict()

def with_mask(func):
    def newfunc(val, mask, mval):
        if mask:
            return mval
        else:
            return func(val)
    return newfunc

formatd = get_formatd(r, formatd)
funcs = []
for i, name in enumerate(r.dtype.names):
    funcs.append(with_mask(csvformat_factory(formatd[name]).tostr))

fh, opened = cbook.to_filehandle(fname, 'w', return_opened=True)
writer = csv.writer(fh, delimiter=delimiter)
header = r.dtype.names
writer.writerow(header)

# Our list of specials for missing values
mvals = []
for name in header:
    mvals.append(missingd.get(name, missing))

ismasked = False
if len(r):
    row = r[0]
    ismasked = hasattr(row, '_fieldmask')

for row in r:
    if ismasked:
        row, rowmask = row.item(), row._fieldmask.item()
    else:
        rowmask = [False] * len(row)
    writer.writerow([func(val, mask, mval) for func, val, mask, mval
                     in zip(funcs, row, rowmask, mvals)])
if opened:
    fh.close()
A:
My guess is that the function internally looks something like this:
def to_filehandle(filename, mode, return_opened=False):
    # do something to open the file and set opened
    # to True if it worked, False otherwise
    if return_opened:
        return the_filehandle, opened
    else:
        return the_filehandle
There is nothing special or magical about the return_opened keyword argument; it is simply changing the behavior of this particular function.
A:
With tuple assignment, the right side doesn't need to be an explicit tuple:
x = 1, 0
a, b = x
does the same thing as:
a, b = 1, 0
If a function returns a tuple, you can unpack it with tuple assignment:
def my_fn():
    return 1, 0

a, b = my_fn()
A:
As @dcrosta says, there is nothing magical about the variable names. To better see what's going on, try:
result = cbook.to_filehandle(fname, 'w', return_opened=True)
and examine result, type(result), etc: you'll see it's a tuple (could conceivably be a list or other sequence, but that's not likely) with exactly two items. result[0] will be an open file, result[1] will be a bool. Most likely, that's because function to_filehandle is coded with a return thefile, thebool-like statement, as dcrosta also surmises.
So, this part is "packing" -- two things are packed into one return value, making the latter a tuple with two items. The "unpacking" part is when you later do:
fh, opened = result
and the two-item sequence is unpacked into two variables. By doing the unpacking directly, you're just "cutting out the middleman", the variable I here named result (to make it easier for you to examine exactly what result comes from that function call, before it gets unpacked). If you know in advance you'll always get a 2-item sequence, and don't need the sequence as such but rather each item with a separate name, then you might as well unpack at once and save one "intermediate step" -- that's all there is to it!
A:
Unpacking is not a magic process: every tuple (for example, (1, 2, 3) is a tuple with three values) can be unpacked into that many values. But the tuple itself is also a value, so you can assign it to a variable or return it from a function:
x = (1, 2, 3)
a, b, c = x
# Or, for example :
def function_returning_a_tuple():
    return (True, False)

a, b = function_returning_a_tuple()
As you may now understand, cbook.to_filehandle is just a function returning a tuple of two values (file, opened). There is no magic behind it, and nothing about return_something parameters being handled differently.
A:
In Python, a function can return more than one value. In this case, the function 'cbook.to_filehandle' returns two values.
About the error, I think we cannot tell much about it until we know what 'cbook.to_filehandle' is supposed to do, or until we see its code.
|
how are these two variables unpacked?
|
Through tutorials I had learned that you can define two variables in the same statement, e.g.:
In [15]: a, b = 'hello', 'hi!'
In [16]: a
Out[16]: 'hello'
In [17]: b
Out[17]: 'hi!'
Well, how does that apply here?
fh, opened = cbook.to_filehandle(fname, 'w', return_opened = True)
I prodded further:
In [18]: fh
Out[18]: <open file 'attempt.csv', mode 'w' at 0xaac89d0>
In [19]: opened
Out[19]: True
my issue really comes with 'opened'. Normally, if two variables are being defined, there would be a comma and then whatever follows would define 'opened'. This is not the case here. Even with that issue looming, 'opened' is equal to True, which I assume is because of 'return_opened = True'. That's weird, because I don't remember any tutorial saying you could just add 'return_' before a variable to affect that variable.
I play with it some more and I change the True to False and I get this:
In [10]: fh, opened = cbook.to_filehandle(fname, 'w', return_opened = False)
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
/home/blahblahblah/Documents/Programming/EXERCISES/piece.py in <module>()
----> 1
2
3
4
5
IOError: [Errno 9] Bad file descriptor
so I guess it only works with True.
I guess I'd like if someone could explain what is going on here.
Gracias amigos!
here's the entire code:
if missingd is None:
    missingd = dict()

def with_mask(func):
    def newfunc(val, mask, mval):
        if mask:
            return mval
        else:
            return func(val)
    return newfunc

formatd = get_formatd(r, formatd)
funcs = []
for i, name in enumerate(r.dtype.names):
    funcs.append(with_mask(csvformat_factory(formatd[name]).tostr))

fh, opened = cbook.to_filehandle(fname, 'w', return_opened=True)
writer = csv.writer(fh, delimiter=delimiter)
header = r.dtype.names
writer.writerow(header)

# Our list of specials for missing values
mvals = []
for name in header:
    mvals.append(missingd.get(name, missing))

ismasked = False
if len(r):
    row = r[0]
    ismasked = hasattr(row, '_fieldmask')

for row in r:
    if ismasked:
        row, rowmask = row.item(), row._fieldmask.item()
    else:
        rowmask = [False] * len(row)
    writer.writerow([func(val, mask, mval) for func, val, mask, mval
                     in zip(funcs, row, rowmask, mvals)])
if opened:
    fh.close()
|
[
"My guess is that the function internally looks something like this:\ndef to_filehandle(filename, mode, return_opened=False):\n # do something to open the file and set opened\n # to True if it worked, False otherwise\n if return_opened:\n return the_filehandle, opened\n else:\n return the_filehandle\n\nThere is nothing special or magical about the return_opened keyword argument; it is simply changing the behavior of this particular function.\n",
"With tuple assignment, the right side doesn't need to be an explicit tuple:\nx = 1, 0\na, b = x\n\ndoes the same thing as:\na, b = 1, 0\n\nIf a function returns a tuple, you can unpack it with tuple assignment:\ndef my_fn():\n return 1, 0\n\na, b = my_fn()\n\n",
"As @dcrosta says, there is nothing magical about the variable names. To better see what's going on, try:\nresult = cbook.to_filehandle(fname, 'w', return_opened=True)\n\nand examine result, type(result), etc: you'll see it's a tuple (could conceivably be a list or other sequence, but that's not likely) with exactly two items. result[0] will be an open file, result[1] will be a bool. Most likely, that's because function to_filehandle is coded with a return thefile, thebool-like statement, as dcrosta also surmises.\nSo, this part is \"packing\" -- two things are packed into one return value, making the latter a tuple with two items. The \"unpacking\" part is when you later do:\nfh, opened = result\n\nand the two-item sequence is unpacked into two variables. By doing the unpacking directly, you're just \"cutting out the middleman\", the variable I here named result (to make it easier for you to examine exactly what result comes from that function call, before it gets unpacked). If you know in advance you'll always get a 2-item sequence, and don't need the sequence as such but rather each item with a separate name, then you might as well unpack at once and save one \"intermediate step\" -- that's all there is to it!\n",
"Unpacking is not a magic process : every tuple (for example, (1, 2, 3) is a tuple with three values) can be unpacked into three values. But the tuple itself is also a value, so you can assign it to a variable or return it from a function :\nx = (1, 2, 3)\na, b, c = x\n\n# Or, for example :\ndef function_returning_a_tuple():\n return (True, False)\n\na, b = function_returning_a_tuple()\n\nAs you may now understand, cbook.to_filehandle is only a function returning a tuple of two values (file, opened). There is no magic behind that, nothing about return_something parameters handled differently.\n",
"In Python, a function have more than one return value. In this case, the function 'cbook.to_filehandle' return the two value.\nAbout the error, I think we cannot tell much about it until we know what 'cbook.to_filehandle' supposes to do or see it code.\n"
] |
[
2,
2,
2,
1,
0
] |
[] |
[] |
[
"python",
"variables"
] |
stackoverflow_0001484748_python_variables.txt
|
Q:
GTK Twitter Client
I am learning Python and PyGTK.
I'm trying to write a Twitter client. Which widget is best suited for displaying the Tweets (Timeline). I can do it easily with textview but it doesn't support sub widgets to display users image.
Tried using TreeView but it seems to be an overkill and is too complex.
I'm using Glade
A:
You could try Webkit (the browser rendering engine) using pywebkitgtk. It lets you develop with web technologies (HTML, CSS, JS) on the desktop. I think Gwibber, the microblogging client, uses it.
The widget you'd have to use is webkit.WebView. I'm not able to post more links here, just google for "HOWTO Create Python GUIs using HTML".
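A bare-bones sketch of that approach (pywebkitgtk API; the HTML content is made up for illustration):
import gtk
import webkit

window = gtk.Window()
window.connect('destroy', gtk.main_quit)
view = webkit.WebView()
view.load_html_string('<img src="avatar.png"> <b>user</b>: hello timeline', 'file:///')
scrolled = gtk.ScrolledWindow()
scrolled.add(view)
window.add(scrolled)
window.show_all()
gtk.main()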
A:
I would suggest GtkTreeView, too. I agree it's kind of complicated (especially if you had time with GTK+ 1.2.x to get used to GtkCList, which is now deprecated). Still, it's a very powerful API and widget, and you will not regret learning it.
Trees are flexible and easy to use, and you will probably find more than one place where you can use one, so you will get a lot of use out of the learning.
There should be plenty of tutorials, showing the necessary steps.
A:
Actually the TextView does support embedding images and widgets within the text. Check out the PyGTK Tutorial section on TextView, specifically: Inserting Images and Widgets
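For instance, a minimal sketch along the lines of that tutorial section (a stock icon stands in for a user image):
import gtk

textview = gtk.TextView()
buf = textview.get_buffer()
anchor = buf.create_child_anchor(buf.get_end_iter())
avatar = gtk.image_new_from_stock(gtk.STOCK_ABOUT, gtk.ICON_SIZE_BUTTON)
textview.add_child_at_anchor(avatar, anchor)   # widget embedded in the text
buf.insert(buf.get_end_iter(), ' some tweet text')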
|
GTK Twitter Client
|
I am learning Python and PyGTK.
I'm trying to write a Twitter client. Which widget is best suited for displaying the Tweets (Timeline). I can do it easily with textview but it doesn't support sub widgets to display users image.
Tried using TreeView but it seems to be an overkill and is too complex.
I'm using Glade
|
[
"You could try Webkit (the browser rendering engine) using pywebkitgtk. It let's you develop in web technologies (HTML, CSS, JS) on the desktop. I think Gwibber, the microblogging client, uses it.\nThe widget you'd have to use is webkit.WebView. I'm not able to post more links here, just google for \"HOWTO Create Python GUIs using HTML\".\n",
"I would suggest GtkTreeView, too. I agree it's kind of complicated (especially if you had time with GTK+ 1.2.x to get used to GtkCList, which is now deprecated). Still, it's a very powerful API and widget, and you will not regret learning it.\nTrees are flexible and easy to use, and you will probably find more than once place where you can use one, so you will get a lot of use out of the learning.\nThere should be plenty of tutorials, showing the necessary steps.\n",
"Actually the TreeView does support embedding images and widgets within the text. Checkout the PyGTK Tutorial section on TextView specifically: Inserting Images and Widgets\n"
] |
[
5,
3,
1
] |
[] |
[] |
[
"gtk",
"pygtk",
"python"
] |
stackoverflow_0001392872_gtk_pygtk_python.txt
|
Q:
Djapian - filtering results
I use Djapian to search for objects by keywords, but I want to be able to filter the results. It would be nice to use Django's QuerySet API for this, for example:
if query.strip():
    results = Model.indexer.search(query).prefetch()
else:
    results = Model.objects.all()
results = results.filter(somefield__lt=somevalue)
return results
But Djapian returns a ResultSet of Hit objects, not Model objects. I can of course filter the objects "by hand", in Python, but it's not realistic in case of filtering all objects (when query is empty) - I would have to retrieve the whole table from database.
Am I out of luck with using Djapian for this?
A:
I went through its source and found that Djapian has a filter method that can be applied to its results. I have just tried the below code and it seems to be working.
My indexer is as follows:
class MarketIndexer( djapian.Indexer ):

    fields = [ 'name', 'description', 'tags_string', 'state']
    tags = [('state', 'state'),]
Here is how I filter results (never mind the first line that does stuff for wildcard usage):
objects = model.indexer.search(q_wc).flags(djapian.resultset.xapian.QueryParser.FLAG_WILDCARD).prefetch()
objects = objects.filter(state=1)
When executed, it now brings Markets that have their state equal to "1".
A:
I don't know Djapian, but I am familiar with Xapian. In Xapian you can filter the results with a MatchDecider.
The decision function of the match decider gets called on every document that matches the search criteria, so it's not a good idea to do a database query for every document here, but you can of course access the values of the document.
For example, at ubuntuusers.de we have a Xapian database that contains blog posts, forum posts, planet entries, wiki entries and so on, and each document in the Xapian database has some additional access information stored as a value. After the query, an AuthMatchDecider filters the potential documents and returns the filtered MSet, which is then displayed to the user.
If the decision procedure is as simple as somefield < somevalue, you could also simply add the value of somefield to the values of the document (using the sortable_serialize function provided by xapian) and add (using OP_FILTER) an OP_VALUE_RANGE query to the original query.
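A rough sketch of such a decider in the Python bindings (the value slot number is an assumption -- use whichever slot you stored the field in):
import xapian

class StateDecider(xapian.MatchDecider):
    def __call__(self, doc):
        # keep only documents whose stored value equals '1'
        return doc.get_value(0) == '1'

# passed as the match decider argument when fetching results, e.g.
# mset = enquire.get_mset(0, 10, None, StateDecider())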
|
Djapian - filtering results
|
I use Djapian to search for objects by keywords, but I want to be able to filter the results. It would be nice to use Django's QuerySet API for this, for example:
if query.strip():
    results = Model.indexer.search(query).prefetch()
else:
    results = Model.objects.all()
results = results.filter(somefield__lt=somevalue)
return results
But Djapian returns a ResultSet of Hit objects, not Model objects. I can of course filter the objects "by hand", in Python, but it's not realistic in case of filtering all objects (when query is empty) - I would have to retrieve the whole table from database.
Am I out of luck with using Djapian for this?
|
[
"I went through its source and found that Djapian has a filter method that can be applied to its results. I have just tried the below code and it seems to be working. \nMy indexer is as follows:\nclass MarketIndexer( djapian.Indexer ):\n\n fields = [ 'name', 'description', 'tags_string', 'state']\n tags = [('state', 'state'),]\n\nHere is how I filter results (never mind the first line that does stuff for wildcard usage):\nobjects = model.indexer.search(q_wc).flags(djapian.resultset.xapian.QueryParser.FLAG_WILDCARD).prefetch()\nobjects = objects.filter(state=1)\n\nWhen executed, it now brings Markets that have their state equal to \"1\".\n",
"I dont know Djapian, but i am familiar with xapian. In Xapian you can filter the results with a MatchDecider.\nThe decision function of the match decider gets called on every document which matches the search criteria so it's not a good idea to do a database query for every document here, but you can of course access the values of the document.\nFor example at ubuntuusers.de we have a xapian database which contains blog posts, forum posts, planet entries, wiki entries and so on and each document in the xapian database has some additional access information stored as value. After the query, an AuthMatchDecider filters the potential documents and returns the filtered MSet which are then displayed to the user.\nIf the decision procedure is as simple as somefield < somevalue, you could also simply add the value of somefield to the values of the document (using the sortable_serialize function provided by xapian) and add (using OP_FILTER) an OP_VALUE_RANGE query to the original query.\n"
] |
[
4,
0
] |
[] |
[] |
[
"django",
"full_text_search",
"python",
"search",
"xapian"
] |
stackoverflow_0001483874_django_full_text_search_python_search_xapian.txt
|
Q:
Piping output of subprocess.call to progress bar
I'm using growisofs to burn an iso through my Python application. I have two classes in two different files; GUI() (main.py) and Boxblaze() (core.py). GUI() builds the window and handles all the events and stuff, and Boxblaze() has all the methods that GUI() calls.
Now when the user has selected the device to burn with, and the file to be burned, I need to call a method that calls the following command:
growisofs -use-the-force-luke=dao -use-the-force-luke=break:1913760 -dvd-compat -speed=2 -Z /burner/device=/full/path/to.iso
This command should give an output similar to this:
Executing 'builtin_dd if=/home/nevon/games/Xbox 360 isos/The Godfather 2/alls-tgod2.iso of=/dev/scd0 obs=32k seek=0'
/dev/scd0: "Current Write Speed" is 2.5x1352KBps.
#more of the lines below, indicating progress.
7798128640/7835492352 (99.5%) @3.8x, remaining 0:06 RBU 100.0% UBU 99.8%
7815495680/7835492352 (99.7%) @3.8x, remaining 0:03 RBU 59.7% UBU 99.8%
7832862720/7835492352 (100.0%) @3.8x, remaining 0:00 RBU 7.9% UBU 99.8%
builtin_dd: 3825936*2KB out @ average 3.9x1352KBps
/dev/burner: flushing cache
/dev/burner: closing track
/dev/burner: closing disc
This command is run in a method called burn() in Boxblaze(). It looks simply like this:
def burn(self, file, device):
    subprocess.call(["growisofs", '-dry-run', "-use-the-force-luke=dao", "-use-the-force-luke=break:1913760", "-dvd-compat", "-speed=2", "-Z", device +'='+ file])
Now my questions are the following:
How can I get the progress from the output (the percentage in brackets) and have my progress bar be set to "follow" that progress? My progress bar is called in the GUI() class, as such:
get = builder.get_object
self.progress_window = get("progressWindow")
self.progressbar = get("progressbar")
Do I have to run this command in a separate thread in order for the GUI to remain responsive (so that I can update the progress bar and allow the user to cancel the burn if they want to)? If so, how can I do that and still be able to pass the progress to the progress bar?
The full code is available on Launchpad if you are interested. If you have bazaar installed, just run:
bzr branch lp:boxblaze
Oh, and in case you were wondering, this application is only meant to work in Linux - so don't worry about cross-platform compatibility.
A:
You can use glib.io_add_watch() to watch for output on the pipes connected to stdout and stderr in the subprocess object.
proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout_id = glib.io_add_watch(proc.stdout, glib.IO_IN|glib.IO_HUP, stdout_cb)
stderr_id = glib.io_add_watch(proc.stderr, glib.IO_IN|glib.IO_HUP, stderr_cb)
Then when the callback is called it should check the condition, read all the data from the pipe, and process it to get the info to update the ProgressBar. If the app buffers I/O, then you may have to use a pty to fool it into thinking it's connected to a terminal so it will output a line at a time.
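A sketch of what such a callback might look like (progressbar and the regex are assumptions based on growisofs's "(99.5%)" output; glib as imported above):
import re

def stdout_cb(fd, condition):
    if condition & glib.IO_IN:
        line = fd.readline()
        match = re.search(r'\((\d+(?:\.\d+)?)%\)', line)
        if match:
            progressbar.set_fraction(float(match.group(1)) / 100.0)
    # returning False removes the watch once the pipe hangs up
    return not (condition & glib.IO_HUP)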
A:
To get the output you need to use the subprocess.Popen call. (stdout = subprocess.PIPE)
(Second Question)
You probably need a separate thread, unless the GUI framework can select on a filedescriptor in the normal loop.
You can have a background thread read the pipe, process it (to extract the progress), the pass that to the GUI thread.
## You might have to redirect stderr instead/as well
proc = subprocess.Popen(command, stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, ''):
    ## readlines() would block until the process exits,
    ## so read one line at a time instead
    ## Parse the line to extract progress
    ## Pass progress to GUI thread
    pass
Edit:
I'm afraid I don't want to waste lots of CDs testing it out, so I haven't run it, but from your comment it looks like it's not outputting the info to stdout, but to stderr.
I suggest running a sample command directly on the command-line, and redirecting stdout and stderr to different files.
growisofs [options] >stdout 2>stderr
Then you can work out which things come out on stdout and which on stderr.
If the stuff you want come on stderr, change stdout=subprocess.PIPE to stderr=subprocess.PIPE and see if that works any better.
Edit2:
You're not using threads correctly - you should be starting the thread, not calling its run() method directly.
Also:
gtk.gdk.threads_init()
threading.Thread.__init__(self)
is very weird - the initialiser calls should be in the initialiser - and I don't think you need to make it a gtk thread?
The way you call the run() method is weird itself:
core.Burning.run(self.burning, self.filechooser.get_filename(), self.listofdevices[self.combobox.get_active()])
Call instance methods through the object:
self.burning.run(self.filechooser.get_filename(), self.listofdevices[self.combobox.get_active()])
(But you should have an __init__() method)
It seems to me that you are trying to run before you can walk. Try writing some simple threading code, then some simple code to run growisofs and parse the output, then some simple gtk+background threading code, and only then try combining them all together.
In fact first start writing some simple Object oriented code, so that you understand methods and object first.
e.g. All classes you create in python should be new-style classes, you should call super-class initialisers from your initialiser etc.
A:
Can you pass a timeout to reading from subprocess? I guess you can, because my subProcess module was used as design input for it. You can use that to do growiso.read(.1) and then parse and display the percentage from the outdata (or maybe errdata).
A:
You need to run the command from a separate thread, and update the gui with gobject.idle_add calls. Right now you have a class "Burning" but you are using it wrong, it should be used like this:
self.burning = core.Burning(self.filechooser.get_filename(), self.listofdevices[self.combobox.get_active()], self.progressbar)
self.burning.start()
Obviously you will have to modify core.Burning.
Then you will have access to the progressbar so you could make a function like this:
def set_progress_bar_fraction(self, fraction):
    self.progress_bar.set_fraction(fraction)
Then, on every percentage update, call it like this: gobject.idle_add(self.set_progress_bar_fraction, fraction)
More info on pygtk with threads here.
|
Piping output of subprocess.call to progress bar
|
I'm using growisofs to burn an iso through my Python application. I have two classes in two different files; GUI() (main.py) and Boxblaze() (core.py). GUI() builds the window and handles all the events and stuff, and Boxblaze() has all the methods that GUI() calls.
Now when the user has selected the device to burn with, and the file to be burned, I need to call a method that calls the following command:
growisofs -use-the-force-luke=dao -use-the-force-luke=break:1913760 -dvd-compat -speed=2 -Z /burner/device=/full/path/to.iso
This command should give an output similar to this:
Executing 'builtin_dd if=/home/nevon/games/Xbox 360 isos/The Godfather 2/alls-tgod2.iso of=/dev/scd0 obs=32k seek=0'
/dev/scd0: "Current Write Speed" is 2.5x1352KBps.
#more of the lines below, indicating progress.
7798128640/7835492352 (99.5%) @3.8x, remaining 0:06 RBU 100.0% UBU 99.8%
7815495680/7835492352 (99.7%) @3.8x, remaining 0:03 RBU 59.7% UBU 99.8%
7832862720/7835492352 (100.0%) @3.8x, remaining 0:00 RBU 7.9% UBU 99.8%
builtin_dd: 3825936*2KB out @ average 3.9x1352KBps
/dev/burner: flushing cache
/dev/burner: closing track
/dev/burner: closing disc
This command is run in a method called burn() in Boxblaze(). It looks simply like this:
def burn(self, file, device):
    subprocess.call(["growisofs", '-dry-run', "-use-the-force-luke=dao", "-use-the-force-luke=break:1913760", "-dvd-compat", "-speed=2", "-Z", device +'='+ file])
Now my questions are the following:
How can I get the progress from the output (the percentage in brackets) and have my progress bar be set to "follow" that progress? My progress bar is called in the GUI() class, as such:
get = builder.get_object
self.progress_window = get("progressWindow")
self.progressbar = get("progressbar")
Do I have to run this command in a separate thread in order for the GUI to remain responsive (so that I can update the progress bar and allow the user to cancel the burn if they want to)? If so, how can I do that and still be able to pass the progress to the progress bar?
The full code is available on Launchpad if you are interested. If you have bazaar installed, just run:
bzr branch lp:boxblaze
Oh, and in case you were wondering, this application is only meant to work in Linux - so don't worry about cross-platform compatibility.
|
[
"You can use glib.io_add_watch() to watch for output on the pipes connected to stdout and stderr in the subprocess object.\nproc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\nstdout_id = glib.io_add_watch(proc.stdout, glib.IO_IN|glib.IO_HUP, stdout_cb)\nstderr_id = glib.io_add_watch(proc.stderr, glib.IO_IN|glib.IO_HUP, stderr_cb)\n\nThen when the callback is called it should check for the condition and reads all the data from the pipe and processes it to get the info to update the ProgressBar. If the app buffers io then you may have to use a pty to fool it into thinking it's connected to a terminal so it will output a line at a time.\n",
"To get the output you need to use the subprocess.Popen call. (stdout = subprocess.PIPE)\n(Second Question)\nYou probably need a separate thread, unless the GUI framework can select on a filedescriptor in the normal loop.\nYou can have a background thread read the pipe, process it (to extract the progress), the pass that to the GUI thread.\n## You might have to redirect stderr instead/as well\nproc = sucprocess.Popen(command,stdout=subprocess.PIPE)\nfor line in proc.stdout.readlines():\n ## might not work - not sure about reading lines\n ## Parse the line to extract progress\n ## Pass progress to GUI thread\n\nEdit:\nI'm afraid I don't want to waste lots of CDs testing it out, so I haven't run it, but by you're comment it looks like it's not outputing the info to stdout, but to stderr.\nI suggest running a sample command directly on the command-line, and redirecting stdout and stderr to different files.\ngrowisofs [options] >stdout 2>stderr\n\nThen you can work out which things come out on stdout and which on stderr.\nIf the stuff you want come on stderr, change stdout=subprocess.PIPE to stderr=subprocess.PIPE and see if that works any better.\nEdit2:\nYou're not using threads correctly - you should be starting it - not running it directly.\nAlso:\ngtk.gdk.threads_init()\nthreading.Thread.__init__(self)\n\nis very weird - the initialiser calls should be in the initialiser - and I don't think you need to make it a gtk thread?\nThe way you call the run() method, is weird itself:\ncore.Burning.run(self.burning, self.filechooser.get_filename(), self.listofdevices[self.combobox.get_active()])\n\nCall instance methods through the object:\nself.burning.run(self.filechooser.get_filename(), self.listofdevices[self.combobox.get_active()])\n\n(But you should have an __init__() method)\nIt seems to me that you are trying to run before you can walk. Try writing some simple threading code, then some simple code to run growisofs and parse the output, then some simple gtk+background threading code, and only then try combining them all together.\nIn fact first start writing some simple Object oriented code, so that you understand methods and object first.\ne.g. All classes you create in python should be new-style classes, you should call super-class initialisers from your initialiser etc.\n",
"Can you pass a timeout to reading from subprocess? I guess you can because my subProcess module was used as design input for it. You can use that to growiso.read(.1) and then parse and display the percentage from the outdata (or maybe errdata).\n",
"You need to run the command from a separate thread, and update the gui with gobject.idle_add calls. Right now you have a class \"Burning\" but you are using it wrong, it should be used like this:\nself.burning = core.Burning(self.filechooser.get_filename(), self.listofdevices[self.combobox.get_active()], self.progressbar)\nself.burning.start()\n\nObviously you will have to modify core.Burning.\nThen you will have access to the progressbar so you could make a function like this: \ndef set_progress_bar_fraction(self, fraction): \n self.progress_bar.set_fraction(fraction)\n\nThen every percentage update call it like this: gobject.idle_add(self.set_progress_bar_fraction, fraction)\nMore info on pygtk with threads here.\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"multithreading",
"progress_bar",
"pygtk",
"python",
"subprocess"
] |
stackoverflow_0001284196_multithreading_progress_bar_pygtk_python_subprocess.txt
|
Q:
HTML tag replacement using regex and python
I have a Python script that will look at an HTML file that has the following format:
<DOC>
<HTML>
...
</HTML>
</DOC>
<DOC>
<HTML>
...
</HTML>
</DOC>
How do I remove all HTML tags (replace the tags with '') with the exception of the opening and closing DOC tags using regex in Python? Also, if I want to retain the alt-text of an <img> tag, what should the regex expression look like?
A:
For what you are trying to accomplish I would use BeautifulSoup rather than regex.
http://www.crummy.com/software/BeautifulSoup/
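For example, a quick sketch with the old BeautifulSoup 3 API ('page.html' and the <img> handling are illustrative assumptions):
from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(open('page.html').read())
for doc in soup.findAll('doc'):
    alts = [img['alt'] for img in doc.findAll('img', alt=True)]
    text = ''.join(doc.findAll(text=True))
    print '<DOC>%s %s</DOC>' % (text.strip(), ' '.join(alts))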
A:
Check out lxml, a really nice python library for dealing with xml. You can use drop_tag to accomplish what you are looking for.
from lxml import html
h = html.fragment_fromstring('<doc>Hello <b>World!</b></doc>')
h.find('*').drop_tag()
print(html.tostring(h, encoding=unicode))
<doc>Hello World!</doc>
A:
Search and replace with this regex: search for <.*?> and replace with '' (an empty string).
|
HTML tag replacement using regex and python
|
I have a Python script that will look at an HTML file that has the following format:
<DOC>
<HTML>
...
</HTML>
</DOC>
<DOC>
<HTML>
...
</HTML>
</DOC>
How do I remove all HTML tags (replace the tags with '') with the exception of the opening and closing DOC tags using regex in Python? Also, if I want to retain the alt-text of an <img> tag, what should the regex expression look like?
|
[
"For what you are trying to accomplish I would use BeautifulSoup rather than regex.\nhttp://www.crummy.com/software/BeautifulSoup/\n",
"Check out lxml, a really nice python library for dealing with xml. You can use drop_tag to accomplish what you are looking for.\n\nfrom lxml import html \nh = html.fragment_fromstring('<doc>Hello <b>World!</b></doc>')\nh.find('*').drop_tag()\nprint(html.tostring(h, encoding=unicode))\n\n<doc>Hello World!</doc>\n\n",
"search and replace with this regex: search for: <.*?> replace with: \" \n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"html",
"python",
"regex",
"tags"
] |
stackoverflow_0001484575_html_python_regex_tags.txt
|
Q:
Better resources to learn buildout
I am trying to grasp a bit more of buildout with this tutorial, but unlike a tutorial, it seems like a cut and paste of presentation slides.
I don't have a really clear idea of what the purpose of buildout is, and how it positions itself with scons and setuptools. Would you be so kind to provide details on these issues?
Thanks!
A:
I quite like the Plone Buildout Tutorial.
It gives a reasonable overview of how it all works and the ways in which you can extend a simple buildout file.
Here is the new link to Plone Buildout Tutorial.
A:
The most useful resource that I found so far are the videos from pycon 2009 on Setuptools, Distutils and Buildout.
Eggs and Buildout Deployment in Python - Part 1
Eggs and Buildout Deployment in Python - Part 2
Eggs and Buildout Deployment in Python - Part 3
Check it out.
|
Better resources to learn buildout
|
I am trying to grasp a bit more of buildout with this tutorial, but unlike a tutorial, it seems like a cut and paste of presentation slides.
I don't have a really clear idea of what the purpose of buildout is, and how it positions itself with scons and setuptools. Would you be so kind to provide details on these issues?
Thanks!
|
[
"I quite like the Plone Buildout Tutorial.\nIt gives a reasonable overview of how it all works and the ways in which you can extend a simple buildout file.\nHere is the new link to Plone Buildout Tutorial.\n",
"The most useful resource that I found so far are the videos from pycon 2009 on Setuptools, Distutils and Buildout.\nEggs and Buildout Deployment in Python - Part 1\nEggs and Buildout Deployment in Python - Part 2 \nEggs and Buildout Deployment in Python - Part 3\nCheck it out.\n"
] |
[
6,
3
] |
[] |
[] |
[
"buildout",
"python",
"resources"
] |
stackoverflow_0001369664_buildout_python_resources.txt
|
Q:
Wildcard for PyGTK States
How do I combine:
button.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse("Green"))
button.modify_bg(gtk.STATE_ACTIVE, gtk.gdk.color_parse("Green"))
button.modify_bg(gtk.STATE_SELECTED, gtk.gdk.color_parse("Green"))
etc.
Into a one-liner wildcard covering all of the possible states (See Doc)
A:
I do not think you can do that. You can still do it with fewer lines though:
states = [gtk.STATE_NORMAL, gtk.STATE_ACTIVE, gtk.STATE_PRELIGHT,
          gtk.STATE_SELECTED, gtk.STATE_INSENSITIVE]

for state in states:
    button.modify_bg(state, gtk.gdk.color_parse("Green"))
A:
EDIT:
Maybe this comes in handy: http://faq.pygtk.org/index.py?req=show&file=faq04.006.htp
|
Wildcard for PyGTK States
|
How do I combine:
button.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse("Green"))
button.modify_bg(gtk.STATE_ACTIVE, gtk.gdk.color_parse("Green"))
button.modify_bg(gtk.STATE_SELECTED, gtk.gdk.color_parse("Green"))
etc.
Into a one-liner wildcard covering all of the possible states (See Doc)
|
[
"I do not think you can do that. You can still do it with fewer lines though:\nstates = [gtk.STATE_NORMAL, gtk.STATE_ACTIVE, gtk.STATE_PRELIGHT,\n gtk.STATE_SELECTED, gtk.STATE_INSENSITIVE]\n\nfor state in states:\n button.modify_bg(state, gtk.gdk.color_parse(\"Green\"))\n\n",
"EDIT:\nMaybe this comes in handy: http://faq.pygtk.org/index.py?req=show&file=faq04.006.htp\n"
] |
[
1,
0
] |
[] |
[] |
[
"pygtk",
"python",
"state"
] |
stackoverflow_0001484339_pygtk_python_state.txt
|
Q:
Best approach to a command line proxy?
I'd like to write a simple command line proxy in Python to sit between a Telnet/SSH connection and a local serial interface. The application should simply bridge I/O between the two, but filter out certain disallowed strings (matched by regular expressions). (This is for a router/switch lab in which the user is given remote serial access to the boxes.)
Basically, a client establishes a Telnet or SSH connection to the daemon. The daemon passes the client's input out (for example) /dev/ttyS0, and passes input from ttyS0 back out to the client. However, I want to be able to blacklist certain strings coming from the client. For instance, the command 'delete foo' should not be allowed.
I'm not sure how best to approach this. Communication must be asynchronous; I can't simply wait for a carriage return to allow the buffer to be fed out the serial interface. Matching regular expressions against the stream seems tricky too, as all of the following must be intercepted:
delete foo(enter)
del foo(enter)
el foo(ctrl+a)d(enter)
dl(left)e(right) foo(enter)
...and so forth. The only solid delimiter is the CR/LF.
I'm hoping someone can point me in the right direction. I've been looking through Python modules but so far haven't come up with anything.
A:
Python is not my primary language, so I'll leave that part of the answer for others. I do a lot of security work, though, and I would urge a "white list" approach, not a "black list" approach. In other words, pick a set of safe commands and forbid all others. This is much, much easier than trying to think of all the malicious possibilities and guarding against all of them.
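A tiny sketch of the idea (the allowed command set is obviously hypothetical):
import re

ALLOWED = re.compile(r'^(show|ping|traceroute)\b')  # placeholder safe commands

def is_allowed(line):
    return bool(ALLOWED.match(line.strip()))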
A:
As all the examples you show finish with (enter), why is it that...:
Communication must be asynchronous; I
can't simply wait for a carriage
return to allow the buffer to be fed
out the serial interface
if you can collect incoming data until the "enter", and apply the "edit" requests (such as the ctrl-a, left, right in your examples) to the data you're collecting, then you're left with the "completed command about to be sent" in memory where it can be matched and rejected or sent on.
If you must do it character by character, .read(1) on the (unbuffered) input will allow you to, but the vetting becomes potentially more problematic; again you can keep an in-memory image of the edited command that you've sent so far (as you apply the edit requests even while sending them), but what happens when the "enter" arrives and your vetting shows you that the command thus composed must NOT be allowed -- can you e.g. send a number of "delete"s to the device to wipe away said command? Or is there a single "toss the complete line" edit request that would serve?
If you must send every character as you receive it (not allowed to accumulate them until decision point) AND there is no way to delete/erase characters already sent, then the task appears to be impossible (though I don't understand the "can't wait for the enter" condition AT ALL, so maybe there's hope).
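A sketch of the accumulate-then-vet loop described above (conn, serial_port and vet() are placeholders; only backspace/delete is handled, as an illustration):
buf = []
while True:
    ch = conn.recv(1)                # one byte from the Telnet/SSH client
    if ch in ('\r', '\n'):
        line = ''.join(buf)
        buf = []
        if vet(line):                # your whitelist/blacklist check
            serial_port.write(line + '\r')
    elif ch in ('\x08', '\x7f'):     # backspace/delete edits the buffer
        if buf:
            buf.pop()
    else:
        buf.append(ch)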
A:
After thinking about this for a while, it doesn't seem like there's any practical, reliable method to filter on client input. I'm going to attempt this from another angle: if I can identify persistent patterns in warning messages coming from the serial devices (e.g. confirmation prompts) I may be able to abort reliably. Thanks anyway for the input!
A:
Fabric is doing a similar thing.
For an SSH API you should check paramiko.
|
Best approach to a command line proxy?
|
I'd like to write a simple command line proxy in Python to sit between a Telnet/SSH connection and a local serial interface. The application should simply bridge I/O between the two, but filter out certain disallowed strings (matched by regular expressions). (This is for a router/switch lab in which the user is given remote serial access to the boxes.)
Basically, a client establishes a Telnet or SSH connection to the daemon. The daemon passes the client's input out (for example) /dev/ttyS0, and passes input from ttyS0 back out to the client. However, I want to be able to blacklist certain strings coming from the client. For instance, the command 'delete foo' should not be allowed.
I'm not sure how best to approach this. Communication must be asynchronous; I can't simply wait for a carriage return to allow the buffer to be fed out the serial interface. Matching regular expressions against the stream seems tricky too, as all of the following must be intercepted:
delete foo(enter)
del foo(enter)
el foo(ctrl+a)d(enter)
dl(left)e(right) foo(enter)
...and so forth. The only solid delimiter is the CR/LF.
I'm hoping someone can point me in the right direction. I've been looking through Python modules but so far haven't come up with anything.
|
[
"Python is not my primary language, so I'll leave that part of the answer for others. I do alot of security work, though, and I would urge a \"white list\" approach, not a \"black list\" approach. In other words, pick a set of safe commands and forbid all others. This is much much easier than trying to think of all the malicious possibilities and guarding against all of them. \n",
"As all the examples you show finish with (enter), why is it that...:\n\nCommunication must be asynchronous; I\n can't simply wait for a carriage\n return to allow the buffer to be fed\n out the serial interface\n\nif you can collect incoming data until the \"enter\", and apply the \"edit\" requests (such as the ctrl-a, left, right in your examples) to the data you're collecting, then you're left with the \"completed command about to be sent\" in memory where it can be matched and rejected or sent on.\nIf you must do it character by character, .read(1) on the (unbuffered) input will allow you to, but the vetting becomes potentially more problematic; again you can keep an in-memory image of the edited command that you've sent so far (as you apply the edit requests even while sending them), but what happens when the \"enter\" arrives and your vetting shows you that the command thus composed must NOT be allowed -- can you e.g. send a number of \"delete\"s to the device to wipe away said command? Or is there a single \"toss the complete line\" edit request that would serve?\nIf you must send every character as you receive it (not allowed to accumulate them until decision point) AND there is no way to delete/erase characters already sent, then the task appears to be impossible (though I don't understand the \"can't wait for the enter\" condition AT ALL, so maybe there's hope).\n",
"After thinking about this for a while, it doesn't seem like there's any practical, reliable method to filter on client input. I'm going to attempt this from another angle: if I can identify persistent patterns in warning messages coming from the serial devices (e.g. confirmation prompts) I may be able to abort reliably. Thanks anyway for the input!\n",
"Fabric is doing a similar thing.\nFor SSH api you should check paramiko.\n"
] |
[
6,
0,
0,
0
] |
[] |
[] |
[
"command_line",
"python",
"regex"
] |
stackoverflow_0001482367_command_line_python_regex.txt
|
Q:
Using mechanize to visit a site that requires SSL
I need to visit a site (https://*) that requires me to install two certificates in Firefox before I can visit it successfully. One I can export as a .p12 file (Client Certificate), and one is a .crt file (CA Certificate). If I try accessing this site without these certificates, I get a "failed handshake error".
How do I visit this site in Python? I was thinking of using mechanize. Thanks.
A:
I'd suggest you use webdriver to automate Firefox. It has a Python interface too.
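If driving a real browser is heavier than needed, Python 2's standard library can present a client certificate itself. A minimal sketch, assuming the .p12 has been converted to PEM files (e.g. with openssl pkcs12 -in client.p12 -out client.pem -nodes); the host name and file paths are placeholders, and note that httplib here sends the client cert but does not itself verify the server against the CA .crt:
import httplib

conn = httplib.HTTPSConnection('example.com',
                               key_file='client.key',   # PEM private key
                               cert_file='client.pem')  # PEM client certificate
conn.request('GET', '/')
resp = conn.getresponse()
print resp.status, resp.reason
print resp.read()[:200]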
|
Using mechanize to visit a site that requires SSL
|
I need to visit a site (https://*) that requires me to install two certificates in Firefox before I can visit it successfully. One I can export as a .p12 file (Client Certificate), and one is a .crt file (CA Certificate). If I try accessing this site without these certificates, I get a "failed handshake error".
How do I visit this site in Python? I was thinking of using mechanize. Thanks.
|
[
"I'd suggest you use webdriver to automate Firefox. It has a Python interface too.\n"
] |
[
1
] |
[] |
[] |
[
"mechanize",
"python",
"ssl",
"ssl_certificate"
] |
stackoverflow_0001485571_mechanize_python_ssl_ssl_certificate.txt
|
Q:
With python.multiprocessing, how do I create a proxy in the current process to pass to other processes?
I'm using the multiprocessing library in Python. I can see how to define that objects returned from functions should have proxies created, but I'd like to have objects in the current process turned into proxies so I can pass them as parameters.
For example, running the following script:
from multiprocessing import current_process
from multiprocessing.managers import BaseManager
class ProxyTest(object):
def call_a(self):
print 'A called in %s' % current_process()
def call_b(self, proxy_test):
print 'B called in %s' % current_process()
proxy_test.call_a()
class MyManager(BaseManager):
pass
MyManager.register('proxy_test', ProxyTest)
if __name__ == '__main__':
manager = MyManager()
manager.start()
pt1 = ProxyTest()
pt2 = manager.proxy_test()
pt1.call_a()
pt2.call_a()
pt1.call_b(pt2)
pt2.call_b(pt1)
... I get the following output ...
A called in <_MainProcess(MainProcess, started)>
A called in <Process(MyManager-1, started)>
B called in <_MainProcess(MainProcess, started)>
A called in <Process(MyManager-1, started)>
B called in <Process(MyManager-1, started)>
A called in <Process(MyManager-1, started)>
... but I want that final line of output coming from _MainProcess.
I could just create another Process and run it from there, but I'm trying to keep the amount of data that needs to be passed between processes to a minimum. The documentation for the Manager object mentioned a serve_forever method, but it doesn't seem to be supported. Any ideas?
Does anyone know?
A:
Why do you say serve_forever is not supported?
manager = MyManager()
s = manager.get_server()
s.serve_forever()
should work.
See managers.BaseManager.get_server doc for official examples.
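For reference, a minimal sketch of how the two halves fit together (the address and authkey are illustrative; the names follow the question's code):
# server script: serve the manager from this process
if __name__ == '__main__':
    MyManager.register('proxy_test', ProxyTest)
    m = MyManager(address=('127.0.0.1', 50000), authkey='abc')
    m.get_server().serve_forever()

# client script: connect instead of starting a new server process
if __name__ == '__main__':
    MyManager.register('proxy_test')
    m = MyManager(address=('127.0.0.1', 50000), authkey='abc')
    m.connect()
    pt = m.proxy_test()   # proxy to a ProxyTest living in the server process
    pt.call_a()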
|
With python.multiprocessing, how do I create a proxy in the current process to pass to other processes?
|
I'm using the multiprocessing library in Python. I can see how to define that objects returned from functions should have proxies created, but I'd like to have objects in the current process turned into proxies so I can pass them as parameters.
For example, running the following script:
from multiprocessing import current_process
from multiprocessing.managers import BaseManager
class ProxyTest(object):
def call_a(self):
print 'A called in %s' % current_process()
def call_b(self, proxy_test):
print 'B called in %s' % current_process()
proxy_test.call_a()
class MyManager(BaseManager):
pass
MyManager.register('proxy_test', ProxyTest)
if __name__ == '__main__':
manager = MyManager()
manager.start()
pt1 = ProxyTest()
pt2 = manager.proxy_test()
pt1.call_a()
pt2.call_a()
pt1.call_b(pt2)
pt2.call_b(pt1)
... I get the following output ...
A called in <_MainProcess(MainProcess, started)>
A called in <Process(MyManager-1, started)>
B called in <_MainProcess(MainProcess, started)>
A called in <Process(MyManager-1, started)>
B called in <Process(MyManager-1, started)>
A called in <Process(MyManager-1, started)>
... but I want that final line of output coming from _MainProcess.
I could just create another Process and run it from there, but I'm trying to keep the amount of data that needs to be passed between processes to a minimum. The documentation for the Manager object mentioned a serve_forever method, but it doesn't seem to be supported. Any ideas?
Does anyone know?
|
[
"Why do you say serve_forever is not supported?\nmanager = Mymanager()\ns = manager.get_server()\ns.serve_forever()\n\nshould work.\nSee managers.BaseManager.get_server doc for official examples.\n"
] |
[
1
] |
[] |
[] |
[
"multiprocessing",
"proxy",
"python",
"python_multiprocessing"
] |
stackoverflow_0001458205_multiprocessing_proxy_python_python_multiprocessing.txt
|
Q:
Global statements v. variables available throughout a class
I try to avoid "global" statements in python and Do you use the "global" statement in Python? suggests this is a common view. Values go into a function through its arguments and come out through its return statement (or reading/writing files or exceptions or probably something else I'm forgetting).
Within a class, self.variable statements are in effect global to each instance of the class. You can access the variable in any method in the class.
Do the same reasons we should avoid globals apply within classes, so that we should only use values in methods that come in through its arguments? I'm especially thinking about long classes that can be just about an entire program. Does the encapsulation inherent in a class eliminate the concern? In any case, we should make inputs, outputs and side effects clear in comments?
A:
self.variable is not global to the class, it's global to the instance. There's a big difference:
class MyClass:
def __init__(self, a):
self.a = a
mc1 = MyClass(1)
mc2 = MyClass(2)
assert mc1.a == 1
assert mc2.a == 2
You should definitely use self to encapsulate data in your classes.
That said, it is definitely possible to create huge overgrown classes that abuse instance variables in all the ways regular variables can be abused. This is where skill and craftsmanship come into play: properly dividing up your code into manageable chunks.
A:
Do the same reasons we should avoid
globals apply within classes, so that
we should only use values in methods
that come in through its arguments?
I'm especially thinking about long
classes that can be just about an
entire program.
Classes exist to couple behaviour with state. If you take away the state part (which is what you're suggesting) then you have no need for classes. Nothing wrong with that, of course - much good software has been written without object-orientation.
Generally, if you're following the Single Responsibility Principle when making your classes, then these variables will be typically used together by a class that needs access to most or all of them in each method. You don't pass them in explicitly because the class exclusively works with behaviour that could reasonably access the entire state.
To put it another way, if you find yourself with a class that doesn't use half of its variables in a lot of its methods, that's probably a sign that you should split it into two classes.
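A minimal sketch of that smell and the resulting split (all class and attribute names here are invented for illustration):
# Before: each method touches only half of the instance state
class ReportJob(object):
    def __init__(self, rows, smtp_host, recipients):
        self.rows = rows                # used only by render()
        self.smtp_host = smtp_host      # used only by send()
        self.recipients = recipients    # used only by send()
    def render(self):
        return '\n'.join(str(r) for r in self.rows)
    def send(self, body):
        pass  # would talk to self.smtp_host / self.recipients only

# After: two cohesive classes, each using all of its own state
class Report(object):
    def __init__(self, rows):
        self.rows = rows
    def render(self):
        return '\n'.join(str(r) for r in self.rows)

class Mailer(object):
    def __init__(self, smtp_host, recipients):
        self.smtp_host = smtp_host
        self.recipients = recipients
    def send(self, body):
        pass  # e.g. smtplib.SMTP(self.smtp_host)...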
|
Global statements v. variables available throughout a class
|
I try to avoid "global" statements in python and Do you use the "global" statement in Python? suggests this is a common view. Values go into a function through its arguments and come out through its return statement (or reading/writing files or exceptions or probably something else I'm forgetting).
Within a class, self.variable statements are in effect global to each instance of the class. You can access the variable in any method in the class.
Do the same reasons we should avoid globals apply within classes, so that we should only use values in methods that come in through its arguments? I'm especially thinking about long classes that can be just about an entire program. Does the encapsulation inherent in a class eliminate the concern? In any case, we should make inputs, outputs and side effects clear in comments?
|
[
"self.variable is not global to the class, it's global to the instance. There's a big difference:\nclass MyClass:\n def __init__(self, a):\n self.a = a\n\nmc1 = MyClass(1)\nmc2 = MyClass(2)\nassert mc1.a == 1\nassert mc2.a == 2\n\nYou should definitely use self to encapsulate data in your classes. \nThat said, it is definitely possible to create huge overgrown classes that abuse instance variables in all the ways regular variables can be abused. This is where skill and craftsmanship come into play: properly dividing up your code into manageable chunks.\n",
"\nDo the same reasons we should avoid\n globals apply within classes, so that\n we should only use values in methods\n that come in through its arguments?\n I'm especially thinking about long\n classes that can be just about an\n entire program.\n\nClasses exist to couple behaviour with state. If you take away the state part (which is what you're suggesting) then you have no need for classes. Nothing wrong with that, of course - much good software has been written without object-orientation.\nGenerally, if you're following the Single Responsibility Principle when making your classes, then these variables will be typically used together by a class that needs access to most or all of them in each method. You don't pass them in explicitly because the class exclusively works with behaviour that could reasonably access the entire state.\nTo put it another way, if you find yourself with a class that doesn't use half of its variables in a lot of its methods, that's probably a sign that you should split it into two classes.\n"
] |
[
1,
1
] |
[
"Ideally, no instance-wide variables would be used and everything would be passed as a parameter and well-documented in comments. That being said, it can get very tedious to comment every little thing and method parameter lists can start to look ridiculous (unless you have a hierarchy of partially-applied methods). Pragmatically, a balance should be sought between using non-local variables and making everything excruciatingly explicit.\nThere is at least one case where you have to have instance- or class-level variables and that's when an implementation-specific value has to be retained between method calls.\nScalability and concurrency depend on minimization if not complete elimination of state and side effects except for the most local and exclusive of runtime scopes. OOP without objects or display classes (i.e., closures) would be procedural, yes. Languages are increasingly becoming multiparadigm, but a lot of them have a primary paradigm. C# is object oriented with functional features. F# is functional with objects. \nIf the data is immutable, then instance variables are always okay in my books. \n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0001486708_python.txt
|
Q:
Type of object from udp buffer in python using metaclasses/reflection
Is it possible to extract the type of object or class name from a message received on a UDP socket in Python, using metaclasses/reflection?
The scenario is like this:
Receive udp buffer on a socket.
The UDP buffer is a serialized binary string (a message). But the type of message is not known at this time, so I can't deserialize it into the appropriate message.
Now, my question is: can I know the class name of the serialized binary string (received as a UDP buffer), so that I can deserialize it into the appropriate message and process it further?
Thanks in Advance.
A:
What you receive from the udp socket is a byte string -- that's all the "type of object or class name" that's actually there. If the byte string was built as a serialized object (e.g. via pickle, or maybe marshal etc) then you can deserialize it back to an object (using e.g. pickle.loads) and then introspect to your heart's content. But most byte strings were built otherwise and will raise exceptions when you try to loads from them;-).
Edit: the OP's edit mentions the string is "a serialized object" but still doesn't say what serialization approach produced it, and that makes all the difference. pickle (and, for a much narrower range of types, marshal) places enough information in the strings it produces (via the modules' .dumps functions) that the respective loads functions can deserialize back to the appropriate type; but other approaches (e.g., struct.pack) do not place such metadata in the strings they produce, so it's not feasible to deserialize without other, "out of band" so to speak, indications about the format in use. So, O.P., how was that serialized string of bytes produced in the first place...?
A:
Updated answer after updated question:
"But the type of message is not known at this time. So can't de-serialize into appropriate message."
What you get is a sequence of bytes. How that sequence of bytes should be interpreted is a question of how the protocol looks. Only you know what protocol you use. So if you don't know the type of message, then there is nothing you can do about it. If you are to receive a stream of data and interpret it, you must know what that data means; otherwise you can't interpret it.
It's as simple as that.
"Now, my ques is Can I know the classname of the seraialized binary string"
Yes. The class name is "str", as for all strings. (Unless you use Python 3, in which case you would not get a str but bytes.) The data inside that str has no class name. It's just binary data. It means whatever the sender wants it to mean.
Again, I need to stress that you should not try to make this into a generic question. Explain exactly what you are trying to do, not generically, but specifically.
A:
You need to use a serialization module. pickle and marshal are both options. They provide functions to turn objects into bytestreams, and back again.
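A minimal sketch of that approach over UDP (one security caveat worth stating: only unpickle data from trusted senders, since pickle can execute arbitrary code from a hostile peer):
import pickle
import socket

# receiver: bind first so the datagram has somewhere to land
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(('127.0.0.1', 9999))

# sender: any picklable object can go over the wire
snd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
snd.sendto(pickle.dumps({'kind': 'hello'}), ('127.0.0.1', 9999))

data, addr = srv.recvfrom(65535)
obj = pickle.loads(data)        # the pickle stream itself carries the type
print type(obj).__name__        # -> dict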
|
Type of object from udp buffer in python using metaclasses/reflection
|
Is it possible to extract the type of object or class name from a message received on a UDP socket in Python, using metaclasses/reflection?
The scenario is like this:
Receive udp buffer on a socket.
The UDP buffer is a serialized binary string (a message). But the type of message is not known at this time, so I can't deserialize it into the appropriate message.
Now, my question is: can I know the class name of the serialized binary string (received as a UDP buffer), so that I can deserialize it into the appropriate message and process it further?
Thanks in Advance.
|
[
"What you receive from the udp socket is a byte string -- that's all the \"type of object or class name\" that's actually there. If the byte string was built as a serialized object (e.g. via pickle, or maybe marshal etc) then you can deserialize it back to an object (using e.g. pickle.loads) and then introspect to your heart's content. But most byte strings were built otherwise and will raise exceptions when you try to loads from them;-).\nEdit: the OP's edit mentions the string is \"a serialized object\" but still doesn't say what serialization approach produced it, and that makes all the difference. pickle (and for a much narrower range of type marshal) place enough information on the strings they produce (via the .dumps functions of the modules) that their respective loads functions can deserialize back to the appropriate type; but other approaches (e.g., struct.pack) do not place such metadata in the strings they produce, so it's not feasible to deserialize without other, \"out of bands\" so to speak, indications about the format in use. So, o O.P., how was that serialized string of bytes produced in the first place...?\n",
"Updated answer after updated question:\n\"But the type of message is not known at this time. So can't de-serialize into appropriate message.\"\nWhat you get is a sequence of bytes. How that sequence of types should be interpreted is a question of how the protocol looks. Only you know what protocol you use. So if you don't know the type of message, then there is nothing you can do about it. If you are to receive a stream of data an interpret it, you must know what that data means, otherwise you can't interpret it.\nIt's as simple as that.\n\"Now, my ques is Can I know the classname of the seraialized binary string\"\nYes. The classname is \"str\", as all strings. (Unless you use Python 3, in which case you would not get a str but a binary). The data inside that str has no classname. It's just binary data. It means whatever the sender wants it to mean.\nAgain, I need to stress that you should not try to make this into a generic question. Explain exactly what you are trying to do, not generically, but specifically.\n",
"You need to use a serialization module. pickle and marshal are both options. They provide functions to turn objects into bytestreams, and back again.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"python",
"sockets",
"udp"
] |
stackoverflow_0001487582_python_sockets_udp.txt
|
Q:
How to build a mini-network of small programs feeding each other data?
I'm trying to simulate a real-time network where nodes are consumers and producers of different rates. How would I quickly implement a sample of this with Python? I was imagining that I'd write a simple program for each node, but I'm not sure how to connect them to each other.
A:
Stick with traditional simulation structures, at least at first
Is it your goal to write an asynchronous system as an exercise? If so, then I guess you have to implement at least a multi-threaded if not multi-process or network system.
But if it's really a simulation, and what you want are the analysis results, implementing an actual distributed model would be a horribly complex approach that is likely to yield far less data than an abstract simulation would, that is, the simulation itself doesn't have to be a network of asynchronous actors communicating. That would just be a good way to make the problem so hard it won't get solved.
I say, stick with the traditional simulation architecture.
The Classic Discrete Event Simulation
The way this works is that as a central data structure you have a sorted collection of pending events. The events are naturally sorted by increasing time.
The program has a main loop, where it takes the next (i.e., the lowest-valued) event out of the collection, advances the simulation clock to the time of this event, and then calls any task-specific processing associated with the event.
But, you ask, what if something was supposed to happen in the time delta that the simulator just jumped across? Well, by definition, there was nothing. If an individual element of the simulation needed something to happen in that interval, it was responsible for allocating an event and inserting it into the (sorted) collection.
While there are many packages and programs out there that are geared to simulation, the guts of a simulation are not that hard, and it's perfectly reasonable to write it up from scratch in your favorite language.
Have fun!
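A minimal sketch of that main loop, using heapq as the sorted collection (the self-rescheduling producer is just an illustration of an element inserting its own future events):
import heapq

events = []     # (time, seq, action) tuples, kept ordered by time
seq = [0]       # tie-breaker so simultaneous events stay comparable

def schedule(t, action):
    heapq.heappush(events, (t, seq[0], action))
    seq[0] += 1

def run(until):
    while events:
        now, _, action = heapq.heappop(events)   # lowest-time event
        if now > until:
            break
        action(now)          # the handler may schedule further events

def produce(now):
    print 'produced at', now
    schedule(now + 2.0, produce)   # node re-schedules itself

schedule(0.0, produce)
run(10.0)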
A:
Inter-process communication is a generally hard thing to get right. You may want to consider whether another approach would meet your needs, like a discrete event simulator.
In a DES, you don't actually do the work of each node, just simulate how long each node would take to get the work done. You can use a priority queue to keep track of the incoming and outgoing work, and instrument the system to keep track of the queue size globally, and for each node.
Perhaps if you provide more detail on what you're trying to accomplish, we can give more specific recommendations.
Edit: Python has a built-in priority queue in the heapq module, see http://docs.python.org/library/heapq.html.
A:
I like @DigitalRoss's and dcrosta's suggestions of a discrete-event simulation, and I'd like to point out that the sched module in the Python standard library is just what you need at the core of such a system (no need to rebuild the core on top of heapq, or otherwise). You just need to initialize a sched.scheduler instance, instead of with the usual time.time and time.sleep, by passing it two callables that simulate the passage of time.
For example:
class FakeTime(object):
def __init__(self, start=0.0):
self.now = start
def time(self):
return self.now
def sleep(self, delay):
self.now += delay
mytimer = FakeTime()
and use s = sched.scheduler(mytimer.time, mytimer.sleep) to instantiate the scheduler.
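A tiny usage sketch of driving it (enter takes a delay, a priority, an action and an argument tuple):
import sched

s = sched.scheduler(mytimer.time, mytimer.sleep)

def tick(label):
    print mytimer.time(), label

s.enter(5.0, 1, tick, ('first',))
s.enter(2.0, 1, tick, ('second',))
s.run()   # prints 2.0 second, then 5.0 first -- no wall-clock time passes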
A:
Look into NetworkX, a Python library for creating and manipulating networks. Edit: Attention, nearby/related to NetworkX, and also hosted at Los Alamos NL, is PyGraphviz, a utility to display graphs. Thanks to las3jrock for pointing out that I initially had the wrong link.
You can either use NetworkX as-is, or get inspiration from this library (I wouldn't bother: this library really has "all things graph" that seem to be needed for network simulation). In any case, this type of graph creation and manipulation would allow you to represent (and grow / shrink / evolve) the network.
With a single centralized object/service based on a graph library as mentioned, you'd be left with creating one (or several) class to represent the behavior of the network nodes.
Then, depending on your needs, and if the network is relatively small, the network nodes could effectively be "let loose" and run inside of threads. Alternatively (and this is often easy to manage for simulations), there could be a centralized method which iterates through the nodes and invokes their "do_it" methods when appropriate. If the "refresh rate" of such a centralized controller is high enough, real-time clocks at the level of nodes can be used to determine when particular behaviors of the node should be triggered. This simulation can be event driven, or simply polled (again, if the refresh period is low enough relative to the clocks' basic unit of time).
The centralized "map" of the network would offer the various network-related primitives required by the network nodes to perform their "job" (whatever this may be). In other words, the nodes could inquire from the "map" the list of their direct neighbors, the directory of the complete network, the routes (and their cost) towards any given node, etc.
Alternatively, the structure of the network could be distributed over the nodes of the network itself, à la Internet. This decentralized approach has several advantages, but it implies adding logic and data to the nodes so they implement the equivalent of DNS and Routing. This approach also has the drawback of requiring that a fair amount of traffic between nodes be related to the discovery and maintenance of the topology of the network rather than to whatever semantics of communication/exchange the network is meant to emulate. In a nutshell, I wouldn't suggest using this decentralized approach, unless the simulation at hand was aimed at studying the very protocols of such distributed network management systems.
Edit:
The DES approach suggested in several replies certainly addresses the simulation part of the question. If the network part of question is important, implementing a virtual network based on a strong graph library will be an important part of the solution. Such an approach would expose more readily the dynamics of the system associated with the network topology.
A:
Here's how to make a basic client/server program:
http://wdvl.internet.com/Authoring/python/client/watts06152009.html
For the data to send from one to another, I'd recommend JSON, the most simple format ever.
Check simplejson if you want to implement it on python:
http://pypi.python.org/pypi/simplejson/
A:
Two things come to mind:
1- You could write one or more daemons with Twisted Python. (Be warned, Twisted can get a little overwhelming, since it's an event-driven async system.) Each daemon can bind to a port and make itself available to other daemons. Alternately, you could just run everything within one daemon, and just have each "process" that you script fire at a different interval.. and talk to one another through bound ports as well.
2- You could use a single event driven core -- there are a few --, and just fork a bunch of processes or threads for each task.
A:
Just use Stackless Python, create tasklets, connect them with channels, and everything will work. It is extremely simple.
A:
This is something Stackless does very well.
Also, you could use generators / co-routines.
Interesting links:
http://www.python.org/dev/peps/pep-0342/
New users can only post 1 hyper link... so here is the other one
'/'.join(['http:/', 'us.pycon.org', '2009', 'tutorials', 'schedule', '1PM6/'])
|
How to build a mini-network of small programs feeding each other data?
|
I'm trying to simulate a real-time network where nodes are consumers and producers of different rates. How would I quickly implement a sample of this with Python? I was imagining that I'd write a simple program for each node, but I'm not sure how to connect them to each other.
|
[
"Stick with traditional simulation structures, at least at first\nIs it your goal to write an asynchronous system as an exercise? If so, then I guess you have to implement at least a multi-threaded if not multi-process or network system.\nBut if it's really a simulation, and what you want are the analysis results, implementing an actual distributed model would be a horribly complex approach that is likely to yield far less data than an abstract simulation would, that is, the simulation itself doesn't have to be a network of asynchronous actors communicating. That would just be a good way to make the problem so hard it won't get solved.\nI say, stick with the traditional simulation architecture.\nThe Classic Discrete Event Simulation\nThe way this works is that as a central data structure you have a sorted collection of pending events. The events are naturally sorted by increasing time.\nThe program has a main loop, where it takes the next (i.e., the lowest-valued) event out of the collection, advances the simulation clock to the time of this event, and then calls any task-specific processing associated with the event.\nBut, you ask, what if something was supposed to happen in the time delta that the simulator just jumped across? Well, by definition, there was nothing. If an individual element of the simulation needed something to happen in that interval, it was responsible for allocating an event and inserting it into the (sorted) collection.\nWhile there are many packages and programs out there that are geared to simulation, the guts of a simulation are not that hard, and it's perfectly reasonable to write it up from scratch in your favorite language.\nHave fun!\n",
"Inter-process communication is a generally hard thing to get right. You may want to consider if another approach would meet your needs, like a discrete events simulator.\nIn a DES, you don't actually do the work of each node, just simulate how long each node would take to get the work done. You can use a priority queue to keep track of the incoming and outgoing work, and instrument the system to keep track of the queue size globally, and for each node.\nPerhaps if you provide more detail on what you're trying to accomplish, we can give more specific recommendations.\nEdit: Python has a built-in priority queue in the heapq module, see http://docs.python.org/library/heapq.html.\n",
"I like @DigitalRoss's and dcrosta's suggestions of a discrete-even simulation, and I'd like to point out that the sched module in the Python standard library is just what you need at the core of such a system (no need to rebuild the core on top of heapq, or otherwise). You just need to initialize a sched.scheduler instance, instead of the usual time.time and time.sleep, by passing it two callables that simulate the passage of time.\nFor example:\nclass FakeTime(object):\n def __init__(self, start=0.0):\n self.now = start\n def time(self):\n return self.now\n def sleep(self, delay):\n self.now += delay\n\nmytimer = FakeTime()\n\nand use s = sched.scheduler(mytimer.time, mytimer.sleep) to instantiate the scheduler.\n",
"Look into NetworkX a Python library for creating and manipulating networks. Edit: Attention, nearby/related to NetworkX, and also hosted at Los Alamos NL, is PyGraphviz, a utility to display graphs. Thank you to las3jrock to point out that I had the wrong link initialy.\nYou can either use NetworkX as-is, or get inspiration from this library (I wouldn't bother this library really has \"all things graph\" that seem to be needed for network simulation.) In any case, this type of graph creation and manipulation would allow you to represent (and grow / shrink / evolve) the network. \nWith a single centralized object/service based on a graph library as mentioned, you'd be left with creating one (or several) class to represent the behavior of the network nodes. \nThen, depending on your needs, and if the network is relatively small, the network nodes could effectively be \"let loose\", and run inside of threads. Alternatively (and this is often easy to manage for simulations), there could be a centralized method which iterates through the nodes and invoke their \"do_it\" methods when appropriate. If the \"refresh rate\" of such a centralized controller is high enough, real-time clocks at the level of nodes can be used to determine when particular behaviors of the node should be triggered. This simiulation can be event driven, or simply polled (again if refresh period is low enough relative to the clocks' basic unit of time.)\nThe centralized \"map\" of the network would offer the various network-related primitives required by the network nodes to perform their \"job\" (whatever this may be). In other word, the nodes could inquire from the \"map\" the list of their direct neighbors, the directory of the complete network, the routes (and their cost) towards any given node etc, .\nAlternatively, the structure of the network could be distributed over the nodes of the network itself, à la Internet. This decentralized approach has several advantages, but it implies adding logic and data to the nodes so they implement the equivalent of DNS and Routing. This approach also has the drawback of requiring that a fair amount of traffic between nodes be related to the discovery and maintenance of the topology of the network rather than to whatever semantics of communication/exchange the network is meant to emulate. In a nutshell, I wouldn't suggest using this decentralized approach, unless the simulation at hand was aimed at studying the very protocols of such distributed network management systems.\nEdit:\nThe DES approach suggested in several replies certainly addresses the simulation part of the question. If the network part of question is important, implementing a virtual network based on a strong graph library will be an important part of the solution. Such an approach would expose more readily the dynamics of the system associated with the network topology.\n",
"Here's how to make a basic client/server program:\nhttp://wdvl.internet.com/Authoring/python/client/watts06152009.html\nFor the data to send from one to another, I'd recommend JSON, the most simple format ever.\nCheck simplejson if you want to implement it on python: \nhttp://pypi.python.org/pypi/simplejson/\n",
"Two things come to mind:\n1- You could write one or more daemons with Twisted Python. (Be warned, twisted can get a little overwhelming , since its an event-driven async system ). Each daemon can bind to a port and make itself available to other daemons. Alternately, you could just run everything within one daemon, and just have each \"process\" that you script fire at a different interval.. and talk to one another through bound ports as well.\n2- You could use a single event driven core -- there are a few --, and just fork a bunch of processes or threads for each task.\n",
"Just use Stackless Python, create tasklets, connect them with channels, and everything will work. It is extremely simple.\n",
"This is something Stackless does very well.\nAlso, you could use generators / co-routines.\nInteresting links:\nhttp://www.python.org/dev/peps/pep-0342/\nNew users can only post 1 hyper link... so here is the other one\n'/'.join(['http:/', 'us.pycon.org', '2009', 'tutorials', 'schedule', '1PM6/'])\n\n"
] |
[
5,
2,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"python",
"simulation"
] |
stackoverflow_0001484658_python_simulation.txt
|
Q:
Case-insensitive comparison of sets in Python
I have two sets (although I can do lists, or whatever):
a = frozenset(('Today','I','am','fine'))
b = frozenset(('hello','how','are','you','today'))
I want to get:
frozenset(['Today'])
or at least:
frozenset(['today'])
The second option is doable if I lowercase everything I presume, but I'm looking for a more elegant way. Is it possible to do
a.intersection(b)
in a case-insensitive manner?
Shortcuts in Django are also fine since I'm using that framework.
Example from intersection method below (I couldn't figure out how to get this formatted in a comment):
print intersection('Today I am fine tomorrow'.split(),
'Hello How a re you TODAY and today and Today and Tomorrow'.split(),
key=str.lower)
[(['tomorrow'], ['Tomorrow']), (['Today'], ['TODAY', 'today', 'Today'])]
A:
Here's a version that works for any pair of iterables:
def intersection(iterableA, iterableB, key=lambda x: x):
"""Return the intersection of two iterables with respect to `key` function.
"""
def unify(iterable):
d = {}
for item in iterable:
d.setdefault(key(item), []).append(item)
return d
A, B = unify(iterableA), unify(iterableB)
return [(A[k], B[k]) for k in A if k in B]
Example:
print intersection('Today I am fine'.split(),
'Hello How a re you TODAY'.split(),
key=str.lower)
# -> [(['Today'], ['TODAY'])]
A:
Unfortunately, even if you COULD "change on the fly" the comparison-related special methods of the sets' items (__lt__ and friends -- actually, only __eq__ is needed the way sets are currently implemented, but that's an implementation detail) -- and you can't, because they belong to a built-in type, str -- that wouldn't suffice, because __hash__ is also crucial, and by the time you want to do your intersection it's already been applied, putting the sets' items in different hash buckets from where they'd need to end up to make intersection work the way you want (i.e., no guarantee that 'Today' and 'today' are in the same bucket).
So, for your purposes, you inevitably need to build new data structures -- if you consider it "inelegant" to have to do that at all, you're plain out of luck: built-in sets just don't carry around the HUGE baggage and overhead that would be needed to allow people to change comparison and hashing functions, which would bloat things by 10 times (or more) for the sake of a need felt in (maybe) one use case in a million.
If you have frequent needs connected with case-insensitive comparison, you should consider subclassing or wrapping str (overriding comparison and hashing) to provide a "case insensitive str" type cistr -- and then, of course, make sure that only instances of cistr are (e.g.) added to your sets (&c) of interest (either by subclassing set &c, or simply by taking care). To give an oversimplified example...:
class ci(str):
def __hash__(self):
return hash(self.lower())
def __eq__(self, other):
return self.lower() == other.lower()
class cifrozenset(frozenset):
def __new__(cls, seq=()):
return frozenset((ci(x) for x in seq))
a = cifrozenset(('Today','I','am','fine'))
b = cifrozenset(('hello','how','are','you','today'))
print a.intersection(b)
this does emit frozenset(['Today']), as per your expressed desire. Of course, in real life you'd probably want to do MUCH more overriding (for example...: the way I have things here, any operation on a cifrozenset returns a plain frozenset, losing the precious case independence special feature -- you'd probably want to ensure that a cifrozenset is returned each time instead, and, while quite feasible, that's NOT trivial).
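As one hedged illustration of that extra overriding, the constructor can be routed through super so instances really are cifrozensets, and set operations re-wrapped (a sketch, not a complete implementation -- union, difference, etc. would need the same treatment):
class cifrozenset(frozenset):
    def __new__(cls, seq=()):
        # build a real cifrozenset instance, not a plain frozenset
        return super(cifrozenset, cls).__new__(cls, (ci(x) for x in seq))
    def intersection(self, other):
        # re-wrap so the result keeps case-insensitive hashing/equality
        return cifrozenset(frozenset.intersection(self, other))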
A:
First, don't you mean a.intersection(b)? The intersection (if case insensitive) would be set(['today']). The difference would be set(['i', 'am', 'fine'])
Here are two ideas:
1.) Write a function to convert the elements of both sets to lowercase and then do the intersection. Here's one way you could do it:
>>> intersect_with_key = lambda s1, s2, key=lambda i: i: set(map(key, s1)).intersection(map(key, s2))
>>> fs1 = frozenset('Today I am fine'.split())
>>> fs2 = frozenset('Hello how are you TODAY'.split())
>>> intersect_with_key(fs1, fs2)
set([])
>>> intersect_with_key(fs1, fs2, key=str.lower)
set(['today'])
>>>
This is not very efficient though because the conversion and new sets would have to be created on each call.
2.) Extend the frozenset class to keep a case insensitive copy of the elements. Override the intersection method to use the case insensitive copy of the elements. This would be more efficient.
A:
>>> a_, b_ = map(set, [map(str.lower, a), map(str.lower, b)])
>>> a_ & b_
set(['today'])
Or... with less maps,
>>> a_ = set(map(str.lower, a))
>>> b_ = set(map(str.lower, b))
>>> a_ & b_
set(['today'])
|
Case-insensitive comparison of sets in Python
|
I have two sets (although I can do lists, or whatever):
a = frozenset(('Today','I','am','fine'))
b = frozenset(('hello','how','are','you','today'))
I want to get:
frozenset(['Today'])
or at least:
frozenset(['today'])
The second option is doable if I lowercase everything I presume, but I'm looking for a more elegant way. Is it possible to do
a.intersection(b)
in a case-insensitive manner?
Shortcuts in Django are also fine since I'm using that framework.
Example from intersection method below (I couldn't figure out how to get this formatted in a comment):
print intersection('Today I am fine tomorrow'.split(),
'Hello How a re you TODAY and today and Today and Tomorrow'.split(),
key=str.lower)
[(['tomorrow'], ['Tomorrow']), (['Today'], ['TODAY', 'today', 'Today'])]
|
[
"Here's version that works for any pair of iterables:\ndef intersection(iterableA, iterableB, key=lambda x: x):\n \"\"\"Return the intersection of two iterables with respect to `key` function.\n\n \"\"\"\n def unify(iterable):\n d = {}\n for item in iterable:\n d.setdefault(key(item), []).append(item)\n return d\n\n A, B = unify(iterableA), unify(iterableB)\n\n return [(A[k], B[k]) for k in A if k in B]\n\nExample:\nprint intersection('Today I am fine'.split(),\n 'Hello How a re you TODAY'.split(),\n key=str.lower)\n# -> [(['Today'], ['TODAY'])]\n\n",
"Unfortunately, even if you COULD \"change on the fly\" the comparison-related special methods of the sets' items (__lt__ and friends -- actually, only __eq__ needed the way sets are currently implemented, but that's an implementatio detail) -- and you can't, because they belong to a built-in type, str -- that wouldn't suffice, because __hash__ is also crucial and by the time you want to do your intersection it's already been applied, putting the sets' items in different hash buckets from where they'd need to end up to make intersection work the way you want (i.e., no guarantee that 'Today' and 'today' are in the same bucket).\nSo, for your purposes, you inevitably need to build new data structures -- if you consider it \"inelegant\" to have to do that at all, you're plain out of luck: built-in sets just don't carry around the HUGE baggage and overhead that would be needed to allow people to change comparison and hashing functions, which would bloat things by 10 times (or more) for the sae of a need felt in (maybe) one use case in a million.\nIf you have frequent needs connected with case-insensitive comparison, you should consider subclassing or wrapping str (overriding comparison and hashing) to provide a \"case insensitive str\" type cistr -- and then, of course, make sure than only instances of cistr are (e.g.) added to your sets (&c) of interest (either by subclassing set &c, or simply by paying care). To give an oversimplified example...:\nclass ci(str):\n def __hash__(self):\n return hash(self.lower())\n def __eq__(self, other):\n return self.lower() == other.lower()\n\nclass cifrozenset(frozenset):\n def __new__(cls, seq=()):\n return frozenset((ci(x) for x in seq))\n\na = cifrozenset(('Today','I','am','fine'))\nb = cifrozenset(('hello','how','are','you','today'))\n\nprint a.intersection(b)\n\nthis does emit frozenset(['Today']), as per your expressed desire. Of course, in real life you'd probably want to do MUCH more overriding (for example...: the way I have things here, any operation on a cifrozenset returns a plain frozenset, losing the precious case independence special feature -- you'd probably want to ensure that a cifrozenset is returned each time instead, and, while quite feasible, that's NOT trivial).\n",
"First, don't you mean a.intersection(b)? The intersection (if case insensitive) would be set(['today']). The difference would be set(['i', 'am', 'fine'])\nHere are two ideas:\n1.) Write a function to convert the elements of both sets to lowercase and then do the intersection. Here's one way you could do it:\n>>> intersect_with_key = lambda s1, s2, key=lambda i: i: set(map(key, s1)).intersection(map(key, s2))\n>>> fs1 = frozenset('Today I am fine'.split())\n>>> fs2 = frozenset('Hello how are you TODAY'.split())\n>>> intersect_with_key(fs1, fs2)\nset([])\n>>> intersect_with_key(fs1, fs2, key=str.lower)\nset(['today'])\n>>>\n\nThis is not very efficient though because the conversion and new sets would have to be created on each call.\n2.) Extend the frozenset class to keep a case insensitive copy of the elements. Override the intersection method to use the case insensitive copy of the elements. This would be more efficient.\n",
">>> a_, b_ = map(set, [map(str.lower, a), map(str.lower, b)])\n>>> a_ & b_\nset(['today'])\n\nOr... with less maps,\n>>> a_ = set(map(str.lower, a))\n>>> b_ = set(map(str.lower, b))\n>>> a_ & b_\nset(['today'])\n\n"
] |
[
10,
8,
4,
2
] |
[] |
[] |
[
"compare",
"django",
"python"
] |
stackoverflow_0001479979_compare_django_python.txt
|
Q:
Time difference between system date and string, e.g. from directory name?
I would like to write a small script that does the following (and that I can then run using my crontab):
Look into a directory that contains directories whose names are in some date format, e.g. 30-10-09.
Convert the directory name to the date it represents (of course, I could put this information as a string into a file in these directories, that doesn't matter to me).
Compare each date with the current system time and find the one that has a specific time difference to the current system date, e.g. less than two days.
Then, do something with the files in that directory (e.g., paste them together and send an email).
I know a little bash scripting, but I don't know whether bash can itself handle this. I think I could do this in R, but the server where this needs to run doesn't have R.
I'm curious anyway to learn a little bit of either Python or Ruby (both of which are on the server).
Can someone point me in the right direction what might be the best way to do this?
A:
I would suggest using Python. You'll need the following functions; a sketch combining them follows the list:
os.listdir gives you the directory contents, as a list of strings
time.strptime(name, "%d-%m-%y") will try to parse such a string, and return a time tuple. You get a ValueError exception if parsing fails.
time.mktime will convert a time tuple into seconds since the epoch.
time.time returns seconds since the epoch
the smtplib module can send emails, assuming you know what SMTP server to use. Alternatively, you can run /usr/lib/sendmail, through the subprocess module (assuming /usr/lib/sendmail is correctly configured)
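Putting those together, a minimal sketch (the base path, the two-day window and the direction of the comparison are placeholders to adapt):
import os
import time

BASE = '/path/to/dirs'        # placeholder
MAX_AGE = 2 * 24 * 3600       # "less than two days", in seconds

now = time.time()
for name in os.listdir(BASE):
    try:
        t = time.mktime(time.strptime(name, '%d-%m-%y'))
    except ValueError:
        continue              # not a date-named directory
    if 0 <= now - t < MAX_AGE:
        print 'recent:', os.path.join(BASE, name)
        # ...paste the files together and send the email here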
|
Time difference between system date and string, e.g. from directory name?
|
I would like to write a small script that does the following (and that I can then run using my crontab):
Look into a directory that contains directories whose names are in some date format, e.g. 30-10-09.
Convert the directory name to the date it represents (of course, I could put this information as a string into a file in these directories, that doesn't matter to me).
Compare each date with the current system time and find the one that has a specific time difference to the current system date, e.g. less than two days.
Then, do something with the files in that directory (e.g., paste them together and send an email).
I know a little bash scripting, but I don't know whether bash can itself handle this. I think I could do this in R, but the server where this needs to run doesn't have R.
I'm curious anyway to learn a little bit of either Python or Ruby (both of which are on the server).
Can someone point me in the right direction what might be the best way to do this?
|
[
"I would suggest using Python. You'll need the following functions:\n\nos.listdir gives you the directory contents, as a list of strings\ntime.strptime(name, \"%d-%m-%y\") will try to parse such a string, and return a time tuple. You get a ValueError exception if parsing fails.\ntime.mktime will convert a time tuple into seconds since the epoch.\ntime.time returns seconds since the epoch\nthe smtplib module can send emails, assuming you know what SMTP server to use. Alternatively, you can run /usr/lib/sendmail, through the subprocess module (assuming /usr/lib/sendmail is correctly configured)\n\n"
] |
[
1
] |
[] |
[] |
[
"date",
"python",
"scripting"
] |
stackoverflow_0001487450_date_python_scripting.txt
|
Q:
python multiprocessing proxy
I have 2 processes:
the first process is manager.py, started in the background:
from multiprocessing.managers import SyncManager, BaseProxy
from CompositeDict import *
class CompositeDictProxy(BaseProxy):
_exposed_ = ('addChild', 'setName')
def addChild(self, child):
return self._callmethod('addChild', [child])
def setName(self, name):
return self._callmethod('setName', [name])
class Manager(SyncManager):
def __init__(self):
super(Manager, self).__init__(address=('127.0.0.1', 50000), authkey='abracadabra')
def start_Manager():
Manager().get_server().serve_forever()
if __name__=="__main__":
Manager.register('get_plant', CompositeDict, proxytype=CompositeDictProxy)
start_Manager()
and the second is consumer.py supposed to use registered objects defined into the manager:
from manager import *
import time
import random
class Consumer():
def __init__(self):
Manager.register('get_plant')
m = Manager()
m.connect()
plant = m.get_plant()
#plant.setName('alfa')
plant.addChild('beta')
if __name__=="__main__":
Consumer()
Running the manager in the background, and then the consumer, I get the error message:
RuntimeError: maximum recursion depth exceeded,
when using addChild into the consumer, while I can correctly use setName.
Methods addChild and setName belong to CompositeDict, which I suppose should be proxied.
What's wrong?
CompositeDict overrides the native __getattr__ method and is involved in the error message. I suppose that, in some way, the right __getattr__ method is not being used. If so, how could I solve this problem?
The detailed error message is:
Traceback (most recent call last):
File "consumer.py", line 21, in <module>
Consumer()
File "consumer.py", line 17, in __init__
plant.addChild('beta')
File "<string>", line 2, in addChild
File "/usr/lib/python2.5/site-packages/multiprocessing-2.6.1.1-py2.5-linux-i686.egg/multiprocessing/managers.py", line 729, in _callmethod
kind, result = conn.recv()
File "/home/--/--/CompositeDict.py", line 99, in __getattr__
child = self.findChild(name)
File "/home/--/--/CompositeDict.py", line 185, in findChild
for child in self.getAllChildren():
File "/home/--/--/CompositeDict.py", line 167, in getAllChildren
l.extend(child.getAllChildren())
File "/home/--/--/CompositeDict.py", line 165, in getAllChildren
for child in self._children:
File "/home/--/--/CompositeDict.py", line 99, in __getattr__
child = self.findChild(name)
File "/home/--/--/CompositeDict.py", line 185, in findChild
for child in self.getAllChildren():
File "/--/--/prove/CompositeDict.py", line 165, in getAllChildren
for child in self._children:
...
File "/home/--/--/CompositeDict.py", line 99, in __getattr__
child = self.findChild(name)
File "/home/--/--/CompositeDict.py", line 185, in findChild
for child in self.getAllChildren():
RuntimeError: maximum recursion depth exceeded
A:
Besides fixing many other bugs in the above which I assume are accidental (init must be __init__, you're missing several instances of self, misindentation, etc, etc), the key bit is to make the registration in manager.py into:
Manager.register('get_plant', CompositeDict, proxytype=CompositeDictProxy)
no idea what you're trying to accomplish w/that lambda as the second arg, but the second arg must be the callable that makes the type you need, not one that makes a two-items tuple like you're using.
|
python multiprocessing proxy
|
I have 2 processes:
the first process is manager.py, started in the background:
from multiprocessing.managers import SyncManager, BaseProxy
from CompositeDict import *
class CompositeDictProxy(BaseProxy):
_exposed_ = ('addChild', 'setName')
def addChild(self, child):
return self._callmethod('addChild', [child])
def setName(self, name):
return self._callmethod('setName', [name])
class Manager(SyncManager):
def __init__(self):
super(Manager, self).__init__(address=('127.0.0.1', 50000), authkey='abracadabra')
def start_Manager():
Manager().get_server().serve_forever()
if __name__=="__main__":
Manager.register('get_plant', CompositeDict, proxytype=CompositeDictProxy)
start_Manager()
and the second is consumer.py supposed to use registered objects defined into the manager:
from manager import *
import time
import random
class Consumer():
def __init__(self):
Manager.register('get_plant')
m = Manager()
m.connect()
plant = m.get_plant()
#plant.setName('alfa')
plant.addChild('beta')
if __name__=="__main__":
Consumer()
Running the manager in the background, and then the consumer, I get the error message:
RuntimeError: maximum recursion depth exceeded,
when using addChild into the consumer, while I can correctly use setName.
Methods addChild and setName belong to CompositeDict, which I suppose should be proxied.
What's wrong?
CompositeDict overrides the native __getattr__ method and is involved in the error message. I suppose that, in some way, the right __getattr__ method is not being used. If so, how could I solve this problem?
The detailed error message is:
Traceback (most recent call last):
File "consumer.py", line 21, in <module>
Consumer()
File "consumer.py", line 17, in __init__
plant.addChild('beta')
File "<string>", line 2, in addChild
File "/usr/lib/python2.5/site-packages/multiprocessing-2.6.1.1-py2.5-linux-i686.egg/multiprocessing/managers.py", line 729, in _callmethod
kind, result = conn.recv()
File "/home/--/--/CompositeDict.py", line 99, in __getattr__
child = self.findChild(name)
File "/home/--/--/CompositeDict.py", line 185, in findChild
for child in self.getAllChildren():
File "/home/--/--/CompositeDict.py", line 167, in getAllChildren
l.extend(child.getAllChildren())
File "/home/--/--/CompositeDict.py", line 165, in getAllChildren
for child in self._children:
File "/home/--/--/CompositeDict.py", line 99, in __getattr__
child = self.findChild(name)
File "/home/--/--/CompositeDict.py", line 185, in findChild
for child in self.getAllChildren():
File "/--/--/prove/CompositeDict.py", line 165, in getAllChildren
for child in self._children:
...
File "/home/--/--/CompositeDict.py", line 99, in __getattr__
child = self.findChild(name)
File "/home/--/--/CompositeDict.py", line 185, in findChild
for child in self.getAllChildren():
RuntimeError: maximum recursion depth exceeded
|
[
"Besides fixing many other bugs in the above which I assume are accidental (init must be __init__, you're missing several instances of self, misindentation, etc, etc), the key bit is to make the registration in manager.py into:\nManager.register('get_plant', CompositeDict, proxytype=CompositeDictProxy)\n\nno idea what you're trying to accomplish w/that lambda as the second arg, but the second arg must be the callable that makes the type you need, not one that makes a two-items tuple like you're using.\n"
] |
[
1
] |
[] |
[] |
[
"composite",
"multiprocessing",
"proxy",
"python"
] |
stackoverflow_0001486835_composite_multiprocessing_proxy_python.txt
|
Q:
Error in nested for loops (Python)
I am getting an error in the following code. The error message is "Error: Inconsistent indentation detected!"
s=[30,40,50]
a=[5e6,6e6,7e6,8e6,8.5e6,9e6,10e6,12e6]
p=[0.0,0.002,0.004,0.006,0.008,0.01,0.015,0.05,0.1,0.15,0.2]
j=0
b=0
x=0
for j in s:
h=s[j]
print "here is the first loop" +h
for b in a:
c=a[b] #too much indentation
print"here is the second loop" +c #too much indentation
for x in p: #too much indentation
k=p[x]
print"here is the third loop" +k
If there are any other errors, I would be highly obliged if anyone here could correct them.
Thanks.
/Gillani
A:
Once you've cleaned up your tabs and spaces (you should have only tabs or only spaces), you'll need to fix your loops:
s = [30,40,50]
a = [5e6,6e6,7e6,8e6,8.5e6,9e6,10e6,12e6]
p = [0.0,0.002,0.004,0.006,0.008,0.01,0.015,0.05,0.1,0.15,0.2]
for j in s: # within the loop j takes the values 30, then 40, then 50; same for the other loops
    print "here is the first loop", j   # note the comma: you can't concatenate a str and a number
    for b in a:
        print "here is the second loop", b
        for x in p:
            print "here is the third loop", x
Otherwise you'd have IndexError.
A:
SilentGhost is correct -- unlike languages like Javascript, when you write
s = [30, 40, 50]
for j in s:
Then j is not assigned 0, 1, and 2 -- it is assigned the actual values 30, 40, and 50. So there is no need to say, on another line,
h = s[j]
In fact, if you do that, the first time through the loop, it will evaluate as
h = s[30]
Which is way out of bounds for a three-element list, and you'll get an IndexError.
If you really wanted to do it the other way -- if you really needed the indexes as well as the values, you could do something like this:
s = [30, 40, 50]
for j in range(len(s)):
h = s[j]
len(s) gives you the length of s (3, in this case), and the range function makes a new list for you, range(n) contains the integers from 0 to n-1. In this case, range(3) returns [0, 1, 2]
As SilentGhost points out in the comments, this is much more pythonic:
s = [30, 40, 50]
for (j, h) in enumerate(s):
# do stuff here
enumerate(s) returns the three pairs (0, 30), (1, 40), and (2, 50), in that order. With that, you have the indexes into s, as well as the actual element, at the same time.
A:
Your three lines initializing your variables to zero are at a different indentation than the rest of your code. Even the code display here on Stack Overflow shows that.
Also, check that you haven't mixed tabs and spaces.
A:
I also think it's a tab/space mixing problem. Some editors, like TextMate, have the option of displaying 'invisible' characters, like tab and newline. It comes in very handy when you code, especially in Python.
A:
This part:
for b in a:
c=a[b] #too much indentation
the "c=a[b]" is indented 8 spaces instead of 4. It needs to be only 4 spaces (ie. one python indentation).
And Ian is right, the "for x in y" syntax is different than other languages.
list1 = [10, 20, 30]
for item in list1:
print item
The output will be 10, 20, 30 (each on its own line), not the indices 0, 1, 2.
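Putting the answers together, a minimal corrected version of the original program might look like this (a sketch: it iterates over the values directly and uses one consistent four-space indent per nesting level):
s = [30, 40, 50]
a = [5e6, 6e6, 7e6, 8e6, 8.5e6, 9e6, 10e6, 12e6]
p = [0.0, 0.002, 0.004, 0.006, 0.008, 0.01, 0.015, 0.05, 0.1, 0.15, 0.2]

for h in s:                                  # h is 30, then 40, then 50
    print "here is the first loop", h
    for c in a:                              # c is each value from a
        print "here is the second loop", c
        for k in p:                          # k is each value from p
            print "here is the third loop", k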
|
Error in nested for loops (Python)
|
I am getting an error in the following code. The Error message is "Error: Inconsistent indentation detected!"
s=[30,40,50]
a=[5e6,6e6,7e6,8e6,8.5e6,9e6,10e6,12e6]
p=[0.0,0.002,0.004,0.006,0.008,0.01,0.015,0.05,0.1,0.15,0.2]
j=0
b=0
x=0
for j in s:
h=s[j]
print "here is the first loop" +h
for b in a:
c=a[b] #too much indentation
print"here is the second loop" +c #too much indentation
for x in p: #too much indentation
k=p[x]
print"here is the third loop" +k
If there is any other error I will be highly obliged if anyone here could correct me.
Thanks.
/Gillani
|
[
"Once you cleaned your tabs and spaces (you should have only tabs or only spaces), you'd need to fix your loops:\ns = [30,40,50]\na = [5e6,6e6,7e6,8e6,8.5e6,9e6,10e6,12e6]\np = [0.0,0.002,0.004,0.006,0.008,0.01,0.015,0.05,0.1,0.15,0.2]\n\nfor j in s: # within loop j is being 30 then 40 then 50, same true for other loops\n print \"here is the first loop\" + j\n for b in a:\n print\"here is the second loop\" + b\n for x in p:\n print\"here is the third loop\" + x\n\nOtherwise you'd have IndexError.\n",
"SilentGhost is correct -- unlike languages like Javascript, when you write\ns = [30, 40, 50]\nfor j in s:\n\nThen j is not assigned 0, 1, and 2 -- it is assigned the actual values 30, 40, and 50. So there is no need to say, on another line,\nh = s[j]\n\nIn fact, if you do that, the first time through the loop, it will evaluate as\nh = s[30]\n\nWhich is way out of bounds for a three-element list, and you'll get an IndexError.\nIf you really wanted to do it the other way -- if you really needed the indexes as well as the values, you could do something like this:\ns = [30, 40, 50]\nfor j in range(len(s)):\n h = s[j]\n\nlen(s) gives you the length of s (3, in this case), and the range function makes a new list for you, range(n) contains the integers from 0 to n-1. In this case, range(3) returns [0, 1, 2]\nAs SilentGhost points out in the comments, this is much more pythonic:\ns = [30, 40, 50]\nfor (j, h) in enumerate(s):\n # do stuff here\n\nenumerate(s) returns the three pairs (0, 30), (1, 40), and (2, 50), in that order. With that, you have the indexes into s, as well as the actual element, at the same time.\n",
"Your three lines initing your variable to zero are at a different indentation than the rest of your code. Even the code display here on stackoverflow shows that. \nAlso, check that you haven't mixed tabs and spaces.\n",
"I also think it's a tab/space mixing problem. Some editors, like Textmate, has the option for displaying 'invisible' characters, like tab and newline. Comes very handy when you code, especially in Python.\n",
"This part:\n for b in a:\n c=a[b] #too much indentation\n\nthe \"c=a[b]\" is indented 8 spaces instead of 4. It needs to be only 4 spaces (ie. one python indentation).\nAnd Ian is right, the \"for x in y\" syntax is different than other languages.\n\nlist1 = [10, 20, 30]\nfor item in list1:\n print item\n\nThe output will be: \"10, 20, 30\", not \"1, 2, 3\".\n"
] |
[
6,
5,
2,
1,
0
] |
[] |
[] |
[
"python",
"syntax"
] |
stackoverflow_0001487561_python_syntax.txt
|
Q:
Python: pass c++ object to a script, then invoke extending c++ function from script
First of all, the problem is that the program fails with a double memory free ...
The deal is:
I have
FooCPlusPlus *obj;
and I pass it to my script. It works fine. Like this:
PyObject *pArgs, *pValue;
pArgs = Py_BuildValue("((O))", obj);
pValue = PyObject_CallObject(pFunc, pArgs);
where pFunc is a python function...
So, my script has function, where I use obj.
def main(args):
...
pythonObj = FooPython(args[0])
...
# hardcore calculation of "x"
...
...
pythonObj.doWork(x)
Of course I've defined python class
class FooPython:
def __init__(self, data):
self._base = data
    def doWork(self, arg):
import extend_module
extend_module.bar(self._base, arg)
"Extend_module" is an extension c++ module where I've defined function "bar".
I expected that "bar" function would work fine, but instead of it I got memory errors: "double memory free or corruption".
Here is "bar" function:
static PyObject* bar(PyObject *self, PyObject *args)
{
PyObject *pyFooObject = 0;
int arg;
    int ok = PyArg_ParseTuple(args,"Oi",&pyFooObject, &arg);
if(!ok) return 0;
void * temp = PyCObject_AsVoidPtr(pyFooObject);
FooCPlusPlus* obj = static_cast<FooCPlusPlus*>(temp);
obj->method(arg); // some c++ method
    return PyCObject_FromVoidPtr((void *) obj, NULL);
}
It fails at "bar"'s return statement...
A:
Well, finally I know where the problem was:
we should return the input args from the "bar" function:
return args;
instead of
return PyCObject_FromVoidPtr((void *) obj, NULL);
Note that args is a borrowed reference inside "bar", so strictly speaking it should be Py_INCREF(args); return args; (or simply Py_RETURN_NONE if no result is needed); returning a borrowed reference without incrementing it can itself corrupt the reference counts.
|
Python: pass c++ object to a script, then invoke extending c++ function from script
|
First of all, the problem is that the program fails with a double memory free ...
The deal is:
I have
FooCPlusPlus *obj;
and I pass it to my script. It works fine. Like this:
PyObject *pArgs, *pValue;
pArgs = Py_BuildValue("((O))", obj);
pValue = PyObject_CallObject(pFunc, pArgs);
where pFunc is a python function...
So, my script has function, where I use obj.
def main(args):
...
pythonObj = FooPython(args[0])
...
# hardcore calculation of "x"
...
...
pythonObj.doWork(x)
Of course I've defined python class
class FooPython:
def __init__(self, data):
self._base = data
    def doWork(self, arg):
import extend_module
extend_module.bar(self._base, arg)
"Extend_module" is an extension c++ module where I've defined function "bar".
I expected that "bar" function would work fine, but instead of it I got memory errors: "double memory free or corruption".
Here is "bar" function:
static PyObject* bar(PyObject *self, PyObject *args)
{
PyObject *pyFooObject = 0;
int arg;
    int ok = PyArg_ParseTuple(args,"Oi",&pyFooObject, &arg);
if(!ok) return 0;
void * temp = PyCObject_AsVoidPtr(pyFooObject);
FooCPlusPlus* obj = static_cast<FooCPlusPlus*>(temp);
obj->method(arg); // some c++ method
    return PyCObject_FromVoidPtr((void *) obj, NULL);
}
It fails at "bar"'s return statement...
|
[
"Well, finally I know where the problem was:\nwe should return from \"bar\" function input args:\nreturn args;\n\ninstead of\nreturn PyCObject_FromVoidPtr((void *) ruleHandler, NULL);\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"reference_counting"
] |
stackoverflow_0001487001_python_reference_counting.txt
|
Q:
Server side clusters of coordinates based on zoom level
Thanks to this answer I managed to come up with a temporary solution to my problem.
However, with a list of 6000 points that grows everyday it's becoming slower and slower.
I can't use a third party service* therefore I need to come up with my own solution.
Here are my requirements:
Clustering of the coordinates needs to work with any zoom level of the map.
All clusters need to be cached.
Ideally there won't be a need to cluster (calculate distances) on all points if a new point is added.
So far I have implemented a quadtree that takes the four boundaries of my map and returns whatever coordinates are within the viewable section of the map.
What I need and I know this isn't easy is to have clusters of the points returned from the DB (postgres).
A:
I don't see why you have to "cluster" on the fly. Summarize at each zoom level at a resolution you're happy with.
Simply have a structure of X, Y, # of links. When someone adds a link, you insert the real locations (Zoom level max, or whatever), then start bubbling up from there.
Eventually you'll have 10 sets of distinct coordinates if you have 10 zoom levels - one for each different zoom level.
The calculation is trivial, and you only have to do it once.
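A minimal Python sketch of this precomputation (the names, grid scheme and zoom-level count here are illustrative assumptions, not from the original post): each new point bumps a counter in one grid cell per zoom level, so nothing is ever re-clustered.
from collections import defaultdict

NUM_ZOOM_LEVELS = 10  # assumed; use however many levels your map has
# clusters[zoom][(cell_x, cell_y)] -> number of points in that cell
clusters = [defaultdict(int) for _ in range(NUM_ZOOM_LEVELS)]

def cell_size(zoom):
    # coarser cells at lower zoom levels; tune to your tile scheme
    return 360.0 / (2 ** zoom)

def add_point(lon, lat):
    # O(number of zoom levels) work per insert -- old points untouched
    for zoom in range(NUM_ZOOM_LEVELS):
        size = cell_size(zoom)
        cell = (int(lon // size), int(lat // size))
        clusters[zoom][cell] += 1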
A:
I am currently doing dynamic server-side clustering of about 2,000 markers, but it runs pretty quick up to 20,000. You can see discussion of my algorithm here:
Map Clustering Algorithm
Whenever the user moves the map I send a request with the zoom level and the boundaries of the view to the server, which clusters the viewable markers and sends it back to the client.
I don't cache the clusters because the markers can be dynamically filtered and searched - but if they were pre-clustered it would be super fast!
|
Server side clusters of coordinates based on zoom level
|
Thanks to this answer I managed to come up with a temporary solution to my problem.
However, with a list of 6000 points that grows everyday it's becoming slower and slower.
I can't use a third party service* therefore I need to come up with my own solution.
Here are my requirements:
Clustering of the coordinates needs to work with any zoom level of the map.
All clusters need to be cached.
Ideally there won't be a need to cluster (calculate distances) on all points if a new point is added.
So far I have implemented a quadtree that takes the four boundaries of my map and returns whatever coordinates are within the viewable section of the map.
What I need and I know this isn't easy is to have clusters of the points returned from the DB (postgres).
|
[
"I don't see why you have to \"cluster\" on the fly. Summarize at each zoom level at a resolution you're happy with.\nSimply have a structure of X, Y, # of links. When someone adds a link, you insert the real locations (Zoom level max, or whatever), then start bubbling up from there.\nEventually you'll have 10 sets of distinct coordinates if you have 10 zoom levels - one for each different zoom level.\nThe calculation is trivial, and you only have to do it once.\n",
"I am currently doing dynamic server-side clustering of about 2,000 markers, but it runs pretty quick up to 20,000. You can see discussion of my algorithm here:\nMap Clustering Algorithm\nWhenever the user moves the map I send a request with the zoom level and the boundaries of the view to the server, which clusters the viewable markers and sends it back to the client. \nI don't cache the clusters because the markers can be dynamically filtered and searched - but if they were pre-clustered it would be super fast!\n"
] |
[
2,
2
] |
[] |
[] |
[
"cluster_analysis",
"google_maps",
"google_maps_markers",
"postgresql",
"python"
] |
stackoverflow_0001487704_cluster_analysis_google_maps_google_maps_markers_postgresql_python.txt
|
Q:
Return a list of dictionaries that match the corresponding list of values in python
For example, this is my list of dictionaries:
[{'name': 'John', 'color': 'red' },
{'name': 'Bob', 'color': 'green'},
{'name': 'Tom', 'color': 'blue' }]
Based on the list ['blue', 'red', 'green'] I want to return the following:
[{'name': 'Tom', 'color': 'blue' },
{'name': 'John', 'color': 'red' },
{'name': 'Bob', 'color': 'green'}]
A:
This might be a little naive, but it works:
data = [
{'name':'John', 'color':'red'},
{'name':'Bob', 'color':'green'},
{'name':'Tom', 'color':'blue'}
]
colors = ['blue', 'red', 'green']
result = []
for c in colors:
result.extend([d for d in data if d['color'] == c])
print result
A:
Update:
>>> list_ = [{'c': 3}, {'c': 2}, {'c': 5}]
>>> mp = [3, 5, 2]
>>> sorted(list_, cmp=lambda x, y: cmp(mp.index(x.get('c')), mp.index(y.get('c'))))
[{'c': 3}, {'c': 5}, {'c': 2}]
A:
You can sort using any custom key function.
>>> people = [
{'name': 'John', 'color': 'red'},
{'name': 'Bob', 'color': 'green'},
{'name': 'Tom', 'color': 'blue'},
]
>>> colors = ['blue', 'red', 'green']
>>> sorted(people, key=lambda person: colors.index(person['color']))
[{'color': 'blue', 'name': 'Tom'}, {'color': 'red', 'name': 'John'}, {'color': 'green', 'name': 'Bob'}]
list.index takes linear time though, so if the number of colors can grow, then convert to a faster key lookup.
>>> colorkeys = dict((color, index) for index, color in enumerate(colors))
>>> sorted(people, key=lambda person: colorkeys[person['color']])
[{'color': 'blue', 'name': 'Tom'}, {'color': 'red', 'name': 'John'}, {'color': 'green', 'name': 'Bob'}]
A:
Riffing on Harto's solution:
>>> from pprint import pprint
>>> [{'color': 'red', 'name': 'John'},
... {'color': 'green', 'name': 'Bob'},
... {'color': 'blue', 'name': 'Tom'}]
[{'color': 'red', 'name': 'John'}, {'color': 'green', 'name': 'Bob'}, {'color':
'blue', 'name': 'Tom'}]
>>> data = [
... {'name':'John', 'color':'red'},
... {'name':'Bob', 'color':'green'},
... {'name':'Tom', 'color':'blue'}
... ]
>>> colors = ['blue', 'red', 'green']
>>> result = [d for d in data for c in colors if d['color'] == c]
>>> pprint(result)
[{'color': 'red', 'name': 'John'},
{'color': 'green', 'name': 'Bob'},
{'color': 'blue', 'name': 'Tom'}]
>>>
The main difference is in using a list comprehension to build result.
Edit: What was I thinking? This clearly calls out for the use of the any() expression:
>>> from pprint import pprint
>>> data = [{'name':'John', 'color':'red'}, {'name':'Bob', 'color':'green'}, {'name':'Tom', 'color':'blue'}]
>>> colors = ['blue', 'red', 'green']
>>> result = [d for d in data if any(d['color'] == c for c in colors)]
>>> pprint(result)
[{'color': 'red', 'name': 'John'},
{'color': 'green', 'name': 'Bob'},
{'color': 'blue', 'name': 'Tom'}]
>>>
A:
Here is a simple loop function:
# Here's the people:
people = [{'name':'John', 'color':'red'},
{'name':'Bob', 'color':'green'},
{'name':'Tom', 'color':'blue'}]
# Now we can make a method to get people out in order by color:
def orderpeople(order):
for color in order:
for person in people:
if person['color'] == color:
yield person
order = ['blue', 'red', 'green']
print(list(orderpeople(order)))
Now that will be VERY slow if you have many people. Then you can loop through them only once, but build an index by color:
# Here's the people:
people = [{'name':'John', 'color':'red'},
{'name':'Bob', 'color':'green'},
{'name':'Tom', 'color':'blue'}]
# Now make an index:
colorindex = {}
for each in people:
color = each['color']
if color not in colorindex:
# Note that we want a list here, if several people have the same color.
colorindex[color] = []
colorindex[color].append(each)
# Now we can make a method to get people out in order by color:
def orderpeople(order):
for color in order:
for each in colorindex[color]:
yield each
order = ['blue', 'red', 'green']
print(list(orderpeople(order)))
This will be quite fast even for really big lists.
A:
Given:
people = [{'name':'John', 'color':'red'}, {'name':'Bob', 'color':'green'}, {'name':'Tom', 'color':'blue'}]
colors = ['blue', 'red', 'green']
you can do something like this:
def people_by_color(people, colors):
index = {}
for person in people:
if person.has_key('color'):
index[person['color']] = person
return [index.get(color) for color in colors]
If you're going to do this many times with the same list of dictionaries but different lists of colors you'll want to split the index building out and keep the index around so you don't need to rebuild it every time.
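A sketch of that split (build_color_index is an illustrative name; people and colors as in the snippets above): build the index once, then reuse it for any number of color lists.
def build_color_index(people):
    index = {}
    for person in people:
        if 'color' in person:
            index[person['color']] = person
    return index

index = build_color_index(people)                 # build once
ordered = [index.get(color) for color in colors]  # reuse per color list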
|
Return a list of dictionaries that match the corresponding list of values in python
|
For example, this is my list of dictionaries:
[{'name': 'John', 'color': 'red' },
{'name': 'Bob', 'color': 'green'},
{'name': 'Tom', 'color': 'blue' }]
Based on the list ['blue', 'red', 'green'] I want to return the following:
[{'name': 'Tom', 'color': 'blue' },
{'name': 'John', 'color': 'red' },
{'name': 'Bob', 'color': 'green'}]
|
[
"This might be a little naieve, but it works:\ndata = [\n {'name':'John', 'color':'red'},\n {'name':'Bob', 'color':'green'},\n {'name':'Tom', 'color':'blue'}\n]\ncolors = ['blue', 'red', 'green']\nresult = []\n\nfor c in colors:\n result.extend([d for d in data if d['color'] == c])\n\nprint result\n\n",
"Update:\n>>> list_ = [{'c': 3}, {'c': 2}, {'c': 5}]\n>>> mp = [3, 5, 2]\n>>> sorted(list_, cmp=lambda x, y: cmp(mp.index(x.get('c')), mp.index(y.get('c'))))\n[{'c': 3}, {'c': 5}, {'c': 2}]\n\n",
"You can sort using any custom key function.\n>>> people = [\n {'name': 'John', 'color': 'red'},\n {'name': 'Bob', 'color': 'green'},\n {'name': 'Tom', 'color': 'blue'},\n]\n>>> colors = ['blue', 'red', 'green']\n>>> sorted(people, key=lambda person: colors.index(person['color']))\n[{'color': 'blue', 'name': 'Tom'}, {'color': 'red', 'name': 'John'}, {'color': 'green', 'name': 'Bob'}]\n\nlist.index takes linear time though, so if the number of colors can grow, then convert to a faster key lookup.\n>>> colorkeys = dict((color, index) for index, color in enumerate(colors))\n>>> sorted(people, key=lambda person: colorkeys[person['color']])\n[{'color': 'blue', 'name': 'Tom'}, {'color': 'red', 'name': 'John'}, {'color': 'green', 'name': 'Bob'}]\n\n",
"Riffing on Harto's solution:\n>>> from pprint import pprint\n>>> [{'color': 'red', 'name': 'John'},\n... {'color': 'green', 'name': 'Bob'},\n... {'color': 'blue', 'name': 'Tom'}]\n[{'color': 'red', 'name': 'John'}, {'color': 'green', 'name': 'Bob'}, {'color':\n'blue', 'name': 'Tom'}]\n>>> data = [\n... {'name':'John', 'color':'red'},\n... {'name':'Bob', 'color':'green'},\n... {'name':'Tom', 'color':'blue'}\n... ]\n>>> colors = ['blue', 'red', 'green']\n>>> result = [d for d in data for c in colors if d['color'] == c]\n>>> pprint(result)\n[{'color': 'red', 'name': 'John'},\n {'color': 'green', 'name': 'Bob'},\n {'color': 'blue', 'name': 'Tom'}]\n>>>\n\nThe main difference is in using a list comprehension to build result.\nEdit: What was I thinking? This clearly calls out for the use of the any() expression:\n>>> from pprint import pprint\n>>> data = [{'name':'John', 'color':'red'}, {'name':'Bob', 'color':'green'}, {'name':'Tom', 'color':'blue'}]\n>>> colors = ['blue', 'red', 'green']\n>>> result = [d for d in data if any(d['color'] == c for c in colors)]\n>>> pprint(result)\n[{'color': 'red', 'name': 'John'},\n {'color': 'green', 'name': 'Bob'},\n {'color': 'blue', 'name': 'Tom'}]\n>>>\n\n",
"Here is a simple loop function:\n# Heres the people:\npeople = [{'name':'John', 'color':'red'}, \n {'name':'Bob', 'color':'green'}, \n {'name':'Tom', 'color':'blue'}] \n\n# Now we can make a method to get people out in order by color:\ndef orderpeople(order):\n for color in order:\n for person in people:\n if person['color'] == color:\n yield person\n\norder = ['blue', 'red', 'green']\nprint(list(orderpeople(order)))\n\nNow that will be VERY slow if you have many people. Then you can loop through them only once, but build an index by color:\n# Here's the people:\npeople = [{'name':'John', 'color':'red'}, \n {'name':'Bob', 'color':'green'}, \n {'name':'Tom', 'color':'blue'}] \n\n# Now make an index:\ncolorindex = {}\nfor each in people:\n color = each['color']\n if color not in colorindex:\n # Note that we want a list here, if several people have the same color.\n colorindex[color] = [] \n colorindex[color].append(each)\n\n# Now we can make a method to get people out in order by color:\ndef orderpeople(order):\n for color in order:\n for each in colorindex[color]:\n yield each\n\norder = ['blue', 'red', 'green']\nprint(list(orderpeople(order)))\n\nThis will be quite fast even for really big lists.\n",
"Given:\npeople = [{'name':'John', 'color':'red'}, {'name':'Bob', 'color':'green'}, {'name':'Tom', 'color':'blue'}]\ncolors = ['blue', 'red', 'green']\n\nyou can do something like this: \ndef people_by_color(people, colors):\n index = {}\n for person in people:\n if person.has_key('color'):\n index[person['color']] = person \n return [index.get(color) for color in colors]\n\nIf you're going to do this many times with the same list of dictionaries but different lists of colors you'll want to split the index building out and keep the index around so you don't need to rebuild it every time.\n"
] |
[
4,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0001485660_dictionary_python.txt
|
Q:
What are good python libraries for the following needs?
What are good python libraries for the following needs:
MVC
Domain Abstraction
Database Abstraction
Video library (just to create thumbnails)
I already know that SQLAlchemy is really good for Database Abstraction so don't bother with it unless you want to suggest a better one.
Edit: This might seem stupid to mention but I'm talking about MVC for GUI and not for web, just mentioning for clarification
Edit: Also does the MVC part contain GUI part or can I use a separate library for GUI like PyQt
A:
Have you tried wxWidgets (well, wxPython in fact)?
It has nice documentation (which is always a good thing), and allows creating code in an MVC manner. It's just the GUI library, but allows some simple image manipulation (if it's not good enough for you, try using the Python version of ImageMagick). It uses native controls, so the application looks native on the OS it's being run on.
PyQt on the other hand has even better docs than wxWidgets or wxPython, but I could never get used to the look&feel of its GUI (it's custom, so it doesn't look native on any OS). Because riverbankcomputing couldn't agree with Nokia on a license, Nokia started a project called PySide, which is an LGPL version of the Qt bindings. It's supposed to be finished in early 2010.
A:
django is a pretty good mvc framework with an orm
A:
You could go with http://turbogears.org/ . It's like Django, but uses existing "off the shelf" modules.
TurboGears 2 is built on top of the experience of several next-generation web frameworks, including TurboGears 1 (of course), Django, and Rails. All of these frameworks had limitations which were frustrating in various ways, and TG2 is an answer to that frustration. We wanted something that had:
Real multi-database support
Horizontal data partitioning (sharding)
Support for a variety of JavaScript toolkits, and new widget system to make building ajax heavy apps easier
Support for multiple data-exchange formats.
Built in extensibility via standard WSGI components
|
What are good python libraries for the following needs?
|
What are good python libraries for the following needs:
MVC
Domain Abstraction
Database Abstraction
Video library (just to create thumbnails)
I already know that SQLAlchemy is really good for Database Abstraction so don't bother with it unless you want to suggest a better one.
Edit: This might seem stupid to mention but I'm talking about MVC for GUI and not for web, just mentioning for clarification
Edit: Also does the MVC part contain GUI part or can I use a separate library for GUI like PyQt
|
[
"Have you tried wxWidgets (well, wxPython in fact)? \nIt has nice documentation (which is always a good thing), and allows creating code in MVC manner. It's just the GUI library, but allows some simple image manipulation (if it's not good enough for you try using Python version of ImageMagick). It uses native controls, so the application looks native on the OS it's being ran.\nPyQt on the other hand has even better docs than wxWidgets or wxPython, but I could never get used to the look&feel of its GUI (it's custom, so it doesn't look native on any OS). Because riverbankcomputing couldn't agree with nokia on a license nokia started a project called PySide which is a LGPL version of the Qt-bindings. It's supposed to be finished in early 2010.\n",
"django is a pretty good mvc framework with an orm\n",
"You could go with http://turbogears.org/ . Its like Django, but uses \"of the shelves\" existing modules.\n\nTurboGears 2 is the built on top of the experience of several next generation web frameworks including TurboGears 1 (of course), Django, and Rails. All of these frameworks had limitations which were frustrating in various ways, and TG2 is an answer to that frustration. We wanted something that had:\n\nReal multi-database support\nHorizontal data partitioning (sharding)\nSupport for a variety of JavaScript toolkits, and new widget system to make building ajax heavy apps easier\nSupport for multiple data-exchange formats.\nBuilt in extensibility via standard WSGI components\n\n\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"database_abstraction",
"domain_model",
"libraries",
"model_view_controller",
"python"
] |
stackoverflow_0001488691_database_abstraction_domain_model_libraries_model_view_controller_python.txt
|
Q:
scipy 'Minimize the sum of squares of a set of equations'
I face a problem with scipy's 'leastsq' optimisation routine; if I execute the following program it says
raise errors[info][1], errors[info][0]
TypeError: Improper input parameters.
and sometimes index out of range for an array...
from scipy import *
import numpy
from scipy import optimize
from numpy import asarray
from math import *
def func(apar):
apar = numpy.asarray(apar)
x = apar[0]
y = apar[1]
eqn = abs(x-y)
return eqn
Init = numpy.asarray([20.0, 10.0])
x = optimize.leastsq(func, Init, full_output=0, col_deriv=0, factor=100, diag=None, warning=True)
print 'optimized parameters: ',x
print '******* The End ******'
I don't know what the problem with my optimize.leastsq() call on func is; please help me
A:
leastsq works with vectors so the residual function, func, needs to return a vector of length at least two. So if you replace return eqn with return [eqn, 0.], your example will work. Running it gives:
optimized parameters: (array([10., 10.]), 2)
which is one of the many correct answers for the minimum of the absolute difference.
If you want to minimize a scalar function, fmin is the way to go, optimize.fmin(func, Init).
The issue here is that these two functions, although they look the same for scalars, are aimed at different goals. leastsq finds the least squared error, generally from a set of idealized curves, and is just one way of doing a "best fit". On the other hand fmin finds the minimum value of a scalar function.
Obviously yours is a toy example, for which neither of these really makes sense, so which way you go will depend on what your final goal is.
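Still, a minimal runnable sketch contrasting the two calls (using the [eqn, 0.] padding suggested above) may make the difference concrete:
import numpy
from scipy import optimize

def residuals(apar):
    x, y = apar[0], apar[1]
    return [abs(x - y), 0.]   # leastsq wants a vector of residuals

def scalar_func(apar):
    x, y = apar[0], apar[1]
    return abs(x - y)         # fmin wants a single scalar

init = numpy.asarray([20.0, 10.0])
print optimize.leastsq(residuals, init)  # e.g. (array([ 10., 10.]), 2), per the answer above
print optimize.fmin(scalar_func, init)   # scalar minimization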
A:
Just looking at the least squares docs, it might be that your function func is defined incorrectly. You're assuming that you always receive an array of at least length 2, but the optimize function is insanely vague about the length of the array you will receive. You might try writing to screen whatever apar is, to see what you're actually getting.
If you're using something like ipython or the python shell, you ought to be getting stack traces that show you exactly which line the error is occurring on, so start there. If you can't figure it out from there, posting the stack trace would probably help us.
A:
Since you want to minimize a simple scalar function (func() returns a single value, not a list of values), scipy.optimize.leastsq() should be replaced by a call to one of the fmin functions (with the appropriate arguments):
x = optimize.fmin(func, Init)
correctly works!
In fact, leastsq() minimizes the sum of squares of a list of values. It does not appear to work on a (list containing a) single value, as in your example (even though it could, in theory).
|
scipy 'Minimize the sum of squares of a set of equations'
|
I face a problem with scipy's 'leastsq' optimisation routine; if I execute the following program it says
raise errors[info][1], errors[info][0]
TypeError: Improper input parameters.
and sometimes index out of range for an array...
from scipy import *
import numpy
from scipy import optimize
from numpy import asarray
from math import *
def func(apar):
apar = numpy.asarray(apar)
x = apar[0]
y = apar[1]
eqn = abs(x-y)
return eqn
Init = numpy.asarray([20.0, 10.0])
x = optimize.leastsq(func, Init, full_output=0, col_deriv=0, factor=100, diag=None, warning=True)
print 'optimized parameters: ',x
print '******* The End ******'
I don't know what the problem with my optimize.leastsq() call on func is; please help me
|
[
"leastsq works with vectors so the residual function, func, needs to return a vector of length at least two. So if you replace return eqn with return [eqn, 0.], your example will work. Running it gives:\noptimized parameters: (array([10., 10.]), 2)\n\nwhich is one of the many correct answers for the minimum of the absolute difference.\nIf you want to minimize a scalar function, fmin is the way to go, optimize.fmin(func, Init).\nThe issue here is that these two functions, although they look the same for a scalars are aimed at different goals. leastsq finds the least squared error, generally from a set of idealized curves, and is just one way of doing a \"best fit\". On the other hand fmin finds the minimum value of a scalar function.\nObviously yours is a toy example, for which neither of these really makes sense, so which way you go will depend on what your final goal is.\n",
"Just looking at the least squares docs, it might be that your function func is defined incorrectly. You're assuming that you always receive an array of at least length 2, but the optimize function is insanely vague about the length of the array you will receive. You might try writing to screen whatever apar is, to see what you're actually getting.\nIf you're using something like ipython or the python shell, you ought to be getting stack traces that show you exactly which line the error is occurring on, so start there. If you can't figure it out from there, posting the stack trace would probably help us.\n",
"Since you want to minimize a simple scalar function (func() returns a single value, not a list of values), scipy.optimize.leastsq() should be replaced by a call to one of the fmin functions (with the appropriate arguments):\nx = optimize.fmin(func, Init)\n\ncorrectly works!\nIn fact, leastsq() minimizes the sum of squares of a list of values. It does not appear to work on a (list containing a) single value, as in your example (even though it could, in theory).\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"python",
"scipy"
] |
stackoverflow_0001488227_python_scipy.txt
|
Q:
Mimic Python's strip() function in C
I started on a little toy project in C lately and have been scratching my head over the best way to mimic the strip() functionality that is part of the Python string objects.
Reading around on fscanf and sscanf says that the string is processed up to the first whitespace that is encountered.
fgets doesn't help either, as I still have newlines sticking around.
I did try a strchr() to search for a whitespace and setting the character at the returned pointer to '\0' explicitly, but that doesn't seem to work.
A:
Python strings' strip method removes both trailing and leading whitespace. The two halves of the problem are very different when working on a C "string" (array of char, \0 terminated).
For trailing whitespace: set a pointer (or equivalently index) to the existing trailing \0. Keep decrementing the pointer until it hits against the start-of-string, or any non-white character; set the \0 to right after this terminate-backwards-scan point.
For leading whitespace: set a pointer (or equivalently index) to the start of string; keep incrementing the pointer until it hits a non-white character (possibly the trailing \0); memmove the rest-of-string so that the first non-white goes to the start of string (and similarly for everything following).
A:
There is no standard C implementation for a strip() or trim() function. That said, here's the one included in the Linux kernel:
char *strstrip(char *s)
{
size_t size;
char *end;
size = strlen(s);
if (!size)
return s;
end = s + size - 1;
while (end >= s && isspace(*end))
end--;
*(end + 1) = '\0';
while (*s && isspace(*s))
s++;
return s;
}
A:
If you want to remove, in place, the final newline on a line, you can use this snippet:
size_t s = strlen(buf);
if (s && (buf[s-1] == '\n')) buf[--s] = 0;
To faithfully mimic Python's str.strip([chars]) method (the way I interpreted its workings), you need to allocate space for a new string, fill the new string and return it. After that, when you no longer need the stripped string, you need to free the memory it used so there are no memory leaks.
Or you can use C pointers and modify the initial string and achieve a similar result.
Suppose your initial string is "____forty two____\n" and you want to strip all underscores and the '\n'
____forty two___\n
^ ptr
If you change ptr to the 'f' and replace the first '_' after two with a '\0' the result is the same as Python's "____forty two____\n".strip("_\n");
____forty two\0___\n
^ptr
Again, this is not the same as Python. The string is modified in place, there's no 2nd string and you cannot revert the changes (the original string is lost).
A:
I wrote C code to implement this function. I also wrote a few trivial tests to make sure my function does sensible things.
This function writes to a buffer you provide, and should never write past the end of the buffer, so it should not be prone to buffer overflow security issues.
Note: only Test() uses stdio.h, so if you just need the function, you only need to include ctype.h (for isspace()) and string.h (for strlen()).
// strstrip.c -- implement white space stripping for a string in C
//
// This code is released into the public domain.
//
// You may use it for any purpose whatsoever, and you don't need to advertise
// where you got it, but you aren't allowed to sue me for giving you free
// code; all the risk of using this is yours.
#include <ctype.h>
#include <stdio.h>
#include <string.h>
// strstrip() -- strip leading and trailing white space from a string
//
// Copies from sIn to sOut, writing at most lenOut characters.
//
// Returns number of characters in returned string, or -1 on an error.
// If you get -1 back, then nothing was written to sOut at all.
int
strstrip(char *sOut, unsigned int lenOut, char const *sIn)
{
char const *pStart, *pEnd;
unsigned int len;
char *pOut;
// if there is no room for any output, or a null pointer, return error!
if (0 == lenOut || !sIn || !sOut)
return -1;
pStart = sIn;
pEnd = sIn + strlen(sIn) - 1;
// skip any leading whitespace
while (*pStart && isspace(*pStart))
++pStart;
// skip any trailing whitespace
while (pEnd >= sIn && isspace(*pEnd))
--pEnd;
pOut = sOut;
len = 0;
// copy into output buffer
while (pStart <= pEnd && len < lenOut - 1)
{
*pOut++ = *pStart++;
++len;
}
// ensure output buffer is properly terminated
*pOut = '\0';
return len;
}
void
Test(const char *s)
{
int len;
char buf[1024];
len = strstrip(buf, sizeof(buf), s);
if (!s)
s = "**null**"; // don't ask printf to print a null string
if (-1 == len)
*buf = '\0'; // don't ask printf to print garbage from buf
printf("Input: \"%s\" Result: \"%s\" (%d chars)\n", s, buf, len);
}
main()
{
Test(NULL);
Test("");
Test(" ");
Test(" ");
Test("x");
Test(" x");
Test(" x ");
Test(" x y z ");
Test("x y z");
}
|
Mimic Python's strip() function in C
|
I started on a little toy project in C lately and have been scratching my head over the best way to mimic the strip() functionality that is part of the Python string objects.
Reading around on fscanf and sscanf says that the string is processed up to the first whitespace that is encountered.
fgets doesn't help either, as I still have newlines sticking around.
I did try a strchr() to search for a whitespace and setting the character at the returned pointer to '\0' explicitly, but that doesn't seem to work.
|
[
"Python strings' strip method removes both trailing and leading whitespace. The two halves of the problem are very different when working on a C \"string\" (array of char, \\0 terminated).\nFor trailing whitespace: set a pointer (or equivalently index) to the existing trailing \\0. Keep decrementing the pointer until it hits against the start-of-string, or any non-white character; set the \\0 to right after this terminate-backwards-scan point.\nFor leading whitespace: set a pointer (or equivalently index) to the start of string; keep incrementing the pointer until it hits a non-white character (possibly the trailing \\0); memmove the rest-of-string so that the first non-white goes to the start of string (and similarly for everything following).\n",
"There is no standard C implementation for a strip() or trim() function. That said, here's the one included in the Linux kernel:\nchar *strstrip(char *s)\n{\n size_t size;\n char *end;\n\n size = strlen(s);\n\n if (!size)\n return s;\n\n end = s + size - 1;\n while (end >= s && isspace(*end))\n end--;\n *(end + 1) = '\\0';\n\n while (*s && isspace(*s))\n s++;\n\n return s;\n}\n\n",
"If you want to remove, in place, the final newline on a line, you can use this snippet:\nsize_t s = strlen(buf);\nif (s && (buf[s-1] == '\\n')) buf[--s] = 0;\n\nTo faithfully mimic Python's str.strip([chars]) method (the way I interpreted its workings), you need to allocate space for a new string, fill the new string and return it. After that, when you no longer need the stripped string you need to free the memory it used to have no memory leaks.\nOr you can use C pointers and modify the initial string and achieve a similar result.\nSuppose your initial string is \"____forty two____\\n\" and you want to strip all underscores and the '\\n'\n____forty two___\\n\n^ ptr\n\nIf you change ptr to the 'f' and replace the first '_' after two with a '\\0' the result is the same as Python's \"____forty two____\\n\".strip(\"_\\n\");\n____forty two\\0___\\n\n ^ptr\n\nAgain, this is not the same as Python. The string is modified in place, there's no 2nd string and you cannot revert the changes (the original string is lost).\n",
"I wrote C code to implement this function. I also wrote a few trivial tests to make sure my function does sensible things.\nThis function writes to a buffer you provide, and should never write past the end of the buffer, so it should not be prone to buffer overflow security issues.\nNote: only Test() uses stdio.h, so if you just need the function, you only need to include ctype.h (for isspace()) and string.h (for strlen()).\n// strstrip.c -- implement white space stripping for a string in C\n//\n// This code is released into the public domain.\n//\n// You may use it for any purpose whatsoever, and you don't need to advertise\n// where you got it, but you aren't allowed to sue me for giving you free\n// code; all the risk of using this is yours.\n\n\n\n#include <ctype.h>\n#include <stdio.h>\n#include <string.h>\n\n\n\n// strstrip() -- strip leading and trailing white space from a string\n//\n// Copies from sIn to sOut, writing at most lenOut characters.\n//\n// Returns number of characters in returned string, or -1 on an error.\n// If you get -1 back, then nothing was written to sOut at all.\n\nint\nstrstrip(char *sOut, unsigned int lenOut, char const *sIn)\n{\n char const *pStart, *pEnd;\n unsigned int len;\n char *pOut;\n\n // if there is no room for any output, or a null pointer, return error!\n if (0 == lenOut || !sIn || !sOut)\n return -1;\n\n pStart = sIn;\n pEnd = sIn + strlen(sIn) - 1;\n\n // skip any leading whitespace\n while (*pStart && isspace(*pStart))\n ++pStart;\n\n // skip any trailing whitespace\n while (pEnd >= sIn && isspace(*pEnd))\n --pEnd;\n\n pOut = sOut;\n len = 0;\n\n // copy into output buffer\n while (pStart <= pEnd && len < lenOut - 1)\n {\n *pOut++ = *pStart++;\n ++len;\n }\n\n\n // ensure output buffer is properly terminated\n *pOut = '\\0';\n return len;\n}\n\n\nvoid\nTest(const char *s)\n{\n int len;\n char buf[1024];\n\n len = strstrip(buf, sizeof(buf), s);\n\n if (!s)\n s = \"**null**\"; // don't ask printf to print a null string\n if (-1 == len)\n *buf = '\\0'; // don't ask printf to print garbage from buf\n\n printf(\"Input: \\\"%s\\\" Result: \\\"%s\\\" (%d chars)\\n\", s, buf, len);\n}\n\n\nmain()\n{\n Test(NULL);\n Test(\"\");\n Test(\" \");\n Test(\" \");\n Test(\"x\");\n Test(\" x\");\n Test(\" x \");\n Test(\" x y z \");\n Test(\"x y z\");\n}\n\n"
] |
[
14,
12,
1,
0
] |
[] |
[] |
[
"c",
"fgets",
"python",
"string"
] |
stackoverflow_0001488372_c_fgets_python_string.txt
|
Q:
Python matplotlib 3d bar function
How to add strings to the axes in Axes3D instead of numbers?
I just started using matplotlib. I have used Axes3D to plot, similar to the example given on their website (http://matplotlib.sourceforge.net/examples/mplot3d/bars3d_demo.html). Note that one must use the latest version (matplotlib 0.99.1), otherwise the axes get a bit freaky. The example uses this code:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = Axes3D(fig)
for c, z in zip(['r', 'g', 'b', 'y'], [30, 20, 10, 0]):
xs = np.arange(20)
ys = np.random.rand(20)
ax.bar(xs, ys, zs=z, zdir='y', color=c, alpha=0.8)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
Now to my problem: I can't rename the axes to what I want. Instead of numbers I need to give the bars string names. I only want to do this in two dimensions - the x dimension, and the z dimension (depth). I have tried to use this command:
ax.set_xticklabels('First staple',(0,0))
I get no error message but nevertheless no string. If anyone can answer this question I would be most delighted!
A:
You're actually on the right path there, but instead of a string you will want to pass a list or tuple to set_xticklabels(). You may also wish to adjust the center location for each label with set_xticks().
You may also find the function get_xmajorticklabels() useful. It will return the rendered tick labels, so you may also be able to adjust the values from its results.
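For example, continuing the bar demo from the question just before plt.show() (the tick positions and label strings here are made-up placeholders):
ax.set_xticks([0, 5, 10, 15])                                   # where the labels sit
ax.set_xticklabels(['First bar', 'Second', 'Third', 'Fourth'])  # what they say
# the z (depth) axis can be treated the same way where your mplot3d
# version provides set_zticks()/set_zticklabels()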
A:
Alternatively, retrieve the axis data, work on that, and set it back on the renderer; that turned out to be much simpler for me ;) Other than that, what NerdyNick says.
|
Python matplotlib 3d bar function
|
How to add strings to the axes in Axes3D instead of numbers?
I just started using matplotlib. I have used Axes3D to plot, similar to the example given on their website (http://matplotlib.sourceforge.net/examples/mplot3d/bars3d_demo.html). Note that one must use the latest version (matplotlib 0.99.1), otherwise the axes get a bit freaky. The example uses this code:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = Axes3D(fig)
for c, z in zip(['r', 'g', 'b', 'y'], [30, 20, 10, 0]):
xs = np.arange(20)
ys = np.random.rand(20)
ax.bar(xs, ys, zs=z, zdir='y', color=c, alpha=0.8)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
Now to my problem: I can't rename the axes to what I want. Instead of numbers I need to give the bars string names. I only want to do this in two dimensions - the x dimension, and the z dimension (depth). I have tried to use this command:
ax.set_xticklabels('First staple',(0,0))
I get no error message but nevertheless no string. If anyone can answer this question I would be most delighted!
|
[
"Your actually on the right path there. but instead of a string you will want to pass a list or tuple to set_xticklabels(). You may also wish to adjust the center location for the label with set_xticks().\nYou may also find this function of use get_xmajorticklabels(). It will return the rendered tick labels. So you may also be able to adjust the value from its results.\n",
"Also retrieve the axis data, work on that and set it back to the renderer, turned out to be much simpler for me ;) other than that, what NerdyNick says\n"
] |
[
2,
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0001487463_matplotlib_python.txt
|
Q:
How to draw a class's metaclass in UML?
If class A is created by its __metaclass__ M, how does the arrow look in UML?
The stereotype syntax seems to be related.
I didn't look in Python UML tools yet.
A:
A metaclass is drawn using the class notation plus the <<metaclass>> stereotype. The relationship between a class and its metaclass can be defined using a dependency relationship between the two (dashed line with the arrow pointing to the metaclass) annotated with the stereotype <<instantiate>>.
A:
This answer is from the UML 2.2 Superstructure Specification:
More class answer:
"For instance, the «create» keyword can appear next to an operation name to indicate that it is a constructor operation, and it can also be used to label a Usage dependency between two Classes to indicate that one Class creates instances of the other." (Pg 690[706-AdobeReader], Appendix B, Unnumbered 4th paragraph, 1st on the page) I think this would apply to meta-classes.
Stereotype answer:
This is kind of an answer, but does not imply "create", which is the word you used in your post, though that might have just been an ambiguous word choice. The notation is a normal line with a filled triangle. I have also seen the keyword <<extend>> used in tools like Rational Software Modeler. (Pg 657[673-AdobeReader] Figure 18.3 and 659 Figure 18.5, Profile Section)
You might also want to clarify if you mean meta-class in the MOF/Model definition sense or in some language or other context.
Let me know if you refine your question.
A:
I would draw a dependency with the stereotype «metaclass». This is not a predefined stereotype, but it should make it clear to the reader what kind of dependency this is.
|
How to draw a class's metaclass in UML?
|
If class A is created by its __metaclass__ M, how does the arrow look in UML?
The stereotype syntax seems to be related.
I didn't look in Python UML tools yet.
|
[
"A metaclass is drawn using the class notation plus the <<metaclass>> stereotype. The relationship between a class and its metaclass can be defined using a dependency relationship between the two (dashed line with the arrow pointing to the metaclass) annotated with the stereotype <<instantiate>>.\n",
"This answer is from the UML 2.2 Superstructure Specification:\nMore class answer:\n\"For instance, the «create» keyword can appear next to an operation name to indicate that it is a constructor operation, and it can also be used to label a Usage dependency between two Classes to indicate that one Class creates instances of the other.\" (Pg 690[706-AdobeReader], Appendix B, Unnumbered 4th paragraph, 1st on the page) I think this would apply to meta-classes.\nStereotype answer:\nThis is kind of an answer, but does not infer \"create\" which is the word you used in your post, but might have just been an ambiguous word choice. The notation is an normal line with a filled triangle. I have also seen the keyword of <<extend>> used in tools like Rational Software Modeler. (Pg 657[673-AdobeReader] Figure 18.3 and 659 Figure 18.5, Profile Section)\nYou might also want to clarify if you mean meta-class in the MOF/Model definition sense or in some language or other context.\nLet me know if you refine your question.\n",
"I would draw a dependency with the stereotype «metaclass». This is not a predefined stereotype, but should make it clear to the read what kind of dependency this is.\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"metadata",
"python",
"uml"
] |
stackoverflow_0001483273_metadata_python_uml.txt
|
Q:
Evaluating into two or more lists
Howdy, codeboys and codegirls!
I have come across a simple problem with a seemingly easy solution. But being a Python neophyte I feel that there is a better approach somewhere.
Say you have a list of mixed strings. There are two basic types of strings in the sack - ones with "=" in them (a=potato) and ones without (Lady Jane). What you need is to sort them into two lists.
The obvious approach is to:
for arg in arguments:
if '=' in arg:
equal.append(arg)
else:
plain.append(arg)
Is there any other, more elegant way to do it? Something like:
equal = [arg for arg in arguments if '=' in arg]
but to sort into multiple lists?
And what if you have more than one type of data?
A:
Try
for arg in arguments:
lst = equal if '=' in arg else plain
lst.append(arg)
or (holy ugly)
for arg in arguments:
(equal if '=' in arg else plain).append(arg)
A third option: Create a class which offers append() and which sorts into several lists.
A:
You can use itertools.groupby() for this:
import itertools
f = lambda x: '=' in x
groups = itertools.groupby(sorted(data, key=f), key=f)
for k, g in groups:
print k, list(g)
A:
I would just go for two list comprehensions. While that does incur some overhead (two loops over the list), it is more Pythonic to use a list comprehension than a for loop. It's also (in my mind) much more readable than using all sorts of really cool tricks that fewer people know about.
A:
def which_list(s):
if "=" in s:
return 1
return 0
lists = [[], []]
for arg in arguments:
lists[which_list(arg)].append(arg)
plain, equal = lists
If you have more types of data, add an if clause to which_list, and initialize lists to more empty lists.
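For more than two kinds of data, the same idea reads nicely with a defaultdict keyed by a classifier function (a sketch; the classification rules are just examples):
from collections import defaultdict

def kind(arg):
    if '=' in arg:
        return 'equal'
    if '+' in arg:
        return 'plus'
    return 'plain'

arguments = ['a=potato', 'Lady Jane', 'a+b']
groups = defaultdict(list)
for arg in arguments:
    groups[kind(arg)].append(arg)

# groups['equal'] == ['a=potato'], groups['plain'] == ['Lady Jane'],
# groups['plus'] == ['a+b']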
A:
I would go for Edan's approach, e.g.
equal = [arg for arg in arguments if '=' in arg]
plain = [arg for arg in arguments if '=' not in arg]
A:
I read somewhere here that you might be interested in a solution that
will work for more than two identifiers (equals sign and space).
The following solution just requires you update the uniques set with
anything you would like to match, the results are placed in a dictionary of lists
with the identifier as the key.
uniques = set('= ')
matches = dict((key, []) for key in uniques)
for arg in args:
key = set(arg) & uniques
try:
matches[key.pop()].append(arg)
except KeyError:
# code to handle where arg does not contain = or ' '.
Now the above code assumes that you will only have a single match for your identifier
in your arg, i.e. that you don't have an arg that looks like this: 'John= equalspace'.
You will have to also think about how you would like to treat cases that don't match anything in the set (KeyError occurs.)
A:
Another approach is to use the filter function, although it's not the most efficient solution.
Example:
>>> l = ['a=s','aa','bb','=', 'a+b']
>>> l2 = filter(lambda s: '=' in s, l)
>>> l3 = filter(lambda s: '+' in s, l)
>>> l2
['a=s', '=']
>>> l3
['a+b']
A:
I put this together, and then see that Ned Batchelder was already on this same tack. I chose to package the splitting method instead of the list chooser, though, and to just use the implicit 0/1 values for False and True.
def split_on_condition(source, condition):
ret = [],[]
for s in source:
ret[condition(s)].append(s)
return ret
src = "z=1;q=2;lady jane;y=a;lucy in the sky".split(';')
plain,equal = split_on_condition(src, lambda s:'=' in s)
A:
Your approach is the best one. For sorting just into two lists it can't get clearer than that. If you want it to be a one-liner, encapsulate it in a function:
def classify(arguments):
equal, plain = [], []
for arg in arguments:
if '=' in arg:
equal.append(arg)
else:
plain.append(arg)
return equal, plain
equal, plain = classify(lst)
|
Evaluating into two or more lists
|
Howdy, codeboys and codegirls!
I have come across a simple problem with a seemingly easy solution. But being a Python neophyte I feel that there is a better approach somewhere.
Say you have a list of mixed strings. There are two basic types of strings in the sack - ones with "=" in them (a=potato) and ones without (Lady Jane). What you need is to sort them into two lists.
The obvious approach is to:
for arg in arguments:
if '=' in arg:
equal.append(arg)
else:
plain.append(arg)
Is there any other, more elegant way to do it? Something like:
equal = [arg for arg in arguments if '=' in arg]
but to sort into multiple lists?
And what if you have more than one type of data?
|
[
"Try\nfor arg in arguments:\n lst = equal if '=' in arg else plain\n lst.append(arg)\n\nor (holy ugly)\nfor arg in arguments:\n (equal if '=' in arg else plain).append(arg)\n\nA third option: Create a class which offers append() and which sorts into several lists.\n",
"You can use itertools.groupby() for this:\nimport itertools\nf = lambda x: '=' in x\ngroups = itertools.groupby(sorted(data, key=f), key=f)\nfor k, g in groups:\n print k, list(g)\n\n",
"I would just go for two list comprehensions. While that does incur some overhead (two loops on the list), it is more Pythonic to use a list comprehension than to use a for. It's also (in my mind) much more readable than using all sorts of really cool tricks, but that less people know about.\n",
"def which_list(s):\n if \"=\" in s: \n return 1\n return 0\n\nlists = [[], []]\n\nfor arg in arguments:\n lists[which_list(arg)].append(arg)\n\nplain, equal = lists\n\nIf you have more types of data, add an if clause to which_list, and initialize lists to more empty lists.\n",
"I would go for Edan's approach, e.g.\nequal = [arg for arg in arguments if '=' in arg]\nplain = [arg for arg in arguments if '=' not in arg]\n\n",
"I read somewhere here that you might be interested in a solution that\nwill work for more than two identifiers (equals sign and space). \nThe following solution just requires you update the uniques set with\nanything you would like to match, the results are placed in a dictionary of lists\nwith the identifier as the key. \nuniques = set('= ')\nmatches = dict((key, []) for key in uniques)\n\nfor arg in args:\n key = set(arg) & uniques\n try:\n matches[key.pop()].append(arg)\n except KeyError:\n # code to handle where arg does not contain = or ' '.\n\nNow the above code assumes that you will only have a single match for your identifier\nin your arg. I.e that you don't have an arg that looks like this 'John= equalspace'.\nYou will have to also think about how you would like to treat cases that don't match anything in the set (KeyError occurs.)\n",
"Another approach is to use the filter function, although it's not the most efficient solution.\nExample:\n>>> l = ['a=s','aa','bb','=', 'a+b']\n>>> l2 = filter(lambda s: '=' in s, l)\n>>> l3 = filter(lambda s: '+' in s, l)\n>>> l2\n['a=s', '=']\n>>> l3\n['a+b']\n\n",
"I put this together, and then see that Ned Batchelder was already on this same tack. I chose to package the splitting method instead of the list chooser, though, and to just use the implicit 0/1 values for False and True.\ndef split_on_condition(source, condition):\n ret = [],[]\n for s in source:\n ret[condition(s)].append(s)\n return ret\n\nsrc = \"z=1;q=2;lady jane;y=a;lucy in the sky\".split(';')\n\nplain,equal = split_on_condition(src, lambda s:'=' in s)\n\n",
"Your approach is the best one. For sorting just into two lists it can't get clearer than that. If you want it to be a one-liner, encapsulate it in a function:\ndef classify(arguments):\n equal, plain = [], []\n for arg in arguments:\n if '=' in arg:\n equal.append(arg)\n else:\n plain.append(arg)\n return equal, plain\n\n\nequal, plain = classify(lst)\n\n"
] |
[
4,
4,
3,
2,
2,
2,
1,
1,
1
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0001486558_list_python.txt
|
Q:
When is it advisable to use a ret_val variable?
I have seen conflicting advice on whether the following code is better
def function():
ret_val = 0
if some_condition():
ret_val = 2
else:
ret_val = 3
return ret_val
or whether this is better:
def function():
if some_condition():
return 2
else:
return 3
This is a simple example, and I've written it in python-style, but what I'm looking for is a general principle as to when to use some "accumulator" variable to keep track of the return value, or whether to use multiple exit points. I know different languages might have different reasons for using one style over the other, so I'd appreciate differing viewpoints for why a particular language might stick with a particular style. (Specifically in the past I've heard that structured programming in C avoids having multiple exit points for a function.)
A:
Did we forget why "multiple exit points" was considered harmful in the first place? Back in the day (before widespread access to good exception handling and finally constructs, or managing objects like auto_ptr that do cleanup when they leave scope), this was the problem that haunted many multi-exit functions:
int function blah(arg1, arg2)
allocate resource
if early failure detection
return failure_status
... much later...
release resource // oh rats! resource didn't release
return success_status
If the resource is memory, this creates a memory leak. If it's a database transaction, we are heading for bad database contention or deadlock. For that matter, with the advent of more exception support, we implicitly add many potential exits from a method (by virtue of an unhandled exception). In my C++ days, I developed the habit of never calling delete, but instead using auto_ptr, so that allocated memory was cleaned up when the auto_ptr exited its scope, even if some unexpected exception reared its head.
In our garbage-collected Python world, we can still have this issue, even though many of our objects, such as files or locks, have improved self-cleaning behavior. But in implementations other than CPython (Jython and IronPython, to name two), there is no guarantee of just when a destructor will get called, so something more proactive needs to be built into your method. The first mechanism for this purpose was try/finally:
int function blah(arg1, arg2)
allocate resource
try:
if early failure detection
return failure_status
... much later...
return success_status
finally:
release resource // always releases no matter what
But now Python has context managers, in conjunction with the new 'with' syntax:
int function blah(arg1, arg2)
allocate resource
with context_manager(resource): // releases on exit from 'with'
if early failure detection
return failure_status
... much later...
return success_status
So let's be sure that we tell the whole story, that the reason we can discard this old chestnut is that newer coding practices make it unnecessary.
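A concrete, runnable rendering of the context-manager variant above (a minimal sketch; a file handle stands in for the allocated resource, and the path is hypothetical):
def read_first_line(path):
    # The file is the resource; 'with' guarantees it is closed on every
    # exit path -- early return, fall-through, or exception.
    with open(path) as f:
        line = f.readline()
        if not line:
            return None  # early failure exit; the file is still closed
        return line.strip()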
A:
In Python, it is quite common to have a return statement in the middle of the function - in particular, if it is an early exit. Your example often is rewritten as
def function():
if some_condition():
return 2
return 3
I.e. you drop the else case when the if ends with a return.
A:
Don't use an accumulator unless it's absolutely unavoidable. It introduces unnecessary statefulness and branching into your procedures, which you then have to track manually. By returning early, you can reduce the state and branch count of your code.
Specifically in the past I've heard that structured programming in C avoids having multiple exit points for a function.
Precisely the opposite -- structured programming discourages multiple points of entry, but multiple points of exit are acceptable and even encouraged (eg "guard clauses").
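To illustrate the guard-clause style (a minimal sketch; the account object and its balance attribute are assumptions for the example):
def withdraw(account, amount):
    # Guard clauses: reject the bad cases early and return immediately,
    # so the main logic stays flat and unindented.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > account.balance:
        return False  # insufficient funds
    account.balance -= amount
    return True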
A:
Stylistics aside, let's take a look at the disassembly for the two approaches:
>>> def foo():
... r = 0
... if bar():
... r = 2
... else:
... r = 3
... return r
...
>>> dis.dis(foo)
2 0 LOAD_CONST 1 (0)
3 STORE_FAST 0 (r)
3 6 LOAD_GLOBAL 0 (bar)
9 CALL_FUNCTION 0
12 JUMP_IF_FALSE 10 (to 25)
15 POP_TOP
4 16 LOAD_CONST 2 (2)
19 STORE_FAST 0 (r)
22 JUMP_FORWARD 7 (to 32)
>> 25 POP_TOP
6 26 LOAD_CONST 3 (3)
29 STORE_FAST 0 (r)
7 >> 32 LOAD_FAST 0 (r)
35 RETURN_VALUE
14 bytecode instructions in the first approach...
>>> def quux():
... if bar():
... return 2
... else:
... return 3
...
>>> dis.dis(quux)
2 0 LOAD_GLOBAL 0 (bar)
3 CALL_FUNCTION 0
6 JUMP_IF_FALSE 5 (to 14)
9 POP_TOP
3 10 LOAD_CONST 1 (2)
13 RETURN_VALUE
>> 14 POP_TOP
5 15 LOAD_CONST 2 (3)
18 RETURN_VALUE
19 LOAD_CONST 0 (None)
22 RETURN_VALUE
11 in the second approach...
And a third approach, slightly shorter than the second:
>>> def baz():
... if bar():
... return 2
... return 3
...
>>> dis.dis(baz)
2 0 LOAD_GLOBAL 0 (bar)
3 CALL_FUNCTION 0
6 JUMP_IF_FALSE 5 (to 14)
9 POP_TOP
3 10 LOAD_CONST 1 (2)
13 RETURN_VALUE
>> 14 POP_TOP
4 15 LOAD_CONST 2 (3)
18 RETURN_VALUE
This one has just nine instructions. The differences may not seem like much, but they actually make a bit of a difference over a million runs with timeit, with bar defined to return alternating zeros and ones:
$ sudo nice -n -19 python b.py
('foo', 1.3846859931945801)
('quux', 1.282526969909668)
('baz', 1.2973799705505371)
$ sudo nice -n -19 python b.py
('foo', 1.354640007019043)
('quux', 1.2609632015228271)
('baz', 1.2767179012298584)
$ sudo nice -n -19 python3 b.py
foo 1.72521305084
quux 1.62322306633
baz 1.62547206879
$ sudo nice -n -19 python3 b.py
foo 1.73264288902
quux 1.67029309273
baz 1.62204194069
quux and baz tended to be close to the same time, both of which were consistently faster than foo.
If you're still on the fence about which one is better, hopefully this illustrates another advantage of the accumulator-less approach that nobody else mentioned so far.
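The b.py harness itself was not shown; here is a hypothetical reconstruction consistent with the output above (bar alternates zeros and ones as described, and the two-argument print explains the tuple-vs-plain formatting difference between the Python 2 and 3 runs):
# b.py -- hypothetical timing harness for the three variants
import itertools
import timeit

_toggle = itertools.cycle((0, 1))

def bar():
    return next(_toggle)  # alternating zeros and ones, as described

def foo():
    r = 0
    if bar():
        r = 2
    else:
        r = 3
    return r

def quux():
    if bar():
        return 2
    else:
        return 3

def baz():
    if bar():
        return 2
    return 3

for fn in (foo, quux, baz):
    print(fn.__name__, timeit.timeit(fn, number=1000000))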
A:
It depends on the language to a large extent, however I would go with the second method returning the value directly rather than imposing another superfluous variable. The second method is cleaner, more precise and therefore more maintainable in my opinion.
A:
I suppose it's more a question of style and coding conventions. Generally, theory tells us that multiple exit points are bad. In practice it can be easier to follow to simply return inside each condition. The code is likely to be compiled down to very similar if not identical instructions, so it has little to no functional impact.
My rule of thumb is this: If the function is longer than one page (25 lines) avoid multiple exit points. If you can see it all at once, do whatever seems best at the time you write it.
A:
A further alternative in recent versions of Python (since 2.6?) is a ternary operator statement like this:
def function():
return (2 if some_condition() else 3)
Just in case you like that better.
A:
For primitives, it doesn't matter. In a language like C++ (and presumably the compiler does something similar with structs in C), the compiler is able to optimize the copy constructor away if you ensure all code paths return the same variable. For example:
Foo someFunction()
{
Foo result(5);
if (someConditionA()) return result;
else if (someConditionB()) result.doSomething();
result.doSomethingElse();
return result;
}
becomes more efficient than (unless your compiler is very very good):
Foo someFunction()
{
if (someConditionA()) return Foo(5);
if (someConditionB()) { Foo result(5); result.doSomething(); result.doSomethingElse(); return result; }
Foo result(5);
result.doSomethingElse();
return result;
}
In all other cases, it's more style-preference & readability. In the end, choose the format that's more readable for that particular case.
A:
Although people advocate a single-exit strategy, I find it useful to return early. That way you don't have to keep track of an accumulator variable when you are adding code later.
A:
In a language with function prototypes like C++ or Java, the compiler enforces that you return something of the correct type, even if execution would otherwise fall off the end of the function. In Python, since there are no function prototypes, falling off the end of the function will return the special value None. For this reason, you may want to use an accumulator variable and an explicit return ret_val at the end when coding in Python. Or use another style that ensures that execution cannot fall off the end without returning a value.
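A short demonstration of that fall-through behavior (purely illustrative):
def classify(n):
    if n > 0:
        return "positive"
    # No explicit return here: for n <= 0, execution falls off the
    # end and the caller silently receives None.

print classify(5)    # positive
print classify(-5)   # None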
A:
Returning values directly is not terrible for small functions like your example. However, if you have a large or complex function then multiple return points can be more difficult to debug. If you have a coding standard I'd refer to it (here the variable is preferred according to our company coding standard).
|
When is it advisable to use a ret_val variable?
|
I have seen conflicting advice on whether the following code is better
def function():
ret_val = 0
if some_condition():
ret_val = 2
else:
ret_val = 3
return ret_val
or whether this is better:
def function():
if some_condition():
return 2
else:
return 3
This is a simple example, and I've written it in python-style, but what I'm looking for is a general principle as to when to use some "accumulator" variable to keep track of the return value, or whether to use multiple exit points. I know different languages might have different reasons for using one style over the other, so I'd appreciate differing viewpoints for why a particular language might stick with a particular style. (Specifically in the past I've heard that structured programming in C avoids having multiple exit points for a function.)
|
[
"Did we forget why \"multiple exit points\" was considered harmful in the first place? Back in the day (before widespread access to good exception handling and finally constructs, or managing objects like auto_ptr that do cleanup when they leave scope), this was the problem that haunted many multi-exit functions:\nint function blah(arg1, arg2)\n allocate resource\n\n if early failure detection\n return failure_status\n\n ... much later...\n\n release resource // oh rats! resource didn't release\n return success_status\n\nIf the resource is memory, this creates a memory leak. If it's a database transaction, we are heading for bad database contention or deadlock. For that matter, with the advent of more exception support, we implicitly add many potential exits from a method (by virtue of an unhandled exception). In my C++ days, I developed the habit of never calling delete, but instead using auto_ptr, so that allocated memory was cleaned up when the auto_ptr exited its scope, even if some unexpected exception reared its head.\nIn our garbage collected Python world, we can still have this issue, even though many of our objects, such as files, or locks, have improved self-cleaning behavior. But in implementations other than CPython (jython and IronPython to name two), there is no guarantee just when a destructor will get called, so something more proactive needs to be built into your method. The first mechanism for this purpose was try/finally:\nint function blah(arg1, arg2)\n allocate resource\n try:\n\n if early failure detection\n return failure_status\n\n ... much later...\n return success_status\n\n finally:\n release resource // always releases no matter what\n\nBut now Python has context managers, in conjunction with the new 'with' syntax:\nint function blah(arg1, arg2)\n allocate resource\n with context_manager(resource): // releases on exit from 'with'\n\n if early failure detection\n return failure_status\n\n ... much later...\n return success_status\n\nSo let's be sure that we tell the whole story, that the reason we can discard this old chestnut is that newer coding practices make it unnecessary.\n",
"In Python, it is quite common to have a return statement in the middle of the function - in particular, if it is an early exit. Your example often is rewritten as\ndef function():\n if some_condition():\n return 2\n return 3\n\nI.e. you drop the else case when the if ends with a return.\n",
"Don't use an accumulator unless it's absolutely unavoidable. It introduces unnecessary statefulness and branching into your procedures, which you then have to track manually. By returning early, you can reduce the state and branch count of your code.\n\nSpecifically in the past I've heard that structured programming in C avoids having multiple exit points for a function.\n\nPrecisely the opposite -- structured programming discourages multiple points of entry, but multiple points of exit are acceptable and even encouraged (eg \"guard clauses\").\n",
"Stylistics aside, let's take a look at the disassembly for the two approaches:\n>>> def foo():\n... r = 0\n... if bar():\n... r = 2\n... else:\n... r = 3\n... return r\n... \n>>> dis.dis(foo)\n 2 0 LOAD_CONST 1 (0)\n 3 STORE_FAST 0 (r)\n\n 3 6 LOAD_GLOBAL 0 (bar)\n 9 CALL_FUNCTION 0\n 12 JUMP_IF_FALSE 10 (to 25)\n 15 POP_TOP \n\n 4 16 LOAD_CONST 2 (2)\n 19 STORE_FAST 0 (r)\n 22 JUMP_FORWARD 7 (to 32)\n >> 25 POP_TOP \n\n 6 26 LOAD_CONST 3 (3)\n 29 STORE_FAST 0 (r)\n\n 7 >> 32 LOAD_FAST 0 (r)\n 35 RETURN_VALUE \n\n14 bytecode instructions in the first approach...\n>>> def quux():\n... if bar():\n... return 2\n... else:\n... return 3\n... \n>>> dis.dis(quux)\n 2 0 LOAD_GLOBAL 0 (bar)\n 3 CALL_FUNCTION 0\n 6 JUMP_IF_FALSE 5 (to 14)\n 9 POP_TOP \n\n 3 10 LOAD_CONST 1 (2)\n 13 RETURN_VALUE \n >> 14 POP_TOP \n\n 5 15 LOAD_CONST 2 (3)\n 18 RETURN_VALUE \n 19 LOAD_CONST 0 (None)\n 22 RETURN_VALUE \n\n11 in the second approach...\nAnd a third approach, slightly shorter than the second:\n>>> def baz():\n... if bar():\n... return 2\n... return 3\n... \n>>> dis.dis(baz)\n 2 0 LOAD_GLOBAL 0 (bar)\n 3 CALL_FUNCTION 0\n 6 JUMP_IF_FALSE 5 (to 14)\n 9 POP_TOP \n\n 3 10 LOAD_CONST 1 (2)\n 13 RETURN_VALUE \n >> 14 POP_TOP \n\n 4 15 LOAD_CONST 2 (3)\n 18 RETURN_VALUE \n\nHas just nine instructions. The differences may not seem like much, but it actually makes a bit of a difference over a million runs with timeit, with bar defined to return alternating zeros and ones:\n\n$ sudo nice -n -19 python b.py\n('foo', 1.3846859931945801)\n('quux', 1.282526969909668)\n('baz', 1.2973799705505371)\n$ sudo nice -n -19 python b.py\n('foo', 1.354640007019043)\n('quux', 1.2609632015228271)\n('baz', 1.2767179012298584)\n\n$ sudo nice -n -19 python3 b.py\nfoo 1.72521305084\nquux 1.62322306633\nbaz 1.62547206879\n$ sudo nice -n -19 python3 b.py\nfoo 1.73264288902\nquux 1.67029309273\nbaz 1.62204194069\n\nquux and baz tended to be close to the same time, both of which were consistently faster than foo.\nIf you're still on the fence about which one is better, hopefully this illustrates another advantage of the accumulator-less approach that nobody else mentioned so far.\n",
"It depends on the language to a large extent, however I would go with the second method returning the value directly rather than imposing another superfluous variable. The second method is cleaner, more precise and therefore more maintainable in my opinion.\n",
"I suppose it's more a question of style and coding conventions. Generally, theory tells us that multiple exit points are bad. In practice it can be easier to follow to simply return inside each condition. The code is likely to be compiled down to very similar if not identical instructions, so it has little to no functional impact.\nMy rule of thumb is this: If the function is longer than one page (25 lines) avoid multiple exit points. If you can see it all at once, do whatever seems best at the time you write it.\n",
"A further alternative in recent versions of Python (since 2.6?) is a ternary operator statement like this:\ndef function():\n return (2 if some_condition() else 3)\n\nJust in case you like that better.\n",
"For primitives, it doesn't matter. In a language like C++ (& presumably with structs in C it's the compiler will do something similar), the compiler is able to optimize the copy constructor out if you ensure all code paths return the same variable. For example:\nFoo someFunction()\n{\n Foo result(5);\n if (someConditionA()) return result;\n else if (someConditionB()) result.doSomething();\n result.doSomethingElse();\n return result;\n}\n\nbecomes more efficient than (unless your compiler is very very good):\nFoo someFunction()\n{\n if (someConditionA()) return Foo(5);\n if (someConditionB()) { Foo result(5); result.doSomething(); result.doSomethingElse(); return result; }\n Foo result(5);\n result.doSomethingElse();\n return result;\n}\n\nIn all other cases, it's more style-preference & readability. In the end, choose the format that's more readable for that particular case.\n",
"Although people advocate single exit strategy, I find it useful to return early. That way you don't have to keep track when you are adding code later.\n",
"In a language with function prototypes like C++ or Java, the compiler enforces that you return something of the correct type, even if execution would otherwise fall off the end of the function. In Python, since there are no function prototypes, falling off the end of the function will return the special value None. For this reason, you may want to use an accumulator variable and an explicit return ret_val at the end when coding in Python. Or use another style that ensures that execution cannot fall off the end without returning a value.\n",
"Returning values directly is not terrible for small functions like your example. However, if you have a large or complex function then multiple return points can be more difficult to debug. If you have a coding standard I'd refer to it (here the variable is preferred according to our company coding standard).\n"
] |
[
8,
5,
5,
2,
1,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"coding_style",
"python",
"return_value"
] |
stackoverflow_0001489372_coding_style_python_return_value.txt
|
Q:
Python Modules most worthwhile reading
I have been programming Python for a while and I have a very good understanding of its features, but I would like to improve my coding style. I think reading the source code of the Python Modules would be a good idea. Can anyone recommend any ones in particular?
Related Threads:
Beginner looking for beautiful and instructional Python code: This thread actually inspired this question
A:
Queue.py shows you how to make a class thread-safe, and the proper use of the Template Method design pattern.
sched.py is a great example of the Dependency Injection pattern.
heapq.py is a really well-crafted implementation of the Heap data structure.
If I had to pick my three favorite modules in the Python standard library, this triplet would probably be my choice. (It doesn't hurt that they're all so very useful... but I'm picking in terms of quality of code, comments and design, first and foremost).
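If you want to read those sources, a quick way to find them on disk (a minimal sketch using the Python 2 module names):
import inspect
import Queue, sched, heapq

for module in (Queue, sched, heapq):
    # inspect.getsourcefile returns the path of the .py file, which
    # you can then open in any editor to study the implementation.
    print module.__name__, inspect.getsourcefile(module)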
A:
I vote for itertools. You'll learn a lot of functional programming style from using this code, though perhaps not from reading the source.
For a good module-by-module tutorial, try Doug Hellmann's Python Module of the Week. I also like the python programming style/practices explored and developed at WordAligned. I also like Peter Norvig's code, especially the spelling corrector code and the sudoku solver.
Other cool modules to learn: collections, operator, os.path, optparse, and the process/threading modules.
A:
I am learning Django and I really like its coding style:
http://www.djangoproject.com/
A:
PEP 8 is the "standard" Python coding style, for whatever version of "standard" you want to use. =)
|
Python Modules most worthwhile reading
|
I have been programming Python for a while and I have a very good understanding of its features, but I would like to improve my coding style. I think reading the source code of the Python Modules would be a good idea. Can anyone recommend any ones in particular?
Related Threads:
Beginner looking for beautiful and instructional Python code: This thread actually inspired this question
|
[
"Queue.py shows you how to make a class thread-safe, and the proper use of the Template Method design pattern.\nsched.py is a great example of the Dependency Injection pattern.\nheapq.py is a really well-crafted implementation of the Heap data structure.\nIf I had to pick my three favorite modules in the Python standard library, this triplet would probably be my choice. (It doesn't hurt that they're all so very useful... but I'm picking in terms of quality of code, comments and design, first and foremost).\n",
"I vote for itertools. You'll learn a lot of functional programming style from using this code, though perhaps not from reading the source.\nFor a good module-by-module tutorial, try Doug Hellmann's Python Module of the Week. I also like the python programming style/practices explored and developed at WordAligned. I also like Peter Norvig's code, especially the spelling corrector code and the sudoku solver.\nOther cool modules to learn: collections, operator, os.path, optparse, and the process/threading modules.\n",
"I am learning Django and I really like their coding style,\nhttp://www.djangoproject.com/\n",
"PEP 8 is the \"standard\" Python coding style, for whatever version of \"standard\" you want to use. =)\n"
] |
[
18,
6,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001490190_python.txt
|
Q:
Python object inspector?
Besides using a completely integrated IDE with a debugger for Python (like Eclipse), is there any little tool for achieving this:
When running a program, I want to be able to hook somewhere into it (similar to inserting a print statement) and call up a window with an object inspector (a tree view)
after closing the window, the program should resume
It doesn't need to be polished, not even absolutely stable; it could be introspection example code for some widget library like wx. Platform independence would be nice though (not a PyObjC program, or something like that on Windows).
Any ideas?
Edit:
Yes, i know about pdb, but I'm looking for a graphical tree of all the current objects.
Nevertheless, here is a nice introduction on how to use pdb (in this case in Django):
pdb + Django
A:
Winpdb is a platform independent graphical GPL Python debugger with an object inspector.
It supports remote debugging over a network, multiple threads, namespace modification, embedded debugging, encrypted communication and is up to 20 times faster than pdb.
Some other features:
GPL license. Winpdb is Free Software.
Compatible with CPython 2.3 through 2.6 and Python 3000
Compatible with wxPython 2.6 through 2.8
Platform independent, and tested on Ubuntu Jaunty and Windows XP.
User Interfaces: rpdb2 is console based, while winpdb requires wxPython 2.6 or later.
Here's a screenshot that shows the local object tree at the top-left.
(source: winpdb.org)
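For embedded debugging, a minimal sketch (the password string is arbitrary; you supply the same one when the Winpdb front end attaches):
import rpdb2

# Execution pauses here until a Winpdb/rpdb2 front end connects,
# at which point you get the graphical object inspector.
rpdb2.start_embedded_debugger('somepassword')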
A:
pdb isn't windowed, it runs in a console, but it's the standard way to debug in Python programs.
Insert this where you want to stop:
import pdb;pdb.set_trace()
you'll get a prompt on stdout.
A:
If a commercial solution is acceptable, Wingware may be the answer to the OP's desires (Wingware does have free versions, but I don't think they have the full debugging power he requires, which the for-pay versions do provide).
A:
Python Debugging Techniques is worth reading, and its Reddit comment thread is worth reading too. I found some really nice debugging tricks in Brian's comments, such as this comment and this comment.
Of course, WingIDE is cool (for general Python coding and Python code debugging) and I use it every day. Unfortunately, WingIDE still can't embed IPython at the moment.
|
Python object inspector?
|
Besides using a completely integrated IDE with a debugger for Python (like Eclipse), is there any little tool for achieving this:
When running a program, I want to be able to hook somewhere into it (similar to inserting a print statement) and call up a window with an object inspector (a tree view)
after closing the window, the program should resume
It doesn't need to be polished, not even absolutely stable; it could be introspection example code for some widget library like wx. Platform independence would be nice though (not a PyObjC program, or something like that on Windows).
Any ideas?
Edit:
Yes, i know about pdb, but I'm looking for a graphical tree of all the current objects.
Nevertheless, here is a nice introduction on how to use pdb (in this case in Django):
pdb + Django
|
[
"Winpdb is a platform independent graphical GPL Python debugger with an object inspector.\nIt supports remote debugging over a network, multiple threads, namespace modification, embedded debugging, encrypted communication and is up to 20 times faster than pdb.\nSome other features:\n\nGPL license. Winpdb is Free Software.\nCompatible with CPython 2.3 through 2.6 and Python 3000\nCompatible with wxPython 2.6 through 2.8\nPlatform independent, and tested on Ubuntu Jaunty and Windows XP.\nUser Interfaces: rpdb2 is console based, while winpdb requires wxPython 2.6 or later.\n\nHere's a screenshot that shows the local object tree at the top-left.\n\n(source: winpdb.org) \n",
"pdb isn't windowed, it runs in a console, but it's the standard way to debug in Python programs.\nInsert this where you want to stop:\nimport pdb;pdb.set_trace() \n\nyou'll get a prompt on stdout.\n",
"If a commercial solution is acceptable, Wingware may be the answer to the OP's desires (Wingware does have free versions, but I don't think they have the full debugging power he requires, which the for-pay versions do provide).\n",
"Python Debugging Techniques is worth reading. and it's Reddit's comment is worth reading too. I have really find some nice debug tricks from Brian's comment. such as this comment and this comment.\nOf course, WingIDE is cool (for general Python coding and Python code debugging) and I use it everyday. unlucky for WingIDE still can't embedded a IPython at now.\n"
] |
[
5,
2,
1,
1
] |
[
"You can use ipython, with the %debug statement. Once your code crashes, you can add breakpoints, see objects etc. A very crude way to kickoff the debugger is to raise Exception at some line of your code, run it in ipython, the type %debug when it crashes.\n"
] |
[
-1
] |
[
"debugging",
"introspection",
"python"
] |
stackoverflow_0001487952_debugging_introspection_python.txt
|
Q:
save method in a view
I have a very simple model:
class Artist(models.Model):
name = models.CharField(max_length=64, unique=False)
band = models.CharField(max_length=64, unique=False)
instrument = models.CharField(max_length=64, unique=False)
def __unicode__ (self):
return self.name
that I'm using as a model form:
from django.forms import ModelForm
from artistmod.artistcat.models import *
class ArtistForm(ModelForm):
class Meta:
model = Artist
but I can't seem to construct a view that will save the form data to the database. Currently I'm using:
def create_page(request):
if request.method == 'POST':
form = ArtistForm(request.POST)
if form.is_valid():
form.save()
return render_to_response('display.html')
else:
form = ArtistForm()
return render_to_response('create.html', {
'form': form,
})
can anyone help the newbie?
A:
Apparently the problem resided in my template. I was using
<form action="display/" method="POST">
as opposed to
<form action="." method="POST">
I also changed my response from render_to_response to HttpResponseRedirect.
True newbie errors, but at least it works now.
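Putting those fixes together, the view might look like this (a sketch; the '/display/' URL is an assumption):
from django.http import HttpResponseRedirect
from django.shortcuts import render_to_response

def create_page(request):
    if request.method == 'POST':
        form = ArtistForm(request.POST)
        if form.is_valid():
            form.save()
            # Redirect after a successful POST so a browser refresh
            # does not resubmit the form.
            return HttpResponseRedirect('/display/')
    else:
        form = ArtistForm()
    return render_to_response('create.html', {'form': form})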
|
save method in a view
|
I have a very simple model:
class Artist(models.Model):
name = models.CharField(max_length=64, unique=False)
band = models.CharField(max_length=64, unique=False)
instrument = models.CharField(max_length=64, unique=False)
def __unicode__ (self):
return self.name
that I'm using as a model form:
from django.forms import ModelForm
from artistmod.artistcat.models import *
class ArtistForm(ModelForm):
class Meta:
model = Artist
but I can't seem to construct a view that will save the form data to the database. Currently I'm using:
def create_page(request):
if request.method == 'POST':
form = ArtistForm(request.POST)
if form.is_valid():
form.save()
return render_to_response('display.html')
else:
form = ArtistForm()
return render_to_response('create.html', {
'form': form,
})
can anyone help the newbie?
|
[
"Apparently the problem resided in my template. I was using \n <form action=\"display/\" method=\"POST\">\n\nas opposed to\n <form action=\".\" method=\"POST\">\n\nalso changed my HttpRequest object from render_to_response to HttpResponseRedirect\ntrue newbie errors but at least it works now\n"
] |
[
1
] |
[] |
[] |
[
"django",
"django_forms",
"python"
] |
stackoverflow_0001489041_django_django_forms_python.txt
|
Q:
In python, how do you launch an Amazon EC2 instance from within a Google App Engine app?
In python, what is the best way to launch an Amazon EC2 instance from within a Google App Engine app? I would like to keep my AWS keys as secure as possible and be able to retrieve the public DNS for the newly launched EC2 instance.
A:
I believe you can use boto with the current App Engine release (and maybe AEP to help, though maybe that's not needed for your specific task of starting an instance and retrieving its public domain name). This post has a good overview of "lessons learned" while getting all this to work. (Sorry, no personal experience -- my occasional playing with AWS has been done from my laptop, disconnected with my more intense use of GAE;-).
If this doesn't work for you or doesn't meet your needs, perhaps you could give us details of what you tried, how it failed, and what different behavior(s) you're looking for...?
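For the specific task of starting an instance and reading its public DNS, a boto sketch along these lines might work (the AMI ID and credential values are placeholders; on App Engine you would keep the real keys out of source control, e.g. in the datastore):
import time
import boto

ACCESS_KEY = 'your-access-key-id'      # placeholder
SECRET_KEY = 'your-secret-access-key'  # placeholder

conn = boto.connect_ec2(aws_access_key_id=ACCESS_KEY,
                        aws_secret_access_key=SECRET_KEY)
reservation = conn.run_instances(image_id='ami-12345678',  # assumed AMI
                                 instance_type='m1.small')
instance = reservation.instances[0]

# Poll until the instance is running, then read its public DNS name.
while instance.state != 'running':
    time.sleep(5)
    instance.update()
print instance.public_dns_name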
|
In python, how do you launch an Amazon EC2 instance from within a Google App Engine app?
|
In python, what is the best way to launch an Amazon EC2 instance from within a Google App Engine app? I would like to keep my AWS keys as secure as possible and be able to retrieve the public DNS for the newly launched EC2 instance.
|
[
"I believe you can use boto with the current App Engine release (and maybe AEP to help, though maybe that's not needed for your specific task of starting an instance and retrieving its public domain name). This post has a good overview of \"lessons learned\" while getting all this to work. (Sorry, no personal experience -- my occasional playing with AWS has been done from my laptop, disconnected with my more intense use of GAE;-).\nIf this doesn't work for you or doesn't meet you needs, perhaps you could give us details of what you tried, how it failed, and what different behavior(s) you're looking for...?\n"
] |
[
5
] |
[] |
[] |
[
"amazon_ec2",
"google_app_engine",
"python"
] |
stackoverflow_0001490429_amazon_ec2_google_app_engine_python.txt
|
Q:
calling Objective C functions from Python?
Is there a way to dynamically call an Objective C function from Python?
For example, On the mac I would like to call this Objective C function
[NSSpeechSynthesizer availableVoices]
without having to precompile any special Python wrapper module.
A:
As others have mentioned, PyObjC is the way to go. But, for completeness' sake, here's how you can do it with ctypes, in case you need it to work on versions of OS X prior to 10.5 that do not have PyObjC installed:
import ctypes
import ctypes.util
# Need to do this to load the NSSpeechSynthesizer class, which is in AppKit.framework
appkit = ctypes.cdll.LoadLibrary(ctypes.util.find_library('AppKit'))
objc = ctypes.cdll.LoadLibrary(ctypes.util.find_library('objc'))
objc.objc_getClass.restype = ctypes.c_void_p
objc.sel_registerName.restype = ctypes.c_void_p
objc.objc_msgSend.restype = ctypes.c_void_p
objc.objc_msgSend.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
# Without this, it will still work, but it'll leak memory
NSAutoreleasePool = objc.objc_getClass('NSAutoreleasePool')
pool = objc.objc_msgSend(NSAutoreleasePool, objc.sel_registerName('alloc'))
pool = objc.objc_msgSend(pool, objc.sel_registerName('init'))
NSSpeechSynthesizer = objc.objc_getClass('NSSpeechSynthesizer')
availableVoices = objc.objc_msgSend(NSSpeechSynthesizer, objc.sel_registerName('availableVoices'))
count = objc.objc_msgSend(availableVoices, objc.sel_registerName('count'))
voiceNames = [
ctypes.string_at(
objc.objc_msgSend(
objc.objc_msgSend(availableVoices, objc.sel_registerName('objectAtIndex:'), i),
objc.sel_registerName('UTF8String')))
for i in range(count)]
print voiceNames
objc.objc_msgSend(pool, objc.sel_registerName('release'))
It ain't pretty, but it gets the job done. The final list of available names is stored in the voiceNames variable above.
2012-4-28 Update: Fixed to work in 64-bit Python builds by making sure all parameters and return types are passed as pointers instead of 32-bit integers.
A:
Since OS X 10.5, OS X has shipped with PyObjC, a Python-Objective-C bridge. It uses the BridgeSupport framework to map Objective-C frameworks to Python. Unlike MacRuby, PyObjC is a classical bridge--there is a proxy object on the Python side for each ObjC object and vice versa. The bridge is pretty seamless, however, and it's possible to write entire apps in PyObjC (Xcode has some basic PyObjC support, and you can download the app and file templates for Xcode from the PyObjC SVN at the above link). Many folks use it for utilities or for app-scripting/plugins. Apple's developer site also has an introduction to developing Cocoa applications with Python via PyObjC, which is slightly out of date but may be a good overview for you.
In your case, the following code will call [NSSpeechSynthesizer availableVoices]:
from AppKit import NSSpeechSynthesizer
NSSpeechSynthesizer.availableVoices()
which returns
(
"com.apple.speech.synthesis.voice.Agnes",
"com.apple.speech.synthesis.voice.Albert",
"com.apple.speech.synthesis.voice.Alex",
"com.apple.speech.synthesis.voice.BadNews",
"com.apple.speech.synthesis.voice.Bahh",
"com.apple.speech.synthesis.voice.Bells",
"com.apple.speech.synthesis.voice.Boing",
"com.apple.speech.synthesis.voice.Bruce",
"com.apple.speech.synthesis.voice.Bubbles",
"com.apple.speech.synthesis.voice.Cellos",
"com.apple.speech.synthesis.voice.Deranged",
"com.apple.speech.synthesis.voice.Fred",
"com.apple.speech.synthesis.voice.GoodNews",
"com.apple.speech.synthesis.voice.Hysterical",
"com.apple.speech.synthesis.voice.Junior",
"com.apple.speech.synthesis.voice.Kathy",
"com.apple.speech.synthesis.voice.Organ",
"com.apple.speech.synthesis.voice.Princess",
"com.apple.speech.synthesis.voice.Ralph",
"com.apple.speech.synthesis.voice.Trinoids",
"com.apple.speech.synthesis.voice.Vicki",
"com.apple.speech.synthesis.voice.Victoria",
"com.apple.speech.synthesis.voice.Whisper",
"com.apple.speech.synthesis.voice.Zarvox"
)
(a bridged NSCFArray) on my SL machine.
A:
Mac OS X from 10.5 onward has shipped with Python and the objc module that will let you do what you want.
An example:
from Foundation import *
thing = NSKeyedUnarchiver.unarchiveObjectWithFile_(some_plist_file)
You can find more documentation here.
A:
You probably want PyObjC. That said, I've never actually used it myself (I've only ever seen demos), so I'm not certain that it will do what you need.
|
calling Objective C functions from Python?
|
Is there a way to dynamically call an Objective C function from Python?
For example, On the mac I would like to call this Objective C function
[NSSpeechSynthesizer availableVoices]
without having to precompile any special Python wrapper module.
|
[
"As others have mentioned, PyObjC is the way to go. But, for completeness' sake, here's how you can do it with ctypes, in case you need it to work on versions of OS X prior to 10.5 that do not have PyObjC installed:\nimport ctypes\nimport ctypes.util\n\n# Need to do this to load the NSSpeechSynthesizer class, which is in AppKit.framework\nappkit = ctypes.cdll.LoadLibrary(ctypes.util.find_library('AppKit'))\nobjc = ctypes.cdll.LoadLibrary(ctypes.util.find_library('objc'))\n\nobjc.objc_getClass.restype = ctypes.c_void_p\nobjc.sel_registerName.restype = ctypes.c_void_p\nobjc.objc_msgSend.restype = ctypes.c_void_p\nobjc.objc_msgSend.argtypes = [ctypes.c_void_p, ctypes.c_void_p]\n\n# Without this, it will still work, but it'll leak memory\nNSAutoreleasePool = objc.objc_getClass('NSAutoreleasePool')\npool = objc.objc_msgSend(NSAutoreleasePool, objc.sel_registerName('alloc'))\npool = objc.objc_msgSend(pool, objc.sel_registerName('init'))\n\nNSSpeechSynthesizer = objc.objc_getClass('NSSpeechSynthesizer')\navailableVoices = objc.objc_msgSend(NSSpeechSynthesizer, objc.sel_registerName('availableVoices'))\n\ncount = objc.objc_msgSend(availableVoices, objc.sel_registerName('count'))\nvoiceNames = [\n ctypes.string_at(\n objc.objc_msgSend(\n objc.objc_msgSend(availableVoices, objc.sel_registerName('objectAtIndex:'), i),\n objc.sel_registerName('UTF8String')))\n for i in range(count)]\nprint voiceNames\n\nobjc.objc_msgSend(pool, objc.sel_registerName('release'))\n\nIt ain't pretty, but it gets the job done. The final list of available names is stored in the voiceNames variable above.\n2012-4-28 Update: Fixed to work in 64-bit Python builds by making sure all parameters and return types are passed as pointers instead of 32-bit integers.\n",
"Since OS X 10.5, OS X has shipped with the PyObjC bridge, a Python-Objective-C bridge. It uses the BridgeSupport framework to map Objective-C frameworks to Python. Unlike, MacRuby, PyObjC is a classical bridge--there is a proxy object on the python side for each ObjC object and visa versa. The bridge is pretty seamless, however, and its possible to write entire apps in PyObjC (Xcode has some basic PyObjC support, and you can download the app and file templates for Xcode from the PyObjC SVN at the above link). Many folks use it for utilities or for app-scripting/plugins. Apple's developer site also has an introduction to developing Cocoa applications with Python via PyObjC which is slightly out of date, but may be a good overview for you.\nIn your case, the following code will call [NSSpeechSynthesizer availableVoices]:\nfrom AppKit import NSSpeechSynthesizer\n\nNSSpeechSynthesizer.availableVoices()\n\nwhich returns\n(\n \"com.apple.speech.synthesis.voice.Agnes\",\n \"com.apple.speech.synthesis.voice.Albert\",\n \"com.apple.speech.synthesis.voice.Alex\",\n \"com.apple.speech.synthesis.voice.BadNews\",\n \"com.apple.speech.synthesis.voice.Bahh\",\n \"com.apple.speech.synthesis.voice.Bells\",\n \"com.apple.speech.synthesis.voice.Boing\",\n \"com.apple.speech.synthesis.voice.Bruce\",\n \"com.apple.speech.synthesis.voice.Bubbles\",\n \"com.apple.speech.synthesis.voice.Cellos\",\n \"com.apple.speech.synthesis.voice.Deranged\",\n \"com.apple.speech.synthesis.voice.Fred\",\n \"com.apple.speech.synthesis.voice.GoodNews\",\n \"com.apple.speech.synthesis.voice.Hysterical\",\n \"com.apple.speech.synthesis.voice.Junior\",\n \"com.apple.speech.synthesis.voice.Kathy\",\n \"com.apple.speech.synthesis.voice.Organ\",\n \"com.apple.speech.synthesis.voice.Princess\",\n \"com.apple.speech.synthesis.voice.Ralph\",\n \"com.apple.speech.synthesis.voice.Trinoids\",\n \"com.apple.speech.synthesis.voice.Vicki\",\n \"com.apple.speech.synthesis.voice.Victoria\",\n \"com.apple.speech.synthesis.voice.Whisper\",\n \"com.apple.speech.synthesis.voice.Zarvox\"\n)\n\n(a bridged NSCFArray) on my SL machine.\n",
"Mac OS X from 10.5 onward has shipped with Python and the objc module that will let you do what you want.\nAn example:\nfrom Foundation import *\n\nthing = NSKeyedUnarchiver.unarchiveObjectWithFile_(some_plist_file)\n\nYou can find more documentation here.\n",
"You probably want PyObjC. That said, I've never actually used it myself (I've only ever seen demos), so I'm not certain that it will do what you need.\n"
] |
[
23,
10,
4,
3
] |
[] |
[] |
[
"macos",
"objective_c",
"python"
] |
stackoverflow_0001490039_macos_objective_c_python.txt
|
Q:
Difference between attributes and style tags in lxml
I am trying to learn lxml after having used BeautifulSoup. However, I am not a strong programmer in general.
I have the following code in some source html:
<p style="font-family:times;text-align:justify"><font size="2"><b><i> The reasons to eat pickles include: </i></b></font></p>
Because the text is bolded, I want to pull that text. I can't seem to be able to differentiate that that particular line is bolded.
When I started this work this evening I was working with a document that had the word bold in the style attrib like the following:
<p style="font-style:italic;font-weight:bold;margin:0pt 0pt 6.0pt;text-indent:0pt;"><b><i><font size="2" face="Times New Roman" style="font-size:10.0pt;">The reason I like tomatoes include:</font></i></b></p>
I should say that the document I am working from is a fragment that I read in the lines, joined the lines together and then used the html.fromstring function
txtFile=open(r'c:\myfile.htm','r').readlines()
strHTM=''.join(txtFile)
newHTM=html.fromstring(strHTM)
and so the first line of htm code I have above is newHTM[19]
Humm this seems to be getting me closer
newHTM.cssselect('b')
I don't fully understand yet but here is the solution:
for each in newHTM:
if each.cssselect('b')
each.text_content()
A:
Using the CSS API really isn't the right approach. If you want to find all b elements, do
strHTM = open(r'c:\myfile.htm', 'r').read()  # no need to split it into lines first
newHTM = html.fromstring(strHTM)  # fromstring, not fromString
bElements = newHTM.findall('.//b')  # './/b' matches <b> at any depth, not just direct children
for b in bElements:
    print b.text_content()
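Since the question also involves bold declared via an inline style attribute, here is an XPath sketch that catches both cases (the substring test on @style is a rough heuristic, not a CSS parser):
from lxml import html

doc = html.fromstring(open(r'c:\myfile.htm').read())
# <b> elements anywhere in the tree, plus any element whose inline
# style declares a bold font weight.
bold = doc.xpath('//b | //*[contains(@style, "font-weight:bold")]')
for el in bold:
    print el.text_content()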
|
Difference between attributes and style tags in lxml
|
I am trying to learn lxml after having used BeautifulSoup. However, I am not a strong programmer in general.
I have the following code in some source html:
<p style="font-family:times;text-align:justify"><font size="2"><b><i> The reasons to eat pickles include: </i></b></font></p>
Because the text is bolded, I want to pull that text. I can't seem to be able to differentiate that that particular line is bolded.
When I started this work this evening I was working with a document that had the word bold in the style attrib like the following:
<p style="font-style:italic;font-weight:bold;margin:0pt 0pt 6.0pt;text-indent:0pt;"><b><i><font size="2" face="Times New Roman" style="font-size:10.0pt;">The reason I like tomatoes include:</font></i></b></p>
I should say that the document I am working from is a fragment that I read in the lines, joined the lines together and then used the html.fromstring function
txtFile=open(r'c:\myfile.htm','r').readlines()
strHTM=''.join(txtFile)
newHTM=html.fromstring(strHTM)
and so the first line of htm code I have above is newHTM[19]
Humm this seems to be getting me closer
newHTM.cssselect('b')
I don't fully understand yet but here is the solution:
for each in newHTM:
if each.cssselect('b')
each.text_content()
|
[
"Using the CSS API really isn't the right approach. If you want to find all b elements, do\nstrHTM=open(r'c:\\myfile.htm','r').read() # no need to split it into lines first\nnewHTM=html.fromString(strHTM)\nbELements = newHTM.findall('b')\nfor b in bElements:\n print b.text_content()\n\n"
] |
[
0
] |
[] |
[] |
[
"lxml",
"python"
] |
stackoverflow_0001490474_lxml_python.txt
|
Q:
Managing Perl habits in a Python environment
Perl habits die hard. Variable declaration, scoping, and global/local behavior differ between the two languages. Is there a set of recommended Python language idioms that will render the transition from Perl coding to Python coding less painful?
Subtle variable misspelling can waste an extraordinary amount of time.
I understand the variable declaration issue is quasi-religious among Python folks.
I'm not arguing for language changes or features, just a reliable bridge between
the two languages that will not let my Perl habits sink my Python efforts.
Thanks.
A:
Splitting Python classes into separate files (like in Java, one class per file) helps find scoping problems, although this is not idiomatic python (that is, not pythonic).
I have been writing python after much perl and found this from tchrist to be useful, even though it is old:
http://linuxmafia.com/faq/Devtools/python-to-perl-conversions.html
Getting used to doing without perl's most excellent variable scoping has been the second most difficult issue with my perl->python transition. The first is obvious if you have much perl: CPAN.
A:
I like the question, but I don't have any experience in Perl so I'm not sure how to best advise you.
I suggest you do a Google search for "Python idioms". You will find some gems. In particular:
http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html
http://docs.python.org/dev/howto/doanddont.html
http://jaynes.colorado.edu/PythonIdioms.html
As for the variable "declaration" issue, here's my best advice for you:
Remember that in Python, objects have a life of their own, separate from variable names. A variable name is a tag that is bound to an object. At any time, you may rebind the name to a different object, perhaps of a completely different type. Thus, this is perfectly legal:
x = 1 # bind x to integer, value == 1
x = "1" # bind x to string, value is "1"
Python is in fact strongly typed; try executing the code 1 + "1" and see how well it works, if you don't believe me. The integer object with value 1 does not accept addition of a string value, in the absence of explicit type coercion. So Python names never ever have sigil characters that flag properties of the variable; that's just not how Python does things. Any legal identifier name could be bound to any Python object of any type.
A:
In Python, $_ does not exist (the closest thing is _ in the interactive shell), and variables with global scope are frowned upon.
In practice this has two major effects:
In Python you can't use regular expressions as naturally as in Perl, so matching each iterated $_ and similarly catching matches is more cumbersome
Python functions tend to be called explicitly or have default arguments
However these differences are fairly minor when one considers that in Python just about everything becomes a class. When I used to do Perl I thought of "carving"; in Python I rather feel I am "composing".
Python doesn't have the idiomatic richness of Perl and I think it is probably a mistake to attempt to do the translation.
A:
Read, understand, follow, and love PEP 8, which details the style guidelines for everything about Python.
Seriously, if you want to know about the recommended idioms and habits of Python, that's the source.
A:
Don't mis-type your variable names. Seriously. Use short, easy, descriptive ones, use them locally, and don't rely on the global scope.
If you're doing a larger project that isn't served well by this, use pylint, unit tests and coverage.py to make SURE your code does what you expect.
Copied from a comment in one of the other threads:
"‘strict vars’ is primarily intended to stop typoed references and missed-out ‘my’s from creating accidental globals (well, package variables in Perl terms). This can't happen in Python as bare assignments default to local declaration, and bare unassigned symbols result in an exception."
|
Managing Perl habits in a Python environment
|
Perl habits die hard. Variable declaration, scoping, and global/local behavior differ between the two languages. Is there a set of recommended Python language idioms that will render the transition from Perl coding to Python coding less painful?
Subtle variable misspelling can waste an extraordinary amount of time.
I understand the variable declaration issue is quasi-religious among Python folks.
I'm not arguing for language changes or features, just a reliable bridge between
the two languages that will not let my Perl habits sink my Python efforts.
Thanks.
|
[
"Splitting Python classes into separate files (like in Java, one class per file) helps find scoping problems, although this is not idiomatic python (that is, not pythonic).\nI have been writing python after much perl and found this from tchrist to be useful, even though it is old:\nhttp://linuxmafia.com/faq/Devtools/python-to-perl-conversions.html\nGetting used to doing without perl's most excellent variable scoping has been the second most difficult issue with my perl->python transition. The first is obvious if you have much perl: CPAN.\n",
"I like the question, but I don't have any experience in Perl so I'm not sure how to best advise you.\nI suggest you do a Google search for \"Python idioms\". You will find some gems. In particular:\nhttp://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html\nhttp://docs.python.org/dev/howto/doanddont.html\nhttp://jaynes.colorado.edu/PythonIdioms.html\nAs for the variable \"declaration\" issue, here's my best advice for you:\nRemember that in Python, objects have a life of their own, separate from variable names. A variable name is a tag that is bound to an object. At any time, you may rebind the name to a different object, perhaps of a completely different type. Thus, this is perfectly legal:\nx = 1 # bind x to integer, value == 1\nx = \"1\" # bind x to string, value is \"1\"\n\nPython is in fact strongly typed; try executing the code 1 + \"1\" and see how well it works, if you don't believe me. The integer object with value 1 does not accept addition of a string value, in the absence of explicit type coercion. So Python names never ever have sigil characters that flag properties of the variable; that's just not how Python does things. Any legal identifier name could be bound to any Python object of any type.\n",
"In python $_ does not exist except in the python shell and variables with global scope are frowned upon.\nIn practice this has two major effects:\n\nIn Python you can't use regular expressions as naturally as Perl, s0 matching each iterated $_ and similarly catching matches is more cumbersome\nPython functions tend to be called explicitly or have default variables\n\nHowever these differences are fairly minor when one considers that in Python just about everything becomes a class. When I used to do Perl I thought of \"carving\"; in Python I rather feel I am \"composing\".\nPython doesn't have the idiomatic richness of Perl and I think it is probably a mistake to attempt to do the translation.\n",
"Read, understand, follow, and love PEP 8, which details the style guidelines for everything about Python.\nSeriously, if you want to know about the recommended idioms and habits of Python, that's the source.\n",
"Don't mis-type your variable names. Seriously. Use short, easy, descriptive ones, use them locally, and don't rely on the global scope. \nIf you're doing a larger project that isn't served well by this, use pylint, unit tests and coverage.py to make SURE your code does what you expect.\nCopied from a comment in one of the other threads:\n\"‘strict vars’ is primarily intended to stop typoed references and missed-out ‘my’s from creating accidental globals (well, package variables in Perl terms). This can't happen in Python as bare assignments default to local declaration, and bare unassigned symbols result in an exception.\"\n"
] |
[
2,
1,
1,
1,
0
] |
[] |
[] |
[
"perl",
"python",
"transitions"
] |
stackoverflow_0001489355_perl_python_transitions.txt
|
Q:
get the exit code for python program
I'm running a python program on WindowsXP. How can I obtain the exit code after my program ends?
A:
From a Windows command line you can use:
echo %ERRORLEVEL%
For example:
C:\work>python helloworld.py
Hello World!
C:\work>echo %ERRORLEVEL%
0
A:
How do you run the program?
Exit in python with sys.exit(1)
If you're in CMD or a BAT file you can access the variable %ERRORLEVEL% to obtain the exit code.
For example (batch file):
IF ERRORLEVEL 1 GOTO LABEL
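A minimal pairing of the two sides (the script name is arbitrary):
# exitdemo.py -- exits with code 1 when no argument is given
import sys

if len(sys.argv) < 2:
    sys.exit(1)  # sets the process exit code to 1
sys.exit(0)

Running python exitdemo.py and then echo %ERRORLEVEL% prints 1, while python exitdemo.py foo yields 0.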
A:
You can also use python to start your python-program
import subprocess
import sys
retcode = subprocess.call([sys.executable, "myscript.py"])
print retcode
A:
If you want to use ERRORLEVEL (as opposed to %ERRORLEVEL%) to check for a specific exit value use
IF ERRORLEVEL <N> IF NOT ERRORLEVEL <N+1> <COMMAND>
For example
IF ERRORLEVEL 3 IF NOT ERRORLEVEL 4 GOTO LABEL
|
get the exit code for python program
|
I'm running a python program on WindowsXP. How can I obtain the exit code after my program ends?
|
[
"From a Windows command line you can use:\necho %ERRORLEVEL%\n\nFor example:\nC:\\work>python helloworld.py\nHello World!\n\nC:\\work>echo %ERRORLEVEL%\n0\n\n",
"How do you run the program?\nExit in python with sys.exit(1)\nIf you're in CMD or a BAT file you can access the variable %ERRORLEVEL% to obtain the exit code.\nFor example (batch file):\nIF ERRORLEVEL 1 GOTO LABEL\n\n",
"You can also use python to start your python-program\nimport subprocess\nimport sys\nretcode = subprocess.call([sys.executable, \"myscript.py\"])\nprint retcode\n\n",
"If you want to use ERRORLEVEL (as opposed to %ERRORLEVEL%) to check for a specific exit value use\nIF ERRORLEVEL <N> IF NOT ERRORLEVEL <N+1> <COMMAND>\n\nFor example \nIF ERRORLEVEL 3 IF NOT ERRORLEVEL 4 GOTO LABEL\n\n"
] |
[
8,
5,
4,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001491796_python.txt
|
Q:
Precise response to tablet/mouse events in Windows
How can I tell Windows not to do unhelpful pre-processing on tablet pen events?
I am programming in Python 2.6, targeting tablet PCs running Windows 7 (though I would like my program to work with little modification on XP with a SMART interactive whiteboard, and for mouse users on Linux/Mac). I've written a program which hooks into the normal Windows mouse events, WM_MOUSEMOVE etc., and writes on a canvas.
The problem is that the mouse messages are being fiddled with before they reach my application. I found that if I make long strokes and pause between strokes then the mouse messages are sent properly. But if I make several rapid short strokes, then something is doing unhelpful pre-processing. Specifically, if I make a down-stroke about 10 pixels long, and then make another downstroke about five pixels to the right of the first, then the second WM_MOUSEDOWN reports that it comes from exactly the same place as the first.
This looks like some sort of pre-processing, perhaps so that naive applications don't get confused about double-clicks. But for my application, where I want very faithful response to rapid gestures, it's unhelpful.
I found a reference to the MicrosoftTabletPenServiceProperty atom, and to CS_DBLCLKS window style, and I turned them both off with the following piece of Python code:
hwnd = self.GetHandle()
tablet_atom = "MicrosoftTabletPenServiceProperty"
atom_ID = windll.kernel32.GlobalAddAtomA(tablet_atom)
windll.user32.SetPropA(hwnd,tablet_atom,1)
currentstyle = windll.user32.GetClassLongA(hwnd, win32con.GCL_STYLE)
windll.user32.SetClassLongA(hwnd, win32con.GCL_STYLE, currentstyle & ~win32con.CS_DBLCLKS)
But it has no effect.
I tried writing a low-level hook for the mouse driver, with SetWindowsHookEx, but it doesn't work -- obviously the mouse messages are being pre-processed even before they are sent to my low-level Windows hook.
I would be very grateful for advice about how to turn off this pre-processing. I do not want to switch to RealTimeStylus -- first because it won't work on Windows XP plus SMART interactive whiteboard, second because I can't see how to use RealTimeStylus in CPython, so I would need to switch to IronPython, and then my code would no longer run on Linux/Mac.
Damon.
A:
For raw mouse messages, you can use WM_INPUT on XP and later. Seven added some touch specific stuff: WM_GESTURE and WM_TOUCH
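A ctypes sketch of registering for WM_INPUT (it assumes you already have a valid window handle hwnd; usage page 0x01 with usage 0x02 selects generic mice):
import ctypes
from ctypes import wintypes

class RAWINPUTDEVICE(ctypes.Structure):
    # Mirrors the Win32 RAWINPUTDEVICE struct.
    _fields_ = [("usUsagePage", wintypes.WORD),
                ("usUsage",     wintypes.WORD),
                ("dwFlags",     wintypes.DWORD),
                ("hwndTarget",  wintypes.HWND)]

def register_raw_mouse(hwnd):
    rid = RAWINPUTDEVICE(0x01, 0x02, 0, hwnd)
    ok = ctypes.windll.user32.RegisterRawInputDevices(
        ctypes.byref(rid), 1, ctypes.sizeof(RAWINPUTDEVICE))
    if not ok:
        raise ctypes.WinError()
    # From now on the window procedure for hwnd receives WM_INPUT
    # (0x00FF) messages carrying unprocessed mouse data.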
|
Precise response to tablet/mouse events in Windows
|
How can I tell Windows not to do unhelpful pre-processing on tablet pen events?
I am programming in Python 2.6, targeting tablet PCs running Windows 7 (though I would like my program to work with little modification on XP with a SMART interactive whiteboard, and for mouse users on Linux/Mac). I've written a program which hooks into the normal Windows mouse events, WM_MOUSEMOVE etc., and writes on a canvas.
The problem is that the mouse messages are being fiddled with before they reach my application. I found that if I make long strokes and pause between strokes then the mouse messages are sent properly. But if I make several rapid short strokes, then something is doing unhelpful pre-processing. Specifically, if I make a down-stroke about 10 pixels long, and then make another downstroke about five pixels to the right of the first, then the second WM_MOUSEDOWN reports that it comes from exactly the same place as the first.
This looks like some sort of pre-processing, perhaps so that naive applications don't get confused about double-clicks. But for my application, where I want very faithful response to rapid gestures, it's unhelpful.
I found a reference to the MicrosoftTabletPenServiceProperty atom, and to CS_DBLCLKS window style, and I turned them both off with the following piece of Python code:
hwnd = self.GetHandle()
tablet_atom = "MicrosoftTabletPenServiceProperty"
atom_ID = windll.kernel32.GlobalAddAtomA(tablet_atom)
windll.user32.SetPropA(hwnd,tablet_atom,1)
currentstyle = windll.user32.GetClassLongA(hwnd, win32con.GCL_STYLE)
windll.user32.SetClassLongA(hwnd, win32con.GCL_STYLE, currentstyle & ~win32con.CS_DBLCLKS)
But it has no effect.
I tried writing a low-level hook for the mouse driver, with SetWindowsHookEx, but it doesn't work -- obviously the mouse messages are being pre-processed even before they are sent to my low-level Windows hook.
I would be very grateful for advice about how to turn off this pre-processing. I do not want to switch to RealTimeStylus -- first because it won't work on Windows XP plus SMART interactive whiteboard, second because I can't see how to use RealTimeStylus in CPython, so I would need to switch to IronPython, and then my code would no longer run on Linux/Mac.
Damon.
|
[
"For raw mouse messages, you can use WM_INPUT on XP and later. Seven added some touch specific stuff: WM_GESTURE and WM_TOUCH\n"
] |
[
1
] |
[] |
[] |
[
"python",
"tablet_pc",
"winapi"
] |
stackoverflow_0001490011_python_tablet_pc_winapi.txt
|
Q:
Testing for an empty iterator in a Python for... loop
The code below is based on this recipe. However, the key point of the recipe - that it provides a way to break out of the iteration on an iterator if the iterator is empty - doesn't seem to work here, instead behaving in the following undesired ways:
If get_yes_no_answer() == False and there are two or more items left in the iterator, next_choice is skipped, rather than being selected on the next iteration.
If get_yes_no_answer() == False and there are less than two items left in the iterator, my_func() returns None.
How can I ensure that:
If get_yes_no_answer() == False and there are two or more items left in the iterator, next_choice is not skipped?
If get_yes_no_answer() == False and there is one item left in the iterator, my_func() prints it and calls get_yes_no_answer()?
If get_yes_no_answer() == False and there are no items left in the iterator, the except StopIteration clause is triggered?
Here's the code:
def my_func(choice_pattern, input):
# Search in input for some things to choose from.
choice_iterator = choice_pattern.finditer(input, re.M)
if not choice_iterator:
print "No choices. Exiting..."
sys.exit()
else:
# Show choices to the user. For each one, ask user for a yes/no response. If
# choice accepted, return a result. Otherwise show user next choice. If no
# choices accepted by user, quit.
for choice in choice_iterator:
print choice.group()
# get_yes_no_answer() returns True or False depending on user response.
if get_yes_no_answer():
return choice
else:
# Check if iterator is empty. If so, quit; if not, do something else.
try:
next_choice = choice_iterator.next()
except StopIteration:
print "No matches. Exiting..."
sys.exit()
else:
choice_iterator = itertools.chain([next_choice], choice_iterator)
A:
why are you doing it this way at all? why not just:
def get_choice(pattern, inpt):
choices = pattern.finditer(inpt, re.M)
if not choices:
sys.exit('No choices')
for choice in choices:
print(choice.group(0))
if get_yes_no_answer():
return choice
sys.exit('No matches')
I don't know what the length of your input is, but I doubt it's worth the trouble.
A:
You don't need to check if the iterator is empty. The for loop will do that for you, and stop when the iterator is empty. It's as simple as that.
Also, you don't need the else after the sys.exit() or the return.
That done, your code looks like this:
def my_func(choice_pattern, input):
# Search in input for some things to choose from.
choice_iterator = choice_pattern.finditer(input, re.M)
if not choice_iterator:
print "No choices. Exiting..."
sys.exit()
# Show choices to the user. For each one, ask user for a yes/no response. If
# choice accepted, return a result. Otherwise show user next choice. If no
# choices accepted by user, quit.
for choice in choice_iterator:
print choice
# get_yes_no_answer() returns True or False depending on user response.
if get_yes_no_answer():
return choice
# Loop exited without matches.
print "No matches. Exiting..."
sys.exit()
That's it!
What happens in your original code is that, inside the loop, you also fetch the next item yourself. The result is that you in fact only show every second choice.
In fact, you can simplify it even more:
def my_func(choice_pattern, input):
choice_iterator = choice_pattern.finditer(input, re.M)
if choice_iterator:
for choice in choice_iterator:
print choice
if get_yes_no_answer():
return choice
# If there is no choices or no matches, you end up here:
print "No matches. Exiting..."
sys.exit()
Iterators are used pretty much like any sequence type. You don't need to treat them differently from a list.
A:
See the pairwise iterator from this question. You can then check for last item like this:
MISSING = object()
for choice, next_choice in pairwise(chain(choice_iterator, [MISSING])):
print(choice.group())
if get_yes_no_answer():
return choice.group()
if next_choice is MISSING:
print("No matches. Exiting...")
sys.exit()
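For reference, the pairwise helper the snippet above assumes is the standard itertools recipe -- a minimal Python 2.6 sketch:
from itertools import izip, tee

def pairwise(iterable):
    # s -> (s0, s1), (s1, s2), (s2, s3), ...
    a, b = tee(iterable)
    next(b, None)
    return izip(a, b)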
In the example you showed this doesn't seem to be necessary. You don't need to check if finditer returned an iterator because it always does. And you can just fall through the for loop if you don't find what you want:
def my_func(choice_pattern, input):
"""Search in input for some things to choose from."""
for choice in choice_pattern.finditer(input, re.M):
print(choice.group())
if get_yes_no_answer():
return choice.group()
else:
print("No choices. Exiting...")
sys.exit()
A:
Why such a complex approach?
I think this should do pretty much the same:
def my_func(pattern, data):
    # Note: don't pass re.M as the second argument of a compiled pattern's
    # findall -- that slot is the start position. Compile the flag into the
    # pattern instead.
    choices = pattern.findall(data)
    while choices:
        choice = choices.pop(0)
        if get_yes_no_answer():
            return choice
    return None
|
Testing for an empty iterator in a Python for... loop
|
The code below is based on this recipe. However, the key point of the recipe - that it provides a way to break out of the iteration on an iterator if the iterator is empty - doesn't seem to work here, instead behaving in the following undesired ways:
If get_yes_no_answer() == False and there are two or more items left in the iterator, next_choice is skipped, rather than being selected on the next iteration.
If get_yes_no_answer() == False and there are less than two items left in the iterator, my_func() returns None.
How can I ensure that:
If get_yes_no_answer() == False and there are two or more items left in the iterator, next_choice is not skipped?
If get_yes_no_answer() == False and there is one item left in the iterator, my_func() prints it and calls get_yes_no_answer()?
If get_yes_no_answer() == False and there are no items left in the iterator, the except StopIteration clause is triggered?
Here's the code:
def my_func(choice_pattern, input):
# Search in input for some things to choose from.
choice_iterator = choice_pattern.finditer(input, re.M)
if not choice_iterator:
print "No choices. Exiting..."
sys.exit()
else:
# Show choices to the user. For each one, ask user for a yes/no response. If
# choice accepted, return a result. Otherwise show user next choice. If no
# choices accepted by user, quit.
for choice in choice_iterator:
print choice.group()
# get_yes_no_answer() returns True or False depending on user response.
if get_yes_no_answer():
return choice
else:
# Check if iterator is empty. If so, quit; if not, do something else.
try:
next_choice = choice_iterator.next()
except StopIteration:
print "No matches. Exiting..."
sys.exit()
else:
choice_iterator = itertools.chain([next_choice], choice_iterator)
|
[
"why are you doing it this way at all? why not just:\ndef get_choice(pattern, inpt):\n choices = pattern.finditer(inpt, re.M)\n if not choices:\n sys.exit('No choices')\n for choice in choices:\n print(choice.group(0))\n if get_yes_no_answer():\n return choice\n sys.exit('No matches')\n\nI don't know what your is the length of your input but I doubt it's worth the trouble.\n",
"You don't need to check if the iterator is empty. The for loop will do that for you, and stop when the iterator is empty. It's as simple as that.\nAlso, you don't need the else after the sys.exit() or the return.\nThat done, your code looks like this:\ndef my_func(choice_pattern, input):\n # Search in input for some things to choose from.\n choice_iterator = choice_pattern.finditer(input, re.M)\n if not choice_iterator:\n print \"No choices. Exiting...\"\n sys.exit()\n\n # Show choices to the user. For each one, ask user for a yes/no response. If\n # choice accepted, return a result. Otherwise show user next choice. If no\n # choices accepted by user, quit.\n for choice in choice_iterator:\n print choice\n # get_yes_no_answer() returns True or False depending on user response.\n if get_yes_no_answer():\n return choice\n # Loop exited without matches.\n print \"No matches. Exiting...\"\n sys.exit()\n\nThat's it!\nWhat happens is that you in the loop, also gets the next item. The result is that you in fact only show every second answer.\nIn fact, you can simplify it even more:\ndef my_func(choice_pattern, input):\n choice_iterator = choice_pattern.finditer(input, re.M)\n if choice_iterator:\n for choice in choice_iterator:\n print choice\n if get_yes_no_answer():\n return choice\n # If there is no choices or no matches, you end up here:\n print \"No matches. Exiting...\"\n sys.exit()\n\nIterators are used pretty much as any sequence type. You don't need to treat it differently from a list.\n",
"See the pairwise iterator from this question. You can then check for last item like this:\nMISSING = object()\nfor choice, next_choice in pairwise(chain(choice_iterator, [MISSING])):\n print(choice.group())\n if get_yes_no_answer():\n return choice.group()\n if next_choice is MISSING:\n print(\"No matches. Exiting...\")\n sys.exit()\n\nIn the example you showed this doesn't seem to be necessary. You don't need to check if finditer returned an iterator because it always does. And you can just fall through the for loop if you don't find what you want:\ndef my_func(choice_pattern, input):\n \"\"\"Search in input for some things to choose from.\"\"\"\n for choice in choice_pattern.finditer(input, re.M):\n print(choice.group())\n if get_yes_no_answer():\n return choice.group()\n else:\n print(\"No choices. Exiting...\")\n sys.exit()\n\n",
"Why so complex approach?\nI think this should do pretty much the same:\ndef my_func(pattern, data):\n choices = pattern.findall(data, re.M)\n while(len(choices)>1):\n choice = choices.pop(0)\n if get_yes_no_answer():\n return choice\n else:\n choices.pop(0)\n else:\n return None\n\n"
] |
[
4,
2,
0,
0
] |
[] |
[] |
[
"conditional",
"exception",
"iterator",
"python"
] |
stackoverflow_0001491957_conditional_exception_iterator_python.txt
|
Q:
How do I disable psycopg2 connection pooling?
I have configured pgpool-II for postgres connection pooling and I want to disable psycopg2 connection pooling. How do I do this?
Thanks!
A:
psycopg2 doesn't pool connections unless you explicitly use the psycopg2.pool module.
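In other words, pooling only happens when you construct a pool object yourself -- a minimal sketch of what that opt-in looks like (connection parameters are hypothetical):
import psycopg2.pool

# min 1 / max 10 connections; pooling exists only because we asked for it
pool = psycopg2.pool.SimpleConnectionPool(1, 10, "dbname=test user=me")
conn = pool.getconn()
# ... use conn like a normal connection ...
pool.putconn(conn)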
|
How do I disable psycopg2 connection pooling?
|
I have configured pgpool-II for postgres connection pooling and I want to disable psycopg2 connection pooling. How do I do this?
Thanks!
|
[
"psycopg2 doesn't pool connections unless you explicitely use the psycopg.pool module.\n"
] |
[
6
] |
[
"I don't think you can. Dan McKinley bemoaned this fact (among some other interesting issues) in his blog post Python PostgreSQL Driver Authors Hate You.\n"
] |
[
-1
] |
[
"psycopg2",
"python"
] |
stackoverflow_0001440245_psycopg2_python.txt
|
Q:
How to obtain the name of the calling shell in Python?
I have a Python script that is always called from a shell, which can be either zsh or bash.
How can I tell which one called the script?
A:
In Linux you can use procfs:
>>> os.readlink('/proc/%d/exe' % os.getppid())
'/bin/bash'
os.getppid() returns the PID of the parent process; that part is portable. But obtaining the process name can't be done in a portable way. You can parse ps output, which is available on all unices, or use psutil.
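If the third-party psutil module is available, the parent's name can be read in one call -- a minimal sketch (psutil assumed installed):
import os
import psutil

parent = psutil.Process(os.getppid())
# psutil >= 2.0 exposes name() as a method; very old versions used a .name attribute
print parent.name()  # e.g. 'bash' or 'zsh'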
A:
You can't do this in a reliable automated way.
Environment variables can be misleading (a user can maliciously switch them). Most automatic shell variables aren't "leaky", i.e. they are only visible in the shell process, and not for child processes.
You could figure out your parent PID and then search the list of processes for that ID. Doesn't work if you're run in the background (in this case PPID is always 1).
A user could start your program from within a script. Which is the correct shell in this case? The one in which the script was started or the script's shell?
Other programs can use system calls to run your script. In this case, you'd get either their shell or nothing.
If you have absolute control over the user's environment, then put a variable in their profile (check the manuals for BASH and ZSH for a file which is always read at startup. IIRC, it's .profile for BASH).
[EDIT] Create an alias which is invoked for both shells. In the alias, use
env SHELL_HINT="x$BASH_VERSION" your_script.py
That should evaluate to "x" for zsh and to something else for bash.
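On the Python side, you would then read the hint back from the environment -- a small sketch of that check, assuming the alias above is in place:
import os

hint = os.environ.get("SHELL_HINT")
if hint == "x":
    shell = "zsh"      # BASH_VERSION was empty
elif hint:
    shell = "bash"     # BASH_VERSION was set
else:
    shell = "unknown"  # the script was not started through the alias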
|
How to obtain the name of the calling shell in Python?
|
I have a Python script that is always called from a shell, which can be either zsh or bash.
How can I tell which one called the script?
|
[
"In Linux you can use procfs:\n>>> os.readlink('/proc/%d/exe' % os.getppid())\n'/bin/bash'\n\nos.getppid() returns the PID of parent process. This is portable. But obtaining process name can't be done in portable way. You can parse ps output which is available on all unices, e.g. with psutil. \n",
"You can't do this in a reliable automated way.\n\nEnvironment variables can be misleading (a user can maliciously switch them). Most automatic shell variables aren't \"leaky\", i.e. they are only visible in the shell process, and not for child processes.\nYou could figure out your parent PID and then search the list of processes for that ID. Doesn't work if you're run in the background (in this case PPID is always 1).\nA user could start your program from within a script. Which is the correct shell in this case? The one in which the script was started or the script's shell?\nOther programs can use system calls to run your script. In this case, you'd get either their shell or nothing.\n\nIf you have absolute control over the user's environment, then put a variable in their profile (check the manuals for BASH and ZSH for a file which is always read at startup. IIRC, it's .profile for BASH).\n[EDIT] Create an alias which is invoked for both shells. In the alias, use\nenv SHELL_HINT=\"x$BASH_VERSION\" your_script.py\n\nThat should evaluate to \"x\" for zsh and to something else for bash.\n"
] |
[
10,
0
] |
[
"os.system(\"echo $0\")\nThis works flawlessly on my system:\ncat shell.py: \n\n #!/ms/dist/python/PROJ/core/2.5/bin/python\n\n import os\n print os.system(\"echo $0\")\n\n\nbash-2.05b$ uname -a\nLinux pi929c1n10 2.4.21-32.0.1.EL.msdwhugemem #1 SMP Mon Dec 5 21:32:44 EST 2005 i686 athlon i386 GNU/Linux\n\n\npi929c1n10 /ms/user/h/hirscst 8$ ./shell.py\n/bin/ksh\npi929c1n10 /ms/user/h/hirscst 9$ bash\nbash-2.05b$ ./shell.py\n/bin/ksh\nbash-2.05b$ \n\n"
] |
[
-3
] |
[
"python",
"shell"
] |
stackoverflow_0001492508_python_shell.txt
|
Q:
How to shutdown cherrypy from within?
I am developing on CherryPy, which I start from a Python script.
For better development I wonder what is the correct way to stop cherrypy from within the main process (and not from the outside with ctrl-c or SIGTERM).
I assume I have to register a callback function from the main application to be able to stop the cherrypy main process from a worker thread.
But how do I stop the main process from within?
A:
import sys
import cherrypy

class MyCherryPyApplication(object):

    def default(self):
        sys.exit()
    default.exposed = True

cherrypy.quickstart(MyCherryPyApplication())
Putting a sys.exit() in any request handler exits the whole server
I would have expected this only terminates the current thread, but it terminates the whole server. That's what I wanted.
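For what it's worth, CherryPy 3.1 and later also expose a bus API that stops the server more deliberately -- a hedged sketch, assuming a CherryPy 3.x install:
import cherrypy

class MyCherryPyApplication(object):

    def shutdown(self):
        # Ask the engine to stop all plugins and exit the main loop
        cherrypy.engine.exit()
        return "Shutting down..."
    shutdown.exposed = True

cherrypy.quickstart(MyCherryPyApplication())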
|
How to shutdown cherrypy from within?
|
I am developing on cherrypy, I start it from a python script.
For better development I wonder what is the correct way to stop cherrypy from within the main process (and not from the outside with ctrl-c or SIGTERM).
I assume I have to register a callback function from the main application to be able to stop the cherrypy main process from a worker thread.
But how do I stop the main process from within?
|
[
"import sys\nclass MyCherryPyApplication(object):\n\n def default(self):\n sys.exit()\n default.exposed = True\n\ncherrypy.quickstart(MyCherryPyApplication())\n\nPutting a sys.exit() in any request handler exits the whole server\nI would have expected this only terminates the current thread, but it terminates the whole server. That's what I wanted.\n"
] |
[
5
] |
[] |
[] |
[
"cherrypy",
"python"
] |
stackoverflow_0001492699_cherrypy_python.txt
|
Q:
Download multiple pages concurrently?
I'd like to write a script in Python that can grab url's from a database, and download web pages concurrently to speed things instead of waiting for each page to download one after the other.
According to this thread, Python doesn't allow this because of something called the Global Interpreter Lock, which prevents launching the same script multiple times.
Before investing time learning the Twisted framework, I'd like to make sure there isn't an easier way to do what I need to do above.
Thank you for any tip.
A:
Don't worry about the GIL. In your case it doesn't matter.
The easiest way to do what you want is to create a thread pool, using the threading module and one of the thread pool implementations from ASPN. Each thread from that pool can use httplib to download your web pages.
Another option is to use the PyCURL module -- it supports parallel downloads natively, so you don't have to implement it yourself.
A:
The GIL prevents you from effectively doing processor load balancing with threads. As this is not processor load balancing but preventing one IO wait from stopping the whole download, the GIL is not relevant here. *)
So all you need to do is create several processes that download at the same time. You can do that with the threading module or the multiprocessing module.
*) Well... unless you have Gigabit connections and your problem is really that your processor gets overloaded before your net does. But that's obviously not the case here.
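As a concrete illustration of the module suggestion above, multiprocessing.dummy gives you the Pool API backed by threads, which suits IO-bound downloads -- a minimal Python 2.6 sketch (the url list is a stand-in for your database query):
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as processes
import urllib2

urls = ["http://example.com", "http://example.org"]  # hypothetical

def fetch(url):
    return url, urllib2.urlopen(url).read()

pool = Pool(8)                  # eight concurrent downloads
results = pool.map(fetch, urls)
pool.close()
pool.join()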
A:
I recently solved this same problem. One thing to consider is that some people don't take kindly to having their servers bogged down and will block an IP address that does so. The standard courtesy that I've heard is about 3 seconds between page requests, but this is flexible.
If you are downloading from multiple websites you can group your URLs by domain and create one thread per domain. Then in your thread you can do something like this:
for url in urls:
timer = time.time()
# ... get your content ...
# perhaps put content in a queue to be written back to
# your database if it doesn't allow concurrent writes.
while time.time() - timer < 3.0:
time.sleep(0.5)
Sometimes just getting your response will take the full 3 seconds and you don't have to worry about it.
Granted, this won't help you at all if you're only downloading from one site, but it may keep you from getting blocked.
My machine handles about 200 threads before the overhead of managing them slowed the process down. I ended up at something like 40-50 pages per second.
A:
The urllib and threading (or multiprocessing) packages have all you need to build the "spider" you want.
What you have to do is get the urls from the DB, and for each url start a thread or process that grabs the url.
Just as an example (database url retrieval omitted):
#!/usr/bin/env python
import Queue
import threading
import urllib2
import time
hosts = ["http://yahoo.com", "http://google.com", "http://amazon.com",
"http://ibm.com", "http://apple.com"]
queue = Queue.Queue()
class ThreadUrl(threading.Thread):
"""Threaded Url Grab"""
def __init__(self, queue):
threading.Thread.__init__(self)
self.queue = queue
def run(self):
while True:
#grabs host from queue
host = self.queue.get()
#grabs urls of hosts and prints first 1024 bytes of page
url = urllib2.urlopen(host)
print url.read(1024)
#signals to queue job is done
self.queue.task_done()
start = time.time()
def main():
#spawn a pool of threads, and pass them queue instance
for i in range(5):
t = ThreadUrl(queue)
t.setDaemon(True)
t.start()
#populate queue with data
for host in hosts:
queue.put(host)
#wait on the queue until everything has been processed
queue.join()
main()
print "Elapsed Time: %s" % (time.time() - start)
A:
Downloading is IO, which may be done asynchronously using non-blocking sockets or Twisted. Both of these solutions will be much more efficient than threading or multiprocessing.
|
Download multiple pages concurrently?
|
I'd like to write a script in Python that can grab url's from a database, and download web pages concurrently to speed things instead of waiting for each page to download one after the other.
According to this thread, Python doesn't allow this because of something called Global Interpreter Lock that prevents lauching the same script multiple times.
Before investing time learning the Twisted framework, I'd like to make sure there isn't an easier way to do what I need to do above.
Thank you for any tip.
|
[
"Don't worry about GIL. In your case it doesn't matter.\nEasiest way to do what you want is to create thread pool, using threading module and one of thread pool implementations from ASPN. Each thread from that pool can use httplib to download your web pages.\nAnother option is to use PyCURL module -- it supports parallel downlaods natively, so you don't have to implement it yourself.\n",
"GIL prevents you from effectively doing processor load balancing with threads. As this is not processor loads balancing but preventing one IO wait from stopping the whole download, the GIL is not relevant here. *)\nSo all you need to do is create several processes that download at the same time. You can do that with the threading module or the multiprocessing module.\n*) Well... unless you have Gigabit connections and your problem is really that your processor gets overloaded before your net does. But that's obviously not the case here.\n",
"I recently solved this same problem. One thing to consider is that some people don't take kindly to having their servers bogged down and will block an IP address that does so. The standard courtesy that I've heard is about 3 seconds between page requests, but this is flexible.\nIf you are downloading from multiple websites you can group your URLs by domain and create one thread per. Then in your thread you can do something like this:\nfor url in urls:\n timer = time.time()\n # ... get your content ...\n # perhaps put content in a queue to be written back to \n # your database if it doesn't allow concurrent writes.\n while time.time() - timer < 3.0:\n time.sleep(0.5)\n\nSometimes just getting your response will take the full 3 seconds and you don't have to worry about it.\nGranted, this won't help you at all if you're only downloading from one site, but it may keep you from getting blocked.\nMy machine handles about 200 threads before the overhead of managing them slowed the process down. I ended up at something like 40-50 pages per second.\n",
"urllib & threading (or multiprocessing) packages has all you need to do the \"spider\" you need.\nWhat you have to do is get urls from DB, and for each url start a thread or process that \ngrabs the url. \njust as example (misses Data Base urls retrieving):\n#!/usr/bin/env python\nimport Queue\nimport threading\nimport urllib2\nimport time\n\nhosts = [\"http://yahoo.com\", \"http://google.com\", \"http://amazon.com\",\n \"http://ibm.com\", \"http://apple.com\"]\n\nqueue = Queue.Queue()\n\nclass ThreadUrl(threading.Thread):\n \"\"\"Threaded Url Grab\"\"\"\n def __init__(self, queue):\n threading.Thread.__init__(self)\n self.queue = queue\n\n def run(self):\n while True:\n #grabs host from queue\n host = self.queue.get()\n\n #grabs urls of hosts and prints first 1024 bytes of page\n url = urllib2.urlopen(host)\n print url.read(1024)\n\n #signals to queue job is done\n self.queue.task_done()\n\nstart = time.time()\ndef main():\n\n #spawn a pool of threads, and pass them queue instance\n for i in range(5):\n t = ThreadUrl(queue)\n t.setDaemon(True)\n t.start()\n\n #populate queue with data\n for host in hosts:\n queue.put(host)\n\n #wait on the queue until everything has been processed\n queue.join()\n\nmain()\nprint \"Elapsed Time: %s\" % (time.time() - start)\n\n",
"Downloading is IO, which may be done asynchronously using non-blocking sockets or twisted. Both of these solutions will be much more efficient than threading or multiprocessing.\n"
] |
[
9,
7,
2,
2,
0
] |
[] |
[] |
[
"concurrent_processing",
"python"
] |
stackoverflow_0001491993_concurrent_processing_python.txt
|
Q:
Identical string return FALSE with '==' in Python, why?
The data string is received through a socket connection. When receiving the first example, where the action variable would be 'IDENTIFY', it works. But when receiving the second example, where action would be 'MSG', it does not compare equal.
And the most bizarre thing: when I use Telnet instead of my socket client, both compare successfully. But the strings are the same... Is there a possibility that the strings are not encoded in the same way? How can I know?
data example:
data = 'IDENTIFY 54143'
or
data = 'MSG allo'
action = data.partition(' ')[0]
if action == "MSG":
self.sendMessage(data)
elif action == "IDENTIFY":
self.sendIdentify(data)
else:
print "false"
A:
Can't reproduce your problem. To debug it, print or log the repr() of data and action: this will likely show you the cause (probably some non-visible binary byte has snuck into data, based on how you obtained it [[which you don't show us]] and hence into action).
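A minimal sketch of that debugging step, with a hypothetical payload showing the kind of invisible byte a raw socket client can deliver:
data = 'MSG\x00 allo'             # hypothetical: a stray NUL from the client
action = data.partition(' ')[0]
print repr(data)                  # 'MSG\x00 allo' -- the NUL is invisible in a plain print
print repr(action)                # 'MSG\x00', which is why action == "MSG" is False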
|
Identical string return FALSE with '==' in Python, why?
|
The data string is receive through a socket connexion. When receiving the first example where action variable would = 'IDENTIFY', it works. But when receiving the second example where action variable would = 'MSG' it does not compare.
And the most bizarre thing, when I use Telnet instead of my socket client both are being compare successfully. But the string are the same... Is there a possibility that the string are not encode in the same way? How can I know?
data example:
data = 'IDENTIFY 54143'
or
data = 'MSG allo'
action = data.partition(' ')[0]
if action == "MSG":
self.sendMessage(data)
elif action == "IDENTIFY":
self.sendIdentify(data)
else:
print "false"
|
[
"Can't reproduce your problem. To debug it, print or log the repr() of data and action: this will likely show you the cause (probably some non-visible binary byte has snuck into data, based on how you obtained it [[which you don't show us]] and hence into action).\n"
] |
[
5
] |
[] |
[] |
[
"python",
"string_comparison"
] |
stackoverflow_0001493007_python_string_comparison.txt
|
Q:
Python WebkitGtk: How to respond to the default context menu items?
The default context menu contains items like "Open link in new window" and "Download linked file", which don't seem to do anything. I obviously like to react on these items, but can't figure out how, since the port's documentation is a bit sparse. Does anybody know?
A:
In the C port, you have to connect to the 'create-web-view', 'new-window-policy-decision-requested', and 'download-requested' signals. I think the Python port works the same way. See this page for the documentation on the C versions of those signals:
http://webkitgtk.org/reference/webkitgtk-WebKitWebView.html
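A hedged sketch of wiring one of those signals up from pywebkitgtk (signal and callback names follow the C docs above; the return-value convention is an assumption worth double-checking):
import webkit

view = webkit.WebView()

def on_download_requested(view, download):
    print "download requested:", download.get_uri()
    return True  # assumed: True tells WebKit we accepted/handled the download

view.connect("download-requested", on_download_requested)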
|
Python WebkitGtk: How to respond to the default context menu items?
|
The default context menu contains items like "Open link in new window" and "Download linked file", which don't seem to do anything. I obviously like to react on these items, but can't figure out how, since the port's documentation is a bit sparse. Does anybody know?
|
[
"In the C port, you have to connect to the 'create-web-view', 'new-window-policy-decision-requested', and 'download-requested' signals. I think the Python port works the same way. See this page for the documentation on the C versions of those signals:\nhttp://webkitgtk.org/reference/webkitgtk-WebKitWebView.html\n"
] |
[
2
] |
[] |
[] |
[
"gtk",
"python",
"webkit"
] |
stackoverflow_0001491179_gtk_python_webkit.txt
|
Q:
Program new functionality for zoom button on Microsoft Natural Ergonomic Desktop 7000
I just bought a new keyboard and mouse (Microsoft Natural Ergonomic Desktop 7000) and it has a neat little zoom lever in the middle of the keyboard. What I'd like to do is write a little program (in C# or Python, for use on Windows Vista) which makes the zoom button act like a scroll button instead.
I have no idea where to start. Where do I start? :)
A:
This web page and its comments should help you out a bit: Icool blog
|
Program new functionality for zoom button on Microsoft Natural Ergonomic Desktop 7000
|
I just bought a new keyboard and mouse (Microsoft Natural Ergonomic Desktop 7000) and it has a neat little zoom lever in the middle of the keyboard. What I'd like to do is write a little program (in C# or Python, for use on Windows Vista) which makes the zoom button act like a scroll button instead.
I have no idea where to start. Where do I start? :)
|
[
"This web page en comments should help you out a bit: Icool blog\n"
] |
[
1
] |
[] |
[] |
[
"c#",
"driver",
"keyboard",
"python"
] |
stackoverflow_0001493202_c#_driver_keyboard_python.txt
|
Q:
Rhythmbox: how do I access the 'rating' field of a track through Python script?
I would like the capability to get/set the rating associated with a specific track through Python. How do I achieve this?
A:
You can use Rhythmbox' D-Bus interface. I have written a small script that can get/set the rating and displays a notification, all acting on the currently playing song.
The script is here: http://kaizer.se/wiki/code/rhrating.py
Addendum one: I promise I write more beautiful Python when it's not a throwaway script!
Addendum two: The missing Usage string is ./rhrating.py [NEWRATING 0..5]
Addendum three: If I filter the script and take out the parts that exactly set the rating of a song at filesystem location uri, it's this:
import dbus
bus = dbus.Bus()
service_name = "org.gnome.Rhythmbox"
sobj_name = "/org/gnome/Rhythmbox/Shell"
siface_name = "org.gnome.Rhythmbox.Shell"
def set_rating(uri, rating):
searchobj = bus.get_object(service_name, sobj_name)
shell = dbus.Interface(searchobj, siface_name)
shell.setSongProperty(uri, "rating", float(rating))
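If you also want to read a rating back, the same Shell interface exposes a getSongProperties call -- an untested companion sketch, reusing the interface objects defined above (getSongProperties is assumed to return a dict of properties):
def get_rating(uri):
    searchobj = bus.get_object(service_name, sobj_name)
    shell = dbus.Interface(searchobj, siface_name)
    props = shell.getSongProperties(uri)
    return props.get("rating")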
|
Rhythmbox: how do I access the 'rating' field of a track through Python script?
|
I would like the capability to get/set the rating associated with a specific track through a Python. How do I achieve this?
|
[
"You can use Rhythmbox' D-Bus interface. I have written a small script that can get/set the rating and displays a notification, all acting on the currently playing song.\nThe script is here: http://kaizer.se/wiki/code/rhrating.py\nAddendum one: I promise I write more beautiful Python when it's not a throwaway script!\nAddendum two: The missing Usage string is ./rhrating.py [NEWRATING 0..5]\nAddendum three: If I filter the script and take out the parts that exactly set the rating of a song at filesystem location uri, it's this:\nimport dbus\nbus = dbus.Bus()\n\nservice_name = \"org.gnome.Rhythmbox\"\nsobj_name = \"/org/gnome/Rhythmbox/Shell\"\nsiface_name = \"org.gnome.Rhythmbox.Shell\"\n\ndef set_rating(uri, rating):\n searchobj = bus.get_object(service_name, sobj_name)\n shell = dbus.Interface(searchobj, siface_name)\n shell.setSongProperty(uri, \"rating\", float(rating))\n\n"
] |
[
3
] |
[] |
[] |
[
"linux",
"python",
"rhythmbox"
] |
stackoverflow_0001492849_linux_python_rhythmbox.txt
|
Q:
Cannot access django app through ip address while accessing it through localhost
I have a django app on my local computer. I can access the application from a browser by using the url: http://localhost:8000/myapp/
But I cannot access the application by using the ip of the host computer: http://193.140.209.49:8000/myapp/ I get a 404 error.
What should I do? Any suggestions?
A:
I assume you're using the development server. If so, then you need to specifically bind to your external IP for the server to be available there. Try this command:
./manage.py runserver 193.140.209.49:8000
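If you would rather not hard-code the machine's IP, binding to all interfaces works too:
./manage.py runserver 0.0.0.0:8000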
|
Cannot access django app through ip address while accessing it through localhost
|
I have a django app on my local computer. I can access the application from a browser by using the url: http://localhost:8000/myapp/
But I cannot access the application by using the ip of the host computer: http://193.140.209.49:8000/myapp/ I get a 404 error.
What should I do? Any suggestions?
|
[
"I assume you're using the development server. If so, then you need to specifically bind to your external IP for the server to be available there. Try this command:\n./manage.py runserver 193.140.209.49:8000\n\n"
] |
[
70
] |
[] |
[] |
[
"django",
"networking",
"python"
] |
stackoverflow_0001493479_django_networking_python.txt
|
Q:
Subclass of webapp.RequestHandler doesn't have a response attribute
Using the below code, my template loads fine until I submit the form, then I get the following error:
e = AttributeError("'ToDo' object has no attribute 'response'",)
Why doesn't my ToDo object have a response attribute? It works the first time it's called.
import cgi
import os
from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext.webapp import template
from google.appengine.ext import db
class Task(db.Model):
description = db.StringProperty(required=True)
complete = db.BooleanProperty()
class ToDo(webapp.RequestHandler):
def get(self):
todo_query = Task.all()
todos = todo_query.fetch(10)
template_values = { 'todos': todos }
self.renderPage('index.html', template_values)
def renderPage(self, filename, values):
path = os.path.join(os.path.dirname(__file__), filename)
self.response.out.write(template.render(path, values))
class UpdateList(webapp.RequestHandler):
def post(self):
todo = ToDo()
todo.description = self.request.get('description')
todo.put()
self.redirect('/')
application = webapp.WSGIApplication(
[('/', ToDo),
('/add', UpdateList)],
debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
Here's the template code so far, I'm just listing the descriptions for now.
<!doctype html public "-//w3c//dtd html 4.0 transitional//en">
<html>
<head>
<title>ToDo tutorial</title>
</head>
<body>
<div>
{% for todo in todos %}
<em>{{ todo.description|escape }}</em>
{% endfor %}
</div>
<h3>Add item</h3>
<form action="/add" method="post">
<label for="description">Description:</label>
<input type="text" id="description" name="description" />
<input type="submit" value="Add Item" />
</form>
</body>
</html>
A:
Why do you instantiate the ToDo request handler in post? It should be the Task model:
def post(self):
task = Task() # not ToDo()
task.description = self.request.get('description')
task.put()
self.redirect('/')
put called on a subclass of webapp.RequestHandler will try to handle a PUT request, according to the docs.
|
Subclass of webapp.RequestHandler doesn't have a response attribute
|
Using the below code, my template loads fine until I submit the from, then I get the following error:
e = AttributeError("'ToDo' object has no attribute 'response'",)
Why doesn't my ToDo object not have a response attribute? It works the first time it's called.
import cgi
import os
from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext.webapp import template
from google.appengine.ext import db
class Task(db.Model):
description = db.StringProperty(required=True)
complete = db.BooleanProperty()
class ToDo(webapp.RequestHandler):
def get(self):
todo_query = Task.all()
todos = todo_query.fetch(10)
template_values = { 'todos': todos }
self.renderPage('index.html', template_values)
def renderPage(self, filename, values):
path = os.path.join(os.path.dirname(__file__), filename)
self.response.out.write(template.render(path, values))
class UpdateList(webapp.RequestHandler):
def post(self):
todo = ToDo()
todo.description = self.request.get('description')
todo.put()
self.redirect('/')
application = webapp.WSGIApplication(
[('/', ToDo),
('/add', UpdateList)],
debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
Here's the template code so far, I'm just listing the descriptions for now.
<!doctype html public "-//w3c//dtd html 4.0 transitional//en">
<html>
<head>
<title>ToDo tutorial</title>
</head>
<body>
<div>
{% for todo in todos %}
<em>{{ todo.description|escape }}</em>
{% endfor %}
</div>
<h3>Add item</h3>
<form action="/add" method="post">
<label for="description">Description:</label>
<input type="text" id="description" name="description" />
<input type="submit" value="Add Item" />
</form>
</body>
</html>
|
[
"why do you do what you do in post? it should be:\ndef post(self):\n task = Task() # not ToDo()\n task.description = self.request.get('description')\n task.put()\n self.redirect('/')\n\nput called on a subclass of webapp.RequestHandler will try to handle PUT request, according to docs.\n"
] |
[
3
] |
[] |
[] |
[
"google_app_engine",
"post_redirect_get",
"python"
] |
stackoverflow_0001493467_google_app_engine_post_redirect_get_python.txt
|
Q:
Creating a news archive in Django
I'm looking to build a news archive in Python/Django, but I have no idea where to start. I need the view to pull out all the news articles (which I have done); I then need to divide them into months and years, e.g.
Sept 09
Oct 09
Then, every time a news article is created in a new month, the view needs to output the new month; so if a news article was written in November the archive would become:
Sept 09
Oct 09
Nov 09
Can anyone help?
A:
An excellent place to start is the book Practical Django Projects by James Bennett. Among other things, it guides you through the development of a web blog with multiple time-based views (by month, etc) that should serve you well as the basis for your application.
A:
One option you can try is to create a custom manager for your model that provides a way to pull out the archives. Here's code that I use:
from django.db import models, connection
import datetime
class EntryManager(models.Manager):
def get_archives(self, level=0):
query = """
SELECT
YEAR(`date_posted`) AS `year`,
MONTH(`date_posted`) AS `month`,
count(*) AS `num_entries`
FROM
`blog_entry`
WHERE
`date_posted` <= %s
GROUP BY
YEAR(`date_posted`),
MONTH(`date_posted`)
ORDER BY
`year` DESC,
`month` DESC
"""
months = ('January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December')
cursor = connection.cursor()
cursor.execute(query, [datetime.datetime.now()])
return [{'year': row[0], 'month': row[1], 'month_name': months[int(row[1])-1], 'num_entries': row[2]} for row in cursor.fetchall()]
You'll of course need to attach it to the model with:
objects = EntryManager()
This returns a list of dictionaries that contains the year, the numerical month, the month name, and the number of entries. You call it as such:
archives = Entry.objects.get_archives()
A:
It seems like you are trying to make a single view that pulls out all of the data and then tries to order it by dates etc. I don't think that would be the best way to go about this.
Instead what you could do is to make a view to display each month's articles. I don't know your models but something like:
articles = ArticleModel.objects.filter(date__month=month, date__year=year)
The month and year you would get from your url, e.g. archive/2009/9.
If you want to make sure that you only display links to archives that have content, an easy solution would be to get all the articles and flag the months that have content. There should be a better way to do that, but if you put it inside a middleware and cache it, it shouldn't be a problem. A sketch of the month view is below.
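A minimal sketch of that month view and its URL pattern (all names here -- ArticleModel, the date field, the template path -- are hypothetical stand-ins for your actual models):
# urls.py
# (r'^archive/(?P<year>\d{4})/(?P<month>\d{1,2})/$', 'myapp.views.month_archive'),

# views.py
from django.shortcuts import render_to_response
from myapp.models import ArticleModel

def month_archive(request, year, month):
    articles = ArticleModel.objects.filter(date__year=int(year),
                                           date__month=int(month))
    return render_to_response('myapp/month_archive.html', {'articles': articles})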
|
Creating a news archive in Django
|
I looking to build a news archive in python/django, I have no idea where to start with it though. I need the view to pull out all the news articles with I have done I then need to divide them in months and years so e.g.
Sept 09
Oct 09
I then need in the view to some every time a new news article is created for a new month it needs to output the new month, so if a news article was written in November the archive would then be,
Sept 09
Oct 09
Nov 09
Any one help?
|
[
"An excellent place to start is the book Practical Django Projects by James Bennett. Among other things, it guides you through the development of a web blog with multiple time-based views (by month, etc) that should serve you well as the basis for your application.\n",
"One option you can try is to create a custom manager for your model that provides a way to pull out the archives. Here's code that I use:\nfrom django.db import models, connection\nimport datetime\n\nclass EntryManager(models.Manager):\n def get_archives(self, level=0):\n query = \"\"\"\n SELECT\n YEAR(`date_posted`) AS `year`,\n MONTH(`date_posted`) AS `month`,\n count(*) AS `num_entries`\n FROM\n `blog_entry`\n WHERE\n `date_posted` <= %s\n GROUP BY\n YEAR(`date_posted`),\n MONTH(`date_posted`)\n ORDER BY\n `year` DESC,\n `month` DESC\n \"\"\"\n months = ('January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December')\n cursor = connection.cursor()\n cursor.execute(query, [datetime.datetime.now()])\n return [{'year': row[0], 'month': row[1], 'month_name': months[int(row[1])-1], 'num_entries': row[2]} for row in cursor.fetchall()]\n\nYou'll of course need to attach it to the model with:\nobjects = EntryManager()\n\nThis returns a list of dictionaries that contains the year, the numerical month, the month name, and the number of entries. You call it as such:\narchives = Entry.objects.get_archives()\n\n",
"It seems like you are trying to make a single view that pulls out of of the data and then try to order them by dates ect. I don't think that would be the best way to go about this.\nInstead what you could do is to make a view to display each month's articles. I don't know your models but something like:\narticles = ArticleModel.objects.filter(date__month=month, date__year=year)\n\nthe month and year you would get from your url, fx archive/2009/9.\nIf you want to make sure that you only display your links to archives that has content, an easy solution would be to get all the articles and flag the months that has content. There should be a better way to do that, but if you put it inside a middleware and cache it, it shouldn't be a problem though.\n"
] |
[
3,
3,
0
] |
[] |
[] |
[
"django",
"django_models",
"django_templates",
"mysql",
"python"
] |
stackoverflow_0001492866_django_django_models_django_templates_mysql_python.txt
|
Q:
Regex find non digit and/or end of string
How do I include an end-of-string and a non-digit character in a Python 2.6 regular expression set for searching?
I want to find 10-digit numbers with a non-digit at the beginning and a non-digit or end-of-string at the end. It is a 10-digit ISBN number and 'X' is valid for the final digit.
The following do not work:
is10 = re.compile(r'\D(\d{9}[\d|X|x])[$|\D]')
is10 = re.compile(r'\D(\d{9}[\d|X|x])[\$|\D]')
is10 = re.compile(r'\D(\d{9}[\d|X|x])[\Z|\D]')
The problem arises with the last set: [\$|\D] to match a non-digit or end-of-string.
Test with:
line = "abcd0123456789"
m = is10.search(line)
print m.group(1)
line = "abcd0123456789efg"
m = is10.search(line)
print m.group(1)
A:
You have to group the alternatives with parenthesis, not brackets:
r'\D(\d{9}[\dXx])($|\D)'
| is a different construct than []. It marks an alternative between two patterns, while [] matches one of the contained characters. So | should only be used inside of [] if you want to match the actual character |. Grouping of parts of patterns is done with parenthesis, so these should be used to restrict the scope of the alternative marked by |.
If you want to avoid that this creates match groups, you can use (?: ) instead:
r'\D(\d{9}[\dXx])(?:$|\D)'
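A quick check of the corrected pattern against strings like the ones in the question:
import re

is10 = re.compile(r'\D(\d{9}[\dXx])(?:$|\D)')
for line in ("abcd0123456789", "abcd012345678Xefg"):
    m = is10.search(line)
    print m.group(1)   # prints 0123456789, then 012345678X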
A:
\D(\d{10})(?:\Z|\D)
This finds a non-digit followed by 10 digits, then a single non-digit or end-of-string, capturing only the digits. While I see that you're searching for nine digits followed by a digit, X, or x, I don't see that stated in your requirements.
|
Regex find non digit and/or end of string
|
How do I include an end-of-string and one non-digit characters in a python 2.6 regular expression set for searching?
I want to find 10-digit numbers with a non-digit at the beginning and a non-digit or end-of-string at the end. It is a 10-digit ISBN number and 'X' is valid for the final digit.
The following do not work:
is10 = re.compile(r'\D(\d{9}[\d|X|x])[$|\D]')
is10 = re.compile(r'\D(\d{9}[\d|X|x])[\$|\D]')
is10 = re.compile(r'\D(\d{9}[\d|X|x])[\Z|\D]')
The problem arises with the last set: [\$|\D] to match a non-digit or end-of-string.
Test with:
line = "abcd0123456789"
m = is10.search(line)
print m.group(1)
line = "abcd0123456789efg"
m = is10.search(line)
print m.group(1)
|
[
"You have to group the alternatives with parenthesis, not brackets:\nr'\\D(\\d{9}[\\dXx])($|\\D)'\n\n| is a different construct than []. It marks an alternative between two patterns, while [] matches one of the contained characters. So | should only be used inside of [] if you want to match the actual character |. Grouping of parts of patterns is done with parenthesis, so these should be used to restrict the scope of the alternative marked by |.\nIf you want to avoid that this creates match groups, you can use (?: ) instead:\nr'\\D(\\d{9}[\\dXx])(?:$|\\D)'\n\n",
"\\D(\\d{10})(?:\\Z|\\D)\n\nfind non-digit followed by 10 digits, and a single non-digit or a end-of-string. Captures only digits. While I see that you're searching for nine digit followed by digit or X or x, I don't see same thing in your requirements.\n"
] |
[
7,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001493871_python_regex.txt
|
Q:
POSTing a complex JSON object using Prototype
I'm using Prototype 1.6.1 to create an application under IIS, using ASP and Python.
The python is generating a complex JSON object. I want to pass this object to another page via an AJAX request, but the Prototype documentation is a little too cunning for me.
Can someone show me an example of how to create a Prototype Ajax.Request that POSTs a JSON object, and then just prints out "Ok, I got it" or something like that?
Vielen dank!
A:
new Ajax.Request('/some_url',
{
  method: "post",
  contentType: "application/json",   // let the server know the body is JSON
  postBody: '{"some": "json"}',      // valid JSON needs double-quoted keys/values
  onSuccess: function(transport){
    var response = transport.responseText || "no response text";
    alert("Success! \n\n" + response);
  },
  onFailure: function(){ alert('Something went wrong...'); }
});
|
POSTing a complex JSON object using Prototype
|
I'm using Prototype 1.6.1 to create an application under IIS, using ASP and Python.
The python is generating a complex JSON object. I want to pass this object to another page via an AJAX request, but the Prototype documentation is a little too cunning for me.
Can someone show me an example of how to create an Prototype AJAX.Request that POSTs a JSON object, and then just prints out "Ok, I got it" or something like that?
Vielen dank!
|
[
"new Ajax.Request('/some_url',\n{\n method:\"post\",\n postBody:\"{'some':'json'}\",\n onSuccess: function(transport){\n var response = transport.responseText || \"no response text\";\n alert(\"Success! \\n\\n\" + response);\n },\n onFailure: function(){ alert('Something went wrong...') }\n});\n\n"
] |
[
7
] |
[] |
[] |
[
"iis",
"javascript",
"prototypejs",
"python"
] |
stackoverflow_0001494039_iis_javascript_prototypejs_python.txt
|
Q:
How to define [] for class in Python?
I feel like this question has already been asked and answered, yet I couldn't find anything on-topic, so excuse me if it is so. I want to define the behaviour of the [] brackets when applied to a class, similar to the def []=() construct in Ruby, so that in Python calling obj['foo'] would actually call some [](self, what) method. How can I do that?
A:
It's all in the docs: __getitem__.
A:
This is done with __getitem__ in Python.
Here is a list of all the operators:
http://docs.python.org/library/operator.html
A:
define a method in your class with __getitem__(key) and __setitem__(key, value)
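A tiny illustration of both hooks:
class Squares(object):
    def __getitem__(self, key):
        return key * key
    def __setitem__(self, key, value):
        print 'setting %r to %r' % (key, value)

s = Squares()
print s[4]     # 16
s['foo'] = 1   # prints: setting 'foo' to 1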
A:
http://docs.python.org/reference/datamodel.html
Section 3.4 in the above link shows you all or most of the operators you can overload in Python. The one you want to overload is
__getitem__()
|
How to define [] for class in Python?
|
I feel like this question has already been asked and answered, yet I couldn't find anything on-topic, so excuse me if it is so. I want to define the behaviour of [] brackets when applied to class, similar to def []=() construct in ruby, so that calling Python obj['foo'] would actually call some [](self, what) method. How can I do that?
|
[
"It's all in the docs: __getitem__.\n",
"This is done with __getitem___ in Python.\nHere is a list of all the operators:\nhttp://docs.python.org/library/operator.html\n",
"define a method in your class with __getitem__(key) and __setitem__(key, value)\n",
"http://docs.python.org/reference/datamodel.html\nSection 3.4 in the above link shows you all or most of the operators you can overload in Python. The one you want to overload is\n__getitem__()\n\n"
] |
[
11,
7,
5,
4
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001494146_python.txt
|
Q:
General Command pattern and Command Dispatch pattern in Python
I was looking for a Command pattern implementation in Python...
(According to Wikipedia,
the command pattern is a design
pattern in which an object is used to
represent and encapsulate all the
information needed to call a method at
a later time.
)
The only thing I found was Command Dispatch pattern:
class Dispatcher:
def do_get(self): ...
def do_put(self): ...
def error(self): ...
def dispatch(self, command):
mname = 'do_' + command
if hasattr(self, mname):
method = getattr(self, mname)
method()
else:
self.error()
Maybe I'm wrong, but it looks like these are two different concepts which accidentally have similar names.
Am I missing something?
A:
The simplest command pattern is already built into Python, simply use a callable:
def greet(who):
print "Hello %s" % who
greet_command = lambda: greet("World")
# pass the callable around, and invoke it later
greet_command()
The command pattern as an object oriented design pattern makes more sense if your commands need to be able to do more than just be invoked. Common usecase is when you need to be able to undo/redo your actions. Then a command class is a good way to couple the forward and backwards actions together. For example:
class MoveFileCommand(object):
def __init__(self, src, dest):
self.src = src
self.dest = dest
os.rename(self.src, self.dest)
def undo(self):
os.rename(self.dest, self.src)
undo_stack = []
undo_stack.append(MoveFileCommand('foo.txt', 'bar.txt'))
undo_stack.append(MoveFileCommand('bar.txt', 'baz.txt'))
# foo.txt is now renamed to baz.txt
undo_stack.pop().undo() # Now it's bar.txt
undo_stack.pop().undo() # and back to foo.txt
A:
Yes, you do miss something: the command pattern is only necessary in languages that don't have function pointers (or functions as first class objects), such as Java. In languages with functions-as-objects, you can use the function itself; no need to have a separate command object (which then should have a "doit" method).
In the example you quoted, the getattr() call gives you the "command object" (i.e. the bound method); adding parentheses after it "invokes" (i.e. calls) the command object.
A:
Did some searching and found this. It appears to do the job of encapsulating an action.
def demo(a,b,c):
print 'a:',a
print 'b:',b
print 'c:',c
class Command:
def __init__(self, cmd, *args):
self._cmd=cmd
self._args=args
    def __call__(self, *args):
        # equivalent to the deprecated apply(self._cmd, self._args + args)
        return self._cmd(*(self._args + args))
cmd=Command(dir,__builtins__)
print cmd()
cmd=Command(demo,1,2)
cmd(3)
A:
If I recall the Gang of Four correctly, the Command pattern is about commands like "File - Save", not commands like "svn commit", which is what your code is good for.
Martin suggests the Command pattern is unneeded because functions as first class objects take its place, but the Command pattern is richer than just doit(), having, for example, also undo(), is_enabled(), etc.
|
General Command pattern and Command Dispatch pattern in Python
|
I was looking for a Command pattern implementation in Python...
(According to Wikipedia,
the command pattern is a design
pattern in which an object is used to
represent and encapsulate all the
information needed to call a method at
a later time.
)
The only thing I found was Command Dispatch pattern:
class Dispatcher:
def do_get(self): ...
def do_put(self): ...
def error(self): ...
def dispatch(self, command):
mname = 'do_' + command
if hasattr(self, mname):
method = getattr(self, mname)
method()
else:
self.error()
May be I'm wrong, but it looks like these are two different concepts, which accidentally have similar names.
Am i missing something?
|
[
"The simplest command pattern is already built into Python, simply use a callable:\ndef greet(who):\n print \"Hello %s\" % who\n\ngreet_command = lambda: greet(\"World\")\n# pass the callable around, and invoke it later\ngreet_command()\n\nThe command pattern as an object oriented design pattern makes more sense if your commands need to be able to do more than just be invoked. Common usecase is when you need to be able to undo/redo your actions. Then a command class is a good way to couple the forward and backwards actions together. For example:\nclass MoveFileCommand(object):\n def __init__(self, src, dest):\n self.src = src\n self.dest = dest\n os.rename(self.src, self.dest)\n def undo(self):\n os.rename(self.dest, self.src)\n\nundo_stack = []\nundo_stack.append(MoveFileCommand('foo.txt', 'bar.txt'))\nundo_stack.append(MoveFileCommand('bar.txt', 'baz.txt'))\n# foo.txt is now renamed to baz.txt\nundo_stack.pop().undo() # Now it's bar.txt\nundo_stack.pop().undo() # and back to foo.txt\n\n",
"Yes, you do miss something: the command pattern is only necessary in languages that don't have function pointers (or functions as first class objects), such as Java. In languages with functions-as-objects, you can use the function itself; no need to have a separate command object (which then should have a \"doit\" method).\nIn the example could you quote, the getattr() call gives you the \"command object\" (i.e. the bound method); adding parenthesis after it \"invokes\" (i.e. calls) the command object.\n",
"Did some searching and found this. It appears to do the job of encapsulating an action.\ndef demo(a,b,c):\n print 'a:',a\n print 'b:',b\n print 'c:',c\n\nclass Command:\n def __init__(self, cmd, *args):\n self._cmd=cmd\n self._args=args\n\n def __call__(self, *args):\n return apply(self._cmd, self._args+args)\n\n\ncmd=Command(dir,__builtins__)\nprint cmd()\n\ncmd=Command(demo,1,2)\ncmd(3)\n\n",
"If I recall the gang of four correctly, the Command pattern is about commands like \"File - Save\", not commands like \"svn commit\", which is what your code is good for.\nMartin suggests the Command pattern is unneeded because functions as first class objects take its place, but the Command pattern is richer than just doit(), having, for example, also undo(), is_enabled(), etc.\n"
] |
[
63,
5,
4,
4
] |
[] |
[] |
[
"design_patterns",
"oop",
"python"
] |
stackoverflow_0001494442_design_patterns_oop_python.txt
|
Q:
Python Auto Importing
Possible Duplicate:
Perl's AUTOLOAD in Python (getattr on a module)
I'm coming from a PHP background and attempting to learn Python, and I want to be sure to do things the "Python way" instead of how I've developed before.
My question comes from the fact that in PHP5 you can set up your code so that, if you attempt to call a class that doesn't exist in the namespace, a function will run first that loads the class and lets you continue on as if it were already loaded. The advantages of this are that classes aren't loaded unless they are called, and you don't have to worry about loading classes before using them.
In Python, there's a lot of emphasis on the import statement; is it bad practice to attempt an auto-importing trick in Python, to alleviate the need for import statements? I've found this module that offers auto importing; however, I don't know if that's the best way of doing it, or if auto importing of modules is something that is recommended. Thoughts?
A:
Imports serve at least two other important purposes besides making the modules or contents of the modules available:
They serve as a sort of declaration of intent -- "this module uses services from this other module" or "this module uses services belonging to a certain class" -- e.g. if you are doing a security review for socket-handling code, you can begin by only looking at modules that import socket (or other networking-related modules)
Imports serve as a proxy for the complexity of a module. If you find yourself with dozens of lines of imports, it may be time to reconsider your separation of concerns within the module, or within your application as a whole. This is also a good reason to avoid "from foo import *"-type imports.
A:
In Python, people usually avoid auto imports, simply because they are not worth the effort. You may slightly reduce startup costs, but otherwise there is no (or should be no) significant effect. If you have modules that are expensive to import and do a lot of stuff that doesn't need to be done, rather rewrite the module than delay importing it.
That said, there is nothing inherently wrong with auto imports. Because of the proxy nature, there may be some pitfalls (e.g. when looking at a thing that has not been imported yet). Several auto importing libraries are floating around.
A:
If you are learning Python and want to do things the Python way, then just import the modules. It's very unusual to find autoimports in Python code.
A:
You could auto-import the modules, but the most I have ever needed to import was about 10, and that is after I tacked features on top of the original program. You won't be importing a lot, and the names are very easy to remember.
|
Python Auto Importing
|
Possible Duplicate:
Perl's AUTOLOAD in Python (getattr on a module)
I'm coming from a PHP background and attempting to learn Python, and I want to be sure to do things the "Python way" instead of how i've developed before.
My question comes from the fact in PHP5 you can set up your code so if you attempt to call a class that doesn't exist in the namespace, a function will run first that will load the class in and allow you to continue on as if it were already loaded. the advantages to this is classes weren't loaded unless they were called, and you didn't have to worry about loading classes before using them.
In python, there's alot of emphasis on the import statement, is it bad practice to attempt an auto importing trick with python, to alleviate the need for an import statement? I've found this module that offers auto importing, however I dont know if that's the best way of doing it, or if auto importing of modules is something that is recommended, thoughts?
|
[
"Imports serve at least two other important purposes besides making the modules or contents of the modules available:\n\nThey serve as a sort of declaration of intent -- \"this module uses services from this other module\" or \"this module uses services belonging to a certain class\" -- e.g. if you are doing a security review for socket-handling code, you can begin by only looking at modules that import socket (or other networking-related modules)\nImports serve as a proxy for the complexity of a module. If you find yourself with dozens of lines of imports, it may be time to reconsider your separation of concerns within the module, or within your application as a whole. This is also a good reason to avoid \"from foo import *\"-type imports.\n\n",
"In Python, people usually avoid auto imports, just because it is not worth the effort. You may slightly remove startup costs, but otherwise, there is no (or should be no) significant effect. If you have modules that are expensive to import and do a lot of stuff that doesn't need to be done, rather rewrite the module than delay importing it.\nThat said, there is nothing inherently wrong with auto imports. Because of the proxy nature, there may be some pitfalls (e.g. when looking at a thing that has not been imported yet). Several auto importing libraries are floating around.\n",
"If you are learning Python and want to do things the Python way, then just import the modules. It's very unusual to find autoimports in Python code.\n",
"You could auto-import the modules, but the most I have ever needed to import was about 10, and that is after I tacked features on top of the original program. You won't be importing a lot, and the names are very easy to remember.\n"
] |
[
15,
11,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001493888_python.txt
|
Q:
Separate Admin/User authentication system in Django
I've recently started learning/using Django; I'm trying to figure out a way to have two separate authentication systems for administrators and users. Rather than create a whole new auth system, I'd like to leverage Django's built-in functionality (i.e. session management, the @login_required decorator, etc.).
Specifically, I want to have two separate login tables - one for admins, one for users. The admin login table should be the default table that Django generates with its default fields (i.e. id, username, email, is_staff, etc.). The user table, on the other hand, should have only 5 fields - id, email, password, first_name, last_name. Furthermore, I want to use Django's built-in session management for both login tables and the @login_required decorator for their respective views. Lastly, I want two separate and distinct login forms for admins and users.
Anyone have any suggestions on how I can achieve my goal or know of any articles/examples that could help me along?
A:
You could potentially write one or more custom authentication backends. This is documented here. I have written a custom backend to authenticate against an LDAP server, for example.
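As a hedged illustration (not from the original answer), a minimal custom backend might authenticate against the built-in User model by email, assuming a recent Django; list its dotted path in AUTHENTICATION_BACKENDS to activate it:
from django.contrib.auth.backends import ModelBackend
from django.contrib.auth.models import User

class EmailBackend(ModelBackend):
    # Log users in by email instead of username; Django's sessions and
    # @login_required then work unchanged.
    def authenticate(self, request=None, username=None, password=None, **kwargs):
        try:
            user = User.objects.get(email=username)
        except User.DoesNotExist:
            return None
        return user if user.check_password(password) else None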
A:
If I understand your question correctly (and perhaps I don't), I think you're asking how to create a separate login form for non-admin users, while still using the standard Django authentication mechanisms, User model, etc. This is supported natively by Django through views in django.contrib.auth.views.
You want to start with django.contrib.auth.views.login. Add a line to your urlconf like so:
(r'^login/$', 'django.contrib.auth.views.login', {'template_name': 'myapp/login.html'})
The login generic view accepts the template_name parameter, which is the path to your custom login template (there is a generic one you can use as well, provided by django.contrib.auth).
Full documentation on the login, logout, password_change, and other generic views are available in the Django Authentication Docs.
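To tie this to the @login_required part of the question, a small usage sketch (the view name is assumed):
from django.contrib.auth.decorators import login_required

@login_required(login_url='/login/')
def dashboard(request):  # hypothetical user-facing view
    ...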
A:
Modify things slightly so that users have a category prefix on their username? You haven't given us much info on what you want to do, it's possible that your needs might be met by using the sites framework, or simply two separate django installs.
If what you're trying to do is make the user login page and the admin login page separate, just use the built-in framework as detailed in the docs to create a "user" login page and leave the admin one alone. If you're worried that users will somehow start editing admin login stuff, don't be, they won't unless you let them.
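For instance, in a recent Django the two login flows can coexist like this (template path assumed; this is a sketch using today's class-based LoginView rather than the function view named above):
from django.contrib import admin
from django.contrib.auth import views as auth_views
from django.urls import path

urlpatterns = [
    path('admin/', admin.site.urls),  # admin keeps its own login screen
    path('login/', auth_views.LoginView.as_view(
        template_name='myapp/login.html')),  # separate user-facing login
]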
|
Separate Admin/User authentication system in Django
|
I've recently started learning/using Django; I'm trying to figure out a way to have two separate authentication systems for administrators and users. Rather than create a whole new auth system, I'd like to leverage Django's built-in functionality (i.e. session management, the @login_required decorator, etc.).
Specifically, I want to have two separate login tables - one for admins, one for users. The admin login table should be the default table that Django generates with its default fields (i.e. id, username, email, is_staff, etc.). The user table, on the other hand, should have only 5 fields - id, email, password, first_name, last_name. Furthermore, I want to use Django's built-in session management for both login tables and the @login_required decorator for their respective views. Lastly, I want two separate and distinct login forms for admins and users.
Anyone have any suggestions on how I can achieve my goal or know of any articles/examples that could help me along?
|
[
"You could potentially write one or more custom authentication backends. This is documented here. I have written a custom backend to authenticate against an LDAP server, for example.\n",
"If I understand your question correctly (and perhaps I don't), I think you're asking how to create a separate login form for non-admin users, while still using the standard Django authentication mechanisms, User model, etc. This is supported natively by Django through views in django.contrib.auth.views.\nYou want to start with django.contrib.auth.views.login. Add a line to your urlconf like so:\n(r'^/login/$', 'django.contrib.auth.views.login', {'template_name': 'myapp/login.html'})\n\nThe login generic view accepts the template_name parameter, which is the path to your custom login template (there is a generic one you can use as well, provided by django.contrib.auth).\nFull documentation on the login, logout, password_change, and other generic views are available in the Django Authentication Docs.\n",
"Modify things slightly so that users have a category prefix on their username? You haven't given us much info on what you want to do, it's possible that your needs might be met by using the sites framework, or simply two separate django installs.\nIf what you're trying to do is make the user login page and the admin login page separate, just use the built in framework as detailed in the docs to create a \"user\" login page and leave the admin one alone. If you're worried that users will somehow start editing admin login stuff, don't be, they won't unless you let them. \n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"authentication",
"django",
"python"
] |
stackoverflow_0001494524_authentication_django_python.txt
|
Q:
Percentage with variable precision
I would like to display a percentage with three decimal places unless it's greater than 99%. Then, I'd like to display the number with all the available nines plus 3 non-nine characters.
How can I write this in Python? The "%.8f" string formatting works decently, but I need to keep the last three characters after the last string of nines.
So:
54.8213% -> 54.821%
95.42332% -> 95.423%
99.9932983% -> 99.99330%
99.99999999992318 -> 99.9999999999232%
A:
Try this:
import math
def format_percentage(x, precision=3):
return ("%%.%df%%%%" % (precision - min(0,math.log10(100-x)))) % x
A:
Mark Ransom's answer is a beautiful thing. With a little bit of work, it can solve the problem for any inputs. I went ahead and did the little bit of work.
You just need to add some code to nines():
def nines(x):
x = abs(x) # avoid exception caused if x is negative
x -= int(x) # keep fractional part of x only
cx = ceilpowerof10(x) - x
if 0 == cx:
return 0 # if x is a power of 10, it doesn't have a string of 9's!
return -int(math.log10(cx))
Then threeplaces() works for anything. Here are a few test cases:
>>> threeplaces(0.9999357)
'0.9999357'
>>> threeplaces(1000.9999357)
'1000.9999357'
>>> threeplaces(-1000.9999357)
'-1000.9999357'
>>> threeplaces(0.9900357)
'0.99004'
>>> threeplaces(1000.9900357)
'1000.99004'
>>> threeplaces(-1000.9900357)
'-1000.99004'
A:
def ceilpowerof10(x):
return math.pow(10, math.ceil(math.log10(x)))
def nines(x):
return -int(math.log10(ceilpowerof10(x) - x))
def threeplaces(x):
return ('%.' + str(nines(x) + 3) + 'f') % x
Note that nines() throws an error on numbers that are a power of 10 to begin with, it would take a little more work to make it safe for all input. There are probably some issues with negative numbers as well.
A:
Try this:
def print_percent(p):
for i in range(30):
if p <= 100. - 10.**(-i):
print ("%." + str(max(3,3+i-1)) + "f") % p
return
or this if you just want to retrieve the string
def print_percent(p):
for i in range(20):
if p <= 100. - 10.**(-i):
return ("%." + str(max(3,3+i-1)) + "f") % p
A:
I am quite confident that this is not possible with standard formatting. I suggest using something like the following (C#-like pseudocode). In particular, I suggest relying on string operations rather than math code because of the many possible precision and rounding problems.
string numberString = number.ToStringWithFullPrecision();
int index = numberString.IndexOf('.');
while ((index < numberString.Length - 1) && (numberString[index + 1] == '9'))
{
index++;
}
WriteLine(number.PadRightWithThreeZeros().SubString(0, index + 4));
If you like regular expressions, you can use them too. Match the following expression against the full-precision number string padded with three zeros and you are done.
^([0-9]|[1-9][0-9]|100)\.(9*)([0-8][0-9]{2})
I just realized that both suggestions may cause rounding errors. 99.91238123 becomes 99.9123 when it should become 99.9124 - so the last digit requires additional correction. Easy to do, but it makes my suggestion even uglier. This is far from an elegant algorithm.
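A hedged Python translation of this string-inspection idea (a sketch, not the original author's code) that sidesteps the log10 edge cases:
def string_precision(x, base=3):
    # Count the leading run of 9s in the fractional part, then ask for
    # that many extra digits; "%.*f" takes the precision as an int.
    frac = ("%.17f" % x).split(".")[1]
    run = len(frac) - len(frac.lstrip("9"))
    return "%.*f" % (base + run, x)

print(string_precision(54.8213))     # 54.821
print(string_precision(99.9932983))  # 99.99330

Because %f rounds rather than truncates, the 99.91238123 -> 99.9124 case above comes out right here, though float representation can still perturb the 9-count on borderline inputs.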
A:
def ilike9s(f):
return re.sub(r"(\d*\.9*\d\d\d)\d*",r"\1","%.17f" % f)
So...
>>> ilike9s(1.0)
'1.000'
>>> ilike9s(12.9999991232132132)
'12.999999123'
>>> ilike9s(12.345678901234)
'12.345'
And don't forget to import re
|
Percentage with variable precision
|
I would like to display a percentage with three decimal places unless it's greater than 99%. Then, I'd like to display the number with all the available nines plus 3 non-nine characters.
How can I write this in Python? The "%.8f" string formatting works decently, but I need to keep the last three characters after the last string of nines.
So:
54.8213% -> 54.821%
95.42332% -> 95.423%
99.9932983% -> 99.99330%
99.99999999992318 -> 99.9999999999232%
|
[
"Try this:\nimport math\ndef format_percentage(x, precision=3):\n return (\"%%.%df%%%%\" % (precision - min(0,math.log10(100-x)))) % x\n\n",
"Mark Ransom's answer is a beautiful thing. With a little bit of work, it can solve the problem for any inputs. I went ahead and did the little bit of work.\nYou just need to add some code to nines():\ndef nines(x):\n x = abs(x) # avoid exception caused if x is negative\n x -= int(x) # keep fractional part of x only\n cx = ceilpowerof10(x) - x\n if 0 == cx:\n return 0 # if x is a power of 10, it doesn't have a string of 9's!\n return -int(math.log10(cx))\n\nThen threeplaces() works for anything. Here are a few test cases:\n>>> threeplaces(0.9999357)\n'0.9999357'\n>>> threeplaces(1000.9999357)\n'1000.9999357'\n>>> threeplaces(-1000.9999357)\n'-1000.9999357'\n>>> threeplaces(0.9900357)\n'0.99004'\n>>> threeplaces(1000.9900357)\n'1000.99004'\n>>> threeplaces(-1000.9900357)\n'-1000.99004'\n\n",
"def ceilpowerof10(x):\n return math.pow(10, math.ceil(math.log10(x)))\n\ndef nines(x):\n return -int(math.log10(ceilpowerof10(x) - x))\n\ndef threeplaces(x):\n return ('%.' + str(nines(x) + 3) + 'f') % x\n\nNote that nines() throws an error on numbers that are a power of 10 to begin with, it would take a little more work to make it safe for all input. There are probably some issues with negative numbers as well.\n",
"Try this:\ndef print_percent(p): \n for i in range(30):\n if p <= 100. - 10.**(-i):\n print (\"%.\" + str(max(3,3+i-1)) + \"f\") % p\n return\n\nor this if you just want to retrieve the string\ndef print_percent(p): \n for i in range(20):\n if p <= 100. - 10.**(-i):\n return (\"%.\" + str(max(3,3+i-1)) + \"f\") % p\n\n",
"I am quite confident that this is not possible with standard formating. I suggest to use something like the following (C# like pseudo code). Especially I suggest to rely on string operations and not to use math code because of many possible precision and rounding problems.\nstring numberString = number.ToStringWithFullPrecision();\n\nint index = numberString.IndexOf('.');\n\nwhile ((index < numberString.Length - 1) && (numberString[index + 1] == '9'))\n{\n index++;\n}\n\nWriteLine(number.PadRightWithThreeZeros().SubString(0, index + 4));\n\nIf you like regular expression, you can use them to. Take the following expression and match it against the full precision number string padded with three zeros and you are done.\n^([0-9]|[1-9][0-9]|100)\\.(9*)([0-8][0-9]{2})\n\n\nI just realized that both suggestion may cause rounding errors. 99.91238123 becomes 99.9123 when it should become 99.9124 - so the last digits requires additional correction. Easy to do, but makes my suggestion even uglier. This is far away from an elegant and smart algorithm.\n",
" def ilike9s(f):\n return re.sub(r\"(\\d*\\.9*\\d\\d\\d)\\d*\",r\"\\1\",\"%.17f\" % f)\n\n\nSo...\n>>> ilike9s(1.0)\n'1.000'\n>>> ilike9s(12.9999991232132132)\n'12.999999123'\n>>> ilike9s(12.345678901234)\n'12.345'\n\nAnd don't forget to import re\n"
] |
[
3,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"algorithm",
"precision",
"python"
] |
stackoverflow_0001494708_algorithm_precision_python.txt
|