content (stringlengths 85-101k) | title (stringlengths 0-150) | question (stringlengths 15-48k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (stringlengths 35-137)
---|---|---|---|---|---|---|---|---
Q:
Is there a portable way of finding if a path is absolute, using Python?
Is there some built-in function that tells me if a path is absolute or not? I'd like something that is platform independent.
A:
The os.path.isabs function does this.
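For example, a minimal sketch (the paths shown are only illustrative):
import os.path
print os.path.isabs("/usr/local/bin")    # True (POSIX-style absolute path)
print os.path.isabs(r"C:\Temp")          # True on Windows, False on POSIX
print os.path.isabs("subdir/file.txt")   # False (relative path)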
|
Is there a portable way of finding if a path is absolute, using Python?
|
Is there some built-in function that tells me if a path is absolute or not? I'd like something that is platform independent.
|
[
"The os.path.isabs function does this.\n"
] |
[
13
] |
[] |
[] |
[
"absolute_path",
"python"
] |
stackoverflow_0001513155_absolute_path_python.txt
|
Q:
Python optparse defaults vs function defaults
I'm writing a python script which I would like to be able to both call from the command line and import as a library function.
Ideally the command line options and the function should use the same set of default values.
What is the best way to allow me to reuse a single set of defaults in both places?
Here's the current code with duplicate defaults.
from optparse import OptionParser
def do_stuff(opt1="a", opt2="b", opt3="c"):
print opt1, opt2, opt3
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("--opt1", default="a")
parser.add_option("--opt2", default="b")
parser.add_option("--opt3", default="c")
#parser.set_defaults(opt1="a")
options, args = parser.parse_args()
do_stuff(*args, **vars(options))
A:
I'd handle it by introspecting the function of interest to set options and defaults appropriately. For example:
import inspect
from optparse import OptionParser
import sys
def do_stuff(opt0, opt1="a", opt2="b", opt3="c"):
print opt0, opt1, opt2, opt3
if __name__ == "__main__":
parser = OptionParser()
args, varargs, varkw, defaults = inspect.getargspec(do_stuff)
if varargs or varkw:
sys.exit("Sorry, can't make opts from a function with *a and/or **k!")
lend = len(defaults)
nodef = args[:-lend]
for a in nodef:
parser.add_option("--%s" % a)
for a, d in zip(args[-lend:], defaults):
parser.add_option("--%s" % a, default=d)
options, args = parser.parse_args()
d = vars(options)
for n, v in zip(nodef, args):
d[n] = v
do_stuff(**d)
A:
Here's a solution - it's trivial if you only need keyword arguments: just use locals.update. The following handles both positional and keyword args (keyword args override positional).
from optparse import OptionParser
ARGS = {'opt1': 'a',
'opt2': 'b',
'opt3': 'c'}
def do_stuff(*args, **kwargs):
locals = ARGS
keys = ARGS.keys()
keys.sort()
if args:
for key,arg in zip(keys,args):
locals.update({key: arg})
if kwargs:
locals.update(kwargs)
print locals['opt1'], locals['opt2'], locals['opt3']
if __name__ == "__main__":
parser = OptionParser()
for key,default in ARGS.items():
parser.add_option('--%s' % key, default='%s' % default)
options, args = parser.parse_args()
do_stuff(*args, **vars(options))
do_stuff()
do_stuff('d','e','f')
do_stuff('d','e','f', opt3='b')
do_stuff(opt1='c', opt2='a', opt3='b')
Output:
a b c
a b c
d e f
d e b
c a b
A:
The inspect solution by Alex is very powerful!
For lightweight programs, you could also simply use this:
def do_stuff(opt1="a", opt2="b", opt3="c"):
print opt1, opt2, opt3
if __name__ == "__main__":
from optparse import OptionParser
opts = do_stuff.func_defaults
parser = OptionParser()
parser.add_option("--opt1", default=opts[0], help="Option 1 (%default)")
parser.add_option("--opt2", default=opts[1], help="Option 2 (%default)")
parser.add_option("--opt3", default=opts[2], help="Option 3 (%default)")
options, args = parser.parse_args()
do_stuff(*args, **vars(options))
|
Python optparse defaults vs function defaults
|
I'm writing a python script which I would like to be able to both call from the command line and import as a library function.
Ideally the command line options and the function should use the same set of default values.
What is the best way to allow me to reuse a single set of defaults in both places?
Here's the current code with duplicate defaults.
from optparse import OptionParser
def do_stuff(opt1="a", opt2="b", opt3="c"):
print opt1, opt2, opt3
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("--opt1", default="a")
parser.add_option("--opt2", default="b")
parser.add_option("--opt3", default="c")
#parser.set_defaults(opt1="a")
options, args = parser.parse_args()
do_stuff(*args, **vars(options))
|
[
"I'd handle it by introspecting the function of interest to set options and defaults appropriately. For example:\nimport inspect\nfrom optparse import OptionParser\nimport sys\n\ndef do_stuff(opt0, opt1=\"a\", opt2=\"b\", opt3=\"c\"):\n print opt0, opt1, opt2, opt3\n\nif __name__ == \"__main__\":\n parser = OptionParser()\n args, varargs, varkw, defaults = inspect.getargspec(do_stuff)\n if varargs or varkw:\n sys.exit(\"Sorry, can't make opts from a function with *a and/or **k!\")\n lend = len(defaults)\n nodef = args[:-lend]\n for a in nodef:\n parser.add_option(\"--%s\" % a)\n for a, d in zip(args[-lend:], defaults):\n parser.add_option(\"--%s\" % a, default=d)\n\n options, args = parser.parse_args()\n d = vars(options)\n for n, v in zip(nodef, args):\n d[n] = v\n\n do_stuff(**d)\n\n",
"Here's the solution - it's trivial if you only need keyword arguments - just use locals.update. Following handles both, positional and key word args (key word args overrides positional).\nfrom optparse import OptionParser\n\nARGS = {'opt1': 'a', \n 'opt2': 'b',\n 'opt3': 'c'}\n\ndef do_stuff(*args, **kwargs):\n locals = ARGS\n\n keys = ARGS.keys()\n keys.sort()\n\n if args:\n for key,arg in zip(keys,args):\n locals.update({key: arg})\n if kwargs:\n locals.update(kwargs)\n\n print locals['opt1'], locals['opt2'], locals['opt3']\n\nif __name__ == \"__main__\":\n parser = OptionParser()\n for key,default in ARGS.items():\n parser.add_option('--%s' % key, default='%s' % default)\n\n options, args = parser.parse_args()\n\n do_stuff(*args, **vars(options))\n do_stuff()\n do_stuff('d','e','f')\n do_stuff('d','e','f', opt3='b')\n do_stuff(opt1='c', opt2='a', opt3='b')\n\nOutput:\na b c \na b c \nd e f \nd e b \nc a b \n\n",
"The inspect solution by Alex is very powerful!\nFor lightweight programs, you could also simply use this:\ndef do_stuff(opt1=\"a\", opt2=\"b\", opt3=\"c\"):\n print opt1, opt2, opt3\n\nif __name__ == \"__main__\":\n from optparse import OptionParser\n opts = do_stuff.func_defaults\n parser = OptionParser() \n parser.add_option(\"--opt1\", default=opts[0], help=\"Option 1 (%default)\")\n parser.add_option(\"--opt2\", default=opts[1], help=\"Option 2 (%default)\")\n parser.add_option(\"--opt3\", default=opts[2], help=\"Option 3 (%default)\")\n\n options, args = parser.parse_args()\n\n do_stuff(*args, **vars(options))\n\n"
] |
[
3,
2,
2
] |
[] |
[] |
[
"dry",
"optparse",
"python"
] |
stackoverflow_0001512242_dry_optparse_python.txt
|
Q:
verifiedEmail AOL OpenID
I can't seem to fetch the verifiedEmail field when trying to login to AOLs OpenID on my site. Every other provider that I know of provides this property, but not AOL.
I realize that AOL somehow uses an old OpenID version, although is it feasible to just assume that their e-mail ends in @aol.com? I'm using the RPXNow library with Python.
A:
I believe that OpenID lets the user decide how much information to "share" during the login process. I can't say that I am an expert on the subject, but I know that my identity at myopenid.com lets me specify precisely what information to make available.
Is it possible that the AOL default is to share nothing? If this is the case, then you may want to do an email authorization directly with the user if the OpenID provider doesn't seem to have the information. OpenID doesn't mandate that this information is available so I would assume that you will have to handle the case of it not being there in application code.
|
verifiedEmail AOL OpenID
|
I can't seem to fetch the verifiedEmail field when trying to login to AOLs OpenID on my site. Every other provider that I know of provides this property, but not AOL.
I realize that AOL somehow uses an old OpenID version, although is it feasible to just assume that their e-mail ends in @aol.com? I'm using the RPXNow library with Python.
|
[
"I believe that OpenID lets the user decide how much information to \"share\" during the login process. I can't say that I am an expert on the subject, but I know that my identity at myopenid.com lets me specify precisely what information to make available.\nIs it possible that the AOL default is to share nothing? If this is the case, then you may want to do an email authorization directly with the user if the OpenID provider doesn't seem to have the information. OpenID doesn't mandate that this information is available so I would assume that you will have to handle the case of it not being there in application code.\n"
] |
[
0
] |
[] |
[] |
[
"openid",
"python",
"rpxnow"
] |
stackoverflow_0001513543_openid_python_rpxnow.txt
|
Q:
can't install lxml (python 2.6.3, osx 10.6 snow leopard)
I try to:
easy_install lxml
and I get this error:
File "build/bdist.macosx-10.3-fat/egg/setuptools/command/build_ext.py", line 85, in get_ext_filename
KeyError: 'etree'
any hints?
A:
Due to incompatible changes in the 2.6.3 version of python's distutils, the old easy_install from setuptools no longer works. You need to replace it with easy_install from Distribute. Follow the instructions there, basically:
$ curl -O http://nightly.ziade.org/distribute_setup.py
$ python distribute_setup.py
assuming the 2.6.3 python is first on your $PATH.
EDIT: Besides the option to migrate from setuptools to Distribute, Python 2.6.4, which should be released in a couple of weeks, will contain a workaround in distutils that will unbreak setuptools. Thanks, Tarek, for the fix and thanks, jbastos, for bringing this issue up.
FURTHER EDIT: setuptools itself has been updated (as of 0.6c10) to work around the problem with 2.6.3.
A:
Ned :
incompatible changes in the 2.6.3 version of python's distutil
Not precisely. The API hasn't changed but Setuptools overrides them, and makes the assumption they are called in a particular order.
Lennart:
The Distribute installation doesn't seem to trigger the bug
Yes indeed, this precise bug was detected some time ago and fixed in Distribute (and in Ubuntu's setuptools package)
|
can't install lxml (python 2.6.3, osx 10.6 snow leopard)
|
I try to:
easy_install lxml
and I get this error:
File "build/bdist.macosx-10.3-fat/egg/setuptools/command/build_ext.py", line 85, in get_ext_filename
KeyError: 'etree'
any hints?
|
[
"Due to incompatible changes in the 2.6.3 version of python's distutils, the old easy_install from setuptools no longer works. You need to replace it with easy_install from Distribute. Follow the instructions there, basically:\n$ curl -O http://nightly.ziade.org/distribute_setup.py\n$ python distribute_setup.py\n\nassuming the 2.6.3 python is first on your $PATH.\nEDIT: Besides the option to migrate from setuptools to Distribute, Python 2.6.4, which should be released in a couple of weeks, will contain a workaround in distutils that will unbreak setuptools. Thanks, Tarek, for the fix and thanks, jbastos, for bringing this issue up.\nFURTHER EDIT: setuptools itself has been updated (as of 0.6c10) to work around the problem with 2.6.3. \n",
"Ned :\n\nincompatible changes in the 2.6.3 version of python's distutil\n\nNot precisely. The API hasn't changed but Setuptools overrides them, and makes the assumption they are called in a particular order.\nLennart:\n\nThe Distribute installation doesn't seem to trigger the bug\n\nYes indeed, this precise bug was detected some time ago and fixed in Distribute (and in Ubuntu's setuptools package)\n"
] |
[
7,
3
] |
[] |
[] |
[
"lxml",
"macos",
"osx_leopard",
"port",
"python"
] |
stackoverflow_0001512530_lxml_macos_osx_leopard_port_python.txt
|
Q:
Python Pypi: what is your process for releasing packages for different Python versions? (Linux)
I've got several eggs I maintain on Pypi but up until now I've always focused on Python 2.5x.
I'd like to release my eggs under both Python 2.5 & Python 2.6 in an automated fashion i.e.
running tests
generating doc
preparing eggs
uploading to Pypi
How do you guys achieve this?
A related question: how do I tag an egg as "version independent", i.e. working under all versions of Python?
A:
You don't need to release eggs for anything else than Windows, and then only if your package uses C extensions so that they have compiled parts. Otherwise you simply release one source distribution. That will be enough for all Python versions on all platforms.
Running the tests for different versions automated is tricky if you don't have a buildbot. But once you have run the tests with both 2.5 and 2.6 releasing is just a question of running python setup.py sdist register upload and it doesn't matter what Python version you use to run that.
A:
I use a script to switch my Python version, run the tests, switch to the next Python version, run the tests again, and so on. I use this to test on 2.3, 2.4, 2.5, 2.6, and 3.1. In addition, I run all my tests under two different configuration scenarios (C extension available, or not), so this runs my full test suite 10 times.
I use a similar script to build kits, though I build windows installers for each version, then one source kit.
For uploading, I just do it all manually.
For docs, there's only one version to build, and that's done with a Makefile target.
This is all for coverage.py, you can see the code at bitbucket, though I should warn you, they are .cmd Windows scripts.
|
Python Pypi: what is your process for releasing packages for different Python versions? (Linux)
|
I've got several eggs I maintain on Pypi but up until now I've always focused on Python 2.5x.
I'd like to release my eggs under both Python 2.5 & Python 2.6 in an automated fashion i.e.
running tests
generating doc
preparing eggs
uploading to Pypi
How do you guys achieve this?
A related question: how do I tag an egg as "version independent", i.e. working under all versions of Python?
|
[
"You don't need to release eggs for anything else than Windows, and then only if your package uses C extensions so that they have compiled parts. Otherwise you simply release one source distribution. That will be enough for all Python versions on all platforms.\nRunning the tests for different versions automated is tricky if you don't have a buildbot. But once you have run the tests with both 2.5 and 2.6 releasing is just a question of running python setup.py sdist register upload and it doesn't matter what Python version you use to run that.\n",
"I use a script to switch my Python version, run the tests, switch to the next Python version, run the tests again, and so on. I use this to test on 2.3, 2.4, 2.5, 2.6, and 3.1. In addition, I run all my tests under two different configuration scenarios (C extension available, or not), so this runs my full test suite 10 times.\nI use a similar script to build kits, though I build windows installers for each version, then one source kit.\nFor uploading, I just do it all manually.\nFor docs, there's only one version to build, and that's done with a Makefile target.\nThis is all for coverage.py, you can see the code at bitbucket, though I should warn you, they are .cmd Windows scripts.\n"
] |
[
1,
0
] |
[] |
[] |
[
"pypi",
"python",
"release_management"
] |
stackoverflow_0001512644_pypi_python_release_management.txt
|
Q:
Why MySQLdb for Mac has to have MySQL installed to install?
I've been working on finding out how to install the MySQLdb module for Python on Mac, and all paths finally converge on having MySQL installed, since the module needs mysql_config. But I don't understand why it has to be needed.
MySQLdb is supposed to be a client module for the client who wants to connect to the server. But now I have to first install a server on the client in order to connect to another server?
A:
Just to complete the fine answers, here's a small how-to for installing MySQLdb on MacOS X 10.6. This was posted on my blog and you might check that out for maybe more info (and links as I'm a new user on stackoverflow, can only add one).
Download the MySQL MacOS X 10.5 (x86_64) tarball. Use the v10.5 build for now, until there is one for MacOS X v10.6.
Get a copy of MySQL-python (MySQLdb).
Install the latest XCode tools found on the Snow Leopard CD under option installs or download it. You need it to compile.
I'm not covering the installation of MySQL, but when you've done so, here's how you install MySQLdb.
shell> PATH="/usr/local/mysql/bin:$PATH"
shell> tar xzf MySQL-python-1.2.3c1.tar.gz
shell> cd MySQL-python-1.2.3c1
shell> ARCHFLAGS="-arch x86_64" /usr/bin/python setup.py build
shell> /usr/bin/python setup.py install
There is currently work being done on MySQL Connector/Python. This is a pure Python implementation of the MySQL protocol, so you don't need to install any MySQL software or compile anything to get going.
A:
I'm not sure about the specifics of MySQLdb, but most likely it needs header information to compile/install. It uses the location of mysql_config to know where the appropriate headers would be. The MySQL Gem for Ruby on Rails requires the same thing, even though it simply connects to the MySQL server.
A:
What it needs is the client library and headers that come with the server, since it is just a Python wrapper (which sits in _mysql.c, with a DB-API interface to that wrapper in the MySQLdb package) over the original C MySQL API.
A:
MySQLdb is not a client module per se. It is a wrapper or interface between Python programs and the standard MySQL client libraries. MySQLdb conforms to the Python standard DB API. There are other conforming adapters implemented for other database managers. Using the common API makes it much easier to write database-independent code in Python; the adapters handle (much of) the messy differences among the various database client libraries. Thus, you do need to install MySQL client libraries; there are several ways to do this: the easiest options are probably downloading prebuilt libraries from mysql.com or you can use a package manager, like MacPorts, to install them and other dependencies.
A:
Just to clarify what the other answerers have said: you don't need to install a MySQL server, but you do need to install the MySQL client libraries. However, for whatever reasons, MySQL don't make a separate download available for just the client libraries, as they do for Linux.
|
Why MySQLdb for Mac has to have MySQL installed to install?
|
I've been working on finding out how to install the MySQLdb module for Python on Mac, and all paths finally converge on having MySQL installed, since the module needs mysql_config. But I don't understand why it has to be needed.
MySQLdb is supposed to be a client module for the client who wants to connect to the server. But now I have to first install a server on the client in order to connect to another server?
|
[
"Just to complete the fine answers, here's a small how-to for installing MySQLdb on MacOS X 10.6. This was posted on my blog and you might check that out for maybe more info (and links as I'm a new user on stackoverflow, can only add one).\n\nDownload MySQL MacOS X 10.5 (x86_64) tar ball. For v10.5 currently, until there is one for MacOS X v10.6.\nGet a copy of MySQL-python (MySQLdb).\nInstall the latest XCode tools found on the Snow Leopard CD under option installs or download it. You need it to compile.\n\nI'm not covering the installation of MySQL, but when you've done so, here's how you install MySQLdb.\nshell> PATH=\"/usr/local/mysql/bin:$PATH\"\nshell> tar xzf MySQL-python-1.2.3c1.tar.gz\nshell> cd MySQL-python-1.2.3c1\nshell> ARCHFLAGS=\"-arch x86_64\" /usr/bin/python setup.py build\nshell> /usr/bin/python setup.py install\n\nThere is currently work done on MySQL Connector/Python. The is a pure Python implementation of the MySQL protocol so you don't need to install any MySQL software or compile to get you going.\n",
"I'm not sure about the specifics of MySQLdb, but most likely it needs header information to compile/install. It uses the location of mysql_config to know where the appropriate headers would be. The MySQL Gem for Ruby on Rails requires the same thing, even though it simply connects to the MySQL server.\n",
"What it needs is the client library and headers that come with the server, since it just a Python wrapper (which sits in _mysql.c; and DB-API interface to that wrapper in MySQLdb package) over original C MySQL API.\n",
"MySQLdb is not a client module per se. It is a wrapper or interface between Python programs and the standard MySQL client libraries. MySQLdb conforms to the Python standard DB API. There are other conforming adapters implemented for other database managers. Using the common API makes it much easier to write database-independent code in Python; the adapters handle (much of) the messy differences among the various database client libraries. Thus, you do need to install MySQL client libraries; there are several ways to do this: the easiest options are probably downloading prebuilt libraries from mysql.com or you can use a package manager, like MacPorts, to install them and other dependencies.\n",
"Just to clarify what the other answerers have said: you don't need to install a MySQL server, but you do need to install the MySQL client libraries. However, for whatever reasons, MySQL don't make a separate download available for just the client libraries, as they do for Linux.\n"
] |
[
3,
1,
1,
1,
1
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0001483024_mysql_python.txt
|
Q:
Installing python module
I'm new to Python and am trying to install the pyimage module.
http://code.google.com/p/pyimage/
I'm on Windows, and have downloaded and installed 2.6 and 3.1.
I downloaded pyimage, and used cmd and cd to get to its dir.
I then got this:
C:\Users\Jourkey\Desktop\pyimage-0.8.13\pyimage-0.8.13>python setup.py install
'python' is not recognized as an internal or external command,
operable program or batch file.
How do I install this?
Thanks!
A:
It is giving you that error because Python is not in your path. By default, the Python executable is not added to the path. You will have to do it manually. An in-depth tutorial endorsed by the Python website may be found
here.
A:
To put Python on your system's environment PATH variable, so that running python at a command prompt will work, follow the instructions in this video.
|
Installing python module
|
I'm new to Python and am trying to install the pyimage module.
http://code.google.com/p/pyimage/
I'm on Windows, and have downloaded and installed 2.6 and 3.1.
I downloaded pyimage, and used cmd and cd to get to its dir.
I then got this:
C:\Users\Jourkey\Desktop\pyimage-0.8.13\pyimage-0.8.13>python setup.py install
'python' is not recognized as an internal or external command,
operable program or batch file.
How do I install this?
Thanks!
|
[
"It is giving you that error because Python is not in your path. By default, the Python executable is not added to the path. You will have to do it manually. An in-depth tutorial endorsed by the Python website may be found\nhere.\n",
"To put Python on your system's environment PATH variable, so that running python at a command prompt will work, follow the instructions in this video.\n"
] |
[
4,
1
] |
[] |
[] |
[
"module",
"python"
] |
stackoverflow_0001514371_module_python.txt
|
Q:
Python 3 and open source: Are there any good projects?
I've been studying Python 3 recently and I have come across a conundrum: I want to expand my abilities by working on an open source project, but I seem to have trouble finding any specifically for Python 3.
I know that this question has been asked before:
Such as here,
And here,
Unfortunately these all seem to be using Python <= 2.6 and I want to use >= 3.0
This leads me to another question:
Python 3.0 has been out for almost a year, yet most of the examples and 90% of the projects are for <= 2.6. I also know that the MySQL library is not in a Python 3 compatible state. Does this mean that I'd actually be better off learning Python 2.x and assuming the incompatible 3.0 will die?
A:
If you want to help out with Python 3, find some libraries that haven't been ported yet and help port them. Projects that depend on these libraries can't make the switch until the libraries do.
A:
Major point releases of languages take an exceedingly long time for wide adoption. Python 3 will not die -- eventually people will switch, but it may take many years.
A:
PyPI lists a handful of packages ported to Python3 of which lxml and httplib2 are some of the prominent ones.
|
Python 3 and open source: Are there any good projects?
|
I've been studying Python 3 recently and I have come across a conundrum: I want to expand my abilities by working on an open source project, but I seem to have trouble finding any specifically for Python 3.
I know that this question has been asked before:
Such as here,
And here,
Unfortunately these all seem to be using Python <= 2.6 and I want to use >= 3.0
This leads me to another question:
Python 3.0 has been out for almost a year, yet most of the examples and 90% of the projects are for <= 2.6. I also know that the MySQL library is not in a Python 3 compatible state. Does this mean that I'd actually be better off learning Python 2.x and assuming the incompatible 3.0 will die?
|
[
"If you want to help out with Python 3, find some libraries that haven't been ported yet and help port them. Projects that depend on these libraries can't make the switch until the libraries do. \n",
"Major point releases of languages take an exceedingly long time for wide adoption. Python 3 will not die -- eventually people will switch, but it may take many years. \n",
"PyPI lists a handful of packages ported to Python3 of which lxml and httplib2 are some of the prominent ones.\n"
] |
[
7,
4,
0
] |
[] |
[] |
[
"open_source",
"python",
"python_3.x"
] |
stackoverflow_0001462282_open_source_python_python_3.x.txt
|
Q:
Writing Cocoa applications in Python 3
It looks like PyObjC is not ported to Python 3 yet.
Meanwhile is there a way to write Cocoa applications using Python 3?
I am intending to start a new MacOSX GUI application project and thought I would want to use Python 3.x instead of Python 2.x.
A:
For full-blown Cocoa, I think PyObjC is pretty much the only game in town. If you are coming to Cocoa from a Python background rather than to Python from an Obj-C Cocoa background, surely the learning curve of the Cocoa APIs is much steeper than the differences between Python 2.x and Python 3.x. So I think, at the moment, the best strategy is writing your app in Python 2.x while trying to make it as Python 3.x friendly as possible, including periodically running 2to3 on it as a check. And I'm sure patches for PyObjC to help with Python 3 support would be very welcome. If you are just looking for simple GUI interfaces rather than a full-blown Cocoa app, you might be able to get by with calls out to other packages like CocoaDialog or a Python 2.x-PyObjC dialog app :=)
|
Writing Cocoa applications in Python 3
|
It looks like PyObjC is not ported to Python 3 yet.
Meanwhile is there a way to write Cocoa applications using Python 3?
I am intending to start a new MacOSX GUI application project and thought I would want to use Python 3.x instead of Python 2.x.
|
[
"For full-blown Cocoa, I think PyObjC is pretty much the only game in town. If you are coming to Cocoa from a Python background rather than to Python from an Obj-C Cocoa background, surely the learning curve of the Cocoa APIs is much steeper than the differences between Python 2.x and Python 3.x. So I think, at the moment, the best strategy is writing your app in Python 2.x while trying to make it as Python 3.x friendly as possible, including periodically running 2to3 on it as a check. And I'm sure patches for PyObjC to help with Python 3 support would be very welcome. If you are just looking for simple GUI interfaces rather than a full-blown Cocoa app, you might be able to get by with calls out to other packages like CocoaDialog or a Python 2.x-PyObjC dialog app :=) \n"
] |
[
3
] |
[] |
[] |
[
"cocoa",
"pyobjc",
"python",
"python_3.x"
] |
stackoverflow_0001514638_cocoa_pyobjc_python_python_3.x.txt
|
Q:
How to get a http page using mechanize cookies?
There is a Python mechanize object with a form with almost all values set, but not yet submitted. Now I want to fetch another page using cookies from mechanize instance, but without resetting the page, forms and so on, e.g. so that the values remain set (I just need to get body string of another page, nothing else). So is there a way to:
Tell mechanize not to reset the page (perhaps, through UserAgentBase)?
Make urllib2 use mechanize's cookie jar? NB: urllib2.HTTPCookieProcessor(self.br._ua_handlers["_cookies"].cookiejar) doesn't work
Any other way to pass cookie to urllib?
A:
And the correct answer:
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(self.br._ua_handlers["_cookies"].cookiejar))
opener.open(imgurl)
A:
No idea whether this will work, but why don't you try deepcopying the mechanize instance, eg
from copy import deepcopy
br = Browser()
br.open("http://www.example.com/")
# Make a copy for doing other stuff with
br2 = deepcopy(br)
# Do stuff with br2
# Now do stuff with br
A:
Some wild ideas:
Fetch the second page before filling in the form?
Or fetch the new page and then goBack()? Although maybe that will reset the values.
|
How to get a http page using mechanize cookies?
|
There is a Python mechanize object with a form with almost all values set, but not yet submitted. Now I want to fetch another page using cookies from mechanize instance, but without resetting the page, forms and so on, e.g. so that the values remain set (I just need to get body string of another page, nothing else). So is there a way to:
Tell mechanize not to reset the page (perhaps, through UserAgentBase)?
Make urllib2 use mechanize's cookie jar? NB: urllib2.HTTPCookieProcessor(self.br._ua_handlers["_cookies"].cookiejar) doesn't work
Any other way to pass cookie to urllib?
|
[
"And the correct answer:\nopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(self.br._ua_handlers[\"_cookies\"].cookiejar))\nopener.open(imgurl)\n\n",
"No idea whether this will work, but why don't you try deepcopying the mechanize instance, eg\nfrom copy import deepcopy\nbr = Browser()\nbr.open(\"http://www.example.com/\")\n# Make a copy for doing other stuff with\nbr2 = deepcopy(br)\n# Do stuff with br2\n# Now do stuff with br\n\n",
"Some wild ideas:\n\nFetch the second page before filling in the form? \nOr fetch the new page and then goBack()? Although maybe that will reset the values.\n\n"
] |
[
5,
2,
2
] |
[] |
[] |
[
"mechanize",
"python"
] |
stackoverflow_0001513823_mechanize_python.txt
|
Q:
Why is my code getting an NZEC run time error?
Question source: SPOJ.. ORDERS
def swap(ary,idx1,idx2):
tmp = ary[idx1]
ary[idx1] = ary[idx2]
ary[idx2] = tmp
def mkranks(size):
tmp = []
for i in range(1, size + 1):
tmp = tmp + [i]
return tmp
def permutations(ordered, movements):
size = len(ordered)
for i in range(1, size): # The leftmost one never moves
for j in range(0, int(movements[i])):
swap(ordered, i-j, i-j-1)
return ordered
numberofcases = input()
for i in range(0, numberofcases):
sizeofcase = input()
tmp = raw_input()
movements = ""
for i in range(0, len(tmp)):
if i % 2 != 1:
movements = movements + tmp[i]
ordered = mkranks(sizeofcase)
ordered = permutations(ordered, movements)
output = ""
for i in range(0, sizeofcase - 1):
output = output + str(ordered[i]) + " "
output = output + str(ordered[sizeofcase - 1])
print output
A:
Having made your code a bit more Pythonic (but without altering its flow/algorithm):
def swap(ary, idx1, idx2):
ary[idx1], ary[idx2] = [ary[i] for i in (idx2, idx1)]
def permutations(ordered, movements):
size = len(ordered)
for i in range(1, len(ordered)):
for j in range(movements[i]):
swap(ordered, i-j, i-j-1)
return ordered
numberofcases = input()
for i in range(numberofcases):
sizeofcase = input()
movements = [int(s) for s in raw_input().split()]
ordered = [str(i) for i in range(1, sizeofcase+1)]
ordered = permutations(ordered, movements)
output = " ".join(ordered)
print output
I see it runs correctly in the sample case given at the SPOJ URL you indicate. What is your failing case?
|
Why is my code getting an NZEC run time error?
|
Question source: SPOJ.. ORDERS
def swap(ary,idx1,idx2):
tmp = ary[idx1]
ary[idx1] = ary[idx2]
ary[idx2] = tmp
def mkranks(size):
tmp = []
for i in range(1, size + 1):
tmp = tmp + [i]
return tmp
def permutations(ordered, movements):
size = len(ordered)
for i in range(1, size): # The leftmost one never moves
for j in range(0, int(movements[i])):
swap(ordered, i-j, i-j-1)
return ordered
numberofcases = input()
for i in range(0, numberofcases):
sizeofcase = input()
tmp = raw_input()
movements = ""
for i in range(0, len(tmp)):
if i % 2 != 1:
movements = movements + tmp[i]
ordered = mkranks(sizeofcase)
ordered = permutations(ordered, movements)
output = ""
for i in range(0, sizeofcase - 1):
output = output + str(ordered[i]) + " "
output = output + str(ordered[sizeofcase - 1])
print output
|
[
"Having made your code a bit more Pythonic (but without altering its flow/algorithm):\ndef swap(ary, idx1, idx2):\n ary[idx1], ary[idx2] = [ary[i] for i in (idx2, idx1)]\n\ndef permutations(ordered, movements):\n size = len(ordered)\n for i in range(1, len(ordered)):\n for j in range(movements[i]):\n swap(ordered, i-j, i-j-1)\n return ordered\n\nnumberofcases = input()\nfor i in range(numberofcases):\n sizeofcase = input()\n movements = [int(s) for s in raw_input().split()]\n ordered = [str(i) for i in range(1, sizeofcase+1)]\n ordered = permutations(ordered, movements)\n output = \" \".join(ordered)\n print output\n\nI see it runs correctly in the sample case given at the SPOJ URL you indicate. What is your failing case?\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001515019_python.txt
|
Q:
Is it reasonable to save data as python modules?
This is what I've done for a project. I have a few data structures that are basically dictionaries with some methods that operate on the data. When I save them to disk, I write them out to .py files as code that, when imported as a module, will load the same data into such a data structure.
Is this reasonable? Are there any big disadvantages? The advantage I see is that when I want to operate with the saved data, I can quickly import the modules I need. Also, the modules can be used separately from the rest of the application because you don't need a separate parser or loader functionality.
A:
By operating this way, you may gain some modicum of convenience, but you pay many kinds of price for that. The space it takes to save your data, and the time it takes to both save and reload it, go up substantially; and your security exposure is unbounded -- you must ferociously guard the paths from which you reload modules, as it would provide an easy avenue for any attacker to inject code of their choice to be executed under your userid (pickle itself is not rock-solid, security-wise, but, compared to this arrangement, it shines;-).
All in all, I prefer a simpler and more traditional arrangement: executable code lives in one module (on a typical code-loading path, that does not need to be R/W once the module's compiled) -- it gets loaded just once and from an already-compiled form. Data live in their own files (or portions of DB, etc) in any of the many suitable formats, mostly standard ones (possibly including multi-language ones such as JSON, CSV, XML, ... &c, if I want to keep the option open to easily load those data from other languages in the future).
A:
The biggest drawback is that it's a potential security problem, since it's hard to guarantee that the files won't contain arbitrary code, which could be really bad. So don't use this approach if anyone other than you has write access to the files.
A:
A reasonable option might be to use the Pickle module, which is specifically designed to save and restore python structures to disk.
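A minimal sketch of that approach (the file name and sample data here are just placeholders):
import pickle
data = {"pages": ["index", "about"], "year": 2009}
# Save the structure to disk...
with open("data.pickle", "wb") as f:
    pickle.dump(data, f)
# ...and restore it later.
with open("data.pickle", "rb") as f:
    restored = pickle.load(f)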
A:
It's reasonable, and I do it all the time. Obviously it's not a format you use to exchange data, so it's not a good format for anything like a save file.
But for example, when I do migrations of websites to Plone, I often get data about the site (such as a list of which pages should be migrated, a list of how old URLs should be mapped to new ones, or lists of tags). These you typically get in Word or Excel format. Also, the data often needs a bit of massaging, and I end up with what is for all intents and purposes a dictionary mapping one URL to some other information.
Sure, I could save that as CSV and parse it into a dictionary. But instead I typically save it as a Python file with a dictionary. Saves code.
So, yes, it's reasonable; no, it's not a format you should use for any sort of save file. It is, however, often used for data that straddles the border to configuration, like the above.
A:
Alex Martelli's answer is absolutely insightful and I agree with him. However, I'll go one step further and make a specific recommendation: use JSON.
JSON is simple, and Python's data structures map well into it; and there are several standard libraries and tools for working with JSON. The json module in Python 3.0 and newer is based on simplejson, so I would use simplejson in Python 2.x and json in Python 3.0 and newer.
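A minimal sketch of the JSON round trip (the file name and data are placeholders; on older Pythons you could import simplejson as json instead):
import json
data = {"name": "foo", "year": 2009, "tags": ["a", "b"]}
# Write the structure out as JSON text...
with open("data.json", "w") as f:
    json.dump(data, f)
# ...and read it back into plain Python objects.
with open("data.json") as f:
    restored = json.load(f)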
Second choice is XML. XML is more complicated, and harder to just look at (or just edit with a text editor) but there is a vast wealth of tools to validate it, filter it, edit it, etc.
Also, if your data storage and retrieval needs become at all nontrivial, consider using an actual database. SQLite is terrific: it's small, and for small databases runs very fast, but it is a real actual SQL database. I would definitely use a Python ORM instead of learning SQL to interact with the database; my favorite ORM for SQLite would be Autumn (small and simple), or the ORM from Django (you don't even need to learn how to create tables in SQL!) Then if you ever outgrow SQLite, you can move up to a real database such as PostgreSQL. If you find yourself writing lots of loops that search through your saved data, and especially if you need to enforce dependencies (such as if foo is deleted, bar must be deleted too) consider going to a database.
|
Is it reasonable to save data as python modules?
|
This is what I've done for a project. I have a few data structures that are basically dictionaries with some methods that operate on the data. When I save them to disk, I write them out to .py files as code that, when imported as a module, will load the same data into such a data structure.
Is this reasonable? Are there any big disadvantages? The advantage I see is that when I want to operate with the saved data, I can quickly import the modules I need. Also, the modules can be used separately from the rest of the application because you don't need a separate parser or loader functionality.
|
[
"By operating this way, you may gain some modicum of convenience, but you pay many kinds of price for that. The space it takes to save your data, and the time it takes to both save and reload it, go up substantially; and your security exposure is unbounded -- you must ferociously guard the paths from which you reload modules, as it would provide an easy avenue for any attacker to inject code of their choice to be executed under your userid (pickle itself is not rock-solid, security-wise, but, compared to this arrangement, it shines;-).\nAll in all, I prefer a simpler and more traditional arrangement: executable code lives in one module (on a typical code-loading path, that does not need to be R/W once the module's compiled) -- it gets loaded just once and from an already-compiled form. Data live in their own files (or portions of DB, etc) in any of the many suitable formats, mostly standard ones (possibly including multi-language ones such as JSON, CSV, XML, ... &c, if I want to keep the option open to easily load those data from other languages in the future).\n",
"The biggest drawback is that it's a potential security problem since it's hard to guarantee that the files won't contains arbitrary code, which could be really bad. So don't use this approach if anyone else than you have write-access to the files.\n",
"A reasonable option might be to use the Pickle module, which is specifically designed to save and restore python structures to disk.\n",
"It's reasonable, and I do it all the time. Obviously it's not a format you use to exchange data, so it's not a good format for anything like a save file. \nBut for example, when I do migrations of websites to Plone, I often get data about the site (such as a list of which pages should be migrated, or a list of how old urls should be mapped to new ones, aor lists of tags). These you typically get in Word och Excel format. Also the data often needs massaging a bit, and I end up with what for all intents and purposes are a dictionaries mapping one URL to some other information.\nSure, I could save that as CVS, and parse it into a dictionary. But instead I typically save it as a Python file with a dictionary. Saves code.\nSo, yes, it's reasonable, no it's not a format you should use for any sort of save file. It however often used for data that straddles the border to configuration, like above.\n",
"Alex Martelli's answer is absolutely insightful and I agree with him. However, I'll go one step further and make a specific recommendation: use JSON.\nJSON is simple, and Python's data structures map well into it; and there are several standard libraries and tools for working with JSON. The json module in Python 3.0 and newer is based on simplejson, so I would use simplejson in Python 2.x and json in Python 3.0 and newer.\nSecond choice is XML. XML is more complicated, and harder to just look at (or just edit with a text editor) but there is a vast wealth of tools to validate it, filter it, edit it, etc.\nAlso, if your data storage and retrieval needs become at all nontrivial, consider using an actual database. SQLite is terrific: it's small, and for small databases runs very fast, but it is a real actual SQL database. I would definitely use a Python ORM instead of learning SQL to interact with the database; my favorite ORM for SQLite would be Autumn (small and simple), or the ORM from Django (you don't even need to learn how to create tables in SQL!) Then if you ever outgrow SQLite, you can move up to a real database such as PostgreSQL. If you find yourself writing lots of loops that search through your saved data, and especially if you need to enforce dependencies (such as if foo is deleted, bar must be deleted too) consider going to a database.\n"
] |
[
7,
3,
3,
3,
3
] |
[] |
[] |
[
"data_persistence",
"dynamic_import",
"python"
] |
stackoverflow_0001514228_data_persistence_dynamic_import_python.txt
|
Q:
Upgrading Google Application Engine program to use unicode
I have a simple Google App Engine app, that I wrote using ordinary strings. I realize I want to make it handle unicode. Are there any gotchas with this? I'm thinking of all the strings that I currently already have in the live database. (From real users who I don't want to upset.)
A:
Alexander Kojevnikov said: "The datastore internally keeps all strings in unicode."
In other words, your application is already using unicode everywhere. Thank the google folks for a sensible API. No further work required.
A:
The datastore internally keeps all strings in unicode.
A:
When storing to a db.TextProperty() you need to use db.Text() like:
instance.xml = db.Text(xml_string, encoding="utf_8")
And specify the correct encoding if the string doesn't have a BOM on it. Like if you get unexpected unicode data from an XML stream.
This happened to me when using Amazon.com's product API.
Also, Google's urlfetch had unicode problems dealing with that stream, so I ended up running minidom's parse() function instead of parseString() on the return value of urllib.urlopen(), which acts like a stream, to fix the problem:
response = urllib.urlopen(url)
xml = minidom.parse(response)
|
Upgrading Google Application Engine program to use unicode
|
I have a simple Google App Engine app, that I wrote using ordinary strings. I realize I want to make it handle unicode. Are there any gotchas with this? I'm thinking of all the strings that I currently already have in the live database. (From real users who I don't want to upset.)
|
[
"Alexander Kojevnikov said: \"The datastore internally keeps all strings in unicode.\"\nIn other words, your application is already using unicode everywhere. Thank the google folks for a sensible API. No further work required.\n",
"The datastore internally keeps all strings in unicode.\n",
"When storing to a db.TextProperty() you need to use db.Text() like:\ninstance.xml = db.Text(xml_string, encoding=\"utf_8\")\nAnd specify the correct encoding if the string doesn't have a BOM on it. Like if you get unexpected unicode data from an XML stream.\nThis happened to me when using Amazon.com's product API.\nAlso Google's urlfetch had unicode problems dealing with that stream. So I ended up running minidom's parse() function instead of parseString() on the urllib.urlopen()'s return which acts like a stream like so to fix the problem:\nresponse = urllib.urlopen(url)\nxml = minidom.parse(response)\n\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"google_app_engine",
"python",
"unicode"
] |
stackoverflow_0000245094_google_app_engine_python_unicode.txt
|
Q:
is there any latest version of pyUIQ?
I have tried to install Python for Symbian UIQ 3. I have tried the signed and unsigned versions, but both installations failed. Has anyone faced the same problem? Do you have any solution? I am using a Sony Ericsson G900.
A:
The latest release version of PyUIQ is 0.2, from YEARS ago, available here. No sources in the CVS repo on sourceforge, either... looks pretty much like a dead project to me:-(. What do you need exactly?
A:
Even though the pyUIQ version available is very old, you should be able to install it if you sign it properly. Open Signed Online should do it.
|
is there any latest version of pyUIQ?
|
I have tried to install Python for Symbian UIQ 3. I have tried the signed and unsigned versions, but both installations failed. Has anyone faced the same problem? Do you have any solution? I am using a Sony Ericsson G900.
|
[
"The latest release version of PyUIQ is 0.2, from YEARS ago, available here. No sources in the CVS repo on sourceforge, either... looks pretty much like a dead project to me:-(. What do you need exactly?\n",
"Even though the pyUIQ version available is very old, you should be able to install it if you sign it properly. Open Signed Online should do it.\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"symbian"
] |
stackoverflow_0001515630_python_symbian.txt
|
Q:
Display Pretty Code in Django
I'm looking for something I can use within django to display preformatted code. Ideally this would include out-of-the-box syntax highlighting for various programming languages, although just starting with something that displayed html and xml well would be a good starting point.
Does something like this exist?
Basically I am looking for something like the widget dpaste (and also stack overflow) use to display code.
e.g. http://dpaste.com/hold/102141/
or
<?xml version="1.0" encoding='UTF-8'?>
<painting>
<img src="madonna.jpg" alt='Foligno Madonna, by Raphael'/>
<caption>This is Raphael's "Foligno" Madonna, painted in
<date>1511</date>-<date>1512</date>.</caption>
</painting>
I'm aware of this question, but mine is not about the mechanics of escaping the code, it's about the UI.
A:
You could use Pygments to do the syntax highlighting and get HTML to display.
Example code :
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter
highlighted = highlight('# Some Python code', PythonLexer(), HtmlFormatter())
Also see the official documentation.
A:
I have found SyntaxHighlighter (http://alexgorbatchev.com) to work well within the Django part of my site.
|
Display Pretty Code in Django
|
I'm looking for something I can use within django to display preformatted code. Ideally this would include out-of-the-box syntax highlighting for various programming languages, although just starting with something that displayed html and xml well would be a good starting point.
Does something like this exist?
Basically I am looking for something like the widget dpaste (and also stack overflow) use to display code.
e.g. http://dpaste.com/hold/102141/
or
<?xml version="1.0" encoding='UTF-8'?>
<painting>
<img src="madonna.jpg" alt='Foligno Madonna, by Raphael'/>
<caption>This is Raphael's "Foligno" Madonna, painted in
<date>1511</date>-<date>1512</date>.</caption>
</painting>
I'm aware of this question, but mine is not about the mechanics of escaping the code, it's about the UI.
|
[
"You could use Pygments to do the syntax highlighting and get HTML to display.\nExample code :\nfrom pygments import highlight\nfrom pygments.lexers import PythonLexer\nfrom pygments.formatters import HtmlFormatter\n\nhighlighted = highlight('# Some Python code', PythonLexer(), HtmlFormatter())\n\nAlso see the official documentation.\n",
"I have found SyntaxHighlighter (http://alexgorbatchev.com) to work well within the Django part of my site.\n"
] |
[
8,
2
] |
[] |
[] |
[
"django",
"html",
"python"
] |
stackoverflow_0001514874_django_html_python.txt
|
Q:
Does C++ have an equivalent to Python's __setitem__
Just as the title asks, does C++ have the equivalent of Python's __setitem__ and __getitem__ for classes?
Basically it allows you to do something like the following.
MyClass anObject;
anObject[0] = 1;
anObject[1] = "foo";
A:
basically, you overload the subscript operator (operator[]), and it returns a reference (so it can be read as well as written to)
A:
You can overload the [] operator, but it's not quite the same as a separate getitem/setitem method pair, in that you don't get to specify different handling for getting and setting.
But you can get close by returning a temporary object that overrides the assignment operator.
A:
To expand on Earwicker's post:
#include <string>
#include <iostream>
template <typename Type>
class Vector
{
public:
template <typename Element>
class ReferenceWrapper
{
public:
explicit ReferenceWrapper(Element& elem)
: elem_(elem)
{
}
// Similar to Python's __getitem__.
operator const Type&() const
{
return elem_;
}
// Similar to Python's __setitem__.
ReferenceWrapper& operator=(const Type& rhs)
{
elem_ = rhs;
return *this;
}
// Helper when Type is defined in another namespace.
friend std::ostream& operator<<(std::ostream& os, const ReferenceWrapper& rhs)
{
return os << rhs.operator const Type&();
}
private:
Element& elem_;
};
explicit Vector(size_t sz)
: vec_(sz)
{
}
ReferenceWrapper<const Type> operator[](size_t ix) const
{
return ReferenceWrapper<const Type>(vec_[ix]);
}
ReferenceWrapper<Type> operator[](size_t ix)
{
return ReferenceWrapper<Type>(vec_[ix]);
}
private:
std::vector<Type> vec_;
};
int main()
{
Vector<std::string> v(10);
std::cout << v[5] << "\n";
v[5] = "42";
std::cout << v[5] << "\n";
}
A:
It's not portable, but MSVC has __declspec(property), which also allows indexers:
struct Foo
{
void SetFoo(int index, int value) { ... }
int GetFoo(int index) { ... }
__declspec(property(propget=GetFoo, propput=SetFoo)) int Foo[];
}
other than that, Earwicker did outline the portable solution, but he's right that you'll run into many problems.
|
Does C++ have an equivalent to Python's __setitem__
|
Just as the title asks, does C++ have the equivalent of Python's __setitem__ and __getitem__ for classes?
Basically it allows you to do something like the following.
MyClass anObject;
anObject[0] = 1;
anObject[1] = "foo";
|
[
"basically, you overload the subscript operator (operator[]), and it returns a reference (so it can be read as well as written to)\n",
"You can overload the [] operator, but it's not quite the same as a separate getitem/setitem method pair, in that you don't get to specify different handling for getting and setting.\nBut you can get close by returning a temporary object that overrides the assignment operator.\n",
"To expand on Earwicker post:\n#include <string>\n#include <iostream>\n\ntemplate <typename Type>\nclass Vector\n{\npublic:\n template <typename Element>\n class ReferenceWrapper\n {\n public:\n explicit ReferenceWrapper(Element& elem)\n : elem_(elem)\n {\n }\n\n // Similar to Python's __getitem__.\n operator const Type&() const\n {\n return elem_;\n }\n\n // Similar to Python's __setitem__.\n ReferenceWrapper& operator=(const Type& rhs)\n {\n elem_ = rhs;\n return *this;\n }\n\n // Helper when Type is defined in another namespace.\n friend std::ostream& operator<<(std::ostream& os, const ReferenceWrapper& rhs)\n {\n return os << rhs.operator const Type&();\n }\n\n private:\n Element& elem_;\n };\n\n explicit Vector(size_t sz)\n : vec_(sz)\n {\n }\n\n ReferenceWrapper<const Type> operator[](size_t ix) const\n {\n return ReferenceWrapper<const Type>(vec_[ix]);\n }\n\n ReferenceWrapper<Type> operator[](size_t ix)\n {\n return ReferenceWrapper<Type>(vec_[ix]);\n }\n\nprivate:\n std::vector<Type> vec_;\n};\n\nint main()\n{\n Vector<std::string> v(10);\n std::cout << v[5] << \"\\n\";\n\n v[5] = \"42\";\n std::cout << v[5] << \"\\n\";\n}\n\n",
"It's not portable, but MSVC has __declspec(property), which also allows indexers:\nstruct Foo\n{\n void SetFoo(int index, int value) { ... }\n int GetFoo(int index) { ... }\n\n __declspec(property(propget=GetFoo, propput=SetFoo)) int Foo[]; \n}\n\nother than that, Earwicker did outline the portable solution, but he's right that you'll run into many problems. \n"
] |
[
7,
6,
1,
1
] |
[] |
[] |
[
"c++",
"python"
] |
stackoverflow_0001515899_c++_python.txt
|
Q:
List sorting with multiple attributes and mixed order
I have to sort a list with multiple attributes. I can do that in ascending order for ALL attributes easily with
L.sort(key=operator.attrgetter(attribute))....
but the problem is, that I have to use mixed configurations for ascending/descending... I have to "imitate" a bit the SQL Order By where you can do something like name ASC, year DESC.
Is there a way to do this easily in Python without having to implement a custom compare function?
A:
If your attributes are numeric, you have this.
def mixed_order( a ):
return ( a.attribute1, -a.attribute2 )
someList.sort( key=mixed_order )
If your attributes includes strings or other more complex objects, you have some choices.
The .sort() method is stable: you can do multiple passes. This is perhaps the simplest. It's also remarkably fast.
def key1( a ): return a.attribute1
def key2( a ): return a.attribute2
someList.sort( key=key2, reverse=True )
someList.sort( key=key1 )
If this is the only sort, you can define your own special-purpose comparison operators. Minimally, you need __eq__ and __lt__. The other four can be derived from these two by simple logic.
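A rough sketch of that idea (the class and attribute names are made up):
class Record(object):
    # Sorts ascending by name, then descending by year.
    def __init__(self, name, year):
        self.name = name
        self.year = year
    def __eq__(self, other):
        return (self.name, self.year) == (other.name, other.year)
    def __lt__(self, other):
        if self.name != other.name:
            return self.name < other.name
        return self.year > other.year   # reversed comparison gives descending order
records = [Record("b", 2001), Record("a", 1999), Record("a", 2005)]
records.sort()   # list.sort() only needs __lt__ for its comparisons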
A:
A custom function will render your code more readable. If you have many sorting operations and you don't want to create those functions though, you can use lambda's:
L.sort(lambda x, y: cmp(x.name, y.name) or -cmp(x.year, y.year))
A:
You can't, but writing the compare function is easy:
def my_cmp(a, b):
return cmp(a.foo, b.foo) or cmp(b.bar, a.bar)
L.sort(my_cmp)
|
List sorting with multiple attributes and mixed order
|
I have to sort a list with multiple attributes. I can do that in ascending order for ALL attributes easily with
L.sort(key=operator.attrgetter(attribute))....
but the problem is, that I have to use mixed configurations for ascending/descending... I have to "imitate" a bit the SQL Order By where you can do something like name ASC, year DESC.
Is there a way to do this easily in Python without having to implement a custom compare function?
|
[
"If your attributes are numeric, you have this.\ndef mixed_order( a ):\n return ( a.attribute1, -a.attribute2 )\n\nsomeList.sort( key=mixed_order )\n\nIf your attributes includes strings or other more complex objects, you have some choices.\nThe .sort() method is stable: you can do multiple passes. This is perhaps the simplest. It's also remarkably fast.\ndef key1( a ): return a.attribute1\ndef key2( a ): return a.attribute2\n\nsomeList.sort( key=key2, reverse=True )\nsomeList.sort( key=key1 )\n\nIf this is the only sort, you can define your own special-purpose comparison operators. Minimally, you need __eq__ and __lt__. The other four can be derived from these two by simple logic.\n",
"A custom function will render your code more readable. If you have many sorting operations and you don't want to create those functions though, you can use lambda's:\nL.sort(lambda x, y: cmp(x.name, y.name) or -cmp(x.year, y.year))\n\n",
"You can't, but writing the compare function is easy:\ndef my_cmp(a, b):\n return cmp(a.foo, b.foo) or cmp(b.bar, a.bar)\nL.sort(my_cmp)\n\n"
] |
[
33,
7,
6
] |
[] |
[] |
[
"list",
"python",
"sorting"
] |
stackoverflow_0001516249_list_python_sorting.txt
|
Q:
python varargs before function name?
I'm doing some Python coding in a client's code base and I stumbled on a line of code that looks something like this (the variable names have been changed to protect the innocent):
reply = function1(a=foo, **function2(bar, b=baz))
Normally ** in the argument list collects remaining keyword arguments but what do they do in front of the function name?
A:
I'd say that this is just calling a function that returns a dict-like object and therefore the asterisks just convert the returned dict into the keyword arguments for function1, just as usual.
|
python varargs before function name?
|
I'm doing some Python coding in a client's code base and I stumbled on a line of code that looks something like this (the variable names have been changed to protect the innocent):
reply = function1(a=foo, **function2(bar, b=baz))
Normally ** in the argument list collects remaining keyword arguments but what do they do in front of the function name?
|
[
"I'd say that this is just calling a function that returns a dict-like object and therefor the asterisks just convert the returned dict into the keyword arguments for function1, just as usual.\n"
] |
[
11
] |
[] |
[] |
[
"python",
"variadic_functions"
] |
stackoverflow_0001516467_python_variadic_functions.txt
|
Q:
chess AI for GAE
I am looking for a Chess AI that can be run on Google App Engine. Most chess AI's seem to be written in C and so can not be run on the GAE. It needs to be strong enough to beat a casual player, but efficient enough that it can calculate a move within a single request (less than 10 secs).
Ideally it would be written in Python for easier integration with existing code.
I came across a few promising projects but they don't look mature:
http://code.google.com/p/chess-free
http://mariobalibrera.com/mics/ai.html
A:
What's wrong with PyChess? It's pure Python, fairly mature, and will certainly be able to beat a casual player.
It's been a while since I've used PyChess, but a quick glance through some of the source
does indicate that you can set a time limit on how long to search for a move.
The PyChess engine that is written in pure Python is in pychess.Utils. Specifically, if you look at pychess.Utils.lutils, you can see for instance the move generator written in Python.
A:
This problem is a poor match for the GAE architecture, which is designed for efficient CRUD operations, and not CPU-intensive tasks. In practice, anything that takes more than a few tens of milliseconds per request will blow out your CPU quota pretty quickly.
|
chess AI for GAE
|
I am looking for a Chess AI that can be run on Google App Engine. Most chess AI's seem to be written in C and so can not be run on the GAE. It needs to be strong enough to beat a casual player, but efficient enough that it can calculate a move within a single request (less than 10 secs).
Ideally it would be written in Python for easier integration with existing code.
I came across a few promising projects but they don't look mature:
http://code.google.com/p/chess-free
http://mariobalibrera.com/mics/ai.html
|
[
"What's wrong with PyChess? It's pure Python, fairly mature, and will certainly be able to beat a casual player.\nIt's been a while since I've used PyChess, but a quick glance through some of the source\ndoes indicate that you can set a time limit on how long to search for a move.\nThe PyChess engine that is written in pure Python is in pychess.Utils. Specifically, if you look at pychess.Utils.lutils, you can see for instance the move generator written in Python. \n",
"This problem is a poor match for the GAE architecture, which is designed for efficient CRUD operations, and not CPU-intensive tasks. In practice, anything that takes more than a few tens of milliseconds per request will blow out your CPU quota pretty quickly.\n"
] |
[
5,
1
] |
[] |
[] |
[
"artificial_intelligence",
"chess",
"google_app_engine",
"python"
] |
stackoverflow_0001516223_artificial_intelligence_chess_google_app_engine_python.txt
|
Q:
sqlite3 in Python
How do I check if the database file already exists or not?
And, if it exists, how do I check if it already has a specific table or not?
A:
To see if a database exists, you can sqlite3.connect to the file that you think contains the database, and try running a query on it. If it is not a database, you will get this error:
>>> c.execute("SELECT * FROM tbl")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
sqlite3.DatabaseError: file is encrypted or is not a database
sqlite3.connect will create the database if it doesn't exist; as @johnp points out in the comments, os.path.exists will tell you whether the file exists.
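For the file-existence check, a minimal sketch (db_path is just a made-up name, not from the original answer):
import os
import sqlite3

db_path = "example.db"
if os.path.exists(db_path):
    conn = sqlite3.connect(db_path)   # opens the existing database file
else:
    # calling sqlite3.connect(db_path) here would silently create a new, empty database
    print "no database file at", db_path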
To check for existing tables, you query against sqlite_master. For example:
>>> def foo(name):
... for row in c.execute("SELECT name FROM sqlite_master WHERE type='table'"):
... if row == (name,):
... return True
... return False
...
>>> foo("tz_data")
True
>>> foo("asdf")
False
|
sqlite3 in Python
|
How do I check if the database file already exists or not?
And, if it exists, how do I check if it already has a specific table or not?
|
[
"To see if a database exists, you can sqlite3.connect to the file that you think contains the database, and try running a query on it. If it is not a database, you will get this error:\n>>> c.execute(\"SELECT * FROM tbl\")\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nsqlite3.DatabaseError: file is encrypted or is not a database\n\nsqlite3.connect will create the database if it doesn't exist; as @johnp points out in the comments, os.path.exists will tell you whether the file exists.\nTo check for existing tables, you query against sqlite_master. For example:\n>>> def foo(name):\n... for row in c.execute(\"SELECT name FROM sqlite_master WHERE type='table'\"):\n... if row == (name,):\n... return True\n... return False\n... \n>>> foo(\"tz_data\")\nTrue\n>>> foo(\"asdf\")\nFalse\n\n"
] |
[
10
] |
[] |
[] |
[
"python",
"sqlite"
] |
stackoverflow_0001516508_python_sqlite.txt
|
Q:
Displaying integers in a wxpython listctrl
I have a wxPython ListCtrl with five columns. Four of these hold strings, the last one has integer values. I have been storing these as strings (i.e. '4', '17', etc.). However, now that I have added a ColumnSorterMixin to let me sort specific columns in the list, I'm finding, of course, that the integer column is being sorted lexically rather than numerically.
Is there a simple way of fixing this?
A:
I think that the most robust way of doing custom sort is to use SortItems() function in wx.ListCtrl. Note that you have to provide item data for each item (using SetItemData())
Just provide your own callback, say:
def sortColumn(item1, item2):
try:
i1 = int(item1)
i2 = int(item2)
except ValueError:
return cmp(item1, item2)
else:
return cmp(i1, i2)
Didn't check it, but something along these lines should work for all columns, unless you have a column where some values are strings representing integers and some are not.
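A hedged sketch of how the pieces fit together (assuming the integer column's values were also stored as each row's item data, so SortItems hands them to the callback; listctrl is a placeholder name):
def sort_by_value(item1, item2):
    # item1 and item2 are whatever was stored with SetItemData for the two rows
    return cmp(item1, item2)

# e.g. listctrl.SetItemData(row_index, integer_value) when populating the list
listctrl.SortItems(sort_by_value)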
|
Displaying integers in a wxpython listctrl
|
I have a wxPython ListCtrl with five columns. Four of these hold strings, the last one has integer values. I have been storing these as strings (i.e. '4', '17', etc.). However, now that I have added a ColumnSorterMixin to let me sort specific columns in the list, I'm finding, of course, that the integer column is being sorted lexically rather than numerically.
Is there a simple way of fixing this?
|
[
"I think that the most robust way of doing custom sort is to use SortItems() function in wx.ListCtrl. Note that you have to provide item data for each item (using SetItemData()) \nJust provide your own callback, say:\ndef sortColumn(item1, item2):\n try: \n i1 = int(item1)\n i2 = int(item2)\n except ValueError:\n return cmp(item1, item2)\n else:\n return cmp(i1, i2)\n\nDidn't check it, but something along these lines should work for all columns, unless you have a column where some values are strings representing integers and some are not. \n"
] |
[
2
] |
[] |
[] |
[
"listctrl",
"python",
"wxpython"
] |
stackoverflow_0001516742_listctrl_python_wxpython.txt
|
Q:
How to remove existing background color of text when highlighting?
I'm writing a small utility in Python that does some pattern matching of text. Text that matches the pattern the user has entered gets highlighted yellow.
I'm achieving this using a Tkinter Text widget, and setting up a tag on the Text widget named "match" that gives any text with the tag name "match" a yellow background.
This all looks nice, except when I try to highlight the text using the mouse (e.g. when wanting to copy/paste). When I highlight the text with the mouse, any of the tagged text that already has a yellow background retains its yellow background, even after being highlighted. This means you can't properly read the text when it's been highlighted by the mouse, as the white text (text goes white when highlighted by mouse) on a yellow background provides bad contrast.
What I would like to happen is that, when I highlight the text in the Text widget using the mouse, all of the text gets the standard blue background color/white text color that you'd normally get on a Windows machine when highlighting a section of text.
Here's a quick code snippet to demonstrate what I mean:
from tkinter import *
root = Tk()
w = Text(root)
w.tag_config("match",background="yellow")
w.config(highlightbackground="red")
w.pack()
w.insert(INSERT,"some non-matching text.")
w.insert(INSERT,"some matching text.","match")
root.mainloop()
If you run this, and then highlight all of the text in the Text widget, you'll see that the text with the yellow background becomes very hard to read.
Note that in the code snippet above I've tried changing the highlight background color using:
w.config(highlightbackground="red")
But this hasn't worked.
A:
Tags have priority. Tags with a high priority have preference over those that have a lower priority. When you select a range of text it is given the tag "sel". You simply need to raise the priority of the "sel" tag to be above the priority of your "match" tag:
w.tag_raise("sel")
Alex Martelli writes in the comments "it will do the OP absoluely [sic] no good if he sets highlightbackground instead of selectbackground" but that is incorrect. While he is correct that setting highlightbackground has nothing to do with the selection, it has no bearing on this solution.
Raising the priority of the "sel" tag works with the code in the original question, with or without the addition of the code that sets highlightbackground.
For more information on the text widget check out the text widget tutorial on tkdocs.com. It has code examples in Tcl, Python, Ruby and Perl.
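A minimal placement sketch, based on the snippet from the question (nothing else changes):
from tkinter import *
root = Tk()
w = Text(root)
w.tag_config("match", background="yellow")
w.pack()
w.insert(INSERT, "some non-matching text.")
w.insert(INSERT, "some matching text.", "match")
w.tag_raise("sel")   # selection now draws above the "match" tag
root.mainloop()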
A:
I think you need to set selectbackground, not highlightbackground which means something completely different (the bg color for the "highlight rectangle" drawn around a widget when it gets focus). However, I believe the sel pseudo-tag (representing the selection, which is what I think you're calling "the highlight") is "below" user-created tags such as your match; if so, then the bg color for the user-created tag would show, not the bg color for the sel pseudo-tag (aka selectbackground).
With Tk 8.5 you could remedy that by binding to the <Selection> pseudo-event a function that places an appropriately colored user tag "on top" of pseudo-tag sel; however, there is no such event in Tk 8.4, which is what most likely you're using today. TK's docs say that 8.5 comes with Python 3.1 on the ActiveState distribution of Python for Windows; unfortunately there are only "TODO" placeholders regarding other platforms or other versions of Python -- I don't know how to best obtain Tk 8.5 for the specific platform(s) and python version(s) of your interest.
|
How to remove existing background color of text when highlighting?
|
I'm writing a small utility in Python that does some pattern matching of text. Text that matches the pattern the user has entered gets highlighted yellow.
I'm achieving this using a Tkinter Text widget, and setting up a tag on the Text widget named "match" that gives any text with the tag name "match" a yellow background.
This all looks nice, except when I try to highlight the text using the mouse (e.g. when wanting to copy/paste). When I highlight the text with the mouse, any of the tagged text that already has a yellow background retains its yellow background, even after being highlighted. This means you can't properly read the text when it's been highlighted by the mouse, as the white text (text goes white when highlighted by mouse) on a yellow background provides bad contrast.
What I would like to happen is that, when I highlight the text in the Text widget using the mouse, all of the text gets the standard blue background color/white text color that you'd normally get on a Windows machine when highlighting a section of text.
Here's a quick code snippet to demonstrate what I mean:
from tkinter import *
root = Tk()
w = Text(root)
w.tag_config("match",background="yellow")
w.config(highlightbackground="red")
w.pack()
w.insert(INSERT,"some non-matching text.")
w.insert(INSERT,"some matching text.","match")
root.mainloop()
If you run this, and then highlight all of the text in the Text widget, you'll see that the text with the yellow background becomes very hard to read.
Note that in the code snippet above I've tried changing the highlight background color using:
w.config(highlightbackground="red")
But this hasn't worked.
|
[
"Tags have priority. Tags with a high priority have preference over those that have a lower priority. When you select a range of text it is given the tag \"sel\". You simply need to raise the priority of the \"sel\" tag to be above the priority of your \"match\" tag:\nw.tag_raise(\"sel\")\n\nAlex Martelli writes in the comments \"it will do the OP absoluely [sic] no good if he sets highlightbackground instead of selectbackground\" but that is incorrect. While he is correct that setting highlightbackground has nothing to do with the selection, it has no bearing on this solution. \nRaising the priority of the \"sel\" tag works with the code in the original question, with or without the addition of the code that sets highlightbackground. \nFor more information on the text widget check out the text widget tutorial on tkdocs.com. It has code examples in Tcl, Python, Ruby and Perl. \n",
"I think you need to set selectbackground, not highlightbackground which means something completely different (the bg color for the \"highlight rectangle\" drawn around a widget when it gets focus). However, I believe the sel pseudo-tag (representing the selection, which is what I think you're calling \"the highlight\") is \"below\" user-created tags such as your match; if so, then the bg color for the user-created tag would show, not the bg color for the sel pseudo-tag (aka selectbackground).\nWith Tk 8.5 you could remedy that by binding to the <Selection> pseudo-event a function that places an appropriately colored user tag \"on top\" of pseudo-tag sel; however, there is no such event in Tk 8.4, which is what most likely you're using today. TK's docs say that 8.5 comes with Python 3.1 on the ActiveState distribution of Python for Windows; unfortunately there are only \"TODO\" placeholders regarding other platforms or other versions of Python -- I don't know how to best obtain Tk 8.5 for the specific platform(s) and python version(s) of your interest.\n"
] |
[
3,
0
] |
[] |
[] |
[
"background",
"colors",
"highlight",
"python",
"tkinter"
] |
stackoverflow_0001515809_background_colors_highlight_python_tkinter.txt
|
Q:
Python's insert returning None?
#!/usr/bin/python
numbers = [1, 2, 3, 5, 6, 7]
clean = numbers.insert(3, 'four')
print clean
# desire results [1, 2, 3, 'four', 5, 6, 7]
I am getting "None". What am I doing wrong?
A:
Mutating-methods on lists tend to return None, not the modified list as you expect -- such methods perform their effect by altering the list in-place, not by building and returning a new one. So, print numbers instead of print clean will show you the altered list.
If you need to keep numbers intact, first you make a copy, then you alter the copy:
clean = list(numbers)
clean.insert(3, 'four')
this has the overall effect you appear to desire: numbers is unchanged, clean is the changed list.
A:
The insert method modifies the list in place and does not return a new reference. Try:
>>> numbers = [1, 2, 3, 5, 6, 7]
>>> numbers.insert(3, 'four')
>>> print numbers
[1, 2, 3, 'four', 5, 6, 7]
A:
The list.insert() operator doesn't return anything, what you probably want is:
print numbers
A:
insert will insert the item into the given list. Print numbers instead and you'll see your results. insert does not return the new list.
|
Python's insert returning None?
|
#!/usr/bin/python
numbers = [1, 2, 3, 5, 6, 7]
clean = numbers.insert(3, 'four')
print clean
# desire results [1, 2, 3, 'four', 5, 6, 7]
I am getting "None". What am I doing wrong?
|
[
"Mutating-methods on lists tend to return None, not the modified list as you expect -- such metods perform their effect by altering the list in-place, not by building and returning a new one. So, print numbers instead of print clean will show you the altered list.\nIf you need to keep numbers intact, first you make a copy, then you alter the copy:\nclean = list(numbers)\nclean.insert(3, 'four')\n\nthis has the overall effect you appear to desire: numbers is unchanged, clean is the changed list.\n",
"The insert method modifies the list in place and does not return a new reference. Try:\n>>> numbers = [1, 2, 3, 5, 6, 7]\n>>> numbers.insert(3, 'four')\n>>> print numbers\n[1, 2, 3, 'four', 5, 6, 7]\n\n",
"The list.insert() operator doesn't return anything, what you probably want is:\nprint numbers\n\n",
"insert will insert the item into the given list. Print numbers instead and you'll see your results. insert does not return the new list.\n"
] |
[
18,
10,
3,
1
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0001516889_list_python.txt
|
Q:
Using Django as a Backend for Cappuccino
I'm new to both Django and Cappuccino. I have a Django site setup and running through Apache via mod_wsgi. I want to use Django as the backend for a Cappuccino application, but a VirtualHost setup in Apache and mod_wsgi to serve a Django application serves static files out of a different location than the normal web root (e.g. http://example.com/media/ or http://media.example.com).
How could I setup the environment so that http://example.com serves my Cappuccino Javascript/HTML/CSS files, while also letting me use the typical Django URL system to create endpoints for AJAX calls (e.g. http://example.com/some/json/)?
A:
Have you read:
http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines
This goes into various aspects of using WSGIScriptAlias for mod_wsgi and Alias directives for static files.
I'd suggest you ensure your read that, or reread it, and then post what configuration you have tried already as that will help explain what you are trying to do and can then just correct it.
A:
Here is the configuration I came up with that works:
Django Media Settings:
MEDIA_ROOT = '/Users/Me/Development/Web Projects/mysite/mysite/public_html'
MEDIA_URL = 'http:/mysite.local/'
ADMIN_MEDIA_PREFIX = '/'
Apache VirtualHost Configuration:
<VirtualHost *:80>
ServerAdmin [email protected]
ServerName mysite.local
ErrorLog "/private/var/log/apache2/mysite.local-error_log"
CustomLog "/private/var/log/apache2/mysite.local-access_log" common
WSGIScriptAlias / "/Users/Me/Development/Web Projects/MySite/django.wsgi"
<Directory "/Users/Me/Development/Web Projects/MySite/">
Allow from all
</Directory>
AliasMatch ^/(.*\.[A-Za-z0-9]{1,5})$ "/Users/Me/Development/Web Projects/MySite/public_html/$1"
<Directory "/Users/Me/Development/Web Projects/MySite/public_html">
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
Basically this setup will serve any request with a file extension (I limited mine to an extension of 5 characters or less) as a static file, and all other requests will go to my Django app.
|
Using Django as a Backend for Cappuccino
|
I'm new to both Django and Cappuccino. I have a Django site setup and running through Apache via mod_wsgi. I want to use Django as the backend for a Cappuccino application, but a VirtualHost setup in Apache and mod_wsgi to serve a Django application serves static files out of a different location than the normal web root (e.g. http://example.com/media/ or http://media.example.com).
How could I setup the environment so that http://example.com serves my Cappuccino Javascript/HTML/CSS files, while also letting me use the typical Django URL system to create endpoints for AJAX calls (e.g. http://example.com/some/json/)?
|
[
"Have you read:\nhttp://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines\nThis goes into various aspects of using WSGIScriptAlias for mod_wsgi and Alias directives for static files.\nI'd suggest you ensure your read that, or reread it, and then post what configuration you have tried already as that will help explain what you are trying to do and can then just correct it.\n",
"Here is the configuration I came up with that works:\nDjango Media Settings:\nMEDIA_ROOT = '/Users/Me/Development/Web Projects/mysite/mysite/public_html'\nMEDIA_URL = 'http:/mysite.local/'\nADMIN_MEDIA_PREFIX = '/'\n\nApache VirtualHost Configuration:\n<VirtualHost *:80>\n ServerAdmin [email protected]\n ServerName mysite.local\n ErrorLog \"/private/var/log/apache2/mysite.local-error_log\"\n CustomLog \"/private/var/log/apache2/mysite.local-access_log\" common\n WSGIScriptAlias / \"/Users/Me/Development/Web Projects/MySite/django.wsgi\"\n <Directory \"/Users/Me/Development/Web Projects/MySite/\">\n Allow from all\n </Directory>\n AliasMatch ^/(.*\\.[A-Za-z0-9]{1,5})$ \"/Users/Me/Development/Web Projects/MySite/public_html/$1\"\n <Directory \"/Users/Me/Development/Web Projects/MySite/public_html\">\n Order deny,allow\n Allow from all\n </Directory>\n</VirtualHost>\n\nBasically this setup will serve any request with a file extension (I limited mine to an extension of 5 characters or less) as a static file, and all other requests will go to my Django app.\n"
] |
[
1,
0
] |
[] |
[] |
[
"apache",
"cappuccino",
"django",
"mod_wsgi",
"python"
] |
stackoverflow_0001515578_apache_cappuccino_django_mod_wsgi_python.txt
|
Q:
how to run both python 2.6 and 3.0 on the same windows XP box?
What kind of setup do people use to run both python 2.6 and python 3.0 on the same windows machine?
A:
No problem, each version is installed in its own directory. On my Windows box, I have C:\Python26\ and C:\Python31\. The Start Menu items are also distinct. Just use the standard installers from the Python Programming Language Official Website, or the ready-to-install distributions from ActiveState.
A direct way to select the wanted version is to name it explicitly on the command line.
C:\> C:\Python25\python ver.py
2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)]
C:\> C:\Python31\python ver.py
3.1.1 (r311:74483, Aug 17 2009, 17:02:12) [MSC v.1500 32 bit (Intel)]
Where ver.py is:
import sys
print (sys.version)
A:
-You could use a batch file to launch with the appropriate version.
-Eclipse with pydev allows you to choose which version to launch with.
-Re-associate the .py/.pyw extension with your preferred version.
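For the batch-file option above, a minimal sketch (paths and names are just examples): save something like py3.bat on your PATH containing
@echo off
C:\Python30\python.exe %*
and run a script with py3 myscript.py; a matching py26.bat pointing at C:\Python26\python.exe covers the other interpreter.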
A:
Virtualenv is the solution of choice on Unix and Mac platforms.
virtualenv is a tool to create
isolated Python environments.
The basic problem being addressed is
one of dependencies and versions, and
indirectly permissions. Imagine you
have an application that needs version
1 of LibFoo, but another application
requires version 2. How can you use
both these applications? If you
install everything into
/usr/lib/python2.4/site-packages (or
whatever your platform's standard
location is), it's easy to end up in a
situation where you unintentionally
upgrade an application that shouldn't
be upgraded.
Or more generally, what if you want to
install an application and leave it
be? If an application works, any
change in its libraries or the
versions of those libraries can break
the application.
Also, what if you can't install
packages into the global site-packages
directory? For instance, on a shared
host.
I have not tried this, but the presence of documentation relating to use in Windows makes me think that it now works on Windows.
A:
Interesting, but I want to be able to learn the 3.0 syntax (print()) etc but still need to maintain some 2.5 and 2.6 code..
Python has __future__ "Future statement definitions", which make new features available in older versions of Python, the print function is one of them:
Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)
>>> from __future__ import print_function
>>> print("Example", end=" Hurray!\n")
Example Hurray!
Another big Python 3.0 change is the default string becoming Unicode:
>>> from __future__ import unicode_literals
>>> type('abc')
<type 'unicode'>
>>> 'a'
u'a'
Most of the others are now part of Python 2.6, so aren't of interest (like the with_statement). The only one I didn't mention is from __future__ import braces to allow "C like" if(){} braces
A:
Why do you want to do it? Everything that's in 3.0 is in 2.6, you can use the new features in 3.0 with from __future__ import. Run 2.6 until you're ready to upgrade to 3.0 for everything, then upgrade. You're not intended to have multiple versions of Python on the same machine. If you really want to, you can specify alternative install directories, but I really don't see how it's worth the hassle.
|
how to run both python 2.6 and 3.0 on the same windows XP box?
|
What kind of setup do people use to run both python 2.6 and python 3.0 on the same windows machine?
|
[
"No problem, each version is installed in its own directory. On my Windows box, I have C:\\Python26\\ and C:\\Python31\\. The Start Menu items are also distinct. Just use the standard installers from the Python Programming Language Official Website, or the ready-to-install distributions from ActiveState.\nA direct way to select the wanted version is to name it explicitly on the command line.\nC:\\> C:\\Python25\\python ver.py\n2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)]\n\nC:\\> C:\\Python31\\python ver.py\n3.1.1 (r311:74483, Aug 17 2009, 17:02:12) [MSC v.1500 32 bit (Intel)]\n\nWhere ver.py is:\nimport sys\nprint (sys.version)\n\n",
"-You could use a batch file to launch with the appropriate version.\n-Eclipse with pydev allows you to choose which version to launch with.\n-Re-associate the .py/.pyw extension with your preferred version. \n",
"Virtualenv is the solution of choice on Unix and Mac platforms. \n\nvirtualenv is a tool to create\n isolated Python environments.\nThe basic problem being addressed is\n one of dependencies and versions, and\n indirectly permissions. Imagine you\n have an application that needs version\n 1 of LibFoo, but another application\n requires version 2. How can you use\n both these applications? If you\n install everything into\n /usr/lib/python2.4/site-packages (or\n whatever your platform's standard\n location is), it's easy to end up in a\n situation where you unintentionally\n upgrade an application that shouldn't\n be upgraded.\nOr more generally, what if you want to\n install an application and leave it\n be? If an application works, any\n change in its libraries or the\n versions of those libraries can break\n the application.\nAlso, what if you can't install\n packages into the global site-packages\n directory? For instance, on a shared\n host.\n\nI have not tried this, but the presence of documentation relating to use in Windows makes me think that it now works on Windows.\n",
"\nInteresting, but I want to be able to learn the 3.0 syntax (print()) etc but still need to maintain some 2.5 and 2.6 code..\n\nPython has __future__ \"Future statement definitions\", which make new features available in older versions of Python, the print function is one of them:\nPython 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)\n>>> from __future__ import print_function\n>>> print(\"Example\", end=\" Hurray!\\n\")\nExample Hurray!\n\nAnother big Python 3.0 change is the default string becoming Unicode:\n>>> from __future__ import unicode_literals\n>>> type('abc')\n<type 'unicode'>\n>>> 'a'\nu'a'\n\nMost of the others are now part of Python 2.6, so aren't of interest (like the with_statement). The only one I didn't mention is from __future__ import braces to allow \"C like\" if(){} braces\n",
"Why do you want to do it? Everything that's in 3.0 is in 2.6, you can use the new features in 3.0 with from __future__ import. Run 2.6 until you're ready to upgrade to 3.0 for everything, then upgrade. You're not intended to have multiple versions of Python on the same machine. If you really want to, you can specify alternative install directories, but I really don't see how it's worth the hassle.\n"
] |
[
9,
1,
1,
1,
0
] |
[] |
[] |
[
"python",
"windows_xp"
] |
stackoverflow_0001515850_python_windows_xp.txt
|
Q:
monitor service for "failure"
I'm developing a script which monitors a service for failure and launches another a different action depending on if failure is present or not.
I require a python script to monitor the output from a python program "monitor-services", parsing the output and searching for an occurrence of the word "failure". If present, the script should return with a true value; it should run for a maximum of 30 seconds, returning false if no occurrence of "failure" occurs.
Sample output returned from "monitor-services":
{Device} [/device/xxx] Networks = dbus.Array([dbus.ObjectPath('/device/xxx/xxx'), dbus.ObjectPath('/device/00242b2e41b6/hidden')], signature=dbus.Signature('o'), variant_level=1)
{Service} [/profile/default/wifi_xxx_managed_wep] State = association
{Profile} [/profile/default] Services = dbus.Array([dbus.ObjectPath('/profile/default/wifi_xxx_managed_wep'), dbus.ObjectPath('/profile/default/wifi_xxx_managed_rsn')], signature=dbus.Signature('o'), variant_level=1)
{Manager} [/] Services = dbus.Array([dbus.ObjectPath('/profile/default/wifi_xxx_managed_wep'), dbus.ObjectPath('/profile/default/wifi_xxx_managed_rsn')], signature=dbus.Signature('o'), variant_level=1)
{Service} [/profile/default/wifi_xxx_managed_wep] **failure**
{Service} [/profile/default/wifi_xxx_managed_wep] State = idle
Any help would be appreciated.
[edit] A failure is likely to occur within 30 seconds or so of the action triggering the script, hence the script is required to terminate after 30 seconds. [/edit]
A:
#!/usr/bin/python
from subprocess import Popen, PIPE
import sys
data = Popen(["monitor-services"], stdout=PIPE).communicate()[0]
sys.exit("failure" in data)
This does everything you want, except for the 30s wait (which I don't understand). Notice that it returns 0 if failure is not found, 1 if it is found, according to the shell conventions (i.e. 0 is success, non-zero is failure).
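One way to bolt on the 30-second cap the question asks for -- a sketch only, assuming monitor-services keeps printing lines (a completely silent child would still block on readline()):
#!/usr/bin/python
from subprocess import Popen, PIPE
import sys
import time

proc = Popen(["monitor-services"], stdout=PIPE)
deadline = time.time() + 30
failed = False
while time.time() < deadline:
    line = proc.stdout.readline()
    if not line:              # the monitor exited on its own
        break
    if "failure" in line:
        failed = True
        break
# on Python 2.6+ you could also call proc.terminate() here before exiting
sys.exit(int(failed))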
|
monitor service for "failure"
|
I'm developing a script which monitors a service for failure and launches another a different action depending on if failure is present or not.
I require a python script to monitor the output from a python program "monitor-services", parsing the output and searching for an occurrence of the word "failure". If present, the script should return with a true value; it should run for a maximum of 30 seconds, returning false if no occurrence of "failure" occurs.
Sample output returned from "monitor-services":
{Device} [/device/xxx] Networks = dbus.Array([dbus.ObjectPath('/device/xxx/xxx'), dbus.ObjectPath('/device/00242b2e41b6/hidden')], signature=dbus.Signature('o'), variant_level=1)
{Service} [/profile/default/wifi_xxx_managed_wep] State = association
{Profile} [/profile/default] Services = dbus.Array([dbus.ObjectPath('/profile/default/wifi_xxx_managed_wep'), dbus.ObjectPath('/profile/default/wifi_xxx_managed_rsn')], signature=dbus.Signature('o'), variant_level=1)
{Manager} [/] Services = dbus.Array([dbus.ObjectPath('/profile/default/wifi_xxx_managed_wep'), dbus.ObjectPath('/profile/default/wifi_xxx_managed_rsn')], signature=dbus.Signature('o'), variant_level=1)
{Service} [/profile/default/wifi_xxx_managed_wep] **failure**
{Service} [/profile/default/wifi_xxx_managed_wep] State = idle
Any help would be appreciated.
[edit] A failure is likely to occur within 30 seconds or so of the action triggering the script, hence the script is required to terminate after 30 seconds. [/edit]
|
[
"#!/usr/bin/python\nfrom subprocess import Popen, PIPE\nimport sys\n\ndata = Popen([\"monitor-services\"], stdout=PIPE).communicate()[0]\n\nsys.exit(\"failure\" in data)\n\nThis does everything you want, except for the 30s wait (which I don't understand). Notice that it returns 0 if failure is not found, 1 if it is found, according to the shell conventions (i.e. 0 is success, non-zero is failure).\n"
] |
[
1
] |
[] |
[] |
[
"python",
"scripting"
] |
stackoverflow_0001516943_python_scripting.txt
|
Q:
Compound UniqueConstraint with a function
A quick SQLAlchemy question...
I have a class "Document" with attributes "Number" and "Date". I need to ensure that there's no duplicated number for the same year, is
there a way to have a UniqueConstraint on "Number + year(Date)"? Should I use a unique Index instead? How would I declare the functional part?
(SQLAlchemy 0.5.5, PostgreSQL 8.3.4)
Thanks in advance!
A:
You should use a functional unique index to apply this constraint. Unfortunately the generic, database-independent schema definition machinery in SQLAlchemy doesn't abstract functional indexes yet. You'll have to use the DDL construct to register custom schema definition clauses. If you are using the declarative approach to declaring your schema, add the following after your class definition:
DDL(
"CREATE UNIQUE INDEX doc_year_num_uniq ON %(fullname)s "
"(EXTRACT(YEAR FROM date), number)"
).execute_at('after-create', Document.__table__)
This method works very nicely but throws a SADeprecation warning in v0.7
The syntax that I've used successfully:
from sqlalchemy import event
event.listen(ModelObject.__table__,
'after_create',
DDL("CREATE UNIQUE INDEX term_year ON %(fullname)s "
"(EXTRACT(YEAR FROM start_date), term)",
on = 'postgresql'
)
)
|
Compound UniqueConstraint with a function
|
A quick SQLAlchemy question...
I have a class "Document" with attributes "Number" and "Date". I need to ensure that there's no duplicated number for the same year, is
there a way to have a UniqueConstraint on "Number + year(Date)"? Should I use a unique Index instead? How would I declare the functional part?
(SQLAlchemy 0.5.5, PostgreSQL 8.3.4)
Thanks in advance!
|
[
"You should use a functional unique index to apply this constraint. Unfortunately the database generic database independent schema definition machinery in SQLAlchemy doesn't abstract functional indexes yet. You'll have to use the DDL construct to register custom schema definition clauses. If you are using the declarative approach to declaring your schema add the following after your class definition:\nDDL(\n \"CREATE UNIQUE INDEX doc_year_num_uniq ON %(fullname)s \"\n \"(EXTRACT(YEAR FROM date), number)\"\n).execute_at('after-create', Document.__table__)\n\nThis method works very nicely but throws a SADeprecation warning in v0.7\nThe syntax that I've used successfully:\nfrom sqlalchemy import event\n\nevent.listen(ModelObject.__table__,\n 'after_create',\n DDL(\"CREATE UNIQUE INDEX term_year ON %(fullname)s \"\n \"(EXTRACT(YEAR FROM start_date), term)\",\n on = 'postgresql'\n )\n )\n\n"
] |
[
4
] |
[
"I'm pretty sure that unique constraints can only be applied on columns that already have data in them, and not on runtime-calculated expressions. Hence, you would need to create an extra column which contains the year part of your date, over which you could create a unique constraint together with number. To best use this approach, maybe you should store your date split up in three separate columns containing the day, month and year part. This could be done using default constraints in the table definition.\n"
] |
[
-1
] |
[
"constraints",
"python",
"sqlalchemy"
] |
stackoverflow_0001510018_constraints_python_sqlalchemy.txt
|
Q:
Cutting text from IPython shell using Ctrl-X is broken
I use IPython very frequently and happily. Somehow, cutting text from the shell using the keyboard shortcut, Ctrl + X, is broken. Actually, I have a few different installations of IPython. In some of the installations, the shortcut works; in the others, it doesn't work.
What might be the reason for this? Where should I look into?
A:
You say you have multiple instances installed -- are these all on different machines? What operating system(s) are they running? If you access them remotely, what operating system are you running?
Do you get to them using ssh? Do you run something like screen, either locally or remotely, or both? There are lots of things that can interfere with your terminal settings, especially when you're working remotely.
I'm almost certain that iPython doesn't have anything to do with it -- though you might want to check the version numbers, to see if working and non-working environments are running different versions.
More likely, it is something in the terminal emulation layer, but you'll likely have to do some detective work of your own to find out what piece is causing it.
Take it one step at a time -- try to cut from a local shell, to make sure that works. Then connect to a remote machine, and cut from that shell. Start screen, if that's your normal way of doing things, and test from that shell. Then start ipython. If it stops there, then see if you can find another application on the same machine that's linked against gnu readline, and try that. You may find that none of the console apps cut proplerly on that machine, or you may find that they work, but not under screen. Or you may find that something in the terminal settings stops everything from working as soon as you ssh in.
You may also have some luck, if you can find out what terminal the remote machine thinks you are using (echo $TERM), by copying the termcap file from a working machine to one that doesn't. That's a bit more involved for these forums, though -- I'd repost at that point on serverfault.com or superuser.com
I hope that at least gives you a starting place -- terminals are finicky, and difficult to get right. Most people seem to not bother, as long as everything mostly works.
|
Cutting text from IPython shell using Ctrl-X is broken
|
I use IPython very frequently and happily. Somehow, cutting text from the shell using the keyboard shortcut, Ctrl + X, is broken. Actually, I have a few different installations of IPython. In some of the installations, the shortcut works; in the others, it doesn't work.
What might be the reason for this? Where should I look into?
|
[
"You say you have multiple instances installed -- are these all on different machines? What operating system(s) are they running? If you access them remotely, what operating system are you running?\nDo you get to them using ssh? Do you run something like screen, either locally or remotely, or both? There are lots of things that can interfere with your terminal settings, especially when you're working remotely.\nI'm almost certain that iPython doesn't have anything to do with it -- though you might want to check the version numbers, to see if working and non-working environments are running different versions.\nMore likely, it is something in the terminal emulation layer, but you'll likely have to do some detective work of your own to find out what piece is causing it.\nTake it one step at a time -- try to cut from a local shell, to make sure that works. Then connect to a remote machine, and cut from that shell. Start screen, if that's your normal way of doing things, and test from that shell. Then start ipython. If it stops there, then see if you can find another application on the same machine that's linked against gnu readline, and try that. You may find that none of the console apps cut proplerly on that machine, or you may find that they work, but not under screen. Or you may find that something in the terminal settings stops everything from working as soon as you ssh in.\nYou may also have some luck. if you can find out what terminal the remote machine thinks you are using ( echo $TERM ) by copying the termcap file from a working machine to one that doesn't. That's a bit more involved for these forums, though -- I'd repost at that point on serverfault.com or superuser.com\nI hope that at least gives you a starting place -- terminals are finicky, and difficult to get right. Most people seem to not bother, as long as everything mostly works.\n"
] |
[
2
] |
[] |
[] |
[
"ipython",
"keyboard_shortcuts",
"python"
] |
stackoverflow_0001517259_ipython_keyboard_shortcuts_python.txt
|
Q:
In Django, how do I filter based on all entities in a many-to-many relation instead of any?
I have a model like this:
class Task(models.model):
TASK_STATUS_CHOICES = (
(u"P", u'Pending'),
(u"A", u'Assigned'),
(u"C", u'Complete'),
(u"F", u'Failed')
)
status = models.CharField(max_length=2, choices=TASK_STATUS_CHOICES)
prerequisites = models.ManyToManyField('self', symmetrical=False, related_name="dependents")
I want to find all tasks whose prerequisites are all complete. I have tried:
Task.objects.filter(prerequisites__status=u"C")
This gets all tasks for which any prerequisite is complete. I thought maybe I need to use an annotation, but I can't see how to apply a filter on the prerequisite tasks before doing an aggregation. For example, I can find the number of prerequisites of each task like this:
Task.objects.annotate(prereq_count=Count('prerequisites'))
But how do I annotate tasks with the number of their prerequisites that have a status not equal to "C"?
A:
For your first question -- 'all tasks whose prerequisites are all complete':
>>> Task.objects.exclude(prerequisites__status__in=['A','P','F'])
This will also include tasks with no prerequisites (as they have no incomplete prerequisites). As a doctest (using your model definition), the following passes:
>>> a = Task.objects.create(status='C')
>>> b = Task.objects.create(status='A')
>>> b.prerequisites.add(a)
>>> c = Task.objects.create(status='P')
>>> c.prerequisites.add(b)
>>> prerequisites_complete = Task.objects.exclude(prerequisites__status__in=['A','P','F'])
>>> set([t.id for t in prerequisites_complete]) == set([a.id, b.id])
True
This doesn't answer how many incomplete prerequisites each task has -- which you might need for display, optimization, etc.
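If that count is needed, one possible (hedged) approach is to filter on the related status before annotating, so Count only sees the non-complete prerequisites -- note that tasks with no incomplete prerequisites drop out of this queryset entirely:
from django.db.models import Count

incomplete = (Task.objects
    .filter(prerequisites__status__in=[u"P", u"A", u"F"])
    .annotate(open_prereqs=Count('prerequisites')))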
|
In Django, how do I filter based on all entities in a many-to-many relation instead of any?
|
I have a model like this:
class Task(models.model):
TASK_STATUS_CHOICES = (
(u"P", u'Pending'),
(u"A", u'Assigned'),
(u"C", u'Complete'),
(u"F", u'Failed')
)
status = models.CharField(max_length=2, choices=TASK_STATUS_CHOICES)
prerequisites = models.ManyToManyField('self', symmetrical=False, related_name="dependents")
I want to find all tasks whose prerequisites are all complete. I have tried:
Task.objects.filter(prerequisites__status=u"C")
This gets all tasks for which any prerequisite is complete. I thought maybe I need to use an annotation, but I can't see how to apply a filter on the prerequisite tasks before doing an aggregation. For example, I can find the number of prerequisites of each task like this:
Task.objects.annotate(prereq_count=Count('prerequisites'))
But how do I annotate tasks with the number of their prerequisites that have a status not equal to "C"?
|
[
"For your first question -- 'all tasks whose prerequisites are all complete':\n>>> Task.objects.exclude(prerequisites__status__in=['A','P','F'])\n\nThis will also include tasks with no prerequisites (as they have no incomplete prerequisites). As a doctest (using your model definition), the following passes:\n>>> a = Task.objects.create(status='C')\n>>> b = Task.objects.create(status='A')\n>>> b.prerequisites.add(a)\n>>> c = Task.objects.create(status='P')\n>>> c.prerequisites.add(b)\n>>> prerequisites_complete = Task.objects.exclude(prerequisites__status__in=['A','P','F'])\n>>> set([t.id for t in prerequisites_complete]) == set([a.id, b.id])\nTrue\n\nThis doesn't answer how many incomplete prerequisites each task has -- which you might need for display, optimization, etc. \n"
] |
[
4
] |
[] |
[] |
[
"django",
"orm",
"python"
] |
stackoverflow_0001516795_django_orm_python.txt
|
Q:
How to space a write file
Hello everyone, I'm currently new to Python, and I'm trying to write a program where the user can input a message and the output will show the numeric code of the message, but at the same time I need to write the numeric code to an additional file.
So for example the user inputs the word "Hello"
The user will see the output "72 101 108 108 111"
Now that output should also be copied into an external document labeled as my "EncryptedMessage.txt"
The numbers are written to the file, however there is no space between them, so when I put it in the decoder it will not decode. Is there any way for me to get a space between them?
Example of my coding.
def main():
outfile = open("EncryptedMessage.txt", "w")
messgage = " "
message = raw_input("Enter a message: ")
for ch in message:
ascii = ord(ch)
outfile.write(str(ascii) )
print ascii,
outfile.close()
Sorry I'm not really good with programming terms.
A:
You can do this without the loop using Python list comprehension and the join method to add the spaces between the ascii values.
" ".join([str(ord(c)) for c in message])
This will evaluate to a string of ascii values separated by spaces.
A:
just manually add a space:
outfile.write(str(ascii) + " ")
A:
Bear in mind that cobbal's answer will add an extra space to the end of the file. You can strip it off using:
s.rstrip(str(ascii))
|
How to space a write file
|
Hello everyone, I'm currently new to Python, and I'm trying to write a program where the user can input a message and the output will show the numeric code of the message, but at the same time I need to write the numeric code to an additional file.
So for example the user inputs the word "Hello"
The user will see the output "72 101 108 108 111"
Now that output should also be copied into an external document labeled as my "EncryptedMessage.txt"
The numbers are written to the file, however there is no space between them, so when I put it in the decoder it will not decode. Is there any way for me to get a space between them?
Example of my coding.
def main():
outfile = open("EncryptedMessage.txt", "w")
messgage = " "
message = raw_input("Enter a message: ")
for ch in message:
ascii = ord(ch)
outfile.write(str(ascii) )
print ascii,
outfile.close()
Sorry I'm not really good with programming terms.
|
[
"You can do this without the loop using Python list comprehension and the join method to add the spaces between the ascii values.\n\" \".join([str(ord(c)) for c in message])\n\nThis will evaluate to a string of ascii values separated by spaces.\n",
"just manually add a space:\noutfile.write(str(ascii) + \" \")\n\n",
"Bear in mind that cobbal's answer will add an extra space to the end of the file. You can strip it off using:\ns.rstrip(str(ascii))\n\n"
] |
[
5,
3,
0
] |
[] |
[] |
[
"file",
"python",
"spacing"
] |
stackoverflow_0001517456_file_python_spacing.txt
|
Q:
Closing sockets in python
I'm modifying Python code that have this form:
def foo(self):
try:
connect socket
except Exception, e:
some error reporting stuff
return an error
use the socket
do some other stuff
if some condition:
return
do some more stuff
socket.close()
return normally
Coming from Java I would like to have a try - finally around the whole thing to make sure that the socket is closed. Should this code also have that or is it some kind of Pythonic magic happening in the background that makes it so that you don't have to?
I read in the python docs that sockets are closed when they are garbage collected. But relying on the garbage collector to close your sockets doesn't feel so good.
A:
You can use a try-finally block, which was added in Python 2.5:
try:
open socket
do stuff with socket
finally:
close socket
Or you can use the with statement, which was added in Python 2.6 (and can be used in 2.5 with a from __future__ import with_statement declaration):
with open_the_socket() as s:
use s
This will automatically close the socket when the inner block is exited, whether it's exited normally or via an exception, provided that the socket class closes in its __exit__() method.
As of Python 2.7.2 the __exit__() method is not implemented on the socket class.
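Because of that missing __exit__(), a common workaround is contextlib.closing, which simply calls close() on the way out -- a minimal sketch (host and port are placeholders):
import socket
from contextlib import closing

with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
    s.connect(("example.com", 80))
    # use the socket; close() runs even if an exception is raised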
A:
You seem to be wanting the finally block added to try/except in 2.5
http://docs.python.org/whatsnew/2.5.html#pep-341-unified-try-except-finally
You are correct that relying on the autoclose when garbage collected is a bad practice, it's best to close them when you are done with them. A try/finally is a good way to do it.
Don't forget to call shutdown() before close().
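A hedged sketch of that shutdown-then-close ordering inside a finally block (the address is a placeholder):
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(("example.com", 80))
    # ... use the socket ...
finally:
    try:
        s.shutdown(socket.SHUT_RDWR)
    except socket.error:
        pass               # peer may already be gone
    s.close()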
A:
In addition to the try/finally block, consider setting a socket default timeout that will automatically close the socket(s) after your specified time.
http://docs.python.org/library/socket.html?highlight=setdefaulttimeout#socket.setdefaulttimeout
|
Closing sockets in python
|
I'm modifying Python code that have this form:
def foo(self):
try:
connect socket
except Exception, e:
some error reporting stuff
return an error
use the socket
do some other stuff
if some condition:
return
do some more stuff
socket.close()
return normally
Coming from Java I would like to have a try - finally around the whole thing to make sure that the socket is closed. Should this code also have that or is it some kind of Pythonic magic happening in the background that makes it so that you don't have to?
I read in the python docs that sockets are closed when they are garbage collected. But relying on the garbage collector to close your sockets doesn't feel so good.
|
[
"You can use a try-finally block, which was added in Python 2.5:\ntry:\n open socket\n do stuff with socket\nfinally:\n close socket\n\nOr you can use the with statement, which was added in Python 2.6 (and can be used in 2.5 with a from __future__ import with_statement declaration):\nwith open_the_socket() as s:\n use s\n\nThis will automatically close the socket when the inner block is exited, whether it's exited normally or via an exception, provided that the socket class closes in its __exit__() method.\nAs of Python 2.7.2 the __exit__() method is not implemented on the socket class.\n",
"You seem to be wanting the finally block added to try/except in 2.5\nhttp://docs.python.org/whatsnew/2.5.html#pep-341-unified-try-except-finally\nYou are correct that relying on the autoclose when garbage collected is a bad practice, it's best to close them when you are done with them. A try/finally is a good way to do it.\nDon't forget to call shutdown() before close().\n",
"In addition to the try/finally block, consider setting a socket default timeout that will automatically close the socket(s) after your specified time. \nhttp://docs.python.org/library/socket.html?highlight=setdefaulttimeout#socket.setdefaulttimeout\n"
] |
[
12,
1,
0
] |
[] |
[] |
[
"python",
"sockets",
"try_catch"
] |
stackoverflow_0001517418_python_sockets_try_catch.txt
|
Q:
Linux packaging abstraction layer written in Python?
I'm looking for a way to generate .deb and .rpm packages from my build scripts, containing the resulting products. Since everything is written in Python, I'm wondering if anyone knows of an abstraction layer that would allow me to drive both RPM and Deb construction from the same code?
A:
Distutils in python's standard library has a bdist_rpm command to build rpms and the third-party stdeb module adds a command for producing debs.
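Roughly, the two invocations look like this (the stdeb command name is taken from its documentation -- double-check against the version you install):
python setup.py bdist_rpm
python setup.py --command-packages=stdeb.command bdist_deb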
|
Linux packaging abstraction layer written in Python?
|
I'm looking for a way to generate .deb and .rpm packages from my build scripts, containing the resulting products. Since everything is written in Python, I'm wondering if anyone knows of an abstraction layer that would allow me to drive both RPM and Deb construction from the same code?
|
[
"Distutils in python's standard library has a bdist_rpm command to build rpms and the third-paty stdeb module adds a command for producing debs.\n"
] |
[
2
] |
[] |
[] |
[
"deb",
"packaging",
"python",
"rpm"
] |
stackoverflow_0001517495_deb_packaging_python_rpm.txt
|
Q:
About Python's built in sort() method
What algorithm is the built in sort() method in Python using? Is it possible to have a look at the code for that method?
A:
Sure! The code's here, starting with function islt and proceeding for QUITE a while;-). As Chris's comment suggests, it's C code. You'll also want to read this text file for a textual explanation, results, etc etc.
If you prefer reading Java code to C code, you could look at Joshua Bloch's implementation of timsort in and for Java (Joshua's also the guy who implemented, in 1997, the modified mergesort that's still used in Java, and one can hope that Java will eventually switch to his recent port of timsort).
Some explanation of the Java port of timsort is here, the diff is here (with pointers to all needed files), the key file is here -- FWIW, while I'm a better C programmer than Java programmer, in this case I find Joshua's Java code more readable overall than Tim's C code;-).
A:
I just wanted to supply a very helpful link that I missed in Alex's otherwise comprehensive answer: A high-level explanation of Python's timsort (with graph visualizations!).
(Yes, the algorithm is basically known as Timsort now)
A:
In early python-versions, the sort function implemented a modified version of quicksort.
However, it was deemed unstable and as of 2.3 they switched to using an adaptive mergesort algorithm.
|
About Python's built in sort() method
|
What algorithm is the built in sort() method in Python using? Is it possible to have a look at the code for that method?
|
[
"Sure! The code's here, starting with function islt and proceeding for QUITE a while;-). As Chris's comment suggests, it's C code. You'll also want to read this text file for a textual explanation, results, etc etc.\nIf you prefer reading Java code than C code, you could look at Joshua Bloch's implementation of timsort in and for Java (Joshua's also the guy who implemented, in 1997, the modified mergesort that's still used in Java, and one can hope that Java will eventually switch to his recent port of timsort).\nSome explanation of the Java port of timsort is here, the diff is here (with pointers to all needed files), the key file is here -- FWIW, while I'm a better C programmer than Java programmer, in this case I find Joshua's Java code more readable overall than Tim's C code;-).\n",
"I just wanted to supply a very helpful link that I missed in Alex's otherwise comprehensive answer: A high-level explanation of Python's timsort (with graph visualizations!).\n(Yes, the algorithm is basically known as Timsort now)\n",
"In early python-versions, the sort function implemented a modified version of quicksort.\nHowever, it was deemed unstable and as of 2.3 they switched to using an adaptive mergesort algorithm.\n"
] |
[
146,
40,
10
] |
[] |
[] |
[
"algorithm",
"python",
"python_internals",
"sorting"
] |
stackoverflow_0001517347_algorithm_python_python_internals_sorting.txt
|
Q:
Basic cocoa application using dock in Python, but not Xcode and all that extras
It seems that if I want to create a very basic Cocoa application with a dock icon and the like, I would have to use Xcode and the GUI builder (w/ PyObjC).
The application I am intending to write is largely concerned with algorithms and basic IO - and thus, not mostly related to Apple specific stuff.
Basically the application is supposed to run periodically (say, every 3 minutes) .. pull some information via AppleScript and write HTML files to a particular directory. I would like to add a Dock icon for this application .. mainly to showing the "status" of the process (for example, if there is an error .. the dock icon would have a red flag on it). Another advantage of the dock icon is that I can make it run on startup.
Additional bonus for defining the dock right-click menu in a simple way (eg: using Python lists of callables).
Can I achieve this without using Xcode or GUI builders but simply using Emacs and Python?
A:
Install the latest py2app, then make a new directory -- cd to it -- in it make a HelloWorld.py file such as:
# generic Python imports
import datetime
import os
import sched
import sys
import tempfile
import threading
import time
# need PyObjC on sys.path...:
for d in sys.path:
if 'Extras' in d:
sys.path.append(d + '/PyObjC')
break
# objc-related imports
import objc
from Foundation import *
from AppKit import *
from PyObjCTools import AppHelper
# all stuff related to the repeating-action
thesched = sched.scheduler(time.time, time.sleep)
def tick(n, writer):
writer(n)
thesched.enter(20.0, 10, tick, (n+1, writer))
fd, name = tempfile.mkstemp('.txt', 'hello', '/tmp');
print 'writing %r' % name
f = os.fdopen(fd, 'w')
f.write(datetime.datetime.now().isoformat())
f.write('\n')
f.close()
def schedule(writer):
pool = NSAutoreleasePool.alloc().init()
thesched.enter(0.0, 10, tick, (1, writer))
thesched.run()
# normally you'd want pool.drain() here, but since this function never
# ends until end of program (thesched.run never returns since each tick
# schedules a new one) that pool.drain would never execute here;-).
# objc-related stuff
class TheDelegate(NSObject):
statusbar = None
state = 'idle'
def applicationDidFinishLaunching_(self, notification):
statusbar = NSStatusBar.systemStatusBar()
self.statusitem = statusbar.statusItemWithLength_(
NSVariableStatusItemLength)
self.statusitem.setHighlightMode_(1)
self.statusitem.setToolTip_('Example')
self.statusitem.setTitle_('Example')
self.menu = NSMenu.alloc().init()
menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_(
'Quit', 'terminate:', '')
self.menu.addItem_(menuitem)
self.statusitem.setMenu_(self.menu)
def writer(self, s):
self.badge.setBadgeLabel_(str(s))
if __name__ == "__main__":
# prepare and set our delegate
app = NSApplication.sharedApplication()
delegate = TheDelegate.alloc().init()
app.setDelegate_(delegate)
delegate.badge = app.dockTile()
delegate.writer(0)
# on a separate thread, run the scheduler
t = threading.Thread(target=schedule, args=(delegate.writer,))
t.setDaemon(1)
t.start()
# let her rip!-)
AppHelper.runEventLoop()
Of course, in your real code, you'll be performing your own periodic actions every 3 minutes (rather than writing a temp file every 20 seconds as I'm doing here), doing your own status updates (rather than just showing a counter of the number of files written so far), etc, etc, but I hope this example shows you a viable starting point.
Then in Terminal.App cd to the directory containing this source file, py2applet --make-setup HelloWorld.py, python setup.py py2app -A -p PyObjC.
You now have in subdirectory dist a directory HelloWorld.app; open dist and drag the icon to the Dock, and you're all set (on your own machine -- distributing to other machines may not work due to the -A flag, but I had trouble building without it, probably due to mis-installed egg files laying around this machine;-). No doubt you'll want to customize your icon &c.
This doesn't do the "extra credit" thingy you asked for -- it's already a lot of code and I decided to stop here (the extra credit thingy may warrant a new question). Also, I'm not quite sure that all the incantations I'm performing here are actually necessary or useful; the docs are pretty latitant for making a pyobjc .app without Xcode, as you require, so I hacked this together from bits and pieces of example code found on the net plus a substantial amount of trial and error. Still, I hope it helps!-)
A:
PyObjC, which is included with Mac OS X 10.5 and 10.6, is pretty close to what you're looking for.
A:
Chuck is correct about PyObjC.
You should then read about this NSApplication method to change your icon.
-(void)setApplicationIconImage:(NSImage *)anImage;
For the dock menu, implement the following in an application delegate. You can build an NSMenu programmatically to avoid using InterfaceBuilder.
-(NSMenu *)applicationDockMenu:(NSApplication *)sender;
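A rough PyObjC translation of those two hooks, offered only as an illustrative sketch (the menu title, the 'refresh:' action selector, and 'error.png' are made-up placeholders, not part of the answer):
from Foundation import NSObject
from AppKit import NSApplication, NSImage, NSMenu, NSMenuItem
class DockDelegate(NSObject):
    def applicationDockMenu_(self, sender):
        # Build the Dock's right-click menu in code, no Interface Builder.
        menu = NSMenu.alloc().init()
        item = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_(
            'Refresh now', 'refresh:', '')
        menu.addItem_(item)
        return menu
    def showErrorIcon(self):
        # Swap the Dock icon, e.g. when an error is detected.
        image = NSImage.alloc().initWithContentsOfFile_('error.png')
        NSApplication.sharedApplication().setApplicationIconImage_(image)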
|
Basic cocoa application using dock in Python, but not Xcode and all that extras
|
It seems that if I want to create a very basic Cocoa application with a dock icon and the like, I would have to use Xcode and the GUI builder (w/ PyObjC).
The application I am intending to write is largely concerned with algorithms and basic IO - and thus, not mostly related to Apple specific stuff.
Basically the application is supposed to run periodically (say, every 3 minutes) .. pull some information via AppleScript and write HTML files to a particular directory. I would like to add a Dock icon for this application .. mainly to show the "status" of the process (for example, if there is an error .. the dock icon would have a red flag on it). Another advantage of the dock icon is that I can make it run on startup.
Additional bonus for defining the dock right-click menu in a simple way (eg: using Python lists of callables).
Can I achieve this without using Xcode or GUI builders but simply using Emacs and Python?
|
[
"Install the latest py2app, then make a new directory -- cd to it -- in it make a HelloWorld.py file such as:\n# generic Python imports\nimport datetime\nimport os\nimport sched\nimport sys\nimport tempfile\nimport threading\nimport time\n\n# need PyObjC on sys.path...:\nfor d in sys.path:\n if 'Extras' in d:\n sys.path.append(d + '/PyObjC')\n break\n\n# objc-related imports\nimport objc\nfrom Foundation import *\nfrom AppKit import *\nfrom PyObjCTools import AppHelper\n\n# all stuff related to the repeating-action\nthesched = sched.scheduler(time.time, time.sleep)\n\ndef tick(n, writer):\n writer(n)\n thesched.enter(20.0, 10, tick, (n+1, writer))\n fd, name = tempfile.mkstemp('.txt', 'hello', '/tmp');\n print 'writing %r' % name\n f = os.fdopen(fd, 'w')\n f.write(datetime.datetime.now().isoformat())\n f.write('\\n')\n f.close()\n\ndef schedule(writer):\n pool = NSAutoreleasePool.alloc().init()\n thesched.enter(0.0, 10, tick, (1, writer))\n thesched.run()\n # normally you'd want pool.drain() here, but since this function never\n # ends until end of program (thesched.run never returns since each tick\n # schedules a new one) that pool.drain would never execute here;-).\n\n# objc-related stuff\nclass TheDelegate(NSObject):\n\n statusbar = None\n state = 'idle'\n\n def applicationDidFinishLaunching_(self, notification):\n statusbar = NSStatusBar.systemStatusBar()\n self.statusitem = statusbar.statusItemWithLength_(\n NSVariableStatusItemLength)\n self.statusitem.setHighlightMode_(1)\n self.statusitem.setToolTip_('Example')\n self.statusitem.setTitle_('Example')\n\n self.menu = NSMenu.alloc().init()\n menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_(\n 'Quit', 'terminate:', '')\n self.menu.addItem_(menuitem)\n self.statusitem.setMenu_(self.menu)\n\n def writer(self, s):\n self.badge.setBadgeLabel_(str(s))\n\n\nif __name__ == \"__main__\":\n # prepare and set our delegate\n app = NSApplication.sharedApplication()\n delegate = TheDelegate.alloc().init()\n app.setDelegate_(delegate)\n delegate.badge = app.dockTile()\n delegate.writer(0)\n\n # on a separate thread, run the scheduler\n t = threading.Thread(target=schedule, args=(delegate.writer,))\n t.setDaemon(1)\n t.start()\n\n # let her rip!-)\n AppHelper.runEventLoop()\n\nOf course, in your real code, you'll be performing your own periodic actions every 3 minutes (rather than writing a temp file every 20 seconds as I'm doing here), doing your own status updates (rather than just showing a counter of the number of files written so far), etc, etc, but I hope this example shows you a viable starting point.\nThen in Terminal.App cd to the directory containing this source file, py2applet --make-setup HelloWorld.py, python setup.py py2app -A -p PyObjC.\nYou now have in subdirectory dist a directory HelloWorld.app; open dist and drag the icon to the Dock, and you're all set (on your own machine -- distributing to other machines may not work due to the -A flag, but I had trouble building without it, probably due to mis-installed egg files laying around this machine;-). No doubt you'll want to customize your icon &c.\nThis doesn't do the \"extra credit\" thingy you asked for -- it's already a lot of code and I decided to stop here (the extra credit thingy may warrant a new question). 
Also, I'm not quite sure that all the incantations I'm performing here are actually necessary or useful; the docs are pretty latitant for making a pyobjc .app without Xcode, as you require, so I hacked this together from bits and pieces of example code found on the net plus a substantial amount of trial and error. Still, I hope it helps!-)\n",
"PyObjC, which is included with Mac OS X 10.5 and 10.6, is pretty close to what you're looking for.\n",
"Chuck is correct about PyObjC.\nYou should then read about this NSApplication method to change your icon. \n-(void)setApplicationIconImage:(NSImage *)anImage;\nFor the dock menu, implement the following in an application delegate. You can build an NSMenu programmatically to avoid using InterfaceBuilder.\n-(NSMenu *)applicationDockMenu:(NSApplication *)sender;\n"
] |
[
9,
2,
0
] |
[] |
[] |
[
"cocoa",
"dock",
"pyobjc",
"python"
] |
stackoverflow_0001517342_cocoa_dock_pyobjc_python.txt
|
Q:
Is Google App Engine a worthy platform for a Lifestreaming app?
I'm building a Lifestreaming app that will involve pulling down lots of feeds for lots of users, and performing data-mining, and machine learning algorithms on the results. GAE's load balanced and scalable hosting sounds like a good fit for a system that could eventually be moving around a LOT of data, but its lack of cron jobs is a nuisance. Would I be better off using Django on a co-loc and dealing with my own DB scaling?
A:
It might change when they offer paid plans, but as it stands, App Engine is not good for CPU intensive apps. It is designed to scale to handle a large number of requests, not necessarily a large amount of calculation per request. I am running into this issue with fairly minor calculations, and I fear I may have to start looking elsewhere as my data set grows.
A:
While I can not answer your question directly, my experience of building Microupdater (a news aggregator collecting a few hundred feeds on AppEngine) may give you a little insight.
Fetching feeds. Fetching lots of feeds via cron jobs (the only solution until SDK 1.2.5) is neither efficient nor scalable, since there is a lower limit on job frequency (say 1 min, so you could only fetch at most 60 feeds hourly). And with the latest SDK 1.2.5 there is the XMPP API, which I have not implemented yet. The most promising approach would be PubSubHubbub, where you offer a callback URL and the hub will notify you of new entries in real time. And there is a demo implementation on AppEngine, which you can play around with.
Parsing feeds. You may already know that parsing feeds is CPU-intensive. I use Universal Feed Parser by Mark Pilgrim; when parsing a large feed (say, a public Google Reader topic), AppEngine may fail to process all entries. My dashboard has a lot of these CPU-limit warnings, though that may just mean I have not managed to optimize the code yet.
All told, AppEngine is not yet an ideal platform for a lifestreaming app, but that may change in the future.
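To make the parsing point concrete, here is a minimal sketch of the kind of per-feed work involved (my own illustration, not from the answer; the feed URL and limit are made up), using Universal Feed Parser:
import feedparser
def fetch_titles(url='http://example.com/feed.atom', limit=20):
    parsed = feedparser.parse(url)
    # Processing only a handful of entries per request helps stay within
    # App Engine's per-request CPU limits mentioned above.
    return [entry.title for entry in parsed.entries[:limit]]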
A:
(This is obviously pretty old, responding just because it still comes up really high in related Google queries...)
I just started using AppEngine and haven't been using it for tons of external requests. But I do know that the info above is probably a lot less valid now, and might not even still stand. They relaxed the limits quite a bit since September 08 - check Aral Balkan's blog for his initial complaint about the above, and later developments.
A:
If your app relies solely on Django, then App Engine is a good bet. However, if you ever need to add C-enhanced libraries, you're up a creek. App Engine doesn't support things like PIL or ReportLab, which use C to speed up processing times. I'm only mentioning this because you may want to use C to speed up some of your routines in the long run.
If you decide to use a co-loc, check out WebFaction.com. They have great Django/Python support and they have no issue with you using the aforementioned libraries.
A:
Take a look at Slice Host: They sell xen based virtualized server instances starting at $20.00 / month...
We’re just like you. Sick of oversold,
underperforming, ancient hosting
companies. We took matters into our
own hands. We built a hosting company
for people who know their stuff. Give
us a box, give us bandwidth, give us
performance and we get to work. Fast
machines, RAID-10 drives, Tier-1
bandwidth and root access. Managed
with a customized Xen VPS backend to
ensure that your resources are
protected and guaranteed.
It's great for starting a project on and scaling it out WITHOUT incurring the costs of a managed provider or colo.
A:
No. If you need to pull lots of things down, App Engine isn't going to work so well. You can use it as a front end by putting your data in their store after doing your offline preprocessing, but you can't do much in the ~1 second time you have per request without doing some really crazy things.
Your app would likely be better off on your own hosting.
A:
Pulling feeds or doing calculations won't be a problem. But you'll soon have to pay for your account. App engine includes Django, except you'll need to work with some adaptors for the model part. It will surely save you from maintenance headaches.
|
Is Google App Engine a worthy platform for a Lifestreaming app?
|
I'm building a Lifestreaming app that will involve pulling down lots of feeds for lots of users, and performing data-mining, and machine learning algorithms on the results. GAE's load balanced and scalable hosting sounds like a good fit for a system that could eventually be moving around a LOT of data, but its lack of cron jobs is a nuisance. Would I be better off using Django on a co-loc and dealing with my own DB scaling?
|
[
"It might change when they offer paid plans, but as it stands, App Engine is not good for CPU intensive apps. It is designed to scale to handle a large number of requests, not necessarily a large amount of calculation per request. I am running into this issue with fairly minor calculations, and I fear I may have to start looking elsewhere as my data set grows.\n",
"While I can not answer your question directly, my experience of building Microupdater (a news aggregator collecting a few hundred feeds on AppEngine) may give you a little insight.\n\nFetching feeds. Fetching lots of feeds by cron jobs (it was the only solution until SDK 1.2.5) is not efficient and scalable, which has lower limit on job frequency (say 1 min, so you could only fetch at most 60 feeds hourly). And with latest SDK 1.2.5, there is XMPP API, which I have not implemented yet. The best promising approach would be PubSubHubbub, of which you offer an callback url and HubBub will notify you new entries in real-time. And there is an demo implementation on AppEngine, which you can play around.\nParsing feeds. You may already know that parsing feeds is cpu-intensive. I use Universal Feed Parser by Mark Pilgrim, when parsing a large feed (say a public google reader topic), AppEngine may fail to process all entries. My dashboard have a lot of these CPU-limit warnings. But it may result in my incapability to optimize the code yet.\n\nTotally said, AppEngine is not yet an ideal platform for lifestream app, but that may change in future.\n",
"(This is obviously pretty old, responding just because it still comes up really high in related Google queries...)\nI just started using AppEngine and haven't been using it for tons of external requests. But I do know that the info above is probably a lot less valid now, and might not even still stand. They relaxed the limits quite a bit since September 08 - check Aral Balkan's blog for his initial complaint about the above, and later developments.\n",
"If you're app solely relies on Django, then App Engine is a good bet. However, if you ever need to add C-enhanced libraries, you're up a creek. App Engine doesn't support things like PIL or ReportLab, which use C to speed up processing times. I'm only mentioning this because you may want to use C to speed up some of your routines in the long run.\nIf you decide to use a co-loc, check out WebFaction.com. They have great Django/Python support and they have no issue with you using the aforementioned lirbaries.\n",
"Take a look at Slice Host: They sell xen based virtualized server instances starting at $20.00 / month...\n\nWe’re just like you. Sick of oversold,\n underperforming, ancient hosting\n companies. We took matters into our\n own hands. We built a hosting company\n for people who know their stuff. Give\n us a box, give us bandwidth, give us\n performance and we get to work. Fast\n machines, RAID-10 drives, Tier-1\n bandwidth and root access. Managed\n with a customized Xen VPS backend to\n ensure that your resources are\n protected and guaranteed.\n\nIt's great for starting a project on and scaling it out WITHOUT incurring the costs of a managed provider or colo.\n",
"No. If you need to pull lots of things down, App Engine isn't going to work so well. You can use it as a front end by putting your data in their store after doing your offline preprocessing, but you can't do much in the ~1 second time you have per request without doing some really crazy things.\nYour app would likely be better off on your own hosting.\n",
"Pulling feeds or doing calculations won't be a problem. But you'll soon have to pay for your account. App engine includes Django, except you'll need to work with some adaptors for the model part. It will surely save you from maintenance headaches.\n"
] |
[
3,
3,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"django",
"google_app_engine",
"python",
"web_applications"
] |
stackoverflow_0000135169_django_google_app_engine_python_web_applications.txt
|
Q:
Putting in extra restrictions when filtering on foreignkey in django-admin
When getting members based on Unit, I only want to get the ones who are actually in that unit as of now.
I've got a model looking like this:
class Member(models.Model):
name = models.CharField(max_length=256)
unit = models.ManyToManyField(Unit, through='Membership')
class Membership(models.Model):
member = models.ForeignKey(Member)
unit = models.ForeignKey(Unit)
start = models.DateField(default=date.today)
stop = models.DateField(blank=True, null=True)
class Unit(models.Model):
name = models.CharField(max_length=256)
As you can see, members can have a "fake" membership in unit, that is only history and should not be considered in the searches and listings of the admin. They should be shown in the change-page for a single object though.
The admin looks like this:
class MembershipInline(admin.TabularInline):
model = Membership
extra = 1
class MemberAdmin(admin.ModelAdmin):
list_filter = ('unit',)
inlines = [MembershipInline,]
So how can I (if at all possible this way), when filtering on unit only get those units whose membership__stop__isnull=True?
I tried Managers, I can make them work on the model in the admin itself, but not on the filtering/searches. There is also a def queryset(self) method that is overrideable, but I can't wrap my head around how to use it to fix my problem.
Edit, how this is used: A member has only one membership in a unit, however, they could be members from before, but they are ended (with stop). So I only want to filter (and show, in the list view) those members who have an open-ended membership (like, that they are members of that unit now).
Any ideas?
A:
So you're trying to get the members of a specific Unit, right?
unit = Unit.objects.select_related().get(id=some_id)
This will pull the unit out of the database for you, along with the Memberships and Users that belong to it. You can access and filter the users by:
for member in unit.membership_set.filter(stop__isnull=True):
print member.name
I hope this helps? I may be wrong, I haven't tested this.
A:
One way to certainly achieve this is by adding a denormalized field for
has_open_ended_membership.
To do this, just add a BooleanField to the Member model and make sure it stays consistent.
From the django documentation this seems to be the only way without writing specialized code in the ModelAdmin object:
Set list_filter to activate filters in
the right sidebar of the change list
page of the admin. This should be a
list of field names, and each
specified field should be either a
BooleanField, CharField, DateField,
DateTimeField, IntegerField or
ForeignKey.
I'm curious about other approaches - list_filter certainly is limited.
A:
I fixed it with putting in a denormalized field in member, with a foreign-key to the active unit. Then, to make it work and be automatically updated in the admin, I made the specialized save-function for Membership.
class Member(models.Model):
name = models.CharField(max_length=256)
unit = models.ManyToManyField(Unit, through='Membership')
unit_denorm = models.ForeignKey(Unit)
class Membership(models.Model):
member = models.ForeignKey(Member)
unit = models.ForeignKey(Unit)
start = models.DateField(default=date.today)
stop = models.DateField(blank=True, null=True)
def save(self, *args, **kwargs):
if not self.stop:
self.member.unit_denorm = self.unit
self.member.save()
super(Membership, self).save(*args, **kwargs)
class Unit(models.Model):
name = models.CharField(max_length=256)
And with list_filter = ('unit_denorm',) in the admin, it does exactly what I want.
Great! Of course, there should only be one membership per member with stop__isnull=True. I haven't figured out how to make that restriction, but people using the system know they shouldn't do that anyway.
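As for that missing restriction, one possibility (a hedged sketch of my own, not part of the original answer) is to add a check at the top of the save() override shown above, so a second open-ended membership for the same member is rejected:
    def save(self, *args, **kwargs):
        # Refuse a second open-ended membership for the same member.
        if self.stop is None:
            clashing = Membership.objects.filter(member=self.member,
                                                 stop__isnull=True)
            if self.pk is not None:
                clashing = clashing.exclude(pk=self.pk)
            if clashing.count():
                raise ValueError('%s already has an open membership' % self.member)
        if not self.stop:
            self.member.unit_denorm = self.unit
            self.member.save()
        super(Membership, self).save(*args, **kwargs)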
|
Putting in extra restrictions when filtering on foreignkey in django-admin
|
When getting members based on Unit, I only want to get the ones who are actually in that unit as of now.
I've got a model looking like this:
class Member(models.Model):
name = models.CharField(max_length=256)
unit = models.ManyToManyField(Unit, through='Membership')
class Membership(models.Model):
member = models.ForeignKey(Member)
unit = models.ForeignKey(Unit)
start = models.DateField(default=date.today)
stop = models.DateField(blank=True, null=True)
class Unit(models.Model):
name = models.CharField(max_length=256)
As you can see, members can have a "fake" membership in unit, that is only history and should not be considered in the searches and listings of the admin. They should be shown in the change-page for a single object though.
The admin looks like this:
class MembershipInline(admin.TabularInline):
model = Membership
extra = 1
class MemberAdmin(admin.ModelAdmin):
list_filter = ('unit',)
inlines = [MembershipInline,]
So how can I (if at all possible this way), when filtering on unit only get those units whose membership__stop__isnull=True?
I tried Managers, I can make them work on the model in the admin itself, but not on the filtering/searches. There is also a def queryset(self) method that is overrideable, but I can't wrap my head around how to use it to fix my problem.
Edit, how this is used: A member has only one membership in a unit, however, they could be members from before, but they are ended (with stop). So I only want to filter (and show, in the list view) those members who have an open-ended membership (like, that they are members of that unit now).
Any ideas?
|
[
"So you're trying to get the members of a specific Unit, right?\nunit = Unit.objects.select_related().get(id=some_id)\n\nThis will pull the unit out of the database for you, along with the Memberships and Users that belong to it. You can access and filter the users by:\nfor member in unit.membership__set.filter(stop__isnull=True):\n print member.name\n\nI hope this helps? I may be wrong, I haven't tested this.\n",
"One way to certainly achieve this is by adding a denormalized field for \nhas_open_ended_membership.\nTo do this just add a BooleaneField like that to the Member and make sure it's consistent.\nFrom the django documentation this seems to be the only way without writing specialized code in the ModelAdmin object:\n\nSet list_filter to activate filters in\n the right sidebar of the change list\n page of the admin. This should be a\n list of field names, and each\n specified field should be either a\n BooleanField, CharField, DateField,\n DateTimeField, IntegerField or\n ForeignKey.\n\nI'm curious about other approaches - list_filter certainly is limited.\n",
"I fixed it with putting in a denormalized field in member, with a foreign-key to the active unit. Then, to make it work and be automatically updated in the admin, I made the specialized save-function for Membership.\nclass Member(models.Model):\n name = models.CharField(max_length=256)\n unit = models.ManyToManyField(Unit, through='Membership')\n unit_denorm = models.ForeignKey(Unit)\n\nclass Membership(models.Model):\n member = models.ForeignKey(Member)\n unit = models.ForeignKey(Unit)\n start = models.DateField(default=date.today)\n stop = models.DateField(blank=True, null=True)\n\n def save(self, *args, **kwargs):\n if not self.stop:\n self.member.unit_denorm = self.unit\n self.member.save()\n super(Membership, self).save(*args, **kwargs)\n\nclass Unit(models.Model):\n name = models.CharField(max_length=256)\n\nAnd with list_filter = ('unit_denorm',) in the admin, it does exactly what I want.\nGreat! Of course, there should only be one field with stop__isnull=True. I haven't figured out how to make that restriction. but people using the system know they shouldn't do that anyway.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"django",
"django_admin",
"filter",
"many_to_many",
"python"
] |
stackoverflow_0001485103_django_django_admin_filter_many_to_many_python.txt
|
Q:
Python Image Library ellipse with wide outline
When creating an ellipse with PIL, is it possible to have a thicker/wider outline? Currently, I'm trying to do canvas.ellipse(box, outline=colour, fill=None), but would like to be able to give the outline parameter a width.
A:
You could use the aggdraw advanced-graphics add-on module to PIL -- with it, the method to draw an ellipse, like others, takes a pen object which you can make with your favorite width (as well as color and opacity).
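A minimal sketch of that approach (my own illustration, untested here; colours and sizes are arbitrary):
import aggdraw
import Image  # classic PIL import; on newer installs: from PIL import Image
img = Image.new('RGB', (200, 200), 'white')
canvas = aggdraw.Draw(img)
pen = aggdraw.Pen('red', 5)              # outline colour and width in pixels
canvas.ellipse((20, 20, 180, 180), pen)  # box is (x0, y0, x1, y1)
canvas.flush()                           # write the drawing back into img
img.save('ellipse.png')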
|
Python Image Library ellipse with wide outline
|
When creating an ellipse with PIL, is it possible to have a thicker/wider outline? Currently, I'm trying to do canvas.ellipse(box, outline=colour, fill=None), but would like to be able to give the outline parameter a width.
|
[
"You could use the aggdraw advanced-graphics add-on module to PIL -- with it, the method to draw an ellipse, like others, takes a pen object which you can make with your favorite width (as well as color and opacity).\n"
] |
[
1
] |
[] |
[] |
[
"image",
"python",
"python_imaging_library"
] |
stackoverflow_0001517778_image_python_python_imaging_library.txt
|
Q:
Can anyone point out the pros and cons of TG2 over Django?
Django is my favorite python web framework. I've tried out others like pylons, web2py, nevow and others.
But I've never looked into TurboGears with much enthusiasm.
Now with TG2 out of beta I may give it a try. I'd like to know what are some of the pros and cons compared to Django.
A:
TG2 has several advantages that I think are important:
Multi-database support
sharding/data partitioning support
longstanding support for aggregates, multi-column primary keys
a transaction system that handles multi-database transactions for you
an admin system that works with all of the above
out of the box support for reusable template snipits
an easy method for creating reusable template tag-libraries
more flexibility in using non-standard components
There are more, but I think it's also important to know that Django has some advantages over TG2:
Larger, community, more active IRC channel
more re-usable app-components
a bit more developed documentation
All of this means that it's a bit easier to get started in Django than TG2, but I personally think the added power and flexibility that you get is worth it. But your needs may always be different.
A:
TG2 takes Pylons and changes some defaults - object dispatching instead of Routes, and Genshi instead of Mako. They believe there's only one way to do it, so apps can rely on the same API for any TurboGears website.
Similarities
TG2 and Django both distinguish between websites and components, so you'll eventually see reusable building blocks for TurboGears, too.
Differences
Django uses its own handlers for HTTP, routing, templating, and persistence. Django also has stellar documentation and an established community.
TurboGears defaults to best-of-breed libraries, which apparently are Paste, object dispatching, Genshi, and SqlAlchemy. This philosophy produces a better all-round toolset, but at the risk of instability - because it means throwing away backwards compatibility if and when better libraries appear.
A:
Pros.
SQLAlchemy > django ORM
Multiple template languages out of the box (genshi,mako,jinja2)
more WSGI friendly
Object Dispatch > routes > regexp routing. You can get the first 2 with TG2
Almost all components are optional you can keep the core and use any ORM, template, auth library, etc.
Sprox > django forms
Cons.
- Admin is more basic (no inline objects yet!)
- less third party apps
- "app" system still in the making.
- given it's modularity you need to read documentation from different sources (SQLAlchemy, Genshi or Mako, repoze.who, Pylons, etc.)
A:
I was struggling with the same question months ago and decided on Turbogears 2, and my reasoning was simple. "I'm new to python, I want to learn it not just for web-projects but as a substitute for php for scripting small helpers"
What I didn't like about Django is that, to me, it looks like a "closed platform": the ORM, template system, sessions, etc. are all Django's own
On the other hand, Turbogears 2 uses already known open platforms and just glued them, just like Appfuse does it for Java
With TurboGears 2 I learn SQLAlchemy that I can use later for small python scripts, or from the python shell to solve common tasks.
Main drawbacks are the lack of complete documentation and error messages.
Sometimes you have to search very deep to find simple solutions, the learning curve is steep, but it pays off long term. The error messages were, to me, very confusing (coming from more than 10 years in Java development). I lost many hours trying to find an "ascii encode error" when the real problem was a module not being imported.
That's my opinion, just remember I'm new to python and I could be wrong about many things stated here.
A:
Besides what Nikhil gave in his answer, I think another minor difference is that Turbogears provides some support for javascript widgets and integration with Mochikit.
Whereas Django steadfastly remains javascript framework neutral.
(At least this was true with older versions of Turbogears... this might have changed with TG2)
Edit: I just went over TG2 documentation and see that it did indeed change. Turbogears now uses ToscaWidgets which can use jQuery, ExtJS, Dojo, etc. underneath. This nicely makes it more framework neutral while still providing nice javascript widgets.
This strikes me as a pro for Turbogears if you don't have any javascript experience and a pro for Django if you are writing a lot of specialized javascript.
A:
One of the most important questions is not just what technical features this platform provides or that platform provides, but the driving philosophy of the open source project and the nature of the community supporting it.
I've got no dog in this fight myself, but I found Mark Ramm's talk at DjangoCon 2008 to be very interesting on this point (Google will yield no end of subsequent discussion, no doubt).
A:
Because Django uses its own ORM, it limits you to learning that ORM for that specific web framework. I think using a web framework with a more popular ORM (like SqlAlchemy which TG uses) increases your employability chances. Just my 2 cents ..
A:
Last I checked, django has a very poor data implementation. And that's a huge weakness in my book. Django's orm doesn't allow me to use the power of the underlying database. For example I can't use compound primary keys, which are important to good db design. It also doesn't support more than a single database, which is not a big deal until you really need it and find that you can't do it without resorting to doing it manually. Lastly if you have to make changes to your database structure in a team-friendly way, you have to try to choose between a set of 3rd party migration tools.
Turbogears seems to be more architecturally sound, doing its best to integrate individual tools that are awesome in their own right. And because TG is more of an integrator, you're able to switch out pieces to suit your preferences. Don't like SQL Alchemy? You can use SQLObject. Don't like Genshi templates? You can use Mako or even django's, although you're not exactly stuck with the default on django either.
Time for tg2's cons:
TG has a much smaller community, and community usually has its benefit.
Django has a much better name. I really like that name ;-)
Django seems simpler for the beginning web developer, with pretty cool admin tools.
TG has decent documentation, but you also need to go to Genshi's site to learn Genshi, SQL Alchemy's site to learn that, etc. Django has great docs.
My 2 cents.
|
Can anyone point out the pros and cons of TG2 over Django?
|
Django is my favorite python web framework. I've tried out others like pylons, web2py, nevow and others.
But I've never looked into TurboGears with much enthusiasm.
Now with TG2 out of beta I may give it a try. I'd like to know what are some of the pros and cons compared to Django.
|
[
"TG2 has several advantages that I think are important: \n\nMulti-database support\nsharding/data partitioning support\nlongstanding support for aggregates, multi-column primary keys\na transaction system that handles multi-database transactions for you\nan admin system that works with all of the above\nout of the box support for reusable template snipits\nan easy method for creating reusable template tag-libraries\nmore flexibility in using non-standard components\n\nThere are more, but I think it's also important to know that Django has some advantages over TG2: \n\nLarger, community, more active IRC channel\nmore re-usable app-components\na bit more developed documentation\n\nAll of this means that it's a bit easier to get started in Django than TG2, but I personally think the added power and flexibility that you get is worth it. But your needs may always be different. \n",
"TG2 takes Pylons and changes some defaults - object dispatching instead of Routes, and Genshi instead of Mako. They believe there's only one way to do it, so apps can rely on the same API for any TurboGears website.\nSimilarities\n\nTG2 and Django both distinguish between websites and components, so you'll eventually see reusable building blocks for TurboGears, too.\n\nDifferences\n\nDjango uses its own handlers for HTTP, routing, templating, and persistence. Django also has stellar documentation and an established community.\nTurboGears defaults to best-of-breed libraries, which apparently are Paste, object dispatching, Genshi, and SqlAlchemy. This philosophy produces a better all-round toolset, but at the risk of instability - because it means throwing away backwards compatibility if and when better libraries appear.\n\n",
"Pros.\n\nSQLAlchemy > django ORM\nMultiple template languages out of the box (genshi,mako,jinja2)\nmore WSGI friendly\nObject Dispatch > routes > regexp routing. You can get the first 2 with TG2\nAlmost all components are optional you can keep the core and use any ORM, template, auth library, etc.\nSprox > django forms\n\nCons.\n - Admin is more basic (no inline objects yet!)\n - less third party apps\n - \"app\" system still in the making.\n - given it's modularity you need to read documentation from different sources (SQLAlchemy, Genshi or Mako, repoze.who, Pylons, etc.)\n",
"I was struggling with the same question months ago and decided for Turbogears 2, and my reasoning was simple. \"I'm new to python, I want to learn it not just for web-projects but as a substitute to php for scripting small helpers\"\nWhat I didn't like about Django, to me looks like a \"close platform\". ORM, Template system, sessions, etc they all are Django's\nOn the other hand, Turbogears 2 uses already known open platforms and just glued them, just like Appfuse does it for Java\nWith TurboGears 2 I learn SQLAlchemy that I can use later for small python scripts, or from the python shell to solve common tasks.\nMain drawbacks are the lack of complete documentation and error messages.\nSometimes you have to search very deep to find simple solutions, the learning curve is steep, but it pays long term. The error messages where to me very confusing (coming from more than 10 years in Java development). I had lost many hours trying to find an \"ascii encode error\" when the real problem was a module not being imported.\nThat's my opinion, just remember I'm new to python and I could be wrong about many things stated here.\n",
"Besides what Nikhil gave in his answer, I think another minor difference is that Turbogears provdes some support for javascript widgets and integration with Mochikit.\nWhereas Django steadfastly remains javascript framework neutral.\n(At least this was true with older versions of Turbogears... this might have changed with TG2)\nEdit: I just went over TG2 documentation and see that it did indeed change. Turbogears now uses ToscaWidgets which can use jQuery, ExtJS, Dojo, etc. underneath. This nicely makes it more framework neutral while still providing nice javascript widgets.\nThis strikes me as a pro for Turbogears if you don't have any javascript experience and a pro for Django if you are writing a lot of specialized javascript.\n",
"One of the most important questions is not just what technical features this platform provides or that platform provides, but the driving philosophy of the open source project and the nature of the community supporting it. \nI've got no dog in this fight myself, but I found Mark Ramm's talk at DjangoCon 2008 to be very interesting on this point (Google will yield no end of subsequent discussion, no doubt).\n",
"Because Django uses its own ORM it limits you to learn that ORM for that specific web framework. I think using an web framework with a more popular ORM (like SqlAlchemy which TG uses) increases your employability chances. Just my 2 cents ..\n",
"Last I checked, django has a very poor data implementation. And that's a huge weakness in my book. Django's orm doesn't allow me to use the power of the underlying database. For example I can't use compound primary keys, which are important to good db design. It also doesn't support more than a single database, which is not a big deal until you really need it and find that you can't do it without resorting to doing it manually. Lastly if you have to make changes to your database structure in a team-friendly way, you have to try to choose between a set of 3rd party migration tools.\nTurbogears seems to be more architecturally sound, doing its best to integrate individual tools that are awesome in their own right. And because TG is more of an integrator, you're able to switch out pieces to suit your preferences. Don't like SQL Alchemy? You can use SQLObject. Don't like Genshi templates? You can use Mako or even django's, although you're not exactly stuck with the default on django either.\nTime for tg2's cons:\n\nTG has a much smaller community, and community usually has its benefit.\nDjango has a much better name. I really like that name ;-)\nDjango seems simpler for the beginning web developer, with pretty cool admin tools.\nTG has decent documentation, but you also need to go to Genshi's site to learn Genshi, SQL Alchemy's site to learn that, etc. Django has great docs.\n\nMy 2 cents.\n"
] |
[
15,
14,
5,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"django",
"python",
"turbogears",
"turbogears2"
] |
stackoverflow_0000640877_django_python_turbogears_turbogears2.txt
|
Q:
Django: How to modify a text field before showing it in admin
I have a Django Model with a text field. I would like to modify the content of the text field before it's presented to the user in Django Admin.
I was expecting to see a signal equivalent of post_load but it doesn't seem to exist.
To be more specific:
I have a text field that takes user input. In this text field there is a read more separator. Text before the separator is going to go into introtext field, everything after goes into fulltext field.
At the same time, I only want to show the user 1 text field when they're editing the article.
My plan was to on_load read the data from introtext and fulltext field and combine them into fulltext textarea. On pre_save, I would split the text using the read more separator and store intro in introtext and remainder in fulltext.
So, before the form is displayed, I need to populate the fulltext field with
introtext + '<!--readmore-->' + fulltext
and I need to be able to do this for existing items.
A:
Have a look into Providing your own form for the admin pages.
Once you have your own form, you can use the default param in the form to provide the initial value you want. See the docs on the Initial param for the form field. As this link will show you, it is possible to use a callable or a constant as your initial value.
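A hedged sketch of what such a form could look like for the fields named in the question (the model name Article and its import path are my assumptions, and this is only an illustration, not tested code):
from django import forms
from django.contrib import admin
from myapp.models import Article  # assumed model and import path
class ArticleAdminForm(forms.ModelForm):
    # One big textarea shown to the user instead of the two stored fields.
    fulltext = forms.CharField(widget=forms.Textarea)
    class Meta:
        model = Article
        exclude = ('introtext',)  # hide the intro field in the admin
    def __init__(self, *args, **kwargs):
        instance = kwargs.get('instance')
        initial = kwargs.get('initial') or {}
        if instance is not None:
            # Pre-fill the textarea with intro + marker + body.
            initial['fulltext'] = (instance.introtext + '<!--readmore-->' +
                                   instance.fulltext)
        kwargs['initial'] = initial
        super(ArticleAdminForm, self).__init__(*args, **kwargs)
    def save(self, commit=True):
        obj = super(ArticleAdminForm, self).save(commit=False)
        combined = self.cleaned_data['fulltext']
        # Text before the marker goes to introtext, the rest to fulltext.
        obj.introtext, _, obj.fulltext = combined.partition('<!--readmore-->')
        if commit:
            obj.save()
        return obj
class ArticleAdmin(admin.ModelAdmin):
    form = ArticleAdminForm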
A:
There is no post_load because there is no load function.
Loading of the instance is done in the __init__ function, therefore the right answer is to use the post_init signal.
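For completeness, a rough sketch of that approach (my own illustration; the model name Article and its import path are assumed):
from django.db.models.signals import post_init
from myapp.models import Article  # assumed import path
def combine_text(sender, instance, **kwargs):
    # Present the two stored fields as a single block of text on load.
    instance.fulltext = (instance.introtext + '<!--readmore-->' +
                         instance.fulltext)
post_init.connect(combine_text, sender=Article)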
|
Django: How to modify a text field before showing it in admin
|
I have a Django Model with a text field. I would like to modify the content of the text field before it's presented to the user in Django Admin.
I was expecting to see a signal equivalent of post_load but it doesn't seem to exist.
To be more specific:
I have a text field that takes user input. In this text field there is a read more separator. Text before the separator is going to go into introtext field, everything after goes into fulltext field.
At the same time, I only want to show the user 1 text field when they're editing the article.
My plan was to on_load read the data from introtext and fulltext field and combine them into fulltext textarea. On pre_save, I would split the text using the read more separator and store intro in introtext and remainder in fulltext.
So, before the form is displayed, I need to populate the fulltext field with
introtext + '<!--readmore-->' + fulltext
and I need to be able to do this for existing items.
|
[
"Have a look into Providing your own form for the admin pages. \nOnce you have your own form, you can use the default param in the form to provide the initial value you want. See the docs on the Initial param for the form field. As this link will show you, it is possible to use a callable or a constant as your initial value.\n",
"There is no post_load because there is no load function.\nLoading of the instance is done in init function, therefore the right answer is to use post_init signal.\n"
] |
[
2,
1
] |
[] |
[] |
[
"django",
"django_models",
"django_signals",
"python"
] |
stackoverflow_0001517626_django_django_models_django_signals_python.txt
|
Q:
Using Embedded Dictionary for iterative character replacement
I'm trying to understand an iterative function that takes a string "12345" and returns all the possible misspellings based upon a dictionary of keys close to each character in the string.
outerDic = {}
Dict1 = {'1':'2','2':'q'}
outerDic['1'] = Dict1
Dict1 = {'1':'1','2':'q','3':'w','4':'3'}
outerDic['2'] = Dict1
Dict1 = {'1':'2','2':'w','3':'e','4':'4'}
outerDic['3'] = Dict1
Dict1 = {'1':'3','2':'e','3':'r','4':'5' }
outerDic['4'] = Dict1
Dict1 = {'1':'4','2':'r','3':'t','4':'6' }
outerDic['5'] = Dict1
outerDic
The output should return a list of strings
12345
22345
q2345
11345
1q345
13345
12245
12e45
12445
and so on...
I've set the function up as follows:
def split_line(text):
words = text.split()
for current_word in words:
getWordsIterations()
I'd like to understand how to set up the getWordsIterations () function to go through the dictionary and systematically replace the characters.
A:
I'm not sure what the inner dicts, all with keys '1', '2', etc, signify -- are they basically just stand-ins for lists presenting possible typos? But then some (but not all) would also include the "right" character... a non-typo...?! Sorry, but you're really being extremely confusing in this presentation -- the example doesn't help much (why is there never a "w" in the second position, which is supposed to be a possible typo there if I understand your weird data structure...? etc, etc).
So, while awaiting clarification, let me assume that all you want is to represent for each input character all possible single-character typos for it -- lists would be fine but strings are more compact in this case, and essentially equivalent:
possible_typos = {
'1': '2q',
'2': '1qw3',
'3': '2we4',
'4': '3er5',
'5': '4rt6',
}
now if you only care about cases with exactly 1 mis-spelling:
def one_typo(word):
L = list(word)
for i, c in enumerate(L):
for x in possible_typos[c]:
L[i] = x
yield ''.join(L)
L[i] = c
so for example, for w in one_typo("12345"): print w emits:
22345
q2345
11345
1q345
1w345
13345
12245
12w45
12e45
12445
12335
123e5
123r5
12355
12344
1234r
1234t
12346
"Any number of typos" would produce an enormous list -- is that what you want? Or "0 to 2 typos"? Or what else exactly...?
|
Using Embedded Dictionary for iterative character replacement
|
I'm trying to understand an iterative function that takes a string "12345" and returns all the possible misspellings based upon a dictionary of keys close to each character in the string.
outerDic = {}
Dict1 = {'1':'2','2':'q'}
outerDic['1'] = Dict1
Dict1 = {'1':'1','2':'q','3':'w','4':'3'}
outerDic['2'] = Dict1
Dict1 = {'1':'2','2':'w','3':'e','4':'4'}
outerDic['3'] = Dict1
Dict1 = {'1':'3','2':'e','3':'r','4':'5' }
outerDic['4'] = Dict1
Dict1 = {'1':'4','2':'r','3':'t','4':'6' }
outerDic['5'] = Dict1
outerDic
The output should return a list of strings
12345
22345
q2345
11345
1q345
13345
12245
12e45
12445
and so on...
I've set the function up as follows:
def split_line(text):
words = text.split()
for current_word in words:
getWordsIterations()
I'd like to understand how to set up the getWordsIterations () function to go through the dictionary and systematically replace the characters.
|
[
"I'm not sure what the inner dicts, all with keys '1', '2', etc, signify -- are they basically just stand-ins for lists presenting possible typos? But then some (but not all) would also include the \"right\" character... a non-typo...?! Sorry, but you're really being extremely confusing in this presentation -- the example doesn't help much (why is there never a \"w\" in the second position, which is supposed to be a possible typo there if I understand your weird data structure...? etc, etc).\nSo, while awaiting clarification, let me assume that all you want is to represent for each input character all possible single-character typos for it -- lists would be fine but strings are more compact in this case, and essentially equivalent:\npossible_typos = {\n '1': '2q',\n '2': '1qw3',\n '3': '2we4',\n '4': '3er5',\n '5': '4rt6',\n}\n\nnow if you only care about cases with exactly 1 mis-spelling:\ndef one_typo(word):\n L = list(word)\n for i, c in enumerate(L):\n for x in possible_typos[c]:\n L[i] = x\n yield ''.join(L)\n L[i] = c\n\nso for example, for w in one_typo(\"12345\"): print w emits:\n22345\nq2345\n11345\n1q345\n1w345\n13345\n12245\n12w45\n12e45\n12445\n12335\n123e5\n123r5\n12355\n12344\n1234r\n1234t\n12346\n\n\"Any number of typos\" would produce an enormous list -- is that what you want? Or \"0 to 2 typos\"? Or what else exactly...?\n"
] |
[
1
] |
[] |
[] |
[
"function",
"iteration",
"python"
] |
stackoverflow_0001518358_function_iteration_python.txt
|
Q:
Python - Google App Engine
I'm just starting learning google app engine.
When I enter the following code below, all I got is "Hello world!"
I think the desired result is "Hello, webapp World!"
What am I missing? I even try copying the google framework folder to live in the same folder as my app.
Thanks.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
class MainPage(webapp.RequestHandler):
def get(self):
self.response.headers['Content-Type'] = 'text/plain'
self.response.out.write('Hello, webapp World!')
application = webapp.WSGIApplication(
[('/', MainPage)],
debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
A:
Running exactly this code DOES give me the desired output. I suspect you must be erroneously running some other code, since there's absolutely no way this code as given could possibly emit what you observe.
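One common cause, offered here only as a guess (it is not part of the answer above): the app.yaml left over from the tutorial still points at the tutorial's helloworld.py, so the old handler keeps running. The script listed under handlers has to be the file that contains the code above; the application name and script name below are placeholders:
application: yourappname
version: 1
runtime: python
api_version: 1
handlers:
- url: /.*
  script: main.py    # must be the file containing the MainPage handler above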
|
Python - Google App Engine
|
I'm just starting learning google app engine.
When I enter the following code below, all I got is "Hello world!"
I think the desired result is "Hello, webapp World!"
What am I missing? I even try copying the google framework folder to live in the same folder as my app.
Thanks.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
class MainPage(webapp.RequestHandler):
def get(self):
self.response.headers['Content-Type'] = 'text/plain'
self.response.out.write('Hello, webapp World!')
application = webapp.WSGIApplication(
[('/', MainPage)],
debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
|
[
"Running exactly this code DOES give me the desired output. I suspect you must be erroneously running some other code, since there's absolutely no way this code as given could possibly emit what you observe.\n"
] |
[
5
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001518383_python.txt
|
Q:
Declaring Unknown Type Variable in Python?
I have a situation in Python (cough, homework) where I need to multiply EACH ELEMENT in a given list of objects a specified number of times and return the output of the elements. The problem is that the sample inputs given are of different types. For example, one case may input a list of strings whose elements I need to multiply while the others may be ints. So my return type needs to vary. I would like to do this without having to test what every type of object is. Is there a way to do this? I know in C# I could just use "var" but I don't know if such a thing exists in Python?
I realize that variables don't have to be declared, but in this case I can't see any way around it. Here's the function I made:
def multiplyItemsByFour(argsList):
output = ????
for arg in argsList:
output += arg * 4
return output
See how I need to add to the output variable. If I just try to take away the output assignment on the first line, I get an error that the variable was not defined. But if I assign it a 0 or a "" for an empty string, an exception could be thrown since you can't add 3 to a string or "a" to an integer, etc...
Here are some sample inputs and outputs:
Input: ('a','b') Output: 'aaaabbbb'
Input: (2,3,4) Output: 36
Thanks!
A:
def fivetimes(anylist):
return anylist * 5
As you see, if you're given a list argument, there's no need for any assignment whatsoever in order to "multiply it a given number of times and return the output". You talk about a given list; how is it given to you, if not (the most natural way) as an argument to your function? Not that it matters much -- if it's a global variable, a property of the object that's your argument, and so forth, this still doesn't necessitate any assignment.
If you were "homeworkically" forbidden from using the * operator of lists, and just required to implement it yourself, this would require assignment, but no declaration:
def multiply_the_hard_way(inputlist, multiplier):
outputlist = []
for i in range(multiplier):
outputlist.extend(inputlist)
return outputlist
You can simply make the empty list "magicaly appear": there's no need to "declare" it as being anything whatsoever, it's an empty list and the Python compiler knows it as well as you or any reader of your code does. Binding it to the name outputlist doesn't require you to perform any special ritual either, just the binding (aka assignment) itself: names don't have types, only objects have types... that's Python!-)
Edit: OP now says output must not be a list, but rather int, float, or maybe string, and he is given no indication of what. I've asked for clarification -- multiplying a list ALWAYS returns a list, so clearly he must mean something different from what he originally said, that he had to multiply a list. Meanwhile, here's another attempt at mind-reading. Perhaps he must return a list where EACH ITEM of the input list is multiplied by the same factor (whether that item is an int, float, string, list, ...). Well then:
def multiply_each_item(somelist, multiplier):
return [item * multiplier for item in somelist]
Look ma, no hands^H^H^H^H^H assignment. (This is known as a "list comprehension", btw).
Or maybe (unlikely, but my mind-reading hat may be suffering interference from my tinfoil hat, will need to go to the mad hatter's shop to have them tuned) he needs to (say) multiply each list item as if they were the same type as the first item, but return them as their original type, so that for example
>>> mystic(['zap', 1, 23, 'goo'], 2)
['zapzap', 11, 2323, 'googoo']
>>> mystic([23, '12', 15, 2.5], 2)
[46, '24', 30, 4.0]
Even this highly-mystical spec COULD be accomodated...:
>>> def mystic(alist, mul):
... multyp = type(alist[0])
... return [type(x)(mul*multyp(x)) for x in alist]
...
...though I very much doubt it's the spec actually encoded in the mysterious runes of that homework assignment. Just about ANY precise spec can be either implemented or proven to be likely impossible as stated (by requiring you to solve the Halting Problem or demanding that P==NP, say;-). That may take some work ("prove the 4-color theorem", for example;-)... but still less than it takes to magically divine what the actual spec IS, from a collection of mutually contradictory observations, no examples, etc. Though in our daily work as software developer (ah for the good old times when all we had to face was homework!-) we DO meet a lot of such cases of course (and have to solve them to earn our daily bread;-).
EditEdit: finally seeing a precise spec I point out I already implemented that one, anyway, here it goes again:
def multiplyItemsByFour(argsList):
return [item * 4 for item in argsList]
EditEditEdit: finally/finally seeing a MORE precise spec, with (luxury!-) examples:
Input: ('a','b') Output: 'aaaabbbb' Input: (2,3,4) Output: 36
So then what's wanted is the summation (and you can't use sum as it wouldn't work on strings) of the items in the input list, each multiplied by four. My preferred solution:
def theFinalAndTrulyRealProblemAsPosed(argsList):
items = iter(argsList)
output = next(items, []) * 4
for item in items:
output += item * 4
return output
If you're forbidden from using some of these constructs, such as the built-ins next and iter, there are many other possibilities (slightly inferior ones) such as:
def theFinalAndTrulyRealProblemAsPosed(argsList):
if not argsList: return None
output = argsList[0] * 4
for item in argsList[1:]:
output += item * 4
return output
For an empty argsList, the first version returns [], the second one returns None -- not sure what you're supposed to do in that corner case anyway.
A:
My guess is that the purpose of your homework is to expose you to "duck typing". The basic idea is that you don't worry about the types too much, you just worry about whether the behaviors work correctly. A classic example:
def add_two(a, b):
return a + b
print add_two(1, 2) # prints 3
print add_two("foo", "bar") # prints "foobar"
print add_two([0, 1, 2], [3, 4, 5]) # prints [0, 1, 2, 3, 4, 5]
Notice that when you def a function in Python, you don't declare a return type anywhere. It is perfectly okay for the same function to return different types based on its arguments. It's considered a virtue, even; consider that in Python we only need one definition of add_two() and we can add integers, add floats, concatenate strings, and join lists with it. Statically typed languages would require multiple implementations, unless they had an escape such as variant, but Python is dynamically typed. (Python is strongly typed, but dynamically typed. Some will tell you Python is weakly typed, but it isn't. In a weakly typed language such as JavaScript, the expression 1 + "1" will give you a result of 2; in Python this expression just raises a TypeError exception.)
It is considered very poor style to try to test the arguments to figure out their types, and then do things based on the types. If you need to make your code robust, you can always use a try block:
def safe_add_two(a, b):
try:
return a + b
except TypeError:
return None
See also the Wikipedia page on duck typing.
A:
Are you sure this is for Python beginners? To me, the cleanest way to do this is with reduce() and lambda, both of which are not typical beginner tools, and sometimes discouraged even for experienced Python programmers:
def multiplyItemsByFour(argsList):
if not argsList:
return None
newItems = [item * 4 for item in argsList]
return reduce(lambda x, y: x + y, newItems)
Like Alex Martelli, I've thrown in a quick test for an empty list at the beginning which returns None. Note that if you are using Python 3, you must import functools to use reduce().
Essentially, the reduce(lambda...) solution is very similar to the other suggestions to set up an accumulator using the first input item, and then processing the rest of the input items; but is simply more concise.
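If the lambda feels noisy, operator.add from the standard library does the same job; a sketch of the same approach (same empty-list check assumed):
import operator

def multiplyItemsByFour(argsList):
    if not argsList:
        return None
    return reduce(operator.add, [item * 4 for item in argsList])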
A:
Very easy in Python. You need to get the type of the data in your list - use the type() function on the first item - type(argsList[0]). Then to initialize output (where you now have ????) you need the 'zero' or null value for that type. So just as int() or float() or str() returns the zero or null for its type, so too will type(argsList[0])() return the zero or null value for whatever type you have in your list.
So, here is your function with one minor modification:
def multiplyItemsByFour(argsList):
output = type(argsList[0])()
for arg in argsList:
output += arg * 4
return output
Works with:
argsList = [1, 2, 3, 4] or [1.0, 2.0, 3.0, 4.0] or "abcdef" ... etc,
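For instance, a quick interactive check of that version:
>>> multiplyItemsByFour([1, 2, 3, 4])
40
>>> multiplyItemsByFour("abcd")
'aaaabbbbccccdddd'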
A:
You don't need to declare variable types in python; a variable has the type of whatever's assigned to it.
EDIT:
To solve the re-stated problem, try this:
def multiplyItemsByFour(argsList):
output = argsList.pop(0) * 4
for arg in argsList:
output += arg * 4
return output
(This is probably not the most pythonic way of doing this, but it should at least start off your output variable as the right type, assuming the whole list is of the same type)
A:
Python is dynamically typed, you don't need to declare the type of a variable, because a variable doesn't have a type, only values do. (Any variable can store any value, a value never changes its type during its lifetime.)
def do_something(x):
return x * 5
This will work for any x you pass to it, the actual result depending on what type the value in x has. If x contains a number it will just do regular multiplication, if it contains a string the string will be repeated five times in a row, for lists and such it will repeat the list five times, and so on. For custom types (classes) it depends on whether the class has an operation defined for the multiplication operator.
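For example, a made-up class that defines __mul__, so do_something() above works on it as well (just a sketch):
class Repeater(object):               # invented class, purely for illustration
    def __init__(self, text):
        self.text = text
    def __mul__(self, n):
        return Repeater(self.text * n)

r = Repeater("ab") * 5
print r.text                          # prints ababababab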
A:
You gave these sample inputs and outputs:
Input: ('a','b') Output: 'aaaabbbb' Input: (2,3,4) Output: 36
I don't want to write the solution to your homework for you, but I do want to steer you in the correct direction. But I'm still not sure I understand what your problem is, because the problem as I understand it seems a bit difficult for an intro to Python class.
The most straightforward way to solve this requires that the arguments be passed in a list. Then, you can look at the first item in the list, and work from that. Here is a function that requires the caller to pass in a list of two items:
def handle_list_of_len_2(lst):
return lst[0] * 4 + lst[1] * 4
Now, how can we make this extend past two items? Well, in your sample code you weren't sure what to assign to your variable output. How about assigning lst[0]? Then it always has the correct type. Then you could loop over all the other elements in lst and accumulate to your output variable using += as you wrote. If you don't know how to loop over a list of items but skip the first thing in the list, Google search for "python list slice".
Now, how can we make this not require the user to pack up everything into a list, but just call the function? What we really want is some way to accept whatever arguments the user wants to pass to the function, and make a list out of them. Perhaps there is special syntax for declaring a function where you tell Python you just want the arguments bundled up into a list. You might check a good tutorial and see what it says about how to define a function.
Now that we have covered (very generally) how to accumulate an answer using +=, let's consider other ways to accumulate an answer. If you know how to use a list comprehension, you could use one of those to return a new list based on the argument list, with the multiply performed on each argument; you could then somehow reduce the list down to a single item and return it. Python 2.3 and newer have a built-in function called sum() and you might want to read up on that. [EDIT: Oh drat, sum() only works on numbers. See note added at end.]
I hope this helps. If you are still very confused, I suggest you contact your teacher and ask for clarification. Good luck.
P.S. Python 2.x have a built-in function called reduce() and it is possible to implement sum() using reduce(). However, the creator of Python thinks it is better to just use sum() and in fact he removed reduce() from Python 3.0 (well, he moved it into a module called functools).
P.P.S. If you get the list comprehension working, here's one more thing to think about. If you use a list comprehension and then pass the result to sum(), you build a list to be used once and then discarded. Wouldn't it be neat if we could get the result, but instead of building the whole list and then discarding it we could just have the sum() function consume the list items as fast as they are generated? You might want to read this: Generator Expressions vs. List Comprehension
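To make that concrete, a small numeric sketch of the two forms:
lst = [2, 3, 4]
print sum([item * 4 for item in lst])   # list comprehension: builds the whole list first
print sum(item * 4 for item in lst)     # generator expression: items are consumed as produced
Both lines print 36; only the second avoids materializing the intermediate list.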
EDIT: Oh drat, I assumed that Python's sum() builtin would use duck typing. Actually it is documented to work on numbers, only. I'm disappointed! I'll have to search and see if there were any discussions about that, and see why they did it the way they did; they probably had good reasons. Meanwhile, you might as well use your += solution. Sorry about that.
EDIT: Okay, reading through other answers, I now notice two ways suggested for peeling off the first element in the list.
For simplicity, because you seem like a Python beginner, I suggested simply using output = lst[0] and then using list slicing to skip past the first item in the list. However, Wooble in his answer suggested using output = lst.pop(0) which is a very clean solution: it gets the zeroth thing on the list, and then you can just loop over the list and you automatically skip the zeroth thing. However, this "mutates" the list! It's better if a function like this does not have "side effects" such as modifying the list passed to it. (Unless the list is a special list made just for that function call, such as a *args list.) Another way would be to use the "list slice" trick to make a copy of the list that has the first item removed. Alex Martelli provided an example of how to make an "iterator" using a Python feature called iter(), and then using iterator to get the "next" thing. Since the iterator hasn't been used yet, the next thing is the zeroth thing in the list. That's not really a beginner solution but it is the most elegant way to do this in Python; you could pass a really huge list to the function, and Alex Martelli's solution will neither mutate the list nor waste memory by making a copy of the list.
A:
No need to test the objects, just multiply away!
'this is a string' * 6
14 * 6
[1,2,3] * 6
all just work
A:
Try this:
def times_four(item):
    return item * 4

def timesfourlist(lst):
    nextstep = map(times_four, lst)
    return sum(nextstep)
map performs the function passed in on each element of the list (returning a new list) and then sum does the += on the list. (Note that sum() only accepts numbers, so this works for the numeric example but not for the string one.)
A:
If you just want to fill in the blank in your code, you could try setting output = argsList[0].__class__() to give it the zero equivalent value of that class.
>>> def multiplyItemsByFour(argsList):
output = argsList[0].__class__()
for arg in argsList:
output += arg * 4
return output
>>> multiplyItemsByFour('ab')
'aaaabbbb'
>>> multiplyItemsByFour((2,3,4))
36
>>> multiplyItemsByFour((2.0,3.3))
21.199999999999999
This will crash if the list is empty, but you can check for that case at the beginning of the function and return whatever you feel appropriate.
A:
Thanks to Alex Martelli, you have the best possible solution:
def theFinalAndTrulyRealProblemAsPosed(argsList):
items = iter(argsList)
output = next(items, []) * 4
for item in items:
output += item * 4
return output
This is beautiful and elegant. First we create an iterator with iter(), then we use next() to get the first object in the list. Then we accumulate as we iterate through the rest of the list, and we are done. We never need to know the type of the objects in argsList, and indeed they can be of different types as long as all the types can have operator + applied with them. This is duck typing.
For a moment there last night I was confused and thought that you wanted a function that, instead of taking an explicit list, just took one or more arguments.
def four_x_args(*args):
return theFinalAndTrulyRealProblemAsPosed(args)
The *args argument to the function tells Python to gather up all arguments to this function and make a tuple out of them; then the tuple is bound to the name args. You can easily make a list out of it, and then you could use the .pop(0) method to get the first item from the list. This costs the memory and time to build the list, which is why the iter() solution is so elegant.
def four_x_args(*args):
argsList = list(args) # convert from tuple to list
output = argsList.pop(0) * 4
for arg in argsList:
output += arg * 4
return output
This is just Wooble's solution, rewritten to use *args.
Examples of calling it:
print four_x_args(1) # prints 4
print four_x_args(1, 2) # prints 12
print four_x_args('a') # prints 'aaaa'
print four_x_args('ab', 'c') # prints 'ababababcccc'
Finally, I'm going to be malicious and complain about the solution you accepted. That solution depends on the object's base class having a sensible null or zero, but not all classes have this. int() returns 0, and str() returns '' (null string), so they work. But how about this:
class NaturalNumber(int):
"""
Exactly like an int, but only values >= 1 are possible.
"""
def __new__(cls, initial_value=1):
try:
n = int(initial_value)
if n < 1:
raise ValueError
except ValueError:
raise ValueError, "NaturalNumber() initial value must be an int() >= 1"
return super(NaturalNumber, cls).__new__ (cls, n)
argList = [NaturalNumber(n) for n in xrange(1, 4)]
print theFinalAndTrulyRealProblemAsPosed(argList) # prints correct answer: 24
print NaturalNumber() # prints 1
print type(argList[0])() # prints 1, same as previous line
print multiplyItemsByFour(argList) # prints 25!
Good luck in your studies, and I hope you enjoy Python as much as I do.
|
Declaring Unknown Type Variable in Python?
|
I have a situation in Python (cough, homework) where I need to multiply EACH ELEMENT in a given list of objects a specified number of times and return the output of the elements. The problem is that the sample inputs given are of different types. For example, one case may input a list of strings whose elements I need to multiply while the others may be ints. So my return type needs to vary. I would like to do this without having to test what every type of object is. Is there a way to do this? I know in C# I could just use "var", but I don't know whether such a thing exists in Python.
I realize that variables don't have to be declared, but in this case I can't see any way around it. Here's the function I made:
def multiplyItemsByFour(argsList):
output = ????
for arg in argsList:
output += arg * 4
return output
See how I need to add to the output variable. If I just try to take away the output assignment on the first line, I get an error that the variable was not defined. But if I assign it a 0 or a "" for an empty string, an exception could be thrown since you can't add 3 to a string or "a" to an integer, etc...
Here are some sample inputs and outputs:
Input: ('a','b') Output: 'aaaabbbb'
Input: (2,3,4) Output: 36
Thanks!
|
[
"def fivetimes(anylist):\n return anylist * 5\n\nAs you see, if you're given a list argument, there's no need for any assignment whatsoever in order to \"multiply it a given number of times and return the output\". You talk about a given list; how is it given to you, if not (the most natural way) as an argument to your function? Not that it matters much -- if it's a global variable, a property of the object that's your argument, and so forth, this still doesn't necessitate any assignment.\nIf you were \"homeworkically\" forbidden from using the * operator of lists, and just required to implement it yourself, this would require assignment, but no declaration:\ndef multiply_the_hard_way(inputlist, multiplier):\n outputlist = []\n for i in range(multiplier):\n outputlist.extend(inputlist)\n return outputlist\n\nYou can simply make the empty list \"magicaly appear\": there's no need to \"declare\" it as being anything whatsoever, it's an empty list and the Python compiler knows it as well as you or any reader of your code does. Binding it to the name outputlist doesn't require you to perform any special ritual either, just the binding (aka assignment) itself: names don't have types, only objects have types... that's Python!-)\nEdit: OP now says output must not be a list, but rather int, float, or maybe string, and he is given no indication of what. I've asked for clarification -- multiplying a list ALWAYS returns a list, so clearly he must mean something different from what he originally said, that he had to multiply a list. Meanwhile, here's another attempt at mind-reading. Perhaps he must return a list where EACH ITEM of the input list is multiplied by the same factor (whether that item is an int, float, string, list, ...). Well then:\ndefine multiply_each_item(somelist, multiplier):\n return [item * multiplier for item in somelist]\n\nLook ma, no hands^H^H^H^H^H assignment. (This is known as a \"list comprehension\", btw).\nOr maybe (unlikely, but my mind-reading hat may be suffering interference from my tinfoil hat, will need to go to the mad hatter's shop to have them tuned) he needs to (say) multiply each list item as if they were the same type as the first item, but return them as their original type, so that for example\n>>> mystic(['zap', 1, 23, 'goo'], 2)\n['zapzap', 11, 2323, 'googoo']\n>>> mystic([23, '12', 15, 2.5], 2)\n[46, '24', 30, 4.0]\n\nEven this highly-mystical spec COULD be accomodated...:\n>>> def mystic(alist, mul):\n... multyp = type(alist[0])\n... return [type(x)(mul*multyp(x)) for x in alist]\n... \n\n...though I very much doubt it's the spec actually encoded in the mysterious runes of that homework assignment. Just about ANY precise spec can be either implemented or proven to be likely impossible as stated (by requiring you to solve the Halting Problem or demanding that P==NP, say;-). That may take some work (\"prove the 4-color theorem\", for example;-)... but still less than it takes to magically divine what the actual spec IS, from a collection of mutually contradictory observations, no examples, etc. 
Though in our daily work as software developer (ah for the good old times when all we had to face was homework!-) we DO meet a lot of such cases of course (and have to solve them to earn our daily bread;-).\nEditEdit: finally seeing a precise spec I point out I already implemented that one, anyway, here it goes again:\ndef multiplyItemsByFour(argsList):\n return [item * 4 for item in argsList]\n\nEditEditEdit: finally/finally seeing a MORE precise spec, with (luxury!-) examples:\nInput: ('a','b') Output: 'aaaabbbb' Input: (2,3,4) Output: 36\n\nSo then what's wanted it the summation (and you can't use sum as it wouldn't work on strings) of the items in the input list, each multiplied by four. My preferred solution:\ndef theFinalAndTrulyRealProblemAsPosed(argsList):\n items = iter(argsList)\n output = next(items, []) * 4\n for item in items:\n output += item * 4\n return output\n\nIf you're forbidden from using some of these constructs, such as built-ins items and iter, there are many other possibilities (slightly inferior ones) such as:\ndef theFinalAndTrulyRealProblemAsPosed(argsList):\n if not argsList: return None\n output = argsList[0] * 4\n for item in argsList[1:]:\n output += item * 4\n return output\n\nFor an empty argsList, the first version returns [], the second one returns None -- not sure what you're supposed to do in that corner case anyway.\n",
"My guess is that the purpose of your homework is to expose you to \"duck typing\". The basic idea is that you don't worry about the types too much, you just worry about whether the behaviors work correctly. A classic example:\ndef add_two(a, b):\n return a + b\n\nprint add_two(1, 2) # prints 3\n\nprint add_two(\"foo\", \"bar\") # prints \"foobar\"\n\nprint add_two([0, 1, 2], [3, 4, 5]) # prints [0, 1, 2, 3, 4, 5]\n\nNotice that when you def a function in Python, you don't declare a return type anywhere. It is perfectly okay for the same function to return different types based on its arguments. It's considered a virtue, even; consider that in Python we only need one definition of add_two() and we can add integers, add floats, concatenate strings, and join lists with it. Statically typed languages would require multiple implementations, unless they had an escape such as variant, but Python is dynamically typed. (Python is strongly typed, but dynamically typed. Some will tell you Python is weakly typed, but it isn't. In a weakly typed language such as JavaScript, the expression 1 + \"1\" will give you a result of 2; in Python this expression just raises a TypeError exception.)\nIt is considered very poor style to try to test the arguments to figure out their types, and then do things based on the types. If you need to make your code robust, you can always use a try block:\ndef safe_add_two(a, b):\n try:\n return a + b\n except TypeError:\n return None\n\nSee also the Wikipedia page on duck typing.\n",
"Are you sure this is for Python beginners? To me, the cleanest way to do this is with reduce() and lambda, both of which are not typical beginner tools, and sometimes discouraged even for experienced Python programmers:\ndef multiplyItemsByFour(argsList):\n if not argsList:\n return None\n newItems = [item * 4 for item in argsList]\n return reduce(lambda x, y: x + y, newItems)\n\nLike Alex Martelli, I've thrown in a quick test for an empty list at the beginning which returns None. Note that if you are using Python 3, you must import functools to use reduce().\nEssentially, the reduce(lambda...) solution is very similar to the other suggestions to set up an accumulator using the first input item, and then processing the rest of the input items; but is simply more concise.\n",
"Very easy in Python. You need to get the type of the data in your list - use the type() function on the first item - type(argsList[0]). Then to initialize output (where you now have ????) you need the 'zero' or nul value for that type. So just as int() or float() or str() returns the zero or nul for their type so to will type(argsList[0])() return the zero or nul value for whatever type you have in your list.\nSo, here is your function with one minor modification:\ndef multiplyItemsByFour(argsList):\n output = type(argsList[0])()\n for arg in argsList:\n output += arg * 4\n return output\n\nWorks with::\nargsList = [1, 2, 3, 4] or [1.0, 2.0, 3.0, 4.0] or \"abcdef\" ... etc,\n",
"You don't need to declare variable types in python; a variable has the type of whatever's assigned to it. \nEDIT:\nTo solve the re-stated problem, try this:\ndef multiplyItemsByFour(argsList):\n\noutput = argsList.pop(0) * 4\n\nfor arg in argsList:\n output += arg * 4\n\nreturn output\n\n(This is probably not the most pythonic way of doing this, but it should at least start off your output variable as the right type, assuming the whole list is of the same type)\n",
"Python is dynamically typed, you don't need to declare the type of a variable, because a variable doesn't have a type, only values do. (Any variable can store any value, a value never changes its type during its lifetime.)\ndef do_something(x):\n return x * 5\n\nThis will work for any x you pass to it, the actual result depending on what type the value in x has. If x contains a number it will just do regular multiplication, if it contains a string the string will be repeated five times in a row, for lists and such it will repeat the list five times, and so on. For custom types (classes) it depends on whether the class has an operation defined for the multiplication operator.\n",
"You gave these sample inputs and outputs:\nInput: ('a','b') Output: 'aaaabbbb' Input: (2,3,4) Output: 36\nI don't want to write the solution to your homework for you, but I do want to steer you in the correct direction. But I'm still not sure I understand what your problem is, because the problem as I understand it seems a bit difficult for an intro to Python class.\nThe most straightforward way to solve this requires that the arguments be passed in a list. Then, you can look at the first item in the list, and work from that. Here is a function that requires the caller to pass in a list of two items:\ndef handle_list_of_len_2(lst):\n return lst[0] * 4 + lst[1] * 4\n\nNow, how can we make this extend past two items? Well, in your sample code you weren't sure what to assign to your variable output. How about assigning lst[0]? Then it always has the correct type. Then you could loop over all the other elements in lst and accumulate to your output variable using += as you wrote. If you don't know how to loop over a list of items but skip the first thing in the list, Google search for \"python list slice\".\nNow, how can we make this not require the user to pack up everything into a list, but just call the function? What we really want is some way to accept whatever arguments the user wants to pass to the function, and make a list out of them. Perhaps there is special syntax for declaring a function where you tell Python you just want the arguments bundled up into a list. You might check a good tutorial and see what it says about how to define a function.\nNow that we have covered (very generally) how to accumulate an answer using +=, let's consider other ways to accumulate an answer. If you know how to use a list comprehension, you could use one of those to return a new list based on the argument list, with the multiply performed on each argument; you could then somehow reduce the list down to a single item and return it. Python 2.3 and newer have a built-in function called sum() and you might want to read up on that. [EDIT: Oh drat, sum() only works on numbers. See note added at end.]\nI hope this helps. If you are still very confused, I suggest you contact your teacher and ask for clarification. Good luck.\nP.S. Python 2.x have a built-in function called reduce() and it is possible to implement sum() using reduce(). However, the creator of Python thinks it is better to just use sum() and in fact he removed reduce() from Python 3.0 (well, he moved it into a module called functools).\nP.P.S. If you get the list comprehension working, here's one more thing to think about. If you use a list comprehension and then pass the result to sum(), you build a list to be used once and then discarded. Wouldn't it be neat if we could get the result, but instead of building the whole list and then discarding it we could just have the sum() function consume the list items as fast as they are generated? You might want to read this: Generator Expressions vs. List Comprehension\nEDIT: Oh drat, I assumed that Python's sum() builtin would use duck typing. Actually it is documented to work on numbers, only. I'm disappointed! I'll have to search and see if there were any discussions about that, and see why they did it the way they did; they probably had good reasons. Meanwhile, you might as well use your += solution. 
Sorry about that.\nEDIT: Okay, reading through other answers, I now notice two ways suggested for peeling off the first element in the list.\nFor simplicity, because you seem like a Python beginner, I suggested simply using output = lst[0] and then using list slicing to skip past the first item in the list. However, Wooble in his answer suggested using output = lst.pop(0) which is a very clean solution: it gets the zeroth thing on the list, and then you can just loop over the list and you automatically skip the zeroth thing. However, this \"mutates\" the list! It's better if a function like this does not have \"side effects\" such as modifying the list passed to it. (Unless the list is a special list made just for that function call, such as a *args list.) Another way would be to use the \"list slice\" trick to make a copy of the list that has the first item removed. Alex Martelli provided an example of how to make an \"iterator\" using a Python feature called iter(), and then using iterator to get the \"next\" thing. Since the iterator hasn't been used yet, the next thing is the zeroth thing in the list. That's not really a beginner solution but it is the most elegant way to do this in Python; you could pass a really huge list to the function, and Alex Martelli's solution will neither mutate the list nor waste memory by making a copy of the list.\n",
"No need to test the objects, just multiply away!\n'this is a string' * 6\n14 * 6\n[1,2,3] * 6\n\nall just work\n",
"Try this:\ndef timesfourlist(list):\n nextstep = map(times_four, list)\n sum(nextstep)\n\nmap performs the function passed in on each element of the list(returning a new list) and then sum does the += on the list.\n",
"If you just want to fill in the blank in your code, you could try setting object=arglist[0].__class__() to give it the zero equivalent value of that class.\n>>> def multiplyItemsByFour(argsList):\n output = argsList[0].__class__()\n for arg in argsList:\n output += arg * 4\n return output\n\n>>> multiplyItemsByFour('ab')\n'aaaabbbb'\n>>> multiplyItemsByFour((2,3,4))\n36\n>>> multiplyItemsByFour((2.0,3.3))\n21.199999999999999\n\nThis will crash if the list is empty, but you can check for that case at the beginning of the function and return whatever you feel appropriate.\n",
"Thanks to Alex Martelli, you have the best possible solution:\ndef theFinalAndTrulyRealProblemAsPosed(argsList):\n items = iter(argsList)\n output = next(items, []) * 4\n for item in items:\n output += item * 4\n return output\n\nThis is beautiful and elegant. First we create an iterator with iter(), then we use next() to get the first object in the list. Then we accumulate as we iterate through the rest of the list, and we are done. We never need to know the type of the objects in argsList, and indeed they can be of different types as long as all the types can have operator + applied with them. This is duck typing.\nFor a moment there last night I was confused and thought that you wanted a function that, instead of taking an explicit list, just took one or more arguments.\ndef four_x_args(*args):\n return theFinalAndTrulyRealProblemAsPosed(args)\n\nThe *args argument to the function tells Python to gather up all arguments to this function and make a tuple out of them; then the tuple is bound to the name args. You can easily make a list out of it, and then you could use the .pop(0) method to get the first item from the list. This costs the memory and time to build the list, which is why the iter() solution is so elegant.\ndef four_x_args(*args):\n argsList = list(args) # convert from tuple to list\n output = argsList.pop(0) * 4\n for arg in argsList:\n output += arg * 4\n return output\n\nThis is just Wooble's solution, rewritten to use *args.\nExamples of calling it:\nprint four_x_args(1) # prints 4\nprint four_x_args(1, 2) # prints 12\nprint four_x_args('a') # prints 'aaaa'\nprint four_x_args('ab', 'c') # prints 'ababababcccc'\n\nFinally, I'm going to be malicious and complain about the solution you accepted. That solution depends on the object's base class having a sensible null or zero, but not all classes have this. int() returns 0, and str() returns '' (null string), so they work. But how about this:\nclass NaturalNumber(int):\n \"\"\"\n Exactly like an int, but only values >= 1 are possible.\n \"\"\"\n def __new__(cls, initial_value=1): \n try:\n n = int(initial_value)\n if n < 1:\n raise ValueError\n except ValueError:\n raise ValueError, \"NaturalNumber() initial value must be an int() >= 1\"\n return super(NaturalNumber, cls).__new__ (cls, n)\n\n\nargList = [NaturalNumber(n) for n in xrange(1, 4)]\n\nprint theFinalAndTrulyRealProblemAsPosed(argList) # prints correct answer: 24\n\nprint NaturalNumber() # prints 1\nprint type(argList[0])() # prints 1, same as previous line\n\nprint multiplyItemsByFour(argList) # prints 25!\n\nGood luck in your studies, and I hope you enjoy Python as much as I do.\n"
] |
[
7,
2,
2,
2,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"python",
"types"
] |
stackoverflow_0001515412_python_types.txt
|
Q:
How to parse Youtube search results?
When a user submits a query, I would like to append it to a YouTube search URL and download the resulting "page" on the backend of my code (Python-Django).
What is the best way to do this?
Do you suggest that I do this manually? And create a custom parser? Scanning each line for a certain pattern...
Or is there an easier way, such as using their API?
By the way, I am familiar with crawling. I just want to know the method for YOUTUBE. I understand how to download/parse pages. (I will be using urllib2.)
A:
The preferred way is probably with the YouTube API
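Since you mention urllib2, here is a rough sketch of querying YouTube's GData search feed directly; the URL format and JSON field names are assumptions you should verify against the API documentation:
import json      # Python 2.6+; use simplejson on older versions
import urllib
import urllib2

def youtube_search(query):
    params = urllib.urlencode({"q": query, "alt": "json", "max-results": 10})
    url = "http://gdata.youtube.com/feeds/api/videos?" + params
    data = json.load(urllib2.urlopen(url))
    # pull out just the video titles from each feed entry
    return [entry["title"]["$t"] for entry in data["feed"]["entry"]]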
|
How to parse Youtube search results?
|
When a user submits a query, I would like to append it to a YouTube search URL and download the resulting "page" on the backend of my code (Python-Django).
What is the best way to do this?
Do you suggest that I do this manually? And create a custom parser? Scanning each line for a certain pattern...
Or is there an easier way, such as using their API?
By the way, I am familiar with crawling. I just want to know the method for YOUTUBE. I understand how to download/parse pages. (I will be using urllib2.)
|
[
"The preferred way is probably with the YouTube API\n"
] |
[
5
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001518481_python.txt
|
Q:
Is it possible to save a list of values into a SQLite column?
I want 3 columns to have 9 different values, like a list in Python.
Is it possible? If not in SQLite, then on another database engine?
A:
You must serialize the list (or other Python object) into a string of bytes, aka "BLOB";-), through your favorite means (marshal is good for lists of elementary values such as numbers or strings &c, cPickle if you want a very general solution, etc), and deserialize it when you fetch it back. Of course, that basically carries the list (or other Python object) as a passive "payload" -- can't meaningfully use it in WHERE clauses, ORDER BY, etc.
Relational databases just don't deal all that well with non-atomic values and would prefer other, normalized alternatives (store the list's items in a different table which includes a "listID" column, put the "listID" in your main table, etc). NON-relational databases, while they typically have limitations wrt relational ones (e.g., no joins), may offer more direct support for your requirement.
Some relational DBs do have non-relational extensions. For example, PostGreSQL supports an array data type (not quite as general as Python's lists -- PgSQL's arrays are intrinsically homogeneous).
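A minimal sketch of that serialize/deserialize round trip with the standard sqlite3 and cPickle modules (the table and column names are invented for the example):
import sqlite3
import cPickle as pickle

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload BLOB)")

values = [3, 6, 9, 2, 4, 8, 1, 5, 7]
blob = sqlite3.Binary(pickle.dumps(values, pickle.HIGHEST_PROTOCOL))
conn.execute("INSERT INTO t (payload) VALUES (?)", (blob,))

row = conn.execute("SELECT payload FROM t").fetchone()
print pickle.loads(str(row[0]))   # [3, 6, 9, 2, 4, 8, 1, 5, 7]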
A:
Generally, you do this by stringifying the list (with repr()), and then saving the string. On reading the string from the database, use eval() to re-create the list. Be careful, though that you are certain no user-generated data can get into the column, or the eval() is a security risk.
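As a sketch of that round trip:
lst = [1, 2, 'three']
s = repr(lst)        # "[1, 2, 'three']" -- this string goes into the column
restored = eval(s)   # back to a list; only safe when the stored text is trusted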
A:
Your question is difficult to understand. Here it is again:
I want 3 columns to have 9 different values, like a list in Python. Is it possible? If not in SQLite, then on another database engine?
Here is what I believe you are asking: is it possible to take a Python list of 9 different values, and save the values under a particular column in a database?
The answer to this question is "yes". I suggest using a Python ORM library instead of trying to write the SQL code yourself. This example code uses Autumn:
import autumn
import autumn.util
from autumn.util import create_table
# get a database connection object
my_test_db = autumn.util.AutoConn("my_test.db")
# code to create the database table
_create_sql = """\
DROP TABLE IF EXISTS mytest;
CREATE TABLE mytest (
id INTEGER PRIMARY KEY AUTOINCREMENT,
value INTEGER NOT NULL,
UNIQUE(value)
);"""
# create the table, dropping any previous table of same name
create_table(my_test_db, _create_sql)
# create ORM class; Autumn introspects the database to find out columns
class MyTest(autumn.model.Model):
db = my_test_db
lst = [3, 6, 9, 2, 4, 8, 1, 5, 7] # list of 9 unique values
for n in lst:
row = MyTest(value=n) # create MyTest() row instance with value initialized
row.save() # write the data to the database
Run this code, then exit Python and run sqlite3 my_test.db. Then run this SQL command inside SQLite: select * from mytest; Here is the result:
1|3
2|6
3|9
4|2
5|4
6|8
7|1
8|5
9|7
This example pulls values from one list, and uses the values to populate one column from the database. It could be trivially extended to add additional columns and populate them as well.
If this is not the answer you are looking for, please rephrase your request to clarify.
P.S. This example uses autumn.util. The setup.py included with the current release of Autumn does not install util.py in the correct place; you will need to finish the setup of Autumn by hand.
You could use a more mature ORM such as SQLAlchemy or the ORM from Django. However, I really do like Autumn, especially for SQLite.
|
Is it possible to save a list of values into a SQLite column?
|
I want 3 columns to have 9 different values, like a list in Python.
Is it possible? If not in SQLite, then on another database engine?
|
[
"You must serialize the list (or other Python object) into a string of bytes, aka \"BLOB\";-), through your favorite means (marshal is good for lists of elementary values such as numbers or strings &c, cPickle if you want a very general solution, etc), and deserialize it when you fetch it back. Of course, that basically carries the list (or other Python object) as a passive \"payload\" -- can't meaningfully use it in WHERE clauses, ORDER BY, etc. \nRelational databases just don't deal all that well with non-atomic values and would prefer other, normalized alternatives (store the list's items in a different table which includes a \"listID\" column, put the \"listID\" in your main table, etc). NON-relational databases, while they typically have limitations wrt relational ones (e.g., no joins), may offer more direct support for your requirement.\nSome relational DBs do have non-relational extensions. For example, PostGreSQL supports an array data type (not quite as general as Python's lists -- PgSQL's arrays are intrinsically homogeneous).\n",
"Generally, you do this by stringifying the list (with repr()), and then saving the string. On reading the string from the database, use eval() to re-create the list. Be careful, though that you are certain no user-generated data can get into the column, or the eval() is a security risk.\n",
"Your question is difficult to understand. Here it is again:\nI want 3 columns to have 9 different values, like a list in Python. Is it possible? If not in SQLite, then on another database engine?\nHere is what I believe you are asking: is it possible to take a Python list of 9 different values, and save the values under a particular column in a database?\nThe answer to this question is \"yes\". I suggest using a Python ORM library instead of trying to write the SQL code yourself. This example code uses Autumn:\nimport autumn\nimport autumn.util\nfrom autumn.util import create_table\n\n# get a database connection object\nmy_test_db = autumn.util.AutoConn(\"my_test.db\")\n\n\n# code to create the database table\n_create_sql = \"\"\"\\\nDROP TABLE IF EXISTS mytest;\nCREATE TABLE mytest (\n id INTEGER PRIMARY KEY AUTOINCREMENT,\n value INTEGER NOT NULL,\n UNIQUE(value)\n);\"\"\"\n\n# create the table, dropping any previous table of same name\ncreate_table(my_test_db, _create_sql)\n\n# create ORM class; Autumn introspects the database to find out columns\nclass MyTest(autumn.model.Model):\n db = my_test_db\n\n\nlst = [3, 6, 9, 2, 4, 8, 1, 5, 7] # list of 9 unique values\n\nfor n in lst:\n row = MyTest(value=n) # create MyTest() row instance with value initialized\n row.save() # write the data to the database\n\nRun this code, then exit Python and run sqlite3 my_test.db. Then run this SQL command inside SQLite: select * from mytest; Here is the result:\n1|3\n2|6\n3|9\n4|2\n5|4\n6|8\n7|1\n8|5\n9|7\n\nThis example pulls values from one list, and uses the values to populate one column from the database. It could be trivially extended to add additional columns and populate them as well.\nIf this is not the answer you are looking for, please rephrase your request to clarify.\nP.S. This example uses autumn.util. The setup.py included with the current release of Autumn does not install util.py in the correct place; you will need to finish the setup of Autumn by hand.\nYou could use a more mature ORM such as SQLAlchemy or the ORM from Django. However, I really do like Autumn, especially for SQLite.\n"
] |
[
18,
13,
4
] |
[] |
[] |
[
"python",
"sqlite"
] |
stackoverflow_0001517771_python_sqlite.txt
|
Q:
How to install numpy and scipy on Windows XP
I have a problem installing Numpy and Scipy from
http://www.scipy.org/Installing_SciPy/Windows
I went to download page and downloaded .exe files for Python26. I have Python26 on my machine. After installation, I tried
>>> import nympy, scipy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named nympy
>>>
How to proceed?
A:
try with numpy instead of nympy
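Once the name is spelled correctly, a quick sanity check that both packages installed properly:
>>> import numpy, scipy
>>> print numpy.__version__   # the exact versions depend on the installers you used
>>> print scipy.__version__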
|
How to install numpy and scipy on Windows XP
|
I have a problem installing Numpy and Scipy from
http://www.scipy.org/Installing_SciPy/Windows
I went to download page and downloaded .exe files for Python26. I have Python26 on my machine. After installation, I tried
>>> import nympy, scipy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named nympy
>>>
How to proceed?
|
[
"try with numpy instead of nympy\n"
] |
[
6
] |
[] |
[] |
[
"numpy",
"python",
"scipy"
] |
stackoverflow_0001518779_numpy_python_scipy.txt
|
Q:
String manipulation in Python docstrings
I've been trying to do the following:
#[...]
def __history_dependent_simulate(self, node, iterations=1,
*args, **kwargs):
"""
For history-dependent simulations only:
""" + self.simulate.__doc___
What I tried to accomplish here is to have the same documentation for this private method as the documentation of the method simulate, except with a short introduction. This would allow me to avoid copy-pasting, keep a shorter file and not have to update the documentation for two functions every time.
But it doesn't work. Does anyone know of a reason why, or whether there is a solution?
A:
A better solution is probably to use a decorator, eg:
def add_docs_for(other_func):
def dec(func):
func.__doc__ = other_func.__doc__ + "\n\n" + func.__doc__
return func
return dec
def foo():
"""documentation for foo"""
pass
@add_docs_for(foo)
def bar():
"""additional notes for bar"""
pass
help(bar) # --> "documentation for foo // additional notes for bar"
That way you can do arbitrary manipulations of docstrings.
A:
I think this section makes it pretty clear:
What is a Docstring?
A docstring is a string literal that
occurs as the first statement in a
module, function, class, or method
definition. Such a docstring becomes
the __doc__ special attribute of that
object.
So, it's not an expression that evaluates into a string, it's a string literal.
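If you still want to avoid the copy-paste, one workaround (a sketch with made-up names) is to build the string after the function objects exist, while the class body is still executing:
class Simulator(object):                      # invented class, for illustration only
    def simulate(self, node):
        """Full documentation for simulate."""

    def _history_dependent_simulate(self, node):
        """For history-dependent simulations only: """
    # __doc__ is an ordinary writable attribute once the function object exists
    _history_dependent_simulate.__doc__ += simulate.__doc__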
|
String manipulation in Python docstrings
|
I've been trying to do the following:
#[...]
def __history_dependent_simulate(self, node, iterations=1,
*args, **kwargs):
"""
For history-dependent simulations only:
""" + self.simulate.__doc___
What I tried to accomplish here is to have the same documentation for this private method as the documentation of the method simulate, except with a short introduction. This would allow me to avoid copy-pasting, keep a shorter file and not have to update the documentation for two functions every time.
But it doesn't work. Does anyone know of a reason why, or whether there is a solution?
|
[
"A better solution is probably to use a decorator, eg:\ndef add_docs_for(other_func): \n def dec(func): \n func.__doc__ = other_func.__doc__ + \"\\n\\n\" + func.__doc__\n return func\n return dec\n\ndef foo():\n \"\"\"documentation for foo\"\"\"\n pass\n\n@add_docs_for(foo)\ndef bar():\n \"\"\"additional notes for bar\"\"\"\n pass\n\nhelp(bar) # --> \"documentation for foo // additional notes for bar\"\n\nThat way you can do arbitrary manipulations of docstrings.\n",
"I think this section makes it pretty clear:\n\nWhat is a Docstring?\nA docstring is a string literal that\n occurs as the first statement in a\n module, function, class, or method\n definition. Such a docstring becomes\n the doc special attribute of that\n object.\n\nSo, it's not an expression that evaluates into a string, it's a string literal.\n"
] |
[
9,
2
] |
[] |
[] |
[
"documentation",
"python",
"string"
] |
stackoverflow_0001519029_documentation_python_string.txt
|
Q:
Combine two lists: aggregate values that have similar keys
I have two lists (or possibly more). Something like this:
listX = [('A', 1, 10), ('B', 2, 20), ('C', 3, 30), ('D', 4, 30)]
listY = [('a', 5, 50), ('b', 4, 40), ('c', 3, 30), ('d', 1, 20),
('A', 6, 60), ('D', 7, 70)]
I want to merge them so that duplicate elements are combined:
the result should contain everything from listX + listY, but where a key appears in both lists its values are added up.
For example,
the elements ('A', 1, 10) and ('D', 4, 30) of listX are also present in listY, so the result should be like this:
result = [('A', 7, 70), ('B', 2, 20), ('C', 3, 30), ('D', 11, 100),
('a', 5, 50), ('b', 4, 40), ('c', 3, 30), ('d', 1, 20)]
('A', 7, 70) is obtained by adding ('A', 1, 10) and ('A', 6, 60) together.
Could anybody help me solve this problem?
Thanks.
A:
This is pretty easy if you use a dictionary.
combined = {}
for item in listX + listY:
key = item[0]
if key in combined:
combined[key][0] += item[1]
combined[key][1] += item[2]
else:
combined[key] = [item[1], item[2]]
result = [(key, value[0], value[1]) for key, value in combined.items()]
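Run against the sample lists from the question, a quick check (dictionaries are unordered, so sort for display):
>>> sorted(result)
[('A', 7, 70), ('B', 2, 20), ('C', 3, 30), ('D', 11, 100), ('a', 5, 50), ('b', 4, 40), ('c', 3, 30), ('d', 1, 20)]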
A:
You appear to be using lists like a dictionary. Any reason you're using lists instead of dictionaries?
My understanding of this garbled question is that you want to add up values in tuples where the first element is the same.
I'd do something like this:
counter = dict(
    (a[0], [a[1], a[2]])
    for a in listX
)
for key, v1, v2 in listY:
    if key not in counter:
        counter[key] = [0, 0]
    counter[key][0] += v1
    counter[key][1] += v2
result = [(key, value[0], value[1]) for key, value in counter.items()]
(Plain lists are used for the running totals so they can be updated in place.)
A:
I'd say use a dictionary:
result = {}
for eachlist in (listX, listY):
for item in eachlist:
if item[0] not in result:
result[item[0]] = item
It's always tricky to do data manipulation if you have data in a structure that doesn't represent the data well. Consider using better data structures.
A:
Use a dictionary and its 'get' method.
d = {}
for x in (listX + listY):
y = d.get(x[0], (0, 0, 0))
d[x[0]] = (x[0], x[1] + y[1], x[2] + y[2])
d.values()
|
Combine two lists: aggregate values that have similar keys
|
I have two lists (or possibly more). Something like this:
listX = [('A', 1, 10), ('B', 2, 20), ('C', 3, 30), ('D', 4, 30)]
listY = [('a', 5, 50), ('b', 4, 40), ('c', 3, 30), ('d', 1, 20),
('A', 6, 60), ('D', 7, 70)]
I want to merge them so that duplicate elements are combined:
the result should contain everything from listX + listY, but where a key appears in both lists its values are added up.
For example,
the elements ('A', 1, 10) and ('D', 4, 30) of listX are also present in listY, so the result should be like this:
result = [('A', 7, 70), ('B', 2, 20), ('C', 3, 30), ('D', 11, 100),
('a', 5, 50), ('b', 4, 40), ('c', 3, 30), ('d', 1, 20)]
('A', 7, 70) is obtained by adding ('A', 1, 10) and ('A', 6, 60) together.
Could anybody help me solve this problem?
Thanks.
|
[
"This is pretty easy if you use a dictionary.\ncombined = {}\nfor item in listX + listY:\n key = item[0] \n if key in combined:\n combined[key][0] += item[1]\n combined[key][1] += item[2]\n else:\n combined[key] = [item[1], item[2]]\n\nresult = [(key, value[0], value[1]) for key, value in combined.items()]\n\n",
"You appear to be using lists like a dictionary. Any reason you're using lists instead of dictionaries?\nMy understanding of this garbled question, is that you want to add up values in tuples where the first element in the same.\nI'd do something like this:\ncounter = dict(\n (a[0], (a[1], a[2]))\n for a in listX\n)\n\nfor key, v1, v2 in listY:\n if key not in counter:\n counter[key] = (0, 0)\n counter[key][0] += v1\n counter[key][1] += v2\n\nresult = [(key, value[0], value[1]) for key, value in counter.items()]\n\n",
"I'd say use a dictionary:\nresult = {}\nfor eachlist in (ListX, ListY,):\n for item in eachlist:\n if item[0] not in result:\n result[item[0]] = item\n\nIt's always tricky do do data manipulation if you have data in a structure that doesn't represent the data well. Consider using better data structures.\n",
"Use dictionary and its 'get' method.\nd = {}\nfor x in (listX + listY):\n y = d.get(x[0], (0, 0, 0))\n d[x[0]] = (x[0], x[1] + y[1], x[2] + y[2])\n\nd.values()\n\n"
] |
[
8,
2,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001518858_python.txt
|
Q:
Learning to write reusable libraries
We need to write simple scripts to manipulate the configuration of our load balancers (i.e., drain nodes from pools, enable or disable traffic rules). The load balancers have a SOAP API (defined through a bunch of WSDL files) which is very comprehensive but using it is quite low-level with a lot of manual error checking and list manipulation. It doesn't tend to produce reusable, robust code.
I'd like to write a Python library to handle the nitty-gritty of interacting with the SOAP interface but I don't really know where to start; all of my coding experience is with writing one-off monolithic programs for specific jobs. This is fine for small jobs but it's not helping me or my coworkers -- we're reinventing the wheel with a different number of spokes each time :~)
The API already provides methods like getPoolNames() and getDrainingNodes() but they're a bit awkward to use. Most take a list of nodes and return another list, so (say) working out which virtual servers are enabled involves this sort of thing:
names = conn.getVirtualServerNames()
enabled = conn.getEnabled(names)
for i in range(0, len(names)):
if (enabled[i]):
print names[i]
conn.setEnabled(['www.example.com'], [0])
Whereas something like this:
lb = LoadBalancer('hostname')
for name in [vs.name for vs in lb.virtualServers() if vs.isEnabled()]:
print name
www = lb.virtualServer('www.example.com').disable()
is more Pythonic and (IMHO) easier.
There are a lot of things I'm not sure about: how to handle errors, how to deal with 20-odd WSDL files (a SOAPpy/suds instance for each?) and how much boilerplate translation from the API methods to my methods I'll need to do.
This is more an example of a wider problem (how to learn to write libraries instead of one-off scripts) so I don't want answers to these specific questions -- they're there to demonstrate my thinking and illustrate my problem. I recognise a code smell in the way I do things at the moment (one-off, non-reusable code) but I don't know how to fix it. How does one get into the mindset for tackling problems at a more abstract level? How do you 'learn' software design?
A:
"I don't really know where to start"
Clearly false. You provided an excellent example. Just do more of that. It's that simple.
"There are a lot of things I'm not sure about: how to handle errors, how to deal with 20-odd WSDL files (a SOAPpy/suds instance for each?) and how much boilerplate translation from the API methods to my methods I'll need to do."
Handle errors by raising an exception. That's enough. Remember, you're still going to have high-level scripts using your API library.
20-odd WSDL files? Just pick something for now. Don't overengineer this. Design the API -- as you did with your example -- for the things you want to do. The WSDL's and the number of instances will become clear as you go. One, Ten, Twenty doesn't really matter to users of your API library. It only matters to you, the maintainer. Focus on the users.
Boilerplate translation? As little as possible. Focus on what parts of these interfaces you use with your actual scripts. Translate just what you need and nothing more.
An API is not fixed, cast in concrete, a thing of beauty and a joy forever. It's just a module (in your case a package might be better) that does some useful stuff.
It will undergo constant change and evolution.
Don't overengineer the first release. Build something useful that works for one use case. Then add use cases to it.
"But what if I realize I did something wrong?" That's inevitable, you'll always reach this point. Don't worry about it now.
The most important thing about writing an API library is writing the unit tests that (a) demonstrate how it works and (b) prove that it actually works.
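For example, a first test against the wrapper sketched in the question might look like this (LoadBalancer and its methods are the hypothetical API you are designing, not an existing module):
import unittest
from loadbalancer import LoadBalancer   # the wrapper package you are writing

class TestVirtualServers(unittest.TestCase):
    def test_disable_virtual_server(self):
        lb = LoadBalancer('test-host')
        www = lb.virtualServer('www.example.com')
        www.disable()
        self.assertFalse(www.isEnabled())

if __name__ == '__main__':
    unittest.main()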
A:
There's an excellent presentation by Joshua Bloch on API design (and thus leading to library design). It's well worth watching. IIRC it's Java-focused, but the principles will apply to any language.
A:
If you are not afraid of C++, there is an excellent book on the subject called "Large-scale C++ Software Design".
This book will guide you through the steps of designing a library by introducing "physical" and "logical" design.
For instance, you'll learn to flatten your components' hierarchy, to restrict dependency between components, to create levels of abstraction.
This is really "the" book on software design IMHO.
|
Learning to write reusable libraries
|
We need to write simple scripts to manipulate the configuration of our load balancers (i.e., drain nodes from pools, enable or disable traffic rules). The load balancers have a SOAP API (defined through a bunch of WSDL files) which is very comprehensive but using it is quite low-level with a lot of manual error checking and list manipulation. It doesn't tend to produce reusable, robust code.
I'd like to write a Python library to handle the nitty-gritty of interacting with the SOAP interface but I don't really know where to start; all of my coding experience is with writing one-off monolithic programs for specific jobs. This is fine for small jobs but it's not helping me or my coworkers -- we're reinventing the wheel with a different number of spokes each time :~)
The API already provides methods like getPoolNames() and getDrainingNodes() but they're a bit awkward to use. Most take a list of nodes and return another list, so (say) working out which virtual servers are enabled involves this sort of thing:
names = conn.getVirtualServerNames()
enabled = conn.getEnabled(names)
for i in range(0, len(names)):
if (enabled[i]):
print names[i]
conn.setEnabled(['www.example.com'], [0])
Whereas something like this:
lb = LoadBalancer('hostname')
for name in [vs.name for vs in lb.virtualServers() if vs.isEnabled()]:
print name
www = lb.virtualServer('www.example.com').disable()
is more Pythonic and (IMHO) easier.
There are a lot of things I'm not sure about: how to handle errors, how to deal with 20-odd WSDL files (a SOAPpy/suds instance for each?) and how much boilerplate translation from the API methods to my methods I'll need to do.
This is more an example of a wider problem (how to learn to write libraries instead of one-off scripts) so I don't want answers to these specific questions -- they're there to demonstrate my thinking and illustrate my problem. I recognise a code smell in the way I do things at the moment (one-off, non-reusable code) but I don't know how to fix it. How does one get into the mindset for tackling problems at a more abstract level? How do you 'learn' software design?
|
[
"\"I don't really know where to start\"\nClearly false. You provided an excellent example. Just do more of that. It's that simple.\n\"There are a lot of things I'm not sure about: how to handle errors, how to deal with 20-odd WSDL files (a SOAPpy/suds instance for each?) and how much boilerplate translation from the API methods to my methods I'll need to do.\"\n\nHandle errors by raising an exception. That's enough. Remember, you're still going to have high-level scripts using your API library.\n20-odd WSDL files? Just pick something for now. Don't overengineer this. Design the API -- as you did with your example -- for the things you want to do. The WSDL's and the number of instances will become clear as you go. One, Ten, Twenty doesn't really matter to users of your API library. It only matters to you, the maintainer. Focus on the users.\nBoilerplate translation? As little as possible. Focus on what parts of these interfaces you use with your actual scripts. Translate just what you need and nothing more. \n\nAn API is not fixed, cast in concrete, a thing of beauty and a joy forever. It's just a module (in your case a package might be better) that does some useful stuff.\nIt will undergo constant change and evolution. \nDon't overengineer the first release. Build something useful that works for one use case. Then add use cases to it.\n\"But what if I realize I did something wrong?\" That's inevitable, you'll always reach this point. Don't worry about it now. \nThe most important thing about writing an API library is writing the unit tests that (a) demonstrate how it works and (b) prove that it actually works.\n",
"There's an excellent presentation by Joshua Bloch on API design (and thus leading to library design). It's well worth watching. IIRC it's Java-focused, but the principles will apply to any language.\n",
"If you are not afraid of C++, there is an excellent book on the subject called \"Large-scale C++ Software Design\".\nThis book will guide you through the steps of designing a library by introducing \"physical\" and \"logical\" design.\nFor instance, you'll learn to flatten your components' hierarchy, to restrict dependency between components, to create levels of abstraction.\nThe is really \"the\" book on software design IMHO.\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001518948_python.txt
|
Q:
Stripping code for production
So I want to physically get rid of debugging code before deploying it to appengine.
What I'm doing right now is a simple check:
settings.py:
DEBUG = os.environ['HTTP_HOST'] == 'localhost'
from settings import DEBUG
if DEBUG:
#ensure I haven't screwed smth up during refactoring
But this check will be consuming CPU cycles during each call, right?
In Java, there's a pattern that would strip debugging code at compile time:
public static final boolean DEBUG = true; // change to false before going production
if(DEBUG){
//debug logging
}
Is there a clean way to achieve the same effect in Python or should I wrap the code to be stripped with smth like #%STRIP_ME% and then run a custom script against it?
A:
No, there isn't a clean way.
Conditional compilation in Python
Different question, similar goals.
A:
If you are worried about the code and able to do this, you could use a decorator approach to inject the debugs, and then for production code, redefine the decorator functions to be no-ops.
Honestly, I wouldn't worry about it though since the if statement should correspond to a jump statement in assembly, or maybe a couple more than that.
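A sketch of that decorator idea (the names and the DEBUG flag are placeholders):
DEBUG = True          # flip to False, or derive it from the environment, for production

def debug_only(func):
    """Run func only when DEBUG is on; otherwise replace it with a no-op."""
    if DEBUG:
        return func
    return lambda *args, **kwargs: None

@debug_only
def dump_state(request):
    print 'DEBUG:', request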
A:
First, I'd suggest making your check "os.environ['SERVER_SOFTWARE'].startswith('Dev')". This will work even if you access the dev server on another host.
Also, your 'DEBUG' statement will only be evaluated the first time the config module is imported on each App server instance. This is fine, since the debug status won't change from request to request, and it also saves a few CPU cycles. I wouldn't worry about the CPU cycles consumed checking if you're in debug mode later, though - there is much, much lower hanging fruit, and I doubt you can even measure the performance degradation of a simple if statement.
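In settings.py that check would look something like:
import os
DEBUG = os.environ.get('SERVER_SOFTWARE', '').startswith('Dev')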
A:
You would most likely benefit from using the logging module and importance levels.
Code wouldn't need to be removed before publishing, you can granularly configure the importance level should you ever need to debug in production and it becomes easy to search your code for where notices originate from.
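A minimal sketch of that approach:
import logging

logging.getLogger().setLevel(logging.INFO)   # raise to WARNING for production

def handle(request):
    logging.debug('raw request: %r', request)   # suppressed unless the DEBUG level is enabled
    logging.info('handling request')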
|
Stripping code for production
|
So I want to physically get rid of debugging code before deploying it to appengine.
What I'm doing right now is a simple check:
settings.py:
DEBUG = os.environ['HTTP_HOST'] == 'localhost'
from settings import DEBUG
if DEBUG:
#ensure I haven't screwed smth up during refactoring
But this check will be consuming CPU cycles during each call, right?
In Java, there's a pattern that would strip debugging code at compile time:
public static final boolean DEBUG = true; // change to false before going production
if(DEBUG){
//debug logging
}
Is there a clean way to achieve the same effect in Python or should I wrap the code to be stripped with smth like #%STRIP_ME% and then run a custom script against it?
|
[
"No, there isn't clean way. \nConditional compilation in Python\nDifferrent question, similar goals.\n",
"If you are worried about the code and able to do this, you could use a decorator approach to inject the debugs, and then for production code, redefine the decorator functions to be no-ops. \nHonestly, I wouldn't worry about it though since the if statement should correspond to a jump statement in assembly, or maybe a couple more than that.\n",
"First, I'd suggest making your check \"os.environ['SERVER_SOFTWARE'].startswith('Dev')\". This will work even if you access the dev server on another host.\nAlso, your 'DEBUG' statement will only be evaluated the first time the config module is imported on each App server instance. This is fine, since the debug status won't change from request to request, and it also saves a few CPU cycles. I wouldn't worry about the CPU cycles consumed checking if you're in debug mode later, though - there is much, much lower hanging fruit, and I doubt you can even measure the performance degradation of a simple if statement.\n",
"You would most likely benefit from using the logging module and importance levels.\nCode wouldn't need to be removed before publishing, you can granularly configure the importance level should you ever need to debug in production and it becomes easy to search your code for where notices originate from.\n"
] |
[
3,
2,
2,
1
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001518484_google_app_engine_python.txt
|
Q:
How can I interact with the android scripting environment from an android app?
I'd like to use python scripts as plugins for an app I'm developing. This seems to be possible by interacting with android-scripting-environment (ASE), as is done by Locale, but I haven't found any documentation about this. How do you execute ASE scripts from your own app?
A:
It looks like the Locale "plugin" is just receiving a broadcasted Intent to com.twofortyfouram.locale.intent.action.FIRE_SETTING which includes the script as an Extra with the name com.google.ase.extra.SCRIPT_NAME.
The relevant bits are in com.google.ase.locale.LocaleReceiver.
|
How can I interact with the android scripting environment from an android app?
|
I'd like to use python scripts as plugins for an app I'm developing. This seems to be possible by interacting with android-scripting-environment (ASE), as is done by Locale, but I haven't found any documentation about this. How do you execute ASE scripts from your own app?
|
[
"It looks like the Locale \"plugin\" is just recieving a broadcasted Intent to com.twofortyfouram.locale.intent.action.FIRE_SETTING which includs the script as an Extra with the name com.google.ase.extra.SCRIPT_NAME.\nThe relevant bits are in com.google.ase.locale.LocaleReceiver.\n"
] |
[
0
] |
[] |
[] |
[
"android",
"android_scripting",
"ase",
"python"
] |
stackoverflow_0001517372_android_android_scripting_ase_python.txt
|
Q:
Windows Mobile development in Python
What is the best way to start developing Windows Mobile Professional applications in Python? Is there a reasonable SDK including an emulator? Is it even possible without doing an excessive amount of underlying Windows API calls for the UI, for instance?
A:
Python CE
Python port for Windows CE (Pocket PC) devices. Intended to be as close to desktop version as possible (console, current directory support, testsuite passed).
(source: sourceforge.net)
A:
(I used to write customer apps for Windows Mobile.)
Forget about python. Even if it's technically possible:
your app will be big (you'll have to bundle the whole python runtime with your app)
your app will use lots of memory (python is a memory hog, relative to C/C++)
your app will be slow
you wont find any documentation or discussion groups to help you when you (inevitably) encounter problems
Go with C/C++ (or C#). Visual Studio 2005/2008 have decent tools for those (SDK for winmo built-in, debugging on the emulator or device connected through USB), the best documentation is for those technologies plus there are active forums/discussion groups/mailing lists where you can ask for help.
A:
If the IronPython and .Net Compact Framework teams work together, Visual Studio may one day support Python for Windows Mobile development out-of-the-box. Unfortunately, this feature request has been sitting on their issue tracker for ages...
A:
Just found this: http://ejr44.blogspot.com/2008/05/python-for-windows-mobile-cab.html
Looks like a complete set of .CAB files to provide Python on Windows Mobile.
|
Windows Mobile development in Python
|
What is the best way to start developing Windows Mobile Professional applications in Python? Is there a reasonable SDK including an emulator? Is it even possible without doing an excessive amount of underlying Windows API calls for the UI, for instance?
|
[
"Python CE\nPython port for Windows CE (Pocket PC) devices. Intended to be as close to desktop version as possible (console, current directory support, testsuite passed). \n\n(source: sourceforge.net) \n\n",
"(I used to write customer apps for Windows Mobile.)\nForget about python. Even if it's technically possible:\n\nyour app will be big (you'll have to bundle the whole python runtime with your app)\nyour app will use lots of memory (python is a memory hog, relative to C/C++)\nyour app will be slow\nyou wont find any documentation or discussion groups to help you when you (inevitably) encounter problems\n\nGo with C/C++ (or C#). Visual Studio 2005/2008 have decent tools for those (SDK for winmo built-in, debugging on the emulator or device connected through USB), the best documentation is for those technologies plus there are active forums/discussion groups/mailing lists where you can ask for help.\n",
"If the IronPython and .Net Compact Framework teams work together, Visual Studio may one day support Python for Windows Mobile development out-of-the-box. Unfortunately, this feature request has been sitting on their issue tracker for ages...\n",
"Just found this: http://ejr44.blogspot.com/2008/05/python-for-windows-mobile-cab.html\nLooks like a complete set of .CAB files to provide Python on Windows Mobile.\n"
] |
[
9,
9,
1,
1
] |
[] |
[] |
[
"python",
"windows_mobile"
] |
stackoverflow_0000060446_python_windows_mobile.txt
|
Q:
In managed code, how do I achieve good locality of reference?
Since RAM seems to be the new disk, and since that statement also means that access to memory is now considered slow similarly to how disk access has always been, I do want to maximize locality of reference in memory for high performance applications. For example, in a sorted index, I want adjacent values to be close (unlike say, in a hashtable), and I want the data the index is pointing to close by, too.
In C, I can whip up a data structure with a specialized memory manager, like the developers of the (immensely complex) Judy array did. With direct control over the pointers, they even went so far as to encode additional information in the pointer value itself. When working in Python, Java or C#, I am deliberately one (or more) level(s) of abstraction away from this type of solution and I'm entrusting the JIT compilers and optimizing runtimes with doing clever tricks on the low levels for me.
Still, I guess, even at this high level of abstraction, there are things that can be semantically considered "closer" and therefore are likely to be actually closer at the low levels. For example, I was wondering about the following (my guess in parentheses):
Can I expect an array to be an adjacent block of memory (yes)?
Are two integers in the same instance closer than two in different instances of the same class (probably)?
Does an object occupy a contigous region in memory (no)?
What's the difference between an array of objects with only two int fields and a single object with two int[] fields? (this example is probably Java specific)
I started wondering about these in a Java context, but my wondering has become more general, so I'd suggest to not treat this as a Java question.
A:
In .NET, elements of an array are certainly contiguous. In Java I'd expect them to be in most implementations, but it appears not to be guaranteed.
I think it's reasonable to assume that the memory used by an instance for fields is in a single block... but don't forget that some of those fields may be references to other objects.
For the Java array part, Sun's JNI documentation includes this comment, tucked away in a discussion about strings:
For example, the Java virtual machine may not store arrays contiguously.
For your last question, if you have two int[] then each of those arrays will be a contiguous block of memory, but they could be very "far apart" in memory. If you have an array of objects with two int fields, then each object could be a long way from each other, but the two integers within each object will be close together. Potentially more importantly, you'll end up taking a lot more memory with the "lots of objects" solution due to the per-object overhead. In .NET you could use a custom struct with two integers instead, and have an array of those - that would keep all the data in one big block.
I believe that in both Java and .NET, if you allocate a lot of smallish objects in quick succession within a single thread then those objects are likely to have good locality of reference. When the GC compacts a heap, this may improve - or it may potentially become worse, if a heap with
A B C D E
is compacted to
A D E B
(where C is collected) - suddenly A and B, which may have been "close" before, are far apart. I don't know whether this actually happens in any garbage collector (there are loads around!) but it's possible.
Basically in a managed environment you don't usually have as much control over locality of reference as you do in an unmanaged environment - you have to trust that the managed environment is sufficiently good at managing it, and that you'll have saved enough time by coding to a higher level platform to let you spend time optimising elsewhere.
A:
First, your title is implying C#. "Managed code" is a term coined by Microsoft, if I'm not mistaken.
Java primitive arrays are guaranteed to be a continuous block of memory. If you have a
int[] array = new int[4];
you can from JNI (native C) get a int *p to point to the actual array. I think this goes for the Array* class of containers as well (ArrayList, ArrayBlockingQueue, etc).
Early implementations of the JVM had objects as contiuous struct, I think, but this cannot be assumed with newer JVMs. (JNI abstracts away this).
Two integers in the same object will as you say probably be "closer", but they may not be. This will probably vary even using the same JVM.
An object with two int fields is an object and I don't think any JVM makes any guarantee that the members will be "close". An int-array with two elements will very likely be backed by a 8 byte long array.
A:
With regards to arrays here is an excerpt from CLI (Common Language Infrastructure) specification:
Array elements shall be laid out
within the array object in row-major
order (i.e., the elements associated
with the rightmost array dimension
shall be laid out contiguously from lowest to highest index). The
actual storage allocated for each
array element can include
platform-specific padding. (The size
of this storage, in bytes, is returned
by the sizeof instruction when it is
applied to the type of that array’s
elements.
A:
Good question! I think I would resort to writing extensions in C++ that handle memory in a more carefully managed way and just exposing enough of an interface to allow the rest of the application to manipulate the objects. If I was that concerned about performance I would probably resort to a C++ extension anyway.
A:
I don't think anyone has talked about Python so I'll have a go
Can I expect an array to be an adjacent block of memory (yes)?
In python arrays are more like arrays of pointers in C. So the pointers will be adjacent, but the actual objects are unlikely to be.
Are two integers in the same instance closer than two in different instances of the same class (probably)?
Probably not for the same reason as above. The instance will only hold pointers to the objects which are the actual integers. Python doesn't have native int (like Java), only boxed Int (in Java-speak).
Does an object occupy a contigous region in memory (no)?
Probably not. However if you use the __slots__ optimisation then some parts of it will be contiguous!
What's the difference between an array of objects with only two int fields and a single object with two int[] fields?
(this example is probably Java specific)
In python, in terms of memory locality, they are both pretty much the same! One will make an array of pointers to objects which will in turn contain two pointers to ints, the other will make two arrays of pointers to integers.
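To make the __slots__ point concrete, here is a small sketch (exact sizes vary by Python version and platform, and the int objects themselves are still stored separately):
import sys

class PointDict(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

class PointSlots(object):
    __slots__ = ('x', 'y')   # fixed attribute slots, no per-instance __dict__
    def __init__(self, x, y):
        self.x = x
        self.y = y

print sys.getsizeof(PointDict(1, 2))    # instance plus a separate dict object
print sys.getsizeof(PointSlots(1, 2))   # smaller; attribute references live inline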
|
In managed code, how do I achieve good locality of reference?
|
Since RAM seems to be the new disk, and since that statement also means that access to memory is now considered slow similarly to how disk access has always been, I do want to maximize locality of reference in memory for high performance applications. For example, in a sorted index, I want adjacent values to be close (unlike say, in a hashtable), and I want the data the index is pointing to close by, too.
In C, I can whip up a data structure with a specialized memory manager, like the developers of the (immensely complex) Judy array did. With direct control over the pointers, they even went so far as to encode additional information in the pointer value itself. When working in Python, Java or C#, I am deliberately one (or more) level(s) of abstraction away from this type of solution and I'm entrusting the JIT compilers and optimizing runtimes with doing clever tricks on the low levels for me.
Still, I guess, even at this high level of abstraction, there are things that can be semantically considered "closer" and therefore are likely to be actually closer at the low levels. For example, I was wondering about the following (my guess in parentheses):
Can I expect an array to be an adjacent block of memory (yes)?
Are two integers in the same instance closer than two in different instances of the same class (probably)?
Does an object occupy a contigous region in memory (no)?
What's the difference between an array of objects with only two int fields and a single object with two int[] fields? (this example is probably Java specific)
I started wondering about these in a Java context, but my wondering has become more general, so I'd suggest to not treat this as a Java question.
|
[
"\nIn .NET, elements of an array are certainly contiguous. In Java I'd expect them to be in most implementations, but it appears not to be guaranteed.\nI think it's reasonable to assume that the memory used by an instance for fields is in a single block... but don't forget that some of those fields may be references to other objects.\n\nFor the Java array part, Sun's JNI documentation includes this comment, tucked away in a discussion about strings:\n\nFor example, the Java virtual machine may not store arrays contiguously.\n\nFor your last question, if you have two int[] then each of those arrays will be a contiguous block of memory, but they could be very \"far apart\" in memory. If you have an array of objects with two int fields, then each object could be a long way from each other, but the two integers within each object will be close together. Potentially more importantly, you'll end up taking a lot more memory with the \"lots of objects\" solution due to the per-object overhead. In .NET you could use a custom struct with two integers instead, and have an array of those - that would keep all the data in one big block.\nI believe that in both Java and .NET, if you allocate a lot of smallish objects in quick succession within a single thread then those objects are likely to have good locality of reference. When the GC compacts a heap, this may improve - or it may potentially become worse, if a heap with\nA B C D E\n\nis compacted to\nA D E B\n\n(where C is collected) - suddenly A and B, which may have been \"close\" before, are far apart. I don't know whether this actually happens in any garbage collector (there are loads around!) but it's possible.\nBasically in a managed environment you don't usually have as much control over locality of reference as you do in an unmanaged environment - you have to trust that the managed environment is sufficiently good at managing it, and that you'll have saved enough time by coding to a higher level platform to let you spend time optimising elsewhere.\n",
"First, your title is implying C#. \"Managed code\" is a term coined by Microsoft, if I'm not mistaken.\nJava primitive arrays are guaranteed to be a continuous block of memory. If you have a \nint[] array = new int[4];\n\nyou can from JNI (native C) get a int *p to point to the actual array. I think this goes for the Array* class of containers as well (ArrayList, ArrayBlockingQueue, etc).\nEarly implementations of the JVM had objects as contiuous struct, I think, but this cannot be assumed with newer JVMs. (JNI abstracts away this).\nTwo integers in the same object will as you say probably be \"closer\", but they may not be. This will probably vary even using the same JVM.\nAn object with two int fields is an object and I don't think any JVM makes any guarantee that the members will be \"close\". An int-array with two elements will very likely be backed by a 8 byte long array.\n",
"With regards to arrays here is an excerpt from CLI (Common Language Infrastructure) specification:\n\nArray elements shall be laid out\n within the array object in row-major\n order (i.e., the elements associated\n with the rightmost array dimension\n shall be laid out contiguously from lowest to highest index). The\n actual storage allocated for each\n array element can include\n platform-specific padding. (The size\n of this storage, in bytes, is returned\n by the sizeof instruction when it is\n applied to the type of that array’s\n elements.\n\n",
"Good question! I think I would resort to writing extensions in C++ that handle memory in a more carefully managed way and just exposing enough of an interface to allow the rest of the application to manipulate the objects. If I was that concerned about performance I would probably resort to a C++ extension anyway.\n",
"I don't think anyone has talked about Python so I'll have a go\n\nCan I expect an array to be an adjacent block of memory (yes)?\n\nIn python arrays are more like arrays of pointers in C. So the pointers will be adjacent, but the actual objects are unlikely to be.\n\nAre two integers in the same instance closer than two in different instances of the same class (probably)?\n\nProbably not for the same reason as above. The instance will only hold pointers to the objects which are the actual integers. Python doesn't have native int (like Java), only boxed Int (in Java-speak).\n\nDoes an object occupy a contigous region in memory (no)?\n\nProbably not. However if you use the __slots__ optimisation then some parts of it will be contiguous!\n\nWhat's the difference between an array of objects with only two int fields and a single object with two int[] fields? \n (this example is probably Java specific)\n\nIn python, in terms of memory locality, they are both pretty much the same! One will make an array of pointers to objects which will in turn contain two pointers to ints, the other will make two arrays of pointers to integers.\n"
] |
[
10,
4,
2,
2,
2
] |
[
"If you need to optimise to that level then I suspect a VM based language is not for you ;)\n"
] |
[
-3
] |
[
"c#",
"java",
"memory_management",
"optimization",
"python"
] |
stackoverflow_0001518915_c#_java_memory_management_optimization_python.txt
|
Q:
NumPy and memmap: [Errno 24] Too many open files
I am working with large matrixes, so I am using NumPy's memmap. However, I am getting an error as apparently the file descriptors used by memmap are not being closed.
import numpy
import tempfile
counter = 0
while True:
temp_fd, temporary_filename = tempfile.mkstemp(suffix='.memmap')
map = numpy.memmap(temporary_filename, dtype=float, mode="w+", shape=1000)
counter += 1
print counter
map.close()
os.remove(temporary_filename)
From what I understand, the memmap file is closed when the method close() is called. However, the code above cannot loop forever, as it eventually throws the "[Errno 24] Too many open files" error:
1016
1017
1018
1019
Traceback (most recent call last):
File "./memmap_loop.py", line 11, in <module>
File "/usr/lib/python2.5/site-packages/numpy/core/memmap.py", line 226, in __new__
EnvironmentError: [Errno 24] Too many open files
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/apport_python_hook.py", line 38, in apport_excepthook
ImportError: No module named packaging_impl
Original exception was:
Traceback (most recent call last):
File "./memmap_loop.py", line 11, in <module>
File "/usr/lib/python2.5/site-packages/numpy/core/memmap.py", line 226, in __new__
EnvironmentError: [Errno 24] Too many open files
Does anybody know what I am overlooking?
A:
Since the memmap does not take the open file descriptor, but the file name, I suppose you leak the temp_fd file descriptor. Does os.close(temp_fd) help?
Great that it works.
Since you can pass numpy.memmap a file-like object, you could create one from the file descriptor you already have, temp_fd.
fobj = os.fdopen(temp_fd, "w+")
numpy.memmap(fobj, ...
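Putting the two suggestions together, the loop from the question might become something like this (a sketch; not tested against every NumPy version):
import os
import tempfile
import numpy

for counter in range(2000):
    temp_fd, temporary_filename = tempfile.mkstemp(suffix='.memmap')
    fobj = os.fdopen(temp_fd, 'w+')   # reuse the descriptor instead of leaking it
    m = numpy.memmap(fobj, dtype=float, mode='w+', shape=1000)
    # ... work with the array here ...
    del m                             # drop the mapping
    fobj.close()
    os.remove(temporary_filename)
    print counter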
|
NumPy and memmap: [Errno 24] Too many open files
|
I am working with large matrixes, so I am using NumPy's memmap. However, I am getting an error as apparently the file descriptors used by memmap are not being closed.
import numpy
import tempfile
counter = 0
while True:
temp_fd, temporary_filename = tempfile.mkstemp(suffix='.memmap')
map = numpy.memmap(temporary_filename, dtype=float, mode="w+", shape=1000)
counter += 1
print counter
map.close()
os.remove(temporary_filename)
From what I understand, the memmap file is closed when the method close() is called. However, the code above cannot loop forever, as it eventually throws the "[Errno 24] Too many open files" error:
1016
1017
1018
1019
Traceback (most recent call last):
File "./memmap_loop.py", line 11, in <module>
File "/usr/lib/python2.5/site-packages/numpy/core/memmap.py", line 226, in __new__
EnvironmentError: [Errno 24] Too many open files
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/apport_python_hook.py", line 38, in apport_excepthook
ImportError: No module named packaging_impl
Original exception was:
Traceback (most recent call last):
File "./memmap_loop.py", line 11, in <module>
File "/usr/lib/python2.5/site-packages/numpy/core/memmap.py", line 226, in __new__
EnvironmentError: [Errno 24] Too many open files
Does anybody know what I am overlooking?
|
[
"Since the memmap does not take the open file descriptor, but the file name, I suppose you leak the temp_fd file descriptor. Does os.close(temp_fd) help?\n\nGreat that it works.\nSince you can pass numpy.memmap a file-like object, you could create one from the file descriptor you already have, temp_fd.\nfobj = os.fdopen(temp_fd, \"w+\")\nnumpy.memmap(fobj, ...\n\n"
] |
[
5
] |
[] |
[] |
[
"memory_management",
"numpy",
"python"
] |
stackoverflow_0001519956_memory_management_numpy_python.txt
|
Q:
Python SQLAlchemy/Elixir Question
I am trying to define a SQLAlchemy/Elixir model that can describe the following relationship. I have an SSP table, which has multiple Foreign Keys to the POC table. I've defined the ManyToOne relationships correctly within the SSP object (allowing me to SSP.get(1).action.first_name correctly). What I would also like to add is the other side of this relationship, where I can perform something like POC.get(1).csa and return a list of SSP objects in which this POC is defined as the idPOCCSA.
I know this would be best for a polymorphic association but I really can not change the DB schema at all (creating a new poc2ssp table with a column for type of association).
class POC(Entity):
using_options(tablename = 'poc', autoload = True)
# These two line visually display my "issue":
# csa = OneToMany('SSP')
# action = OneToMany('SSP')
class SSP(Entity):
'''
Many to One Relationships:
- csa: ssp.idPOCCSA = poc.id
- action: ssp.idPOCAction = poc.id
- super: ssp.idSuper = poc.id
'''
using_options(tablename = 'spp', autoload = True)
csa = ManyToOne('POC', colname = 'idPOCCSA')
action = ManyToOne('POC', colname = 'idPOCAction')
super = ManyToOne('POC', colname = 'idPOCSuper')
Any ideas to accomplish this? The Elixir FAQ has a good example utilizing the primaryjoin and foreign_keys parameters but I can't find them in the documentation. I was kind of hoping OneToMany() just supported a colname parameter like ManyToOne() does. Something a bit less verbose.
A:
Try the following:
class POC(Entity):
# ...
#declare the one-to-many relationships
csas = OneToMany('SSP')
actions = OneToMany('SSP')
# ...
class SSP(Entity):
# ...
#Tell Elixir how to disambiguate POC/SSP relationships by specifying
#the inverse explicitly.
csa = ManyToOne('POC', colname = 'idPOCCSA', inverse='csas')
action = ManyToOne('POC', colname = 'idPOCAction', inverse='actions')
# ...
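With the inverses declared, the collections the question asks for should be reachable roughly like this (a sketch; the get() call and attribute names follow the question's own usage):
poc = POC.get(1)
print poc.csas      # SSP rows whose idPOCCSA points at this POC
print poc.actions   # SSP rows whose idPOCAction points at this POC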
|
Python SQLAlchemy/Elixir Question
|
I am trying to define a SQLAlchemy/Elixir model that can describe the following relationship. I have an SSP table, which has multiple Foreign Keys to the POC table. I've defined the ManyToOne relationships correctly within the SSP object (allowing me to SSP.get(1).action.first_name correctly). What I would also like to add is the other side of this relationship, where I can perform something like POC.get(1).csa and return a list of SSP objects in which this POC is defined as the idPOCCSA.
I know this would be best for a polymorphic association but I really can not change the DB schema at all (creating a new poc2ssp table with a column for type of association).
class POC(Entity):
using_options(tablename = 'poc', autoload = True)
# These two line visually display my "issue":
# csa = OneToMany('SSP')
# action = OneToMany('SSP')
class SSP(Entity):
'''
Many to One Relationships:
- csa: ssp.idPOCCSA = poc.id
- action: ssp.idPOCAction = poc.id
- super: ssp.idSuper = poc.id
'''
using_options(tablename = 'spp', autoload = True)
csa = ManyToOne('POC', colname = 'idPOCCSA')
action = ManyToOne('POC', colname = 'idPOCAction')
super = ManyToOne('POC', colname = 'idPOCSuper')
Any ideas to accomplish this? The Elixir FAQ has a good example utilizing the primaryjoin and foreign_keys parameters but I can't find them in the documentation. I was kind of hoping OneToMany() just supported a colname parameter like ManyToOne() does. Something a bit less verbose.
|
[
"Try the following:\nclass POC(Entity):\n # ...\n #declare the one-to-many relationships\n csas = OneToMany('SSP')\n actions = OneToMany('SSP')\n # ...\n\nclass SSP(Entity):\n # ...\n #Tell Elixir how to disambiguate POC/SSP relationships by specifying\n #the inverse explicitly.\n csa = ManyToOne('POC', colname = 'idPOCCSA', inverse='csas')\n action = ManyToOne('POC', colname = 'idPOCAction', inverse='actions')\n # ... \n\n"
] |
[
1
] |
[] |
[] |
[
"orm",
"python",
"python_elixir",
"sqlalchemy"
] |
stackoverflow_0001520031_orm_python_python_elixir_sqlalchemy.txt
|
Q:
Drupal or Wordpress CMS as a Social Network?
I am making a community for web-comic artists who will be able to sync their existing websites to this site.
However, I am debating which CMS I should use: Drupal or Wordpress.
I have heard great things about Drupal, and that it is really aimed at social networking. I actually got to play a little bit in the back end of Drupal and it seemed quite complicated to me, but I am not going to give up on fully understanding how Drupal works.
As for Wordpress, I am very familiar with the framework. I have the ability to extend it to do what I want, but I am hesitating because I think the framework is not built for communities (I think it may slow down in the future).
I also have an unrelated question: Should I go with a Python CMS?
I have heard very good things about Python and how much better it is compared to PHP.
Your advice is appreciated.
A:
Difficult decision. Normally I would say 'definitely Drupal' without hesitation, as Drupal was build as a System for community sites from the beginning, whereas Wordpress still shows its heritage as a blogging solution, at least that's what I hear quite often. But then I'm working with Drupal all the time recently and haven't had a closer look at Wordpress for quite a while.
That said, Drupal has grown into a pretty complex system over the years, so there is quite a learning curve for newcomers. Given that you are already familiar with Wordpress, it might be more efficient for you to go with that, provided it can do all that you need.
So I would recommend Drupal, but you should probably get some opinions from people experienced with Wordpress concerning the possibility to turn it into a community site first.
As for the Python vs. PHP CMS question, I'd say that the quality of a CMS is a function of the ability of its developers, the maturity of the system, the surrounding 'ecosystem', etc. and not of the particular language used to build it. (And discussions about the quality of one established language vs. another? Well - let's just not go there ;)
A:
I make websites both using Drupal and Django - sometimes with Pinax (Python). So let me try to set up the differences between Python and PHP, and the different CMS's.
Python - PHP
Pros for Python.
You tend to write more readable code making it easier to maintain. This has a big impact if you are going to do a lot of custom coding, now or in the future. However if you aren't going to make that much custom functionality, this doesn't matters.
Python and Django is buildt on OO, making it easy to reuse code, and is built on the DRY princip.
I find, that python is more intuitive to program in. In many cases it has a less weird / obscure syntax than PHP.
Cons for Python.
PHP is easier to host. More providers will allow you to run PHP and you can generally find PHP hosters a bit more cheaper than python hosters. If you have your own server, this wont matter.
Generally it's easier to code with python in many regards, but this is something that can be overcome simply by using more time with PHP. Also if you don't know python, that means you will have to invest some time learning it, and the things you can do with python. On the other hand it's a bit more difficult to find cheap hosting for Python projects.
Django/Pinax vs Drupal vs Wordpress.
It's always difficult to be able to say, which CMS?CMF to use. Which to choose is dependent on several factors.
How much custom coding are you going to do?
How much customization do you need?
How fine grained control over the system do you want?
Wordpress' strength is its ease of use, and how quickly and easily you can set up a lot of things. You might be able to get a site like what you want with only a few hours spent. The problem with Wordpress, however, is when you want to make custom functionality. It doesn't have a strong API like Drupal, and you might have problems changing the output to give you exactly what you want.
Drupal's great strength is its powerful API and the ability to customize and overwrite anything. In addition to all this, it also has a lot of modules giving you the ability, in many cases, to build most/all of your site in a very short time. The problem with Drupal is that it's not easy to use. You have to spend time learning the system and API before you can take advantage of it. The Drupal UI is also hard to navigate for newcomers, and it takes a while before you learn where the different things are. Drupal is a big machine though, and it can get a bit slow unless you set up something like Varnish in front of it.
Django is made for rapid development. So once you get into it, which isn't that hard, you can quickly create apps to suit your needs. You have complete control over the URLs. The problem with Django is that it's not so easy to find the different apps that have been made and figure out which are good. The template system makes it easy to make the markup like you want, but you can't change the functionality of the apps the same way you can with Drupal. One thing to note is that Pinax doesn't have a 1.0 release yet, while Drupal is on code freeze for its 7.0 release.
All in all, with all these tools, the biggest challenge is finding out how to use them. If you know wordpress very well and just want to make this one site, you can just use it and be done with it. If however you want to take it further, I would suggest that you use either Django or Drupal. These two has some great development potential.
A:
If you're open to Python, and are building a social / community site, I would check out Pinax for the Django web framework. It provides a lot of common social site features like user accounts, blogging, tagging, friend invites, etc.
Here's an example of a social site built using Pinax.
A:
There's a WordPress extension called BuddyPress that'll give you a ready-to-go social network. If it suits you, it may be an easier solution than a Drupal install. If it doesn't suit you, though, I find Drupal more suited to extending in the long run.
A:
I'd do it Drupal as it's a proven social networking platform and has te ability to be upgraded to do just about anything, from the vast range of modules on offer (read up on cck and views- they basically let you add your own customised page type (cck) and views lets you show data in various different ways, and based on various other parameters.)
I run my own mini social network site in Drupal - Tunstall Communities - Bankeyfields,
Heres a social network/news site using Wordpress, which they've now opted to upgrade to Drupal, as they want more social networking features.
A:
DrupalSN is a social network site designed for showing you how to build Drupal sites, and a lot of the Tutorials on there are focussed on user interaction, so it will be a great resource if you go with Drupal.
|
Drupal or Wordpress CMS as a Social Network?
|
I am making a community for web-comic artists who will be able to sync their existing websites to this site.
However, I am debating which CMS I should use: Drupal or Wordpress.
I have heard great things about Drupal, and that it is really aimed at social networking. I actually got to play a little bit in the back end of Drupal and it seemed quite complicated to me, but I am not going to give up on fully understanding how Drupal works.
As for Wordpress, I am very familiar with the framework. I have the ability to extend it to do what I want, but I am hesitating because I think the framework is not built for communities (I think it may slow down in the future).
I also have an unrelated question: Should I go with a Python CMS?
I have heard very good things about Python and how much better it is compared to PHP.
Your advice is appreciated.
|
[
"Difficult decision. Normally I would say 'definitely Drupal' without hesitation, as Drupal was build as a System for community sites from the beginning, whereas Wordpress still shows its heritage as a blogging solution, at least that's what I hear quite often. But then I'm working with Drupal all the time recently and haven't had a closer look at Wordpress for quite a while.\nThat said, Drupal has grown into a pretty complex system over the years, so there is quite a learning curve for newcomers. Given that you are already familiar with Wordpress, it might be more efficient for you to go with that, provided it can do all that you need.\nSo I would recommend Drupal, but you should probably get some opinions from people experienced with Wordpress concerning the possibility to turn it into a community site first.\n\nAs for the Python vs. PHP CMS question, I'd say that the quality of a CMS is a function of the ability of its developers, the maturity of the system, the surrounding 'ecosystem', etc. and not of the particular language used to build it. (And discussions about the quality of one established language vs. another? Well - let's just not go there ;)\n",
"I make websites both using Drupal and Django - sometimes with Pinax (Python). So let me try to set up the differences between Python and PHP, and the different CMS's.\nPython - PHP\n\nPros for Python.\n\n\nYou tend to write more readable code making it easier to maintain. This has a big impact if you are going to do a lot of custom coding, now or in the future. However if you aren't going to make that much custom functionality, this doesn't matters.\nPython and Django is buildt on OO, making it easy to reuse code, and is built on the DRY princip.\nI find, that python is more intuitive to program in. In many cases it has a less weird / obscure syntax than PHP.\n\nCons for Python.\n\n\nPHP is easier to host. More providers will allow you to run PHP and you can generally find PHP hosters a bit more cheaper than python hosters. If you have your own server, this wont matter.\n\n\nGenerally it's easier to code with python in many regards, but this is something that can be overcome simply by using more time with PHP. Also if you don't know python, that means you will have to invest some time learning it, and the things you can do with python. On the other hand it's a bit more difficult to find cheap hosting for Python projects.\nDjango/Pinax vs Drupal vs Wordpress.\nIt's always difficult to be able to say, which CMS?CMF to use. Which to choose is dependent on several factors.\n\nHow much custom coding are you going to do?\nHow much customization do you need?\nHow fine grained control over the system do you want?\n\nWordpress' strength is it's ease of use, and how you quickly and easily can setup a lot of things. You might be able to get a site like what you want with only a few hours spent. The problem with wordpress however, is when you want to make custom functionality. It doesn't have a strong API like Drupal, and you might have problems changing the output to give you exactly what you want.\nDrupal's great strength is it's powerfull API, ability to customize and overwrite anything. In addition to all this, it also has a lot of modules giving you the ability to in many cases build your most/all of your site in a very short time. The problem with Drupal is, that it's not easy to use. You have to spend time learning the system and API before you can take advantage of it. the Drupal AI is also hard to navigate for newcomers, and it takes a while before you learn where the different things are. Drupal is a big machine though, and it can get a bit slow, unless you setup something like Varnish in front of it.\nDjango is made for rapid development. So once you get into it, which isn't that hard, you can quickly create apps to suite your needs. You have complete control over the urls. The problem with django is that it's not so easy to find the different apps that has been made and figure out which are good. The template system makes it easy to make the markup like you want, but you can't change the functionality of the apps the same way you can with Drupal. One thing to note, is that Pinax doesn't have a 1.0 release yet, while Drupal is on code freeze for it's 7.0 release.\nAll in all, with all these tools, the biggest challenge is finding out how to use them. If you know wordpress very well and just want to make this one site, you can just use it and be done with it. If however you want to take it further, I would suggest that you use either Django or Drupal. These two has some great development potential. \n",
"If you're open to Python, and are building a social / community site, I would check out Pinax for the Django web framework. It provides a lot of common social site features like user accounts, blogging, tagging, friend invites, etc.\nHere's an example of a social site built using Pinax.\n",
"There's a WordPress extension called BuddyPress that'll give you a ready-to-go social network. If it suits you, it may be an easier solution than a Drupal install. If it doesn't suit you, though, I find Drupal more suited to extending in the long run.\n",
"I'd do it Drupal as it's a proven social networking platform and has te ability to be upgraded to do just about anything, from the vast range of modules on offer (read up on cck and views- they basically let you add your own customised page type (cck) and views lets you show data in various different ways, and based on various other parameters.)\nI run my own mini social network site in Drupal - Tunstall Communities - Bankeyfields, \nHeres a social network/news site using Wordpress, which they've now opted to upgrade to Drupal, as they want more social networking features.\n",
"DrupalSN is a social network site designed for showing you how to build Drupal sites, and a lot of the Tutorials on there are focussed on user interaction, so it will be a great resource if you go with Drupal.\n"
] |
[
9,
8,
3,
2,
2,
1
] |
[] |
[] |
[
"content_management_system",
"drupal",
"python",
"social_networking",
"wordpress"
] |
stackoverflow_0001513062_content_management_system_drupal_python_social_networking_wordpress.txt
|
Q:
How do you know when two objects can communicate?
class GuiMaker(Frame):
#more code
def __init__(self, parent=None):
Frame.__init__(self, parent)
self.pack(expand=YES, fill=BOTH) # make frame stretchable
self.start() # for subclass: set menu/toolBar
self.makeMenuBar() # done here: build menu-bar
self.makeToolBar() # done here: build tool-bar
self.makeWidgets() # for subclass: add middle part
#more code
class TextEditor:
#more code
def start(self):
#more code
How come self.start() will call TextEditor's start if self refers to GuiMaker or else how come self refers to TextEditor?
A:
Does GuiMaker inherit from TextEditor somewhere? In other words is Frame a descendant of TextEditor? That would cause TextEditor's start method to be called.
Other than that, I don't see any way for the code (as written) to have GuiMaker.start call TextEditor.start
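If the full program combines the two classes through multiple inheritance (which is what the answer above suspects), the dispatch can be sketched like this (class bodies are stripped down for illustration):
class GuiMaker(object):
    def __init__(self):
        self.start()             # looked up on the actual instance's class

    def start(self):             # default, meant to be overridden by a subclass
        print 'GuiMaker.start'

class TextEditor(object):
    def start(self):
        print 'TextEditor.start'

class TextEditorMain(TextEditor, GuiMaker):
    pass

TextEditorMain()                 # prints 'TextEditor.start'
Inside GuiMaker.__init__, self is the TextEditorMain instance, so self.start() finds TextEditor's method first in the lookup order.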
|
How do you know when two objects can communicate?
|
class GuiMaker(Frame):
#more code
def __init__(self, parent=None):
Frame.__init__(self, parent)
self.pack(expand=YES, fill=BOTH) # make frame stretchable
self.start() # for subclass: set menu/toolBar
self.makeMenuBar() # done here: build menu-bar
self.makeToolBar() # done here: build tool-bar
self.makeWidgets() # for subclass: add middle part
#more code
class TextEditor:
#more code
def start(self):
#more code
How come self.start() will call TextEditor's start if self refers to GuiMaker or else how come self refers to TextEditor?
|
[
"Does GuiMaker inherit from TextEditor somewhere? In other words is Frame a descendant of TextEditor? That would cause TextEditor's start method to be called.\nOther than that, I don't see any way for the code (as written) to have GuiMaker.start call TextEditor.start\n"
] |
[
5
] |
[] |
[] |
[
"class",
"object",
"python"
] |
stackoverflow_0001521368_class_object_python.txt
|
Q:
What is a good size (in bytes) for a log file?
I am using the python logging modules RotatingFileHandler, and you can set the maximum size of each log file. What is a good maximum size for a log file? Please give your answer in bytes.
A:
My default logging setup:
RotatingFileHandler(filename, maxBytes=10*1024*1024, backupCount=5)
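For reference, wiring that handler up end to end might look like this (the file name and logger name are placeholders):
import logging
from logging.handlers import RotatingFileHandler

# 10 MiB per file, 5 backups: the logs never grow past roughly 60 MiB.
handler = RotatingFileHandler('app.log', maxBytes=10*1024*1024, backupCount=5)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

logger = logging.getLogger('myapp')
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info('rotation happens automatically once the file reaches maxBytes')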
A:
As the other answers have said, there is a no hard and fast answer. It depends so much on your app and your environment. Here's some guidelines I use.
For a multi-user app on a typical server:
Configure your logging to generate no more than 1 or 2 entries per user action for production, and then rotate it daily. Keep as many days as you have disk space for, or your data retention/privacy policies allow for. If you want auditing, you probably want a separate solution.
For a single-user app:
Try and keep enough information to diagnose anything weird that might happen. No more than 2 or 3 entries per user action though, unless you are doing batch operations. Don't put more than 2MB in a file, so the user can email it you. Don't keep more than 50MB of logs, because it's probably not your space you are wasting here.
A:
Size isn't as important to me as dividing at sensible chronological points. I prefer one log file per day, however, if the file won't open with any notepad program you have at your disposal, it is too big and you might want to go with hourly logs.
A:
It completely depends on the external variables of the system. For instance:
Are you running on an embedded device whose only external storage is a 1MB SD card, or do you have full access to a 1TB hard drive?
Are you logging each time you enter/exit a function, or are you only logging one or two caught exceptions throughout the whole system?
Is the purpose of these logs to be sent back to the developer for support? A 1kb log file isn't going to help you much, but you probably don't need 200MB of logs for a single support issue.
Without these kinds of details, there is no good answer to your question (and there might not be a good answer even with these details).
|
What is a good size (in bytes) for a log file?
|
I am using the python logging modules RotatingFileHandler, and you can set the maximum size of each log file. What is a good maximum size for a log file? Please give your answer in bytes.
|
[
"My default logging setup:\nRotatingFileHandler(filename, maxBytes=10*1024*1024, backupCount=5)\n\n",
"As the other answers have said, there is a no hard and fast answer. It depends so much on your app and your environment. Here's some guidelines I use.\nFor a multi-user app on a typical server:\nConfigure your logging to generate no more than 1 or 2 entries per user action for production, and then rotate it daily. Keep as many days as you have disk space for, or your data retention/privacy policies allow for. If you want auditing, you probably want a separate solution.\nFor a single-user app:\nTry and keep enough information to diagnose anything weird that might happen. No more than 2 or 3 entries per user action though, unless you are doing batch operations. Don't put more than 2MB in a file, so the user can email it you. Don't keep more than 50MB of logs, because it's probably not your space you are wasting here.\n",
"Size isn't as important to me as dividing at sensible chronological points. I prefer one log file per day, however, if the file won't open with any notepad program you have at your disposal, it is too big and you might want to go with hourly logs.\n",
"It completely depends on the external variables of the system. For instance:\n\nAre you running on an embedded device whose only external storage is a 1MB SD card, or do you have full access to a 1TB hard drive?\nAre you logging each time you enter/exit a function, or are you only logging one or two caught exceptions throughout the whole system?\nIs the purpose of these logs to be sent back to the developer for support? A 1kb log file isn't going to help you much, but you probably don't need 200MB of logs for a single support issue.\n\nWithout these kinds of details, there is no good answer to your question (and there might not be a good answer even with these details).\n"
] |
[
32,
17,
9,
3
] |
[] |
[] |
[
"logging",
"python"
] |
stackoverflow_0001521082_logging_python.txt
|
Q:
Using Python's isinstance
The following usage of isinstance doesn't seem to work in Python 2.5.2 or 2.6.2:
class BananaCake:
def __init__(self):
print 'Banana Cake'
class ChocolateCake:
def __init__(self):
print 'Chocolate Cake'
class CakeFactory:
@staticmethod
def create(name):
if name == 'banana':
return BananaCake
elif name == 'chocolate':
return ChocolateCake
else:
return None
if __name__ == '__main__':
banana_cake = CakeFactory.create('banana')
print isinstance(banana_cake, BananaCake)
The above isinstance is returning False even though banana_cake is an instance of BananaCake. Does anyone know what I might be missing? I'm performing this check in my test scripts. You should be able to copy and paste the above and run it easily in a Python script.
A:
You're returning a class instead of an instance.
>>> banana_cake is BananaCake
True
I believe you want this:
class CakeFactory:
@staticmethod
def create(name):
if name == 'banana':
return BananaCake() # call the constructor
elif name == 'chocolate':
return ChocolateCake() # call the constructor
else:
return None
To make isinstance() work with classes you need to define them as new-style classes:
class ChocolateCake(object):
pass
You should declare all your classes as new-style classes anyway, there is a lot of extra functionality and old-style classes were dropped in Python 3.
A:
That's because 'banana_cake' is not an instance of BananaCake.
The CakeFactory implementation returns the BananaCake CLASS, not an instance of BananaCake.
Try modifying CakeFactory to return "BananaCake()". Calling the BananaCake constructor will return an instance, rather than the class.
|
Using Python's isinstance
|
The following usage of isinstance doesn't seem to work in Python 2.5.2 or 2.6.2:
class BananaCake:
def __init__(self):
print 'Banana Cake'
class ChocolateCake:
def __init__(self):
print 'Chocolate Cake'
class CakeFactory:
@staticmethod
def create(name):
if name == 'banana':
return BananaCake
elif name == 'chocolate':
return ChocolateCake
else:
return None
if __name__ == '__main__':
banana_cake = CakeFactory.create('banana')
print isinstance(banana_cake, BananaCake)
The above isinstance is returning False even though banana_cake is an instance of BananaCake. Does anyone know what I might be missing? I'm performing this check in my test scripts. You should be able to copy and paste the above and run it easily in a Python script.
|
[
"You're returning a class instead of an instance.\n\n>>> banana_cake is BananaCake\nTrue\n\nI believe you want this:\nclass CakeFactory:\n @staticmethod\n def create(name):\n if name == 'banana':\n return BananaCake() # call the constructor\n elif name == 'chocolate':\n return ChocolateCake() # call the constructor\n else:\n return None\n\n\nTo make isinstance()work with classes you need to define them as new-style classes:\nclass ChocolateCake(object):\n pass\n\nYou should declare all your classes as new-style classes anyway, there is a lot of extra functionality and old-style classes were dropped in Python 3.\n",
"That's because 'banana_cake' is not an instance of BananaCake.\nThe CakeFactory implementation returns the BananaCake CLASS, not an instance of BananaCake.\nTry modifying CakeFactory to return \"BananaCake()\". Calling the BananCake constructor will return an instance, rather than the class.\n"
] |
[
8,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001521655_python.txt
|
Q:
How to prevent log file truncation with python logging module?
I need to use python logging module to print debugging info to a file with statements like:
logging.debug(something)
The file is truncated (i am assuming - by the logging module) and the messages get deleted before I can see them - how can that be prevented?
Here is my logging config:
logging.basicConfig(
level = logging.DEBUG,
format = '%(asctime)s %(levelname)s %(message)s',
filename = '/tmp/my-log.txt',
filemode = 'w'
)
Thanks!
A:
logging
If you run the script repeatedly, the additional log messages are appended to the file. To create a new file each time, you can pass a filemode argument to basicConfig() with a value of 'w'. Rather than managing the file size yourself, though, it is simpler to use a RotatingFileHandler.
To prevent overwriting the file, do not set filemode to 'w'; set it to 'a' instead, or leave it out entirely ('a' is the default setting anyway).
I believe that you are simply overwriting the file.
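A sketch of the corrected configuration from the question:
import logging

logging.basicConfig(
    level = logging.DEBUG,
    format = '%(asctime)s %(levelname)s %(message)s',
    filename = '/tmp/my-log.txt',
    filemode = 'a',   # append; 'w' truncates the file on every run
)

logging.debug('this line survives across restarts')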
|
How to prevent log file truncation with python logging module?
|
I need to use python logging module to print debugging info to a file with statements like:
logging.debug(something)
The file is truncated (i am assuming - by the logging module) and the messages get deleted before I can see them - how can that be prevented?
Here is my logging config:
logging.basicConfig(
level = logging.DEBUG,
format = '%(asctime)s %(levelname)s %(message)s',
filename = '/tmp/my-log.txt',
filemode = 'w'
)
Thanks!
|
[
"logging\n\nIf you run the script repeatedly, the additional log messages are appended to the file. To create a new file each time, you can pass a filemode argument to basicConfig() with a value of 'w'. Rather than managing the file size yourself, though, it is simpler to use a RotatingFileHandler.\n\nTo prevent overwriting the file, you should not set filemode to 'w', or set it to 'a' (that is the default setting anyway).\nI believe that you are simply overwriting the file.\n"
] |
[
13
] |
[] |
[] |
[
"logging",
"python"
] |
stackoverflow_0001521681_logging_python.txt
|
Q:
How can you extract Hardware ID using Python?
How do you extract an HD and Bios Unique ID, using python script?
A:
Go Get Microsoft's Scriptomatic
Run it, Select the appropriate class from the dropdown (WIN32_BIOS)
It will produce the necessary Python/WMI code for you.
(It will also generate VBScript, Perl, and JScript)
A:
Solutions that come to my mind:
use Win32 Python Extensions and call Windows APIs to do that directly
Use a WMI-wrapper for Python
(some WMI interface code for reference)
Edit: I assumed your OS was MS Windows :)
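A sketch of the WMI-wrapper route (assumes the third-party wmi package on top of pywin32, and only runs on Windows):
import wmi

c = wmi.WMI()

# BIOS serial number
for bios in c.Win32_BIOS:
    print 'BIOS serial:', bios.SerialNumber

# Hard drive serial number as reported by the physical media class
for media in c.Win32_PhysicalMedia:
    print 'Disk serial:', media.SerialNumber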
A:
On Linux, look in the /proc directory. You'll have to parse the files to find what you are looking for.
This might help.
A:
Try this library: Hardware ID Extractor
Small description of the tool:
The Hardware ID Extractor is a Microsoft Windows program that shows data about your computer's hardware:
Hard disk:
Hard drive ID (unique hardware serial number written in drive's IDE electronic chip)
Partition ID (volume serial number)
CPU:
CPU ID (unique hardware ID)
CPU vendor
CPU running speed
CPU theoretic speed
Physical memory:
Memory Load ( Total memory used in percentage (%) )
*Total Physical ( Total physical memory in bytes )
*Avail Physical ( Physical memory left in bytes )
*Total PageFile ( Total page file in bytes )
*Available PageFile( Page file left in bytes )
*Total Virtual( Total virtual memory in bytes )
*Available Virtual ( Virtual memory left in bytes )
|
How can you extract Hardware ID using Python?
|
How do you extract an HD and Bios Unique ID, using python script?
|
[
"Go Get Microsoft's Scriptomatic\nRun it, Select the appropriate class from the dropdown (WIN32_BIOS)\nIt will produce the necessary Python/WMI code for you.\n(It will also generate VBScript, Perl, and JScript)\n",
"Solutions that come to my mind:\n\nuse Win32 Python Extensions and call Windows APIs to do that directly\nUse a WMI-wrapper for Python\n\n(some WMI interface code for reference)\nEdit: I assumed your OS was MS Windows :)\n",
"On Linux, look in the /proc directory. You'll have to parse the files to find what you are looking for.\nThis might help.\n",
"Try this library: Hardware ID Extractor\n\nSmall description of the tool: \nThe Hardware ID Extractor is a Microsoft Windows program that shows data about your computer's hardware:\nHard disk:\n\nHard drive ID (unique hardware serial number written in drive's IDE electronic chip)\nPartition ID (volume serial number)\n\nCPU:\n\nCPU ID (unique hardware ID)\nCPU vendor\nCPU running speed\nCPU theoretic speed\n\nPhysical memory:\n\nMemory Load ( Total memory used in percentage (%) )\n*Total Physical ( Total physical memory in bytes )\n*Avail Physical ( Physical memory left in bytes )\n*Total PageFile ( Total page file in bytes )\n*Available PageFile( Page file left in bytes )\n*Total Virtual( Total virtual memory in bytes )\n*Available Virtual ( Virtual memory left in bytes )\n\n"
] |
[
3,
2,
1,
1
] |
[] |
[] |
[
"extract",
"hardware",
"python"
] |
stackoverflow_0000227222_extract_hardware_python.txt
|
Q:
Python and indentation, having trouble getting started.
I have just started learning python and am getting caught up. I come from mostly C background.
class Alarm:
def timer():
def main():
print ("Timer has Started")
main()
I always get a silly error when I try to run this code:
alarm > python alarm.py
File "alarm.py", line 5
def main():
^
IndentationError: expected an indented block
A:
You have an empty def
def timer():
use
def timer():
pass
instead.
A:
Learn about the pass statement; main is usually not part of the class.
A global (module level) main() function is simpler than an Alarm.main() class method. Usually, main() functions come at module level.
class Alarm:
def timer():
pass
def main():
print ("Timer has Started")
main()
A:
Try de-indenting main(), adding pass to timer, and defining an __init__ method:
class Alarm():
def __init__(self):
print ("Timer has Started")
<shell>$ Alarm()
A:
Your timer function is not defined. (And your space/tab indentation may be mixed)
See the tutorial (classes) for more details on classes.
class Alarm:
def timer(self):
pass
def main(self):
print ("Timer has Started")
if __name__ == '__main__':
class_inst = Alarm()
class_inst.main()
If you're getting into Python, read PEP 8.
Also, using pylint helps; it will point out indentation and many other errors you'll run across before you 'execute' your code.
A:
I think you want to use __init__ though, which is the constructor...
class Alarm:
def timer(self):
print('timer has started')
def __init__(self):
print('constructor')
self.timer()
x = Alarm()
constructor
timer has started
My example differs from the others in that I'm actually instantiating a new object.
Notes:
specify self as the first argument to any method defined in the class
__init__ is the method to define for the constructor
invoke the class by doing variableName = className() like you would invoke a function, no new keyword
if you have an empty function, use the pass keyword like def foo(self): pass
A:
Invoking main() will give an undefined function error, as it is an Alarm method.
IMHO the right form you should use is the following:
class Alarm:
def timer():
pass
@staticmethod
def main():
print ("Timer has Started")
if __name__ == "__main__" :
Alarm.main()
A:
As others have pointed out, you have a syntax error because timer() has no body.
You don't need to use main() in python at all. Usually people use it to indicate that the file is the top level program and not a module to be imported, but it is just by convention
You may also see this idiom
def main():
blah blah
if __name__ == "__main__":
main()
Here __name__ is a special variable. If the file has been imported it will contain the module name, so the comparison fails and main does not run.
For the top level program __name__ contains "__main__" so the main() function will be run.
This is useful because sometimes your module might run tests when it is loaded as a program, but you don't want those tests to run if you are importing it into a larger program.
A:
In Python, you don't need to define everything as a class. There's nothing to encapsulate in this code, so there's no reason to define an Alarm class. Just have the functions in a module.
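For example, a bare-bones sketch of the same behaviour with plain module-level functions:
def timer():
    print("Timer has started")

def main():
    timer()

if __name__ == '__main__':
    main()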
A:
Thanks for all the help everybody. I was making a little alarm/timer to remind me to get up and take a walk every now and then. I got most of it working; I checked it against a stopwatch and it works great.
import time
def timer(num):
seconds = num*60
print (num , "minutes", seconds , "seconds")
while (seconds > 0):
print (seconds, "seconds")
time.sleep(1)
seconds = seconds-1
print ("Time to get up and take a WALK!!!!")
main()
def main():
    number = int(input("Input time : "))  # convert the string to an int before using it
    timer(number)
if __name__ == '__main__':
main()
|
Python and indentation, having trouble getting started.
|
I have just started learning python and am getting caught up. I come from mostly C background.
class Alarm:
def timer():
def main():
print ("Timer has Started")
main()
I always get a silly error when I try to run this code:
alarm > python alarm.py
File "alarm.py", line 5
def main():
^
IndentationError: expected an indented block
|
[
"You have an empty def\ndef timer():\n\nuse\ndef timer():\n pass\n\ninstead.\n",
"Learn about the pass statement, main is usually not part of the class.\nA global (module level) main() function is simpler than an Alarm.main() class method. Usually, main() functions come at module level.\nclass Alarm:\n\n def timer():\n pass\n\ndef main():\n print (\"Timer has Started\")\n\nmain()\n\n",
"try deindent main() and add pass to timer and define an init method:\nclass Alarm():\n\n def __init__(self):\n print (\"Timer has Started\")\n\n<shell>$ Alarm()\n\n",
"Your timer function is not defined. (And your space/tab indentation may be mixed)\nSee the tutorial (classes) for more details on classes.\nclass Alarm:\n\n def timer(self):\n pass\n def main(self):\n print (\"Timer has Started\")\n\nif __name__ == '__main__':\n class_inst = Alarm()\n class_inst.main()\n\nIf you getting into python read PEP8.\nAlso, using pylint helps, it will point out indentation and many other errors you'll run across before you 'execute' your code.\n",
"I think you want to use __init__ though, which is the constructor...\nclass Alarm:\n\n def timer(self): \n print('timer has started')\n\n def __init__(self): \n print('constructor')\n self.timer()\n\n\nx = Alarm()\n\n\nconstructor\ntimer has started\n\nMy example differs from the others in that I'm actually instantiating a new object.\nNotes:\n\nspecify self as the first argument to any method defined in the class\n__init__ is the method to define for the constructor\ninvoke the class by doing variableName = className() like you would invoke a function, no new keyword\nif you have an empty function, use the pass keyword like def foo(self): pass\n\n",
"Invoking main() will give an undefined function error, as it is a Alarm method.\nIMHO the right form you should use is the following:\nclass Alarm:\n def timer():\n pass\n\n @staticmethod\n def main():\n print (\"Timer has Started\")\n\nif __name__ == \"__main__\" :\n Alarm.main()\n\n",
"As others have pointed out, you have a syntax error because timer() has no body.\nYou don't need to use main() in python at all. Usually people use it to indicate that the file is the top level program and not a module to be imported, but it is just by convention\nYou may also see this idiom\ndef main():\n blah blah\n\nif __name__ == \"__main__\":\n main()\n\nHere __name__ is a special variable. If the file has been imported it will contain the module name, so the comparison fails and main does not run.\nFor the top level program __name__ contains \"__main__\" so the main() function will be run.\nThis is useful because sometimes your module might run tests when it is loaded as a program but you don't want those test to run if you are importing it into a larger program\n",
"In Python, you don't need to define everything as a class. There's nothing to encapsulate in this code, so there's no reason to define an Alarm class. Just have the functions in a module.\n",
"Thanks for all the help everybody. I was making a little alarm/timer to remind me to get up and take a walk every now and then. I got most of it working, and it works great. Checked it against a stop watch and it works great. \nimport time\n\ndef timer(num):\n seconds = num*60\n print (num , \"minutes\", seconds , \"seconds\")\n\n while (seconds > 0):\n print (seconds, \"seconds\")\n time.sleep(1)\n seconds = seconds-1\n\n print (\"Time to get up and take a WALK!!!!\")\n main()\n\n\ndef main():\n number = input(\"Input time : \")\n int(number)\n timer(number)\n\n\nif __name__ == '__main__':\n main()\n\n"
] |
[
11,
3,
1,
1,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001518659_python.txt
|
Q:
How can I specify date and time in Python?
What is the object used in Python to specify date (and time) in Python?
For instance, to create an object that holds a given date and time, (let's say '05/10/09 18:00').
As per S.Lott's request, so far I have:
class Some:
date =
I stop there. After the "=" sign, I realize I didn't know what the right object was ;)
A:
Simple example:
>>> import datetime
# 05/10/09 18:00
>>> d = datetime.datetime(2009, 10, 5, 18, 00)
>>> print d.year, d.month, d.day, d.hour, d.second
2009 10 5 18 0
>>> print d.isoformat(' ')
2009-10-05 18:00:00
>>>
A:
Nick D has the official way of handling your problem. If you want to pass in a string like you did in your question, the dateutil module (http://labix.org/python-dateutil) has excellent support for that kind of thing.
For examples, I'm going to copy and paste from another answer I gave a while back now:
Simple example:
>>> parse("Thu Sep 25 2003")
datetime.datetime(2003, 9, 25, 0, 0)
>>> parse("Sep 25 2003")
datetime.datetime(2003, 9, 25, 0, 0)
>>> parse("Sep 2003", default=DEFAULT)
datetime.datetime(2003, 9, 25, 0, 0)
>>> parse("Sep", default=DEFAULT)
datetime.datetime(2003, 9, 25, 0, 0)
>>> parse("2003", default=DEFAULT)
datetime.datetime(2003, 9, 25, 0, 0)
Too ambiguous:
>>> parse("10-09-2003")
datetime.datetime(2003, 10, 9, 0, 0)
>>> parse("10-09-2003", dayfirst=True)
datetime.datetime(2003, 9, 10, 0, 0)
>>> parse("10-09-03")
datetime.datetime(2003, 10, 9, 0, 0)
>>> parse("10-09-03", yearfirst=True)
datetime.datetime(2010, 9, 3, 0, 0)
To all over the board:
>>> parse("Wed, July 10, '96")
datetime.datetime(1996, 7, 10, 0, 0)
>>> parse("1996.07.10 AD at 15:08:56 PDT", ignoretz=True)
datetime.datetime(1996, 7, 10, 15, 8, 56)
>>> parse("Tuesday, April 12, 1952 AD 3:30:42pm PST", ignoretz=True)
datetime.datetime(1952, 4, 12, 15, 30, 42)
>>> parse("November 5, 1994, 8:15:30 am EST", ignoretz=True)
datetime.datetime(1994, 11, 5, 8, 15, 30)
>>> parse("3rd of May 2001")
datetime.datetime(2001, 5, 3, 0, 0)
>>> parse("5:50 A.M. on June 13, 1990")
datetime.datetime(1990, 6, 13, 5, 50)
Take a look at the documentation for it here:
http://labix.org/python-dateutil#head-c0e81a473b647dfa787dc11e8c69557ec2c3ecd2
A:
Look at the datetime module; there are datetime, date and timedelta class definitions.
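A quick made-up illustration of those three classes:
import datetime

d = datetime.date(2009, 10, 5)               # a date only
dt = datetime.datetime(2009, 10, 5, 18, 0)   # a date plus a time of day
week = datetime.timedelta(days=7)            # a duration

print dt + week        # 2009-10-12 18:00:00
print dt.date() == d   # True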
A:
>>> import datetime
>>> datetime.datetime.strptime('05/10/09 18:00', '%d/%m/%y %H:%M')
datetime.datetime(2009, 10, 5, 18, 0)
>>> datetime.datetime.today()
datetime.datetime(2009, 10, 5, 21, 3, 55, 827787)
So, you can either use a format string to convert to a datetime.datetime object, or, if you're specifically looking for today's date, use the today() function.
|
How can I specify date and time in Python?
|
What is the object used in Python to specify date (and time) in Python?
For instance, to create an object that holds a given date and time, (let's say '05/10/09 18:00').
As per S.Lott's request, so far I have:
class Some:
date =
I stop there. After the "=" sign, I realize I didn't know what the right object was ;)
|
[
"Simple example:\n>>> import datetime\n# 05/10/09 18:00\n>>> d = datetime.datetime(2009, 10, 5, 18, 00)\n>>> print d.year, d.month, d.day, d.hour, d.second\n2009 10 5 18 0\n>>> print d.isoformat(' ')\n2009-10-05 18:00:00\n>>> \n\n",
"Nick D has the official way of handling your problem. If you want to pass in a string like you did in your question, the dateutil module (http://labix.org/python-dateutil) has excellent support for that kind of thing.\nFor examples, I'm going to copy and paste from another answer I gave a while back now:\nSimple example:\n>>> parse(\"Thu Sep 25 2003\")\ndatetime.datetime(2003, 9, 25, 0, 0)\n\n>>> parse(\"Sep 25 2003\")\ndatetime.datetime(2003, 9, 25, 0, 0)\n\n>>> parse(\"Sep 2003\", default=DEFAULT)\ndatetime.datetime(2003, 9, 25, 0, 0)\n\n>>> parse(\"Sep\", default=DEFAULT)\ndatetime.datetime(2003, 9, 25, 0, 0)\n\n>>> parse(\"2003\", default=DEFAULT)\ndatetime.datetime(2003, 9, 25, 0, 0)\n\nTo ambigous:\n>>> parse(\"10-09-2003\")\ndatetime.datetime(2003, 10, 9, 0, 0)\n\n>>> parse(\"10-09-2003\", dayfirst=True)\ndatetime.datetime(2003, 9, 10, 0, 0)\n\n>>> parse(\"10-09-03\")\ndatetime.datetime(2003, 10, 9, 0, 0)\n\n>>> parse(\"10-09-03\", yearfirst=True)\ndatetime.datetime(2010, 9, 3, 0, 0)\n\nTo all over the board:\n>>> parse(\"Wed, July 10, '96\")\ndatetime.datetime(1996, 7, 10, 0, 0)\n\n>>> parse(\"1996.07.10 AD at 15:08:56 PDT\", ignoretz=True)\ndatetime.datetime(1996, 7, 10, 15, 8, 56)\n\n>>> parse(\"Tuesday, April 12, 1952 AD 3:30:42pm PST\", ignoretz=True)\ndatetime.datetime(1952, 4, 12, 15, 30, 42)\n\n>>> parse(\"November 5, 1994, 8:15:30 am EST\", ignoretz=True)\ndatetime.datetime(1994, 11, 5, 8, 15, 30)\n\n>>> parse(\"3rd of May 2001\")\ndatetime.datetime(2001, 5, 3, 0, 0)\n\n>>> parse(\"5:50 A.M. on June 13, 1990\")\ndatetime.datetime(1990, 6, 13, 5, 50)\n\nTake a look at the documentation for it here:\nhttp://labix.org/python-dateutil#head-c0e81a473b647dfa787dc11e8c69557ec2c3ecd2\n",
"Look at the datetime module; there are datetime, date and timedelta class definitions.\n",
">>> import datetime\n>>> datetime.datetime.strptime('05/10/09 18:00', '%d/%m/%y %H:%M')\ndatetime.datetime(2009, 10, 5, 18, 0)\n>>> datetime.datetime.today()\ndatetime.datetime(2009, 10, 5, 21, 3, 55, 827787)\n\nSo, you can either use format string to convert to datetime.datetime object or if you're particularly looking at today's date could use today() function.\n"
] |
[
99,
8,
4,
2
] |
[] |
[] |
[
"datetime",
"python"
] |
stackoverflow_0001521906_datetime_python.txt
|
Q:
How to fix such ClientForm bug?
from mechanize import Browser
br = Browser()
page = br.open('http://wow.interzet.ru/news.php?readmore=23')
br.form = br.forms().next()
print br.form
gives me the following error:
Traceback (most recent call last):
File "C:\Users\roddik\Desktop\mech.py", line 6, in <module>
br.form = br.forms().next()
File "build\bdist.win32\egg\mechanize\_mechanize.py", line 426, in forms
File "D:\py26\lib\site-package\mechanize-0.1.11-py2.6.egg\mechanize\_html.py", line 559, in forms
File "D:\py26\lib\site-packages\mechanize-0.1.11-py2.6.egg\mechanize\_html.py", line 225, in forms
File "D:\py26\lib\site-packages\clientform-0.2.10-py2.6.egg\ClientForm.py", line 967, in ParseResponseEx
File "D:\py26\lib\site-packages\clientform-0.2.10-py2.6.egg\ClientForm.py", line 1100, in _ParseFileEx
File "D:\py26\lib\site-packages\clientform-0.2.10-py2.6.egg\ClientForm.py", line 870, in feed
File "D:\py26\lib\sgmllib.py", line 104, in feed
self.goahead(0)
File "D:\py26\lib\sgmllib.py", line 138, in goahead
k = self.parse_starttag(i)
File "D:\py26\lib\sgmllib.py", line 290, in parse_starttag
self._convert_ref, attrvalue)
File "D:\py26\lib\sgmllib.py", line 302, in _convert_ref
return self.convert_charref(match.group(2)) or \
File "D:\py26\lib\site-packages\clientform-0.2.10-py2.6.egg\ClientForm.py", line 850, in convert_charref
File "D:\py26\lib\site-packages\clientform-0.2.10-py2.6.egg\ClientForm.py", line 244, in unescape_charref
ValueError: invalid literal for int() with base 10: 'e'
How can I fix it?
Edit:
I've fixed it this way. Is it ok? If not, how instead?
import ClientForm
from mechanize import Browser
def myunescape_charref(data, encoding):
if not str(data).isdigit(): return 0
name, base = data, 10
if name.startswith("x"):
name, base= name[1:], 16
uc = unichr(int(name, base))
if encoding is None:
return uc
else:
try:
repl = uc.encode(encoding)
except UnicodeError:
repl = "&#%s;" % data
return repl
ClientForm.unescape_charref = myunescape_charref
A:
The problem is caused by urls like this
http://wow.zet/forum/index.php?showtopic=1197&pid=30419&st=0&#entry30419
ClientForm is looking for an integer after the &#
It is OK to have the # in the URL, but the & should be escaped (as &amp;) in the HTML, because &# starts a character reference.
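One possible work-around, sketched here on the assumption that you are willing to fetch and pre-clean the page yourself: escape any bare & that is not already part of an entity reference, then parse the cleaned markup instead of the raw response.
import re
import urllib2

html = urllib2.urlopen('http://wow.interzet.ru/news.php?readmore=23').read()

# Turn a bare '&' into '&amp;' but leave real references such as '&amp;' or '&#160;' alone.
cleaned = re.sub(r'&(?![A-Za-z]+;|#\d+;|#[xX][0-9A-Fa-f]+;)', '&amp;', html)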
|
How to fix such ClientForm bug?
|
from mechanize import Browser
br = Browser()
page = br.open('http://wow.interzet.ru/news.php?readmore=23')
br.form = br.forms().next()
print br.form
gives me the following error:
Traceback (most recent call last):
File "C:\Users\roddik\Desktop\mech.py", line 6, in <module>
br.form = br.forms().next()
File "build\bdist.win32\egg\mechanize\_mechanize.py", line 426, in forms
File "D:\py26\lib\site-package\mechanize-0.1.11-py2.6.egg\mechanize\_html.py", line 559, in forms
File "D:\py26\lib\site-packages\mechanize-0.1.11-py2.6.egg\mechanize\_html.py", line 225, in forms
File "D:\py26\lib\site-packages\clientform-0.2.10-py2.6.egg\ClientForm.py", line 967, in ParseResponseEx
File "D:\py26\lib\site-packages\clientform-0.2.10-py2.6.egg\ClientForm.py", line 1100, in _ParseFileEx
File "D:\py26\lib\site-packages\clientform-0.2.10-py2.6.egg\ClientForm.py", line 870, in feed
File "D:\py26\lib\sgmllib.py", line 104, in feed
self.goahead(0)
File "D:\py26\lib\sgmllib.py", line 138, in goahead
k = self.parse_starttag(i)
File "D:\py26\lib\sgmllib.py", line 290, in parse_starttag
self._convert_ref, attrvalue)
File "D:\py26\lib\sgmllib.py", line 302, in _convert_ref
return self.convert_charref(match.group(2)) or \
File "D:\py26\lib\site-packages\clientform-0.2.10-py2.6.egg\ClientForm.py", line 850, in convert_charref
File "D:\py26\lib\site-packages\clientform-0.2.10-py2.6.egg\ClientForm.py", line 244, in unescape_charref
ValueError: invalid literal for int() with base 10: 'e'
How can I fix it?
Edit:
I've fixed it this way. Is it ok? If not, how instead?
import ClientForm
from mechanize import Browser
def myunescape_charref(data, encoding):
if not str(data).isdigit(): return 0
name, base = data, 10
if name.startswith("x"):
name, base= name[1:], 16
uc = unichr(int(name, base))
if encoding is None:
return uc
else:
try:
repl = uc.encode(encoding)
except UnicodeError:
repl = "&#%s;" % data
return repl
ClientForm.unescape_charref = myunescape_charref
|
[
"The problem is caused by urls like this\nhttp://wow.zet/forum/index.php?showtopic=1197&pid=30419&st=0&#entry30419\n\nClientForm is looking for an integer after the &#\nIt is ok to have the # in the url, but it should be escaped in the html\nas &# means a character encoding\n"
] |
[
1
] |
[] |
[] |
[
"clientform",
"mechanize",
"python"
] |
stackoverflow_0001522125_clientform_mechanize_python.txt
|
Q:
map string position to line number in regex output
I'm working on a "grep-like" utility in Python for searching Oracle source code files. Coding standards have changed over time, so trying to find something like "all deletes from table a.foo" could span multiple lines, or not, depending on the age of that piece of code:
s = """-- multiline DDL statement
DELETE
a.foo f
WHERE
f.bar = 'XYZ';
DELETE a.foo f
WHERE f.bar = 'ABC';
DELETE a.foo WHERE bar = 'PDQ';
"""
import re
p = re.compile( r'\bDELETE\b.+?a\.foo', re.MULTILINE | re.DOTALL )
for m in re.finditer( p, s ):
print s[ m.start() : m.end() ]
This outputs:
DELETE
a.foo
DELETE a.foo
DELETE a.foo
What I want:
[2] DELETE
[3] a.foo
[7] DELETE a.foo
[10] DELETE a.foo
Is there a quick/simple/builtin way to map string indices to line numbers?
A:
lineno = s.count("\n",0,m.start())+1
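As a self-contained sketch (the sample string here is made up), plugging that into the finditer loop:
import re

s = "-- a comment line\nDELETE\n  a.foo f\nWHERE f.bar = 'XYZ';"
p = re.compile(r'\bDELETE\b.+?a\.foo', re.MULTILINE | re.DOTALL)

for m in re.finditer(p, s):
    lineno = s.count("\n", 0, m.start()) + 1   # newlines before the match start
    print "[%d] %s" % (lineno, s[m.start():m.end()])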
|
map string position to line number in regex output
|
I'm working on a "grep-like" utility in Python for searching Oracle source code files. Coding standards have changed over time, so trying to find something like "all deletes from table a.foo" could span multiple lines, or not, depending on the age of that piece of code:
s = """-- multiline DDL statement
DELETE
a.foo f
WHERE
f.bar = 'XYZ';
DELETE a.foo f
WHERE f.bar = 'ABC';
DELETE a.foo WHERE bar = 'PDQ';
"""
import re
p = re.compile( r'\bDELETE\b.+?a\.foo', re.MULTILINE | re.DOTALL )
for m in re.finditer( p, s ):
print s[ m.start() : m.end() ]
This outputs:
DELETE
a.foo
DELETE a.foo
DELETE a.foo
What I want:
[2] DELETE
[3] a.foo
[7] DELETE a.foo
[10] DELETE a.foo
Is there a quick/simple/builtin way to map string indices to line numbers?
|
[
"lineno = s.count(\"\\n\",0,m.start())+1\n\n"
] |
[
8
] |
[] |
[] |
[
"grep",
"multiline",
"python",
"regex"
] |
stackoverflow_0001522510_grep_multiline_python_regex.txt
|
Q:
Get partial stdout and stderr from Popen that runs indefinitely
Possible Duplicate:
Bypassing buffering of subprocess output with popen in C or Python
I'm building a wrapper around a server cmd line script that should run indefinitely.
What I need to do is to get the current stdout output without waiting for the subprocess to finish.
I mean, if I run the following, everything works fine:
ls = Popen(["ls"], stdout=PIPE)
output = ls.stdout.read()
But if I do the same with an indefinitely running program:
server = Popen(["python","-m","SimpleHTTPServer"], stdout=PIPE)
output = server.stdout.read()
It will not come back...
Update: Even
output = server.stdout.read(1)
hangs...
Do you know if there's a way to capture partial output from a Popen (or similar threading implementation) in an OS independent way?
|
Get partial stdout and stderr from Popen that runs indefinitely
|
Possible Duplicate:
Bypassing buffering of subprocess output with popen in C or Python
I'm building a wrapper around a server cmd line script that should run indefinitely.
What I need to do is to get the current stdout output without waiting for the subprocess to finish.
I mean, if I run the following, everything works fine:
ls = Popen(["ls"], stdout=PIPE)
output = ls.stdout.read()
But if I do the same with an indefinitely running program:
server = Popen(["python","-m","SimpleHTTPServer"], stdout=PIPE)
output = server.stdout.read()
It will not come back...
Update: Even
output = server.stdout.read(1)
hangs...
Do you know if there's a way to capture partial output from a Popen (or similar threading implementation) in an OS independent way?
|
[] |
[] |
[
"i would guess that read() returns the entire contents? If you read a fixed size chunk in a loop you may get better results.\n"
] |
[
-1
] |
[
"python",
"subprocess"
] |
stackoverflow_0001522542_python_subprocess.txt
|
Q:
Problem running a very simple Python program
Why is my program giving me an error here?
import random
TheNumber = random.randrange(1,200,1)
NotGuessed = True
Tries = 0
GuessedNumber = int(input("Take a guess at the magic number!: "))
while NotGuessed == True:
if GuessedNumber < TheNumber:
print("Your guess is a bit too low.")
Tries = Tries + 1
GuessedNumber = int(input("Take another guess at the magic number!: "))
if GuessedNumber > TheNumber:
print("Your guess is a bit too high!")
Tries = Tries + 1
GuessedNumber = int(input("Take another guess at the magic number!: "))
if GuessedNumber == TheNumber:
print("You've guess the number, and it only took you " + string(Tries) + "!")
The error is on the last line. What can I do?
Edit:
Also, why can't I use Tries++ here in Python? Isn't there an auto-increment operator?
Edit 2: Error is:
Traceback (most recent call last):
File "C:/Users/Sergio/Desktop/GuessingGame.py", line 21, in <module>
print("You've guess the number, and it only took you " + string(Tries) + "!")
NameError: name 'string' is not defined
A:
In your last line, replace string with str -- that should take care of the error python is complaining about, at least.
A:
It's str, not string. But your infinite loop is a bigger problem. Auto-increment is written like this:
Tries += 1
General comment: you could improve your code slightly:
the_number = random.randrange(1,200,1)
tries = 1
guessed_number = int(input("Take a guess at the magic number!: "))
while True:
if guessed_number < the_number:
print("Your guess is a bit too low.")
if guessed_number > the_number:
print("Your guess is a bit too high!")
if guessed_number == the_number:
break
else:
guessed_number = int(input("Take another guess at the magic number!: "))
tries += 1
print("You've guessed the number, and it only took you %d tries!" % tries)
|
Problem running a very simple Python program
|
Why is my program giving me an error here?
import random
TheNumber = random.randrange(1,200,1)
NotGuessed = True
Tries = 0
GuessedNumber = int(input("Take a guess at the magic number!: "))
while NotGuessed == True:
if GuessedNumber < TheNumber:
print("Your guess is a bit too low.")
Tries = Tries + 1
GuessedNumber = int(input("Take another guess at the magic number!: "))
if GuessedNumber > TheNumber:
print("Your guess is a bit too high!")
Tries = Tries + 1
GuessedNumber = int(input("Take another guess at the magic number!: "))
if GuessedNumber == TheNumber:
print("You've guess the number, and it only took you " + string(Tries) + "!")
The error is on the last line. What can I do?
Edit:
Also, why can't I use Tries++ here in Python? Isn't there an auto-increment operator?
Edit 2: Error is:
Traceback (most recent call last):
File "C:/Users/Sergio/Desktop/GuessingGame.py", line 21, in <module>
print("You've guess the number, and it only took you " + string(Tries) + "!")
NameError: name 'string' is not defined
|
[
"In your last line, replace string with str -- that should take care of the error python is complaining about, at least.\n",
"it's str, not string. but your infinite loop is a bigger problem. auto-increment is written like this:\nTries += 1\n\nGeneral comment: you could improve your code slightly:\nthe_number = random.randrange(1,200,1)\ntries = 1\n\nguessed_number = int(input(\"Take a guess at the magic number!: \")) \nwhile True:\n if guessed_number < the_number:\n print(\"Your guess is a bit too low.\")\n\n if guessed_number > the_number:\n print(\"Your guess is a bit too high!\")\n\n if guessed_number == the_number:\n break\n else:\n guessed_number = int(input(\"Take another guess at the magic number!: \"))\n tries += 1\n\nprint(\"You've guessed the number, and it only took you %d tries!\" % tries)\n\n"
] |
[
3,
2
] |
[] |
[] |
[
"ide",
"python",
"python_idle"
] |
stackoverflow_0001522708_ide_python_python_idle.txt
|
Q:
How do I get the module instance for a class in Python?
I am learning Python, and as always, I'm being ambitious with my starter projects. I am working on a plugin system for a community site toolkit for App Engine. My plugin superclass has a method called install_path. I would like to obtain the __path__ for the __module__ for self (which in this case will be the subclass). Problem is, __module__ returns a str rather than the module instance itself. eval() is unreliable and undesirable, so I need a good way of getting my hands on the actual module instance that doesn't involve evalling the str I get back from __module__.
A:
The sys.modules dict contains all imported modules, so you can use:
mod = sys.modules[__module__]
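Put back into the plugin-superclass context, install_path could be sketched roughly like this (it assumes packages expose __path__, while plain modules only expose __file__):
import os
import sys

class Plugin(object):
    def install_path(self):
        mod = sys.modules[self.__module__]      # the actual module object of the subclass
        if hasattr(mod, '__path__'):            # packages carry a __path__ list
            return mod.__path__[0]
        return os.path.dirname(mod.__file__)    # plain modules only have __file__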
A:
How about Importing modules
X = __import__(‘X’) works like import
X, with the difference that you 1)
pass the module name as a string, and
2) explicitly assign it to a variable
in your current namespace.
You could pass the module name (instead of 'X') and get the module instance back. Python will ensure that you don't import the same module twice, so you should get back the instance that you imported earlier.
A:
Alternatively, you can use the module's global __file__ variable if all you want is the module's path.
|
How do I get the module instance for a class in Python?
|
I am learning Python, and as always, I'm being ambitious with my starter projects. I am working on a plugin system for a community site toolkit for App Engine. My plugin superclass has a method called install_path. I would like to obtain the __path__ for the __module__ for self (which in this case will be the subclass). Problem is, __module__ returns a str rather than the module instance itself. eval() is unreliable and undesirable, so I need a good way of getting my hands on the actual module instance that doesn't involve evalling the str I get back from __module__.
|
[
"The sys.modules dict contains all imported modules, so you can use:\nmod = sys.modules[__module__]\n\n",
"How about Importing modules\n\nX = __import__(‘X’) works like import\n X, with the difference that you 1)\n pass the module name as a string, and\n 2) explicitly assign it to a variable\n in your current namespace.\n\nYou could pass the module name (instead of 'X') and get the module instance back. Python will ensure that you don't import the same module twice, so you should get back the instance that you imported earlier.\n",
"Alternatively, you can use the module's global __file__ variable if all you want is the module's path.\n"
] |
[
14,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001522651_python.txt
|
Q:
how to apply "catch-all" exception clause to complex python web-scraping script?
I've got a list of 100 websites in CSV format. All of the sites have the same general format, including a large table with 7 columns. I wrote this script to extract the data from the 7th column of each of the websites and then write this data to file. The script below partially works, however: opening the output file (after running the script) shows that something is being skipped because it only shows 98 writes (clearly the script also registers a number of exceptions). Guidance on how to implement a "catching exception" in this context would be much appreciated. Thank you!
import csv, urllib2, re
def replace(variab): return variab.replace(",", " ")
urls = csv.reader(open('input100.txt', 'rb')) #access list of 100 URLs
for url in urls:
html = urllib2.urlopen(url[0]).read() #get HTML starting with the first URL
col7 = re.findall('td7.*?td', html) #use regex to get data from column 7
string = str(col7) #stringify data
neat = re.findall('div3.*?div', string) #use regex to get target text
result = map(replace, neat) #apply function to remove','s from elements
string2 = ", ".join(result) #separate list elements with ', ' for export to csv
output = open('output.csv', 'ab') #open file for writing
output.write(string2 + '\n') #append output to file and create new line
output.close()
Return:
Traceback (most recent call last):
File "C:\Python26\supertest3.py", line 6, in <module>
html = urllib2.urlopen(url[0]).read()
File "C:\Python26\lib\urllib2.py", line 124, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python26\lib\urllib2.py", line 383, in open
response = self._open(req, data)
File "C:\Python26\lib\urllib2.py", line 401, in _open
'_open', req)
File "C:\Python26\lib\urllib2.py", line 361, in _call_chain
result = func(*args)
File "C:\Python26\lib\urllib2.py", line 1130, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "C:\Python26\lib\urllib2.py", line 1103, in do_open
r = h.getresponse()
File "C:\Python26\lib\httplib.py", line 950, in getresponse
response.begin()
File "C:\Python26\lib\httplib.py", line 390, in begin
version, status, reason = self._read_status()
File "C:\Python26\lib\httplib.py", line 354, in _read_status
raise BadStatusLine(line)
BadStatusLine
>>>>
A:
Make the body of your for loop into:
for url in urls:
try:
...the body you have now...
except Exception, e:
print>>sys.stderr, "Url %r not processed: error (%s) % (url, e)
(Or, use logging.error instead of the goofy print>>, if you're already using the logging module of the standard library [and you should;-)]).
A:
I'd recommend reading the Errors and Exceptions Python documentation, especially section 8.3 -- Handling Exceptions.
|
how to apply "catch-all" exception clause to complex python web-scraping script?
|
I've got a list of 100 websites in CSV format. All of the sites have the same general format, including a large table with 7 columns. I wrote this script to extract the data from the 7th column of each of the websites and then write this data to file. The script below partially works, however: opening the output file (after running the script) shows that something is being skipped because it only shows 98 writes (clearly the script also registers a number of exceptions). Guidance on how to implement a "catching exception" in this context would be much appreciated. Thank you!
import csv, urllib2, re
def replace(variab): return variab.replace(",", " ")
urls = csv.reader(open('input100.txt', 'rb')) #access list of 100 URLs
for url in urls:
html = urllib2.urlopen(url[0]).read() #get HTML starting with the first URL
col7 = re.findall('td7.*?td', html) #use regex to get data from column 7
string = str(col7) #stringify data
neat = re.findall('div3.*?div', string) #use regex to get target text
result = map(replace, neat) #apply function to remove','s from elements
string2 = ", ".join(result) #separate list elements with ', ' for export to csv
output = open('output.csv', 'ab') #open file for writing
output.write(string2 + '\n') #append output to file and create new line
output.close()
Return:
Traceback (most recent call last):
File "C:\Python26\supertest3.py", line 6, in <module>
html = urllib2.urlopen(url[0]).read()
File "C:\Python26\lib\urllib2.py", line 124, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python26\lib\urllib2.py", line 383, in open
response = self._open(req, data)
File "C:\Python26\lib\urllib2.py", line 401, in _open
'_open', req)
File "C:\Python26\lib\urllib2.py", line 361, in _call_chain
result = func(*args)
File "C:\Python26\lib\urllib2.py", line 1130, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "C:\Python26\lib\urllib2.py", line 1103, in do_open
r = h.getresponse()
File "C:\Python26\lib\httplib.py", line 950, in getresponse
response.begin()
File "C:\Python26\lib\httplib.py", line 390, in begin
version, status, reason = self._read_status()
File "C:\Python26\lib\httplib.py", line 354, in _read_status
raise BadStatusLine(line)
BadStatusLine
>>>>
|
[
"Make the body of your for loop into:\nfor url in urls:\n try:\n ...the body you have now...\n except Exception, e:\n print>>sys.stderr, \"Url %r not processed: error (%s) % (url, e)\n\n(Or, use logging.error instead of the goofy print>>, if you're already using the logging module of the standard library [and you should;-)]).\n",
"I'd recommend reading the Errors and Exceptions Python documentation, especially section 8.3 -- Handling Exceptions.\n"
] |
[
2,
1
] |
[] |
[] |
[
"exception_handling",
"list",
"loops",
"python",
"urllib2"
] |
stackoverflow_0001522823_exception_handling_list_loops_python_urllib2.txt
|
Q:
Python: List comprehension to assign different values
I'm making a 2D list and I would like to initialize it with a list comprehension. I would like it to do something like this:
[[x for i in range(3) if j <= 1: x=1 else x=2] for j in range(3)]
so it should return something like:
[[1,1,1],
[1,1,1],
[2,2,2]]
How might I go about doing this?
Thanks for your help.
A:
It appears as though you're looking for something like this:
[[1 if j <= 1 else 2 for i in range(3)] for j in range(3)]
The Python conditional expression is a bit different from what you might be used to if you're coming from something like C or Java:
The expression x if C else y first evaluates C (not x); if C is true, x is evaluated and its value is returned; otherwise, y is evaluated and its value is returned.
A slightly shorter way to do the same thing is:
[[1 if j <= 1 else 2]*3 for j in range(3)]
A:
Greg's response is correct, though a much simpler and faster expression to produce your desired result would be
[[j] * 3 for j in (1, 1, 2)]
i.e., remember that for need not apply to a range only;-), list-multiplication exists, and so on;-).
A:
Try this:
>>> [[(1 if j<=1 else 2) for i in range(3)] for j in range(3)]
[[1, 1, 1], [1, 1, 1], [2, 2, 2]]
The third time through j=2, so j<=1 fails and that row gets 2s.
|
Python: List comprehension to assign different values
|
I'm making a 2D list and I would like to initialize it with a list comprehension. I would like it to do something like this:
[[x for i in range(3) if j <= 1: x=1 else x=2] for j in range(3)]
so it should return something like:
[[1,1,1],
[1,1,1],
[2,2,2]]
How might I go about doing this?
Thanks for your help.
|
[
"It appears as though you're looking for something like this:\n[[1 if j <= 1 else 2 for i in range(3)] for j in range(3)]\n\nThe Python conditional expression is a bit different from what you might be used to if you're coming from something like C or Java:\n\nThe expression x if C else y first evaluates C (not x); if C is true, x is evaluated and its value is returned; otherwise, y is evaluated and its value is returned.\n\nA slightly shorter way to do the same thing is:\n[[1 if j <= 1 else 2]*3 for j in range(3)]\n\n",
"Greg's response is correct, though a much simpler and faster expression to produce your desired result would be\n[[j] * 3 for j in (1, 1, 2)]\n\ni.e., remember that for need not apply to a range only;-), list-multiplication exists, and so on;-).\n",
"Try that\n>>> [[(1 if j<1 else 2) for i in range(3)] for j in range(3)]\n[[1, 1, 1], [2, 2, 2], [2, 2, 2]]\n\nThe second time j=1 so j<1 fails \n"
] |
[
14,
14,
1
] |
[] |
[] |
[
"list_comprehension",
"python"
] |
stackoverflow_0001522960_list_comprehension_python.txt
|
Q:
python: how do you setup your workspace on Ubuntu?
Let's say I have my workspace (in Eclipse) where I develop my Python modules and I would like to "link" my working files to the system Python paths. I know I can drop .pth files etc., but I would like to get the community's wisdom as to the best practices.
A:
One thing you can try is creating a virtual environment and then pointing pydev at the interpreter inside the virtual environment.
$ virtualenv --no-site-packages myProject
$ cd myProject
$ source bin/activate
(myproject)$
at that point you have a python interpreter that will reference libraries in ~/myProject/lib/python2.x/site-packages
So in pydev in your workspace select ~/myProject/bin/python as your python interpreter. In this way you aren't infecting your system install of python, won't need root permissions to install stuff etc....
Speaking of which, virtualenv sets up an "easy_install" bin so you can install whatever libs you need, again without infecting your system python install.
(myproject)$easy_install sqlalchemy paste pylons ipython sphinx
#...download to win...
And if you do install paste, you can create package templates rather than doing it by hand like so...
(myproject)$ paster create mynewlib
#...do stuff to win...
(myproject)$ cd mynewlib
(myproject)$ python setup.py develop
#...puts links in your virtualenv site-packages but does not move the source
(myproject)$ <start hacking>
Check out this screencast series on ShowMeDo, it helped me A LOT
Hope that helps.
|
python: how do you setup your workspace on Ubuntu?
|
Let's say I have my workspace (in Eclipse) where I develop my Python modules and I would like to "link" my working files to the system Python paths. I know I can drop .pth files etc., but I would like to get the community's wisdom as to the best practices.
|
[
"One thing you can try is creating a virtual environment and then pointing pydev at the interpreter inside the virtual environment.\n$ virtualenv --no-site-packages myProject\n$ cd myProject\n$ source bin/activate\n(myproject)$\n\nat that point you have a python interpreter that will reference libraries in ~/myProject/lib/python2.x/site-packages\nSo in pydev in your workspace select ~/myProject/bin/python as your python interpreter. In this way you aren't infecting your system install of python, won't need root permissions to install stuff etc....\nSpeaking of which, virtualenv sets up an \"easy_install\" bin so you can install whatever libs you need, again without infecting your system python install.\n(myproject)$easy_install sqlalchemy paste pylons ipython sphinx\n#...download to win...\n\nAnd if you do install paste, you can create package templates rather than doing it by hand like so...\n(myproject)$ paster create mynewlib\n#...do stuff to win...\n(myproject)$ cd mynewlib\n(myproject)$ python setup.py develop\n#...puts links in your virtualenv site-packages but does not move the source\n(myproject)$ <start hacking>\n\nCheck out this screencast series on ShowMeDo, it helped me A LOT\nHope that helps.\n"
] |
[
1
] |
[] |
[] |
[
"eclipse",
"python"
] |
stackoverflow_0001522867_eclipse_python.txt
|
Q:
Could someone recommend video tutorial websites for beginners?
I'm not sure what it is about me but I seem to learn and retain information better through a classroom setting where what's being shown is explained clearly and easy to understand examples are presented. I rarely do my own reading or research, but I do occasionally stumble upon some neat things. Maybe I'm just used to the classroom setting from all the years of the education process or it could just be the lazy man in me.
In any case, if anyone could recommend some video tutorial sites, particularly for beginners, that would be great.
I am particularly interested in the following...
Web 2.0 (AJAX, XML, DHTML, Javascript, CSS, etc)
Python
Of course, if anyone knows some sort of wide-range, general site for tutorials of all kinds to help programmers out there, that would be great too.
Thank you.
PS - For the purposes of my software development needs I've decided to give Eclipse a try as it seems to be one of the most widely used IDEs in the industry.
A:
MIT has a great Intro to Computer Science course using Python.
MIT 6.00 Introduction to Computer Science and Programming
A:
For Django (Python MVC framework) try here
For CSS try here
For jQuery try here
For DHTML try here
My advice: don't go for Eclipse if you are a beginner; use a text editor. Eclipse's features can be overwhelming for beginners.
A:
ajaxprojects, learning Python through videos, and the Python section on ShowMeDo, a video tutorial site where many other topics are covered as well.
A:
http://pycon.blip.tv/
A:
ShowMeDo (Python) has plenty of screencasts for the whole gamut of experience levels.
|
Could someone recommend video tutorial websites for beginners?
|
I'm not sure what it is about me but I seem to learn and retain information better through a classroom setting where what's being shown is explained clearly and easy to understand examples are presented. I rarely do my own reading or research, but I do occasionally stumble upon some neat things. Maybe I'm just used to the classroom setting from all the years of the education process or it could just be the lazy man in me.
In any case, if anyone could recommend some video tutorial sites, particularly for beginners, that would be great.
I am particularly interested in the following...
Web 2.0 (AJAX, XML, DHTML, Javascript, CSS, etc)
Python
Of course, if anyone knows some sort of wide-range, general site for tutorials of all kinds to help programmers out there, that would be great too.
Thank you.
PS - For the purposes of my software development needs I've decided to give Eclipse a try as it seems to be one of the most widely used IDEs in the industry.
|
[
"MIT has a great Intro to Computer Science course using Python.\nMIT 6.00 Introduction to Computer Science and Programming\n",
"For Django (Python MVC framework) try here\nFor CSS try here\nFor jQuery try here\nFor DHTML try here\nMy advice don't go for eclipse if oyu are beginner use a texteditor. Eclispes features can be overwhelming for beginners.\n",
"ajaxprojects, learning python through videos, python link on showmedo, a video tutorial site where you can find many other topics being covered as well.\n",
"http://pycon.blip.tv/\n",
"ShowMeDo(Python) has plenty of screenscasts for the whole gamut of experience levels. \n"
] |
[
2,
1,
0,
0,
0
] |
[] |
[] |
[
"ajax",
"python",
"video"
] |
stackoverflow_0001522237_ajax_python_video.txt
|
Q:
How do I do this in Python? List to function
def getStuff(x):
return 'stuff'+x
def getData(x):
return 'data'+x
thefunctions = []
thefunctions.append("getStuff")
thefunctions.append("getData")
for i in thefunctions:
print i('abc')
Is this possible? Thank you.
A:
thefunctions = [ getStuff, getData ]
for f in thefunctions:
print f('shazam')
Once you've done a def statement, you've associated a name with a function. Just use that name to refer to the function.
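If the names really must start out as strings (say, read from a config file), one sketch is to look them up in the module namespace with globals():
def getStuff(x):
    return 'stuff' + x

def getData(x):
    return 'data' + x

names = ["getStuff", "getData"]
for name in names:
    func = globals()[name]   # map the string back onto the function object
    print func('abc')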
|
How do I do this in Python? List to function
|
def getStuff(x):
return 'stuff'+x
def getData(x):
return 'data'+x
thefunctions = []
thefunctions.append("getStuff")
thefunctions.append("getData")
for i in thefunctions:
print i('abc')
Is this possible? Thank you.
|
[
"thefunctions = [ getStuff, getData ]\nfor f in thefunctions:\n print f('shazam')\n\nOnce you've done a def statement, you've associated a name with a function. Just use that name to refer to the function.\n"
] |
[
12
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001523348_python.txt
|
Q:
python sax: is there a way to halt the parsing from inside a content handler?
Is there a way to halt the parsing from inside a content handler? Or is throwing an exception the only way?
Note that I am using xml.sax.parseString.
A:
The complete API for Python's SAX content handlers is documented here: as you can see, the information flow is entirely one-way, parser to handler -- no way for the handler to supply info back to the parser (such as whether the parse should be terminated).
Therefore, as you had surmised and the commenters confirmed, "control-flow exceptions" are indeed the only way to achieve such a "premature termination". As the commenters mention, it's not too bad, after all.
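A minimal sketch of that control-flow-exception idiom (the exception and handler names here are invented):
import xml.sax

class StopParsing(Exception):
    pass

class FirstItemOnly(xml.sax.ContentHandler):
    def startElement(self, name, attrs):
        if name == 'item':
            raise StopParsing()       # bail out as soon as we've seen enough

try:
    xml.sax.parseString("<root><item/><item/></root>", FirstItemOnly())
except StopParsing:
    pass                              # expected: we terminated the parse ourselves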
|
python sax: is there a way to halt the parsing from inside a content handler?
|
Is there a way to halt the parsing from inside a content handler? Or is throwing an exception the only way?
Note that I am using xml.sax.parseString.
|
[
"The complete API for Python's SAX content handlers is documented here: as you can see, the information flow is entirely one-way, parser to handler -- no way for the handler to supply info back to the parser (such as whether the parse should be terminated).\nTherefore, as you had surmised and the commenters confirmed, \"control-flow exceptions\" are indeed the only way to achieve such a \"premature termination\". As the commenters mention, it's not too bad, after all.\n"
] |
[
4
] |
[] |
[] |
[
"python",
"sax"
] |
stackoverflow_0001523225_python_sax.txt
|
Q:
Put bar at the end of every line that includes foo
I have a list with a large number of lines, each taking the subject-verb-object form, eg:
Jane likes Fred
Chris dislikes Joe
Nate knows Jill
To plot a network graph that expresses the different relationships between the nodes in directed color-coded edges, I will need to replace the verb with an arrow and place a color code at the end of each line, thus, somewhat simplified:
Jane -> Fred red;
Chris -> Joe blue;
Nate -> Jill black;
There's only a small number of verbs, so replacing them with an arrow is just a matter of a few search and replace commands. Before doing that, however, I will need to put a color code at the end of every line that corresponds to the line's verb. I'd like to do this using Python.
These are my baby steps in programming, so please be explicit and include the code that reads in the text file.
Thanks for your help!
A:
It sounds like you will want to research dictionaries and string formatting. In general, if you need help programming, just break down any problem you have into extremely small, discrete chunks, search those chunks independently, and then you should be able to formulate it all into a larger answer. Stack Overflow is a great resource for this type of searching.
Also, if you have any general curiosities about Python, search or browse the official Python documentation. If you find yourself constantly not knowing where to begin, read the Python tutorial or find a book to go through. A week or two investment to get a good foundational knowledge of what you are doing will pay off over and over again as you complete work.
verb_color_map = {
'likes': 'red',
'dislikes': 'blue',
'knows': 'black',
}
with open('infile.txt') as infile: # assuming you've stored your data in 'infile.txt'
for line in infile:
# Python uses the name object, so I use object_
subject, verb, object_ = line.split()
print "%s -> %s %s;" % (subject, object_, verb_color_map[verb])
A:
Simple enough; assuming the lists of verbs is fixed and small, this is easy to do with a dictionary and for loop:
VERBS = {
"likes": "red"
, "dislikes": "blue"
, "knows": "black"
}
def replace_verb (line):
for verb, color in VERBS.items():
if verb in line:
return "%s %s;" % (
line.replace (verb, "->")
, color
)
return line
def main ():
filename = "my_file.txt"
with open (filename, "r") as fp:
for line in fp:
print replace_verb (line)
# Allow the module to be executed directly on the command line
if __name__ == "__main__":
main ()
A:
verbs = {"dislikes":"blue", "knows":"black", "likes":"red"}
for s in open("/tmp/infile"):
s = s.strip()
for verb in verbs.keys():
if (s.count(verb) > 0):
print s.replace(verb,"->")+" "+verbs[verb]+";"
break
Edit: Rather use "for s in open"
A:
Are you sure this isn't a little homeworky :) If so, it's okay to fess up. Without going into too much detail, think about the tasks you're trying to do:
For each line:
read it
split it into words (on whitespace - .split() )
convert the middle word into a color (based on a mapping -> cf: python dict()
print the first word, arrow, third word and the color
Code using NetworkX (networkx.lanl.gov/)
'''
plot relationships in a social network
'''
import networkx
## make a fake file 'ex.txt' in this directory
## then write fake relationships to it.
example_relationships = file('ex.txt','w')
print >> example_relationships, '''\
Jane Doe likes Fred
Chris dislikes Joe
Nate knows Jill \
'''
example_relationships.close()
rel_colors = {
'likes': 'blue',
'dislikes' : 'black',
'knows' : 'green',
}
def split_on_verb(sentence):
''' we know the verb is the only lower cased word
>>> split_on_verb("Jane Doe likes Fred")
    ('Jane Doe', 'Fred', 'likes')
'''
words = sentence.strip().split() # take off any outside whitespace, then split
# on whitespace
if not words:
return None # if there aren't any words, just return nothing
verbs = [x for x in words if x.islower()]
verb = verbs[0] # we want the '1st' one (python numbers from 0,1,2...)
verb_index = words.index(verb) # where is the verb?
subject = ' '.join(words[:verb_index])
obj = ' '.join(words[(verb_index+1):]) # 'object' is already used in python
return (subject, obj, verb)
def graph_from_relationships(fh,color_dict):
'''
fh: a filehandle, i.e., an opened file, from which we can read lines
and loop over
'''
G = networkx.DiGraph()
for line in fh:
if not line.strip(): continue # move on to the next line,
# if our line is empty-ish
(subj,obj,verb) = split_on_verb(line)
color = color_dict[verb]
# cf: python 'string templates', there are other solutions here
# this is the
print "'%s' -> '%s' [color='%s'];" % (subj,obj,color)
G.add_edge(subj,obj,color)
#
return G
G = graph_from_relationships(file('ex.txt'),rel_colors)
print G.edges()
# from here you can use the various networkx plotting tools on G, as you're inclined.
A:
Python 2.5:
import sys
from collections import defaultdict
codes = defaultdict(lambda: ("---", "Missing action!"))
codes["likes"] = ("-->", "red")
codes["dislikes"] = ("-/>", "green")
codes["loves"] = ("==>", "blue")
for line in sys.stdin:
subject, verb, object_ = line.strip().split(" ")
arrow, color = codes[verb]
print subject, arrow, object_, color, ";"
A:
In addition to the question, Karasu also said (in a comment on one answer): "In the actual input both subjects and objects vary unpredictably between one and two words."
Okay, here's how I would solve this.
color_map = \
{
"likes" : "red",
"dislikes" : "blue",
"knows" : "black",
}
def is_verb(word):
return word in color_map
def make_noun(lst):
if not lst:
return "--NONE--"
elif len(lst) == 1:
return lst[0]
else:
return "_".join(lst)
for line in open("filename").readlines():
words = line.split()
# subject could be one or two words
if is_verb(words[1]):
# subject was one word
s = words[0]
v = words[1]
o = make_noun(words[2:])
else:
# subject was two words
assert is_verb(words[2])
s = make_noun(words[0:2])
v = words[2]
o = make_noun(words[3:])
color = color_map[v]
print "%s -> %s %s;" % (s, o, color)
Some notes:
0) We don't really need "with" for this problem, and writing it this way makes the program more portable to older versions of Python. This should work on Python 2.2 and newer, I think (I only tested on Python 2.6).
1) You can change make_noun() to have whatever strategy you deem useful for handling multiple words. I showed just chaining them together with underscores, but you could have a dictionary with adjectives and throw those out, have a dictionary of nouns and choose those, or whatever.
2) You could also use regular expressions for fuzzier matching. Instead of simply using a dictionary for color_map you could have a list of tuples, with a regular expression paired with the replacement color, and then when the regular expression matches, replace the color.
A:
Here is an improved version of my previous answer. This one uses regular expression matching to make a fuzzy match on the verb. These all work:
Steve loves Denise
Bears love honey
Maria interested Anders
Maria interests Anders
The regular expression pattern "loves?" matches "love" plus an optional 's'. The pattern "interest.*" matches "interest" plus anything. Patterns with multiple alternatives separated by vertical bars match if any one of the alternatives matches.
import re
re_map = \
[
("likes?|loves?|interest.*", "red"),
("dislikes?|hates?", "blue"),
("knows?|tolerates?|ignores?", "black"),
]
# compile the regular expressions one time, then use many times
pat_map = [(re.compile(s), color) for s, color in re_map]
# We dont use is_verb() in this version, but here it is.
# A word is a verb if any of the patterns match.
def is_verb(word):
return any(pat.match(word) for pat, color in pat_map)
# Return color from matched verb, or None if no match.
# This detects whether a word is a verb, and looks up the color, at the same time.
def color_from_verb(word):
for pat, color in pat_map:
if pat.match(word):
return color
return None
def make_noun(lst):
if not lst:
return "--NONE--"
elif len(lst) == 1:
return lst[0]
else:
return "_".join(lst)
for line in open("filename"):
words = line.split()
# subject could be one or two words
color = color_from_verb(words[1])
if color:
# subject was one word
s = words[0]
o = make_noun(words[2:])
else:
# subject was two words
        color = color_from_verb(words[2])   # the verb is the third word when the subject is two words
assert color
s = make_noun(words[0:2])
o = make_noun(words[3:])
print "%s -> %s %s;" % (s, o, color)
I hope it is clear how to take this answer and extend it. You can easily add more patterns to match more verbs. You could add logic to detect "is" and "in" and discard them, so that "Anders is interested in Maria" would match. And so on.
If you have any questions, I'd be happy to explain this further. Good luck.
|
Put bar at the end of every line that includes foo
|
I have a list with a large number of lines, each taking the subject-verb-object form, eg:
Jane likes Fred
Chris dislikes Joe
Nate knows Jill
To plot a network graph that expresses the different relationships between the nodes in directed color-coded edges, I will need to replace the verb with an arrow and place a color code at the end of each line, thus, somewhat simplified:
Jane -> Fred red;
Chris -> Joe blue;
Nate -> Jill black;
There's only a small number of verbs, so replacing them with an arrow is just a matter of a few search and replace commands. Before doing that, however, I will need to put a color code at the end of every line that corresponds to the line's verb. I'd like to do this using Python.
These are my baby steps in programming, so please be explicit and include the code that reads in the text file.
Thanks for your help!
|
[
"It sounds like you will want to research dictionaries and string formatting. In general, if you need help programming, just break down any problem you have into extremely small, discrete chunks, search those chunks independently, and then you should be able to formulate it all into a larger answer. Stack Overflow is a great resource for this type of searching.\nAlso, if you have any general curiosities about Python, search or browse the official Python documentation. If you find yourself constantly not knowing where to begin, read the Python tutorial or find a book to go through. A week or two investment to get a good foundational knowledge of what you are doing will pay off over and over again as you complete work.\nverb_color_map = {\n 'likes': 'red',\n 'dislikes': 'blue',\n 'knows': 'black',\n}\n\nwith open('infile.txt') as infile: # assuming you've stored your data in 'infile.txt'\n for line in infile:\n # Python uses the name object, so I use object_\n subject, verb, object_ = line.split()\n print \"%s -> %s %s;\" % (subject, object_, verb_color_map[verb])\n\n",
"Simple enough; assuming the lists of verbs is fixed and small, this is easy to do with a dictionary and for loop:\nVERBS = {\n \"likes\": \"red\"\n , \"dislikes\": \"blue\"\n , \"knows\": \"black\"\n }\n\ndef replace_verb (line):\n for verb, color in VERBS.items():\n if verb in line:\n return \"%s %s;\" % (\n line.replace (verb, \"->\")\n , color\n )\n return line\n\ndef main ():\n filename = \"my_file.txt\"\n with open (filename, \"r\") as fp:\n for line in fp:\n print replace_verb (line)\n\n# Allow the module to be executed directly on the command line\nif __name__ == \"__main__\":\n main ()\n\n",
"verbs = {\"dislikes\":\"blue\", \"knows\":\"black\", \"likes\":\"red\"}\nfor s in open(\"/tmp/infile\"):\n s = s.strip()\n for verb in verbs.keys():\n if (s.count(verb) > 0):\n print s.replace(verb,\"->\")+\" \"+verbs[verb]+\";\"\n break\n\nEdit: Rather use \"for s in open\"\n",
"Are you sure this isn't a little homeworky :) If so, it's okay to fess up. Without going into too much detail, think about the tasks you're trying to do:\nFor each line:\n\nread it\nsplit it into words (on whitespace - .split() )\nconvert the middle word into a color (based on a mapping -> cf: python dict()\nprint the first word, arrow, third word and the color\n\nCode using NetworkX (networkx.lanl.gov/)\n'''\nplot relationships in a social network\n'''\n\nimport networkx\n## make a fake file 'ex.txt' in this directory\n## then write fake relationships to it.\nexample_relationships = file('ex.txt','w') \nprint >> example_relationships, '''\\\nJane Doe likes Fred\nChris dislikes Joe\nNate knows Jill \\\n'''\nexample_relationships.close()\n\nrel_colors = {\n 'likes': 'blue',\n 'dislikes' : 'black',\n 'knows' : 'green',\n}\n\ndef split_on_verb(sentence):\n ''' we know the verb is the only lower cased word\n\n >>> split_on_verb(\"Jane Doe likes Fred\")\n ('Jane Does','Fred','likes')\n\n '''\n words = sentence.strip().split() # take off any outside whitespace, then split\n # on whitespace\n if not words:\n return None # if there aren't any words, just return nothing\n\n verbs = [x for x in words if x.islower()]\n verb = verbs[0] # we want the '1st' one (python numbers from 0,1,2...)\n verb_index = words.index(verb) # where is the verb?\n subject = ' '.join(words[:verb_index])\n obj = ' '.join(words[(verb_index+1):]) # 'object' is already used in python\n return (subject, obj, verb)\n\n\ndef graph_from_relationships(fh,color_dict):\n '''\n fh: a filehandle, i.e., an opened file, from which we can read lines\n and loop over\n '''\n G = networkx.DiGraph()\n\n for line in fh:\n if not line.strip(): continue # move on to the next line,\n # if our line is empty-ish\n (subj,obj,verb) = split_on_verb(line)\n color = color_dict[verb]\n # cf: python 'string templates', there are other solutions here\n # this is the \n print \"'%s' -> '%s' [color='%s'];\" % (subj,obj,color)\n G.add_edge(subj,obj,color)\n # \n\n return G\n\nG = graph_from_relationships(file('ex.txt'),rel_colors)\nprint G.edges()\n# from here you can use the various networkx plotting tools on G, as you're inclined.\n\n",
"Python 2.5:\nimport sys\nfrom collections import defaultdict\n\ncodes = defaultdict(lambda: (\"---\", \"Missing action!\"))\ncodes[\"likes\"] = (\"-->\", \"red\")\ncodes[\"dislikes\"] = (\"-/>\", \"green\")\ncodes[\"loves\"] = (\"==>\", \"blue\")\n\nfor line in sys.stdin:\n subject, verb, object_ = line.strip().split(\" \")\n arrow, color = codes[verb]\n print subject, arrow, object_, color, \";\"\n\n",
"In addition to the question, Karasu also said (in a comment on one answer): \"In the actual input both subjects and objects vary unpredictably between one and two words.\"\nOkay, here's how I would solve this.\ncolor_map = \\\n{\n \"likes\" : \"red\",\n \"dislikes\" : \"blue\",\n \"knows\" : \"black\",\n}\n\ndef is_verb(word):\n return word in color_map\n\ndef make_noun(lst):\n if not lst:\n return \"--NONE--\"\n elif len(lst) == 1:\n return lst[0]\n else:\n return \"_\".join(lst)\n\n\nfor line in open(\"filename\").readlines():\n words = line.split()\n # subject could be one or two words\n if is_verb(words[1]):\n # subject was one word\n s = words[0]\n v = words[1]\n o = make_noun(words[2:])\n else:\n # subject was two words\n assert is_verb(words[2])\n s = make_noun(words[0:2])\n v = words[2]\n o = make_noun(words[3:])\n color = color_map[v]\n print \"%s -> %s %s;\" % (s, o, color)\n\nSome notes:\n0) We don't really need \"with\" for this problem, and writing it this way makes the program more portable to older versions of Python. This should work on Python 2.2 and newer, I think (I only tested on Python 2.6).\n1) You can change make_noun() to have whatever strategy you deem useful for handling multiple words. I showed just chaining them together with underscores, but you could have a dictionary with adjectives and throw those out, have a dictionary of nouns and choose those, or whatever.\n2) You could also use regular expressions for fuzzier matching. Instead of simply using a dictionary for color_map you could have a list of tuples, with a regular expression paired with the replacement color, and then when the regular expression matches, replace the color.\n",
"Here is an improved version of my previous answer. This one uses regular expression matching to make a fuzzy match on the verb. These all work:\nSteve loves Denise\nBears love honey\nMaria interested Anders\nMaria interests Anders\n\nThe regular expression pattern \"loves?\" matches \"love\" plus an optional 's'. The pattern \"interest.*\" matches \"interest\" plus anything. Patterns with multiple alternatives separated by vertical bars match if any one of the alternatives matches.\nimport re\n\nre_map = \\\n[\n (\"likes?|loves?|interest.*\", \"red\"),\n (\"dislikes?|hates?\", \"blue\"),\n (\"knows?|tolerates?|ignores?\", \"black\"),\n]\n\n# compile the regular expressions one time, then use many times\npat_map = [(re.compile(s), color) for s, color in re_map]\n\n# We dont use is_verb() in this version, but here it is.\n# A word is a verb if any of the patterns match.\ndef is_verb(word):\n return any(pat.match(word) for pat, color in pat_map)\n\n# Return color from matched verb, or None if no match.\n# This detects whether a word is a verb, and looks up the color, at the same time.\ndef color_from_verb(word):\n for pat, color in pat_map:\n if pat.match(word):\n return color\n return None\n\ndef make_noun(lst):\n if not lst:\n return \"--NONE--\"\n elif len(lst) == 1:\n return lst[0]\n else:\n return \"_\".join(lst)\n\n\nfor line in open(\"filename\"):\n words = line.split()\n # subject could be one or two words\n color = color_from_verb(words[1])\n if color:\n # subject was one word\n s = words[0]\n o = make_noun(words[2:])\n else:\n # subject was two words\n color = color_from_verb(words[1])\n assert color\n s = make_noun(words[0:2])\n o = make_noun(words[3:])\n print \"%s -> %s %s;\" % (s, o, color)\n\nI hope it is clear how to take this answer and extend it. You can easily add more patterns to match more verbs. You could add logic to detect \"is\" and \"in\" and discard them, so that \"Anders is interested in Maria\" would match. And so on.\nIf you have any questions, I'd be happy to explain this further. Good luck.\n"
] |
[
5,
3,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"python",
"text_processing"
] |
stackoverflow_0001522386_python_text_processing.txt
|
Q:
Python module search path problem
I am trying to work on a dev environment but am finding problems in that Python seems to be using modules from the site-packages directory. I want it to be using the modules from my dev directory.
sys.path returns a bunch of dirs, like this
['', '/usr/lib/python26.zip', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/site-packages' etc
This is good, it's using the current directory as the first place of lookup (at least this is how I understand it to be).
Ok now if I create say a file called command.py in the current directory, things work as I would expect them.
>>> import commands
>>> commands.__file__
'commands.pyc'
I then exit out of the python shell, and start another one. I then do this.
>>> import foo.bar.commands
Now, what I'm expecting it to do is go down from the current directory to ./foo/bar/ and get me the commands module from there. What I get though is this
>>> foo.bar.commands.__file__
'/usr/lib/python2.6/site-packages/foo/bar/commands.pyc'
Even though from my current directory there is a ./foo/bar/commands.py
Using imp.find_module() and imp.load_module() I can load the local module properly. What's actually interesting (although I don't really know what it means) is the last line that is printed out in this sequence
>>> import foo.bar.commands
>>> foo.bar.commands.__file__
'/usr/lib/python2.6/site-packages/foo/bar/commands.pyc'
>>> foo.bar.__file__
'/usr/lib/python2.6/site-packages/foo/bar/__int__.pyc'
>>> foo.__file__
'./foo/__init__.pyc'
So if it can find the foo/__init__.pyc in the local dir, why can't it find the other files in the local dir?
Cheers
A:
You mention that there's a foo directory under your current directory, but you don't tell us whether foo/__init__.py exists (even possibly empty): if it doesn't, this tells Python that foo is not a package. Similarly for foo/bar/__init__.py -- if that file doesn't exist, even if foo/__init__.py does, then foo.bar is not a package.
You can play around a little by placing .pth files and/or setting __path__ explicitly in your packages, but the basic, simple rule is to just place an __init__.py in every directory that you want Python to recognize as a package. The contents of that file are "the body" of the package itself, so if you import foo and foo is a directory with a foo/__init__.py file, then that's what you're importing (in any case, the package's body executes the first time you import anything from the package or any subpackage thereof).
If that is not the problem, it looks like some other import (or explicit sys.path manipulation) may be messing you up. Running python with a -v flag makes imports highly visible, which can help. Another good technique is to place an
import pdb; pdb.set_trace()
just before the import that you think is misbehaving, and examining sys.path, sys.modules (and possibly other advanced structures such as import hooks) at that point - is there a sys.modules['foo'] already defined, for example? Interactively trying the functions from standard library module imp that locate modules on your behalf given a path may also prove instructive.
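For example, a quick interactive check along these lines (a sketch assuming the foo/bar layout from the question) shows both what is already cached and where a fresh lookup would land:

import imp, sys

# Is some other 'foo' already cached from an earlier import?
print sys.modules.get('foo')

# Where would a fresh lookup find the package, and the submodule inside it?
f, path, desc = imp.find_module('foo')                        # searches sys.path in order
print path
f, path, desc = imp.find_module('commands', [path + '/bar'])  # look only inside foo/bar
print path
f.close()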
A:
What is foo doing in /usr/lib/python2.6/site-packages?
It sounds like you have created foo in your local directory but that is not necessarily the one you are importing.
Try getting rid of the foo/bar in site-packages
Make sure your directory structure looks like this
/foo/__init__.py
/bar/__init__.py
/commands.py
Also, it is a good idea to not reuse python standard library names for your own modules -- can you call your commands.py something else?
|
Python module search path problem
|
I am trying to work on a dev environment but am finding problems in that Python seems to be using modules from the site-packages directory. I want it to be using the modules from my dev directory.
sys.path returns a bunch of dirs, like this
['', '/usr/lib/python26.zip', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/site-packages' etc
This is good, it's using the current directory as the first place of lookup (at least this is how I understand it to be).
Ok now if I create say a file called command.py in the current directory, things work as I would expect them.
>>> import commands
>>> commands.__file__
'commands.pyc'
I then exit out of the python shell, and start another one. I then do this.
>>> import foo.bar.commands
Now, what I'm expecting it to do is go down from the current directory to ./foo/bar/ and get me the commands module from there. What I get though is this
>>> foo.bar.commands.__file__
'/usr/lib/python2.6/site-packages/foo/bar/commands.pyc'
Even though from my current directory there is a ./foo/bar/commands.py
Using imp.find_module() and imp.load_module() I can load the local module properly. What's actually interesting (although I don't really know what it means) is the last line that is printed out in this sequence
>>> import foo.bar.commands
>>> foo.bar.commands.__file__
'/usr/lib/python2.6/site-packages/foo/bar/commands.pyc'
>>> foo.bar.__file__
'/usr/lib/python2.6/site-packages/foo/bar/__int__.pyc'
>>> foo.__file__
'./foo/__init__.pyc'
So if it can find the foo/__init__.pyc in the local dir, why can't it find the other files in the local dir?
Cheers
|
[
"You mention that there's a foo directory under your current directory, but you don't tell us whether foo/__init__.py exists (even possibly empty): if it doesn't, this tells Python that foo is not a package. Similarly for foo/bar/__init__.py -- if that file doesn't exist, even if foo/__init__.py does, then foo.bar is not a package.\nYou can play around a little by placing .pth files and/or setting __path__ explicitly in your packages, but the basic, simple rule is to just place an __init__.py in every directory that you want Python to recognize as a package. The contents of that file are \"the body\" of the package itself, so if you import foo and foo is a directory with a foo/__init__.py file, then that's what you're importing (in any case, the package's body executes the first time you import anything from the package or any subpackage thereof).\nIf that is not the problem, it looks like some other import (or explicit sys.path manipulation) may be messing you up. Running python with a -v flag makes imports highly visible, which can help. Another good technique is to place an\nimport pdb; pdb.set_trace()\n\njust before the import that you think is misbehaving, and examining sys.path, sys.modules (and possibly other advanced structures such as import hooks) at that point - is there a sys.modules['foo'] already defined, for example? Interactively trying the functions from standard library module imp that locate modules on your behalf given a path may also prove instructive.\n",
"What is foo doing in /usr/lib/python2.6/site-packages?\nIt sounds like you have created foo in your local directory but that is not necessarily the one you are importing.\nTry getting rid of the foo/bar in site-packages\nMake sure your directory structure looks like this\n/foo/__init__.py \n /bar/__init__.py\n /commands.py\n\nAlso, it is a good idea to not reuse python standard library names for your own modules -- can you call your commands.py something else?\n"
] |
[
3,
0
] |
[] |
[] |
[
"module",
"path",
"python",
"search"
] |
stackoverflow_0001523551_module_path_python_search.txt
|
Q:
Python: Running command line application to the script. Sending parameters in files
I want to try to use a command line script with my Python application. The task is the following: my database stores some initial data for the script and
I need to execute a command line application in the following way:
$ application -parameter1 -file1
Here file1 is a file which contains my initial data, and parameter1 is an unrelated parameter.
The workflow as I see it now is the following:
initial_data = get_initial_data_from_db()
file = open('temp.txt', 'w+')
file.write(initial_data)
file.save()
os.popen4("application -parameter1 -file temp.txt")
I wonder if it's possible to execute this script (called application) without writing the file with initial data to the hard disk? E.g. is there a way to send the file's contents to the command directly?
A:
You can use the subprocess module,
something like this:
import subprocess
bufsize = 1024
initial_data = get_initial_data_from_db()
p = subprocess.Popen("application -parameter1", shell=True, bufsize=bufsize,
                     stdin=subprocess.PIPE, close_fds=True)

p.stdin.write(initial_data)
print p.communicate()
! if your application can read from stdin
Testing with Python as the application (in Eclipse), after a remark from Oleg Tarasenko:
import subprocess
initial_data = """
import sys
print sys.path
"""
for test in [1, 2, 3]:
    p = subprocess.Popen("C:/python26/python", shell=True, bufsize=512,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)

    p.stdin.write(initial_data)
    print p.communicate()
Output:
("['', 'C:\\\\dev\\\\ide\\\\eclipse\\\\plugins\\\\org.python.pydev_1.5.0.1251989166\\\\PySrc\\\\pydev_sitecustomize', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\libs', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jacob.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jiffie.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jaxen-1.1.1.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\swt.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\qpslib.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\ifxjdbc.jar', 'C:\\\\server\\\\jboss\\\\client\\\\jbossall-client.jar', 'C:\\\\usr\\\\local\\\\machine', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol\\\\config', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\oknos\\\\tickcardimp\\\\bin', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\common\\\\jar\\\\shared.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\libs', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jacob.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jiffie.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jaxen-1.1.1.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\swt.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\qpslib.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\ifxjdbc.jar', 'C:\\\\server\\\\jboss\\\\client\\\\jbossall-client.jar', 'C:\\\\usr\\\\local\\\\machine', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol\\\\config', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\oknos\\\\tickcardimp\\\\bin', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\common\\\\jar\\\\shared.jar', 'C:\\\\jython\\\\jython2.5.0\\\\Lib', 'C:\\\\jython\\\\jython2.5.0\\\\Lib\\\\site-packages', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\rt.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\jsse.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\jce.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\charsets.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\dnsns.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\localedata.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\sunjce_provider.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\sunpkcs11.jar', 'C:\\\\WINDOWS\\\\system32\\\\python26.zip', 'C:\\\\python26\\\\DLLs', 'C:\\\\python26\\\\lib', 'C:\\\\python26\\\\lib\\\\plat-win', 'C:\\\\python26\\\\lib\\\\lib-tk', 'C:\\\\python26']\r\n", "'import site' failed; use -v for traceback\r\n")
("['', 'C:\\\\dev\\\\ide\\\\eclipse\\\\plugins\\\\org.python.pydev_1.5.0.1251989166\\\\PySrc\\\\pydev_sitecustomize', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\libs', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jacob.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jiffie.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jaxen-1.1.1.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\swt.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\qpslib.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\ifxjdbc.jar', 'C:\\\\server\\\\jboss\\\\client\\\\jbossall-client.jar', 'C:\\\\usr\\\\local\\\\machine', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol\\\\config', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\oknos\\\\tickcardimp\\\\bin', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\common\\\\jar\\\\shared.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\libs', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jacob.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jiffie.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jaxen-1.1.1.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\swt.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\qpslib.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\ifxjdbc.jar', 'C:\\\\server\\\\jboss\\\\client\\\\jbossall-client.jar', 'C:\\\\usr\\\\local\\\\machine', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol\\\\config', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\oknos\\\\tickcardimp\\\\bin', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\common\\\\jar\\\\shared.jar', 'C:\\\\jython\\\\jython2.5.0\\\\Lib', 'C:\\\\jython\\\\jython2.5.0\\\\Lib\\\\site-packages', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\rt.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\jsse.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\jce.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\charsets.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\dnsns.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\localedata.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\sunjce_provider.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\sunpkcs11.jar', 'C:\\\\WINDOWS\\\\system32\\\\python26.zip', 'C:\\\\python26\\\\DLLs', 'C:\\\\python26\\\\lib', 'C:\\\\python26\\\\lib\\\\plat-win', 'C:\\\\python26\\\\lib\\\\lib-tk', 'C:\\\\python26']\r\n", "'import site' failed; use -v for traceback\r\n")
("['', 'C:\\\\dev\\\\ide\\\\eclipse\\\\plugins\\\\org.python.pydev_1.5.0.1251989166\\\\PySrc\\\\pydev_sitecustomize', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\libs', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jacob.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jiffie.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jaxen-1.1.1.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\swt.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\qpslib.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\ifxjdbc.jar', 'C:\\\\server\\\\jboss\\\\client\\\\jbossall-client.jar', 'C:\\\\usr\\\\local\\\\machine', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol\\\\config', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\oknos\\\\tickcardimp\\\\bin', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\common\\\\jar\\\\shared.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\libs', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jacob.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jiffie.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\jaxen-1.1.1.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\swt.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\script_jy\\\\jars\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\qpslib.jar', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\nlibs\\\\ifxjdbc.jar', 'C:\\\\server\\\\jboss\\\\client\\\\jbossall-client.jar', 'C:\\\\usr\\\\local\\\\machine', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol\\\\config', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src\\\\build\\\\components\\\\jobcontrol', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\event\\\\src', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\oknos\\\\tickcardimp\\\\bin', 'C:\\\\dev\\\\ws\\\\central\\\\head\\\\common\\\\jar\\\\shared.jar', 'C:\\\\jython\\\\jython2.5.0\\\\Lib', 'C:\\\\jython\\\\jython2.5.0\\\\Lib\\\\site-packages', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\rt.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\jsse.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\jce.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\charsets.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\dnsns.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\localedata.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\sunjce_provider.jar', 'C:\\\\dev\\\\java\\\\jdk1.5.0_17\\\\jre\\\\lib\\\\ext\\\\sunpkcs11.jar', 'C:\\\\WINDOWS\\\\system32\\\\python26.zip', 'C:\\\\python26\\\\DLLs', 'C:\\\\python26\\\\lib', 'C:\\\\python26\\\\lib\\\\plat-win', 'C:\\\\python26\\\\lib\\\\lib-tk', 'C:\\\\python26']\r\n", "'import site' failed; use -v for traceback\r\n")
A:
This depends entirely on the features of the command line program you're calling. If it requires that you give it a filename as a parameter, you'll have to do just that. It could be, though, that if you give it a filename of "-" it will read from stdin. Some programs work that way. You'll need to read the documentation on that program to learn what options you have available to you.
A:
Bryan Oakley is correct: a lot of this depends on the capabilities of the called application. But if it can't directly take stdin, there might be another way. If you're on a Unix-like system, you might be able to replace the temporary file with a named pipe. Wikipedia claims Windows has similar functionality, but I'm not familiar with it. This may require kicking off the application from another Python thread.
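A rough sketch of the named-pipe idea on a Unix-like system (the application name and flags are just the placeholders from the question):

import os, subprocess, threading

fifo_name = 'initial_data.fifo'
os.mkfifo(fifo_name)

def feed(data):
    # Opening the FIFO for writing blocks until the application opens it
    # for reading, which is why the write happens in a separate thread.
    f = open(fifo_name, 'w')
    f.write(data)
    f.close()

initial_data = get_initial_data_from_db()
writer = threading.Thread(target=feed, args=(initial_data,))
writer.start()

p = subprocess.Popen(['application', '-parameter1', '-file', fifo_name])
p.wait()
writer.join()
os.unlink(fifo_name)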
A:
As an additional idea, you could use the temporary file support that Python has built in. I'm not extremely familiar with Python or this feature but I wanted to share this idea.
You'd still be writing a file out and executing the client program in the same fashion. But, this way, Python manages the cleanup for you, which might solve your question enough to not need to worry about a more complicated implementation.
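For example, a small sketch using the standard tempfile module (again with the placeholder application name from the question):

import subprocess, tempfile

initial_data = get_initial_data_from_db()

tmp = tempfile.NamedTemporaryFile(suffix='.txt')   # removed automatically on close
tmp.write(initial_data)
tmp.flush()                                        # make sure the data is on disk before the call
subprocess.call(['application', '-parameter1', '-file', tmp.name])
tmp.close()                                        # closing also deletes the file

Note that on Windows the file may not be reopenable by another process while it is still held open here.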
A:
If you are on Linux, you can try passing /dev/stdin as the file, which works the same as using "-" but supports programs that don't know about "-".
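That is, roughly (with the placeholder application name from the question):

import subprocess

initial_data = get_initial_data_from_db()
p = subprocess.Popen(['application', '-parameter1', '-file', '/dev/stdin'],
                     stdin=subprocess.PIPE)
p.communicate(initial_data)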
|
Python: Running command line application to the script. Sending parameters in files
|
I want to try to use a command line script with my Python application. The task is the following: my database stores some initial data for the script and
I need to execute a command line application in the following way:
$ application -parameter1 -file1
Here file1 is a file which contains my initial data, and parameter1 is an unrelated parameter.
The workflow as I see it now is the following:
initial_data = get_initial_data_from_db()
file = open('temp.txt', 'w+')
file.write(initial_data)
file.save()
os.popen4("application -parameter1 -file temp.txt")
I wonder if it's possible to execute this script (called application) without writing the file with initial data to the hard disk? E.g. is there a way to send the file's contents to the command directly?
|
[
"You can use the subprocess modul\nsomething like that: \nimport subprocess\nbufsize =1024\ninitial_data = get_initial_data_from_db()\np = subprocess.Popen(\"application -parameter1\", shell=True, bufsize=bufsize,\n stdin=subprocess.PIPE, close_fds=True)\n\np.stdin.write(initial_data)\nprint p.communicate()\n\n! if your application can read from stdin\nTesting with Python as Application (in Eclipse) / after remark from Oleg Tarasenko :\nimport subprocess\n\ninitial_data = \"\"\"\nimport sys\nprint sys.path\n\"\"\"\n\nfor test in [1,2,3] :\n p = subprocess.Popen(\"C:/python26/python\", shell=True, bufsize=512,\n stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)\n\n p.stdin.write(initial_data)\n print p.communicate()\n\nOutput:\n(\"['', 'C:\\\\\\\\dev\\\\\\\\ide\\\\\\\\eclipse\\\\\\\\plugins\\\\\\\\org.python.pydev_1.5.0.1251989166\\\\\\\\PySrc\\\\\\\\pydev_sitecustomize', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\libs', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jacob.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jiffie.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jaxen-1.1.1.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\swt.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\qpslib.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\ifxjdbc.jar', 'C:\\\\\\\\server\\\\\\\\jboss\\\\\\\\client\\\\\\\\jbossall-client.jar', 'C:\\\\\\\\usr\\\\\\\\local\\\\\\\\machine', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol\\\\\\\\config', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\oknos\\\\\\\\tickcardimp\\\\\\\\bin', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\common\\\\\\\\jar\\\\\\\\shared.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\libs', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jacob.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jiffie.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jaxen-1.1.1.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\swt.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\qpslib.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\ifxjdbc.jar', 'C:\\\\\\\\server\\\\\\\\jboss\\\\\\\\client\\\\\\\\jbossall-client.jar', 'C:\\\\\\\\usr\\\\\\\\local\\\\\\\\machine', 
'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol\\\\\\\\config', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\oknos\\\\\\\\tickcardimp\\\\\\\\bin', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\common\\\\\\\\jar\\\\\\\\shared.jar', 'C:\\\\\\\\jython\\\\\\\\jython2.5.0\\\\\\\\Lib', 'C:\\\\\\\\jython\\\\\\\\jython2.5.0\\\\\\\\Lib\\\\\\\\site-packages', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\rt.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\jsse.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\jce.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\charsets.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\dnsns.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\localedata.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\sunjce_provider.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\sunpkcs11.jar', 'C:\\\\\\\\WINDOWS\\\\\\\\system32\\\\\\\\python26.zip', 'C:\\\\\\\\python26\\\\\\\\DLLs', 'C:\\\\\\\\python26\\\\\\\\lib', 'C:\\\\\\\\python26\\\\\\\\lib\\\\\\\\plat-win', 'C:\\\\\\\\python26\\\\\\\\lib\\\\\\\\lib-tk', 'C:\\\\\\\\python26']\\r\\n\", \"'import site' failed; use -v for traceback\\r\\n\")\n(\"['', 'C:\\\\\\\\dev\\\\\\\\ide\\\\\\\\eclipse\\\\\\\\plugins\\\\\\\\org.python.pydev_1.5.0.1251989166\\\\\\\\PySrc\\\\\\\\pydev_sitecustomize', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\libs', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jacob.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jiffie.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jaxen-1.1.1.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\swt.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\qpslib.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\ifxjdbc.jar', 'C:\\\\\\\\server\\\\\\\\jboss\\\\\\\\client\\\\\\\\jbossall-client.jar', 'C:\\\\\\\\usr\\\\\\\\local\\\\\\\\machine', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol\\\\\\\\config', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\oknos\\\\\\\\tickcardimp\\\\\\\\bin', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\common\\\\\\\\jar\\\\\\\\shared.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\src', 
'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\libs', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jacob.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jiffie.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jaxen-1.1.1.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\swt.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\qpslib.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\ifxjdbc.jar', 'C:\\\\\\\\server\\\\\\\\jboss\\\\\\\\client\\\\\\\\jbossall-client.jar', 'C:\\\\\\\\usr\\\\\\\\local\\\\\\\\machine', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol\\\\\\\\config', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\oknos\\\\\\\\tickcardimp\\\\\\\\bin', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\common\\\\\\\\jar\\\\\\\\shared.jar', 'C:\\\\\\\\jython\\\\\\\\jython2.5.0\\\\\\\\Lib', 'C:\\\\\\\\jython\\\\\\\\jython2.5.0\\\\\\\\Lib\\\\\\\\site-packages', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\rt.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\jsse.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\jce.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\charsets.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\dnsns.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\localedata.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\sunjce_provider.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\sunpkcs11.jar', 'C:\\\\\\\\WINDOWS\\\\\\\\system32\\\\\\\\python26.zip', 'C:\\\\\\\\python26\\\\\\\\DLLs', 'C:\\\\\\\\python26\\\\\\\\lib', 'C:\\\\\\\\python26\\\\\\\\lib\\\\\\\\plat-win', 'C:\\\\\\\\python26\\\\\\\\lib\\\\\\\\lib-tk', 'C:\\\\\\\\python26']\\r\\n\", \"'import site' failed; use -v for traceback\\r\\n\")\n(\"['', 'C:\\\\\\\\dev\\\\\\\\ide\\\\\\\\eclipse\\\\\\\\plugins\\\\\\\\org.python.pydev_1.5.0.1251989166\\\\\\\\PySrc\\\\\\\\pydev_sitecustomize', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\libs', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jacob.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jiffie.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jaxen-1.1.1.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\swt.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\mysql-connector-java-3.0.17-ga-bin.jar', 
'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\qpslib.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\ifxjdbc.jar', 'C:\\\\\\\\server\\\\\\\\jboss\\\\\\\\client\\\\\\\\jbossall-client.jar', 'C:\\\\\\\\usr\\\\\\\\local\\\\\\\\machine', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol\\\\\\\\config', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\oknos\\\\\\\\tickcardimp\\\\\\\\bin', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\common\\\\\\\\jar\\\\\\\\shared.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\libs', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jacob.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jiffie.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\jaxen-1.1.1.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\swt.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\script_jy\\\\\\\\jars\\\\\\\\mysql-connector-java-3.0.17-ga-bin.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\qpslib.jar', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\nlibs\\\\\\\\ifxjdbc.jar', 'C:\\\\\\\\server\\\\\\\\jboss\\\\\\\\client\\\\\\\\jbossall-client.jar', 'C:\\\\\\\\usr\\\\\\\\local\\\\\\\\machine', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol\\\\\\\\config', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src\\\\\\\\build\\\\\\\\components\\\\\\\\jobcontrol', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\event\\\\\\\\src', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\oknos\\\\\\\\tickcardimp\\\\\\\\bin', 'C:\\\\\\\\dev\\\\\\\\ws\\\\\\\\central\\\\\\\\head\\\\\\\\common\\\\\\\\jar\\\\\\\\shared.jar', 'C:\\\\\\\\jython\\\\\\\\jython2.5.0\\\\\\\\Lib', 'C:\\\\\\\\jython\\\\\\\\jython2.5.0\\\\\\\\Lib\\\\\\\\site-packages', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\rt.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\jsse.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\jce.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\charsets.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\dnsns.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\localedata.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\sunjce_provider.jar', 'C:\\\\\\\\dev\\\\\\\\java\\\\\\\\jdk1.5.0_17\\\\\\\\jre\\\\\\\\lib\\\\\\\\ext\\\\\\\\sunpkcs11.jar', 'C:\\\\\\\\WINDOWS\\\\\\\\system32\\\\\\\\python26.zip', 'C:\\\\\\\\python26\\\\\\\\DLLs', 'C:\\\\\\\\python26\\\\\\\\lib', 'C:\\\\\\\\python26\\\\\\\\lib\\\\\\\\plat-win', 'C:\\\\\\\\python26\\\\\\\\lib\\\\\\\\lib-tk', 'C:\\\\\\\\python26']\\r\\n\", \"'import site' failed; use -v for traceback\\r\\n\")\n\n",
"This depends entirely on the features of the command line program you're calling. If it requires that you give it a filename as a parameter, you'll have to do just that. It could be, though, that if you give it a filename of \"-\" it will read from stdin. Some programs work that way. You'll need to read the documentation on that program to learn what options you have available to you.\n",
"Bryan Oakley is correct, a lot of this depends on the capabilities of the called application. But if it can't directly take stdin, there might be another way. If you're on a Unix-like system, you might be able to replace the temporary file with a named pipe. Wikipedia claims Windows has similar functionality, but I'm not familiar with it. This may require kicking off the application from another Python thread.\n",
"As an additional idea, you could use the built in temp file that Python has built in. I'm not extremely familiar with Python or this feature but I wanted to share this idea. \nYou'd still be writing a file out and executing the client program in the same fashion. But, this way, Python manages the clean up for you which might solve your question enough to not need to worry about a more complicated implementation. \n",
"if you are on linux, you can try passing /dev/stdin as the file which works the same as using \"-\" but supports programs that don't know about \"-\"\n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001520442_python.txt
|
Q:
Why do I get encoding error in python warnings.formatwarning on format string?
I get encoding error on this line:
s = "%s:%s: %s: %s\n" % (filename, lineno, category.__name__, message)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc4' in position 44: ordinal not in range(128)
I tried to reproduce this error by passing all combinations of parameters to string format, but the closest I got was an "ascii decode" error (by passing a unicode and a high-ascii string simultaneously, which forced conversion of the string to unicode using the ascii decoder).
However, I did not manage to get an "ascii encode" error. Does anybody have an idea?
A:
This happens when Python tries to coerce an argument:
s = u"\u00fc"
print str(s)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 0: ordinal not in range(128)
This happens because one of your arguments is an object (not a string of any kind) and Python calls str() on it. There are two solutions: Use a unicode string for the format (s = u"%s...") or wrap each argument with repr().
A:
You are mixing unicode and str objects.
Explanation:
In Python 2.x, there are two kinds of objects that can contain text strings. str, and unicode. str is a string of bytes, so it can only contain characters between 0 and 255.
Unicode is a string of unicode characters.
You can convert between str and unicode with the "encode" and "decode" methods:
>>> "thisisastring".decode('ascii')
u'thisisastring'
>>> u"This is ä string".encode('utf8')
'This is \xc3\xa4 string'
Note the encodings. Encodings are ways of representing unicode text as only strings of bytes.
If you try to add str and unicode together, Python will try to convert one to the other. But by default it will use ASCII as the encoding, which means a-z, A-Z, and some extra characters like !"#$%&/()=?'{[]]} etc. Anything else will fail.
You will at that point either get an encoding error or a decoding error, depending on whether Python tries to convert the unicode to str or the str to unicode. Usually it tries to decode, that is, convert to unicode. But sometimes it decides instead to coerce to a string. I'm not entirely sure why.
Update:
The reason you get an encode error and not a decode error above is that message in the above code is neither str nor unicode. It's another object that has a __str__ method. Python therefore does str(message) before passing it in, and that fails, since the internally stored message is a unicode object that can't be coerced to ascii.
Or, more simply answered: It fails because warnings.warn() doesn't accept unicode messages.
Now, the solution:
Don't mix str and unicode. If you need to use unicode, and you apparently do, try to make sure all strings are unicode all the time. That's the only way to be sure you avoid this. This means that whenever you read in a string from disk, or a call to a function that may return anything else than pure ascii str, decode it to unicode as soon as possible.
And when you need to save it to disk or send it over a network or pass it in to a method that do not understand unicode, encode it to str as late as possible.
In this specific case, the problem is that you pass unicode to warnings.warn() and you can't do that. Pass a string. If you don't know what it is (as seems to be the case here) because it comes from somewhere else, your try/except solution with a repr works fine, although doing an encode would be a possibility too.
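As a small illustration of the decode-early/encode-late rule (the file names and the utf-8 encoding here are just examples):

# Decode as soon as the bytes enter the program...
f = open('input.txt')
text = f.read().decode('utf-8')      # from here on, work only with unicode objects
f.close()

# ...and encode only at the boundary, when the text leaves the program again.
out = open('output.txt', 'w')
out.write(text.upper().encode('utf-8'))
out.close()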
A:
One of the operands you are passing is not suitable for ASCII encoding - perhaps it contains either Unicode or Latin-1 characters. Change the format string to Unicode and see what happens.
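That is, a sketch of the suggested change, using the same variables as the traceback line in the question:

s = u"%s:%s: %s: %s\n" % (filename, lineno, category.__name__, message)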
|
Why do I get encoding error in python warnings.formatwarning on format string?
|
I get encoding error on this line:
s = "%s:%s: %s: %s\n" % (filename, lineno, category.__name__, message)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc4' in position 44: ordinal not in range(128)
I tried to reproduce this error by passing all combinations of parameters to string format, but the closest I got was an "ascii decode" error (by passing a unicode and a high-ascii string simultaneously, which forced conversion of the string to unicode using the ascii decoder).
However, I did not manage to get an "ascii encode" error. Does anybody have an idea?
|
[
"This happens when Python tries to coerce an argument:\ns = u\"\\u00fc\"\nprint str(s)\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\xfc' in position 0: ordinal not in range(128)\n\nThis happens because one of your arguments is an object (not a string of any kind) and Python calls str() on it. There are two solutions: Use a unicode string for the format (s = u\"%s...\") or wrap each argument with repr().\n",
"You are mixing unicode and str objects. \nExplanation:\nIn Python 2.x, there are two kinds of objects that can contain text strings. str, and unicode. str is a string of bytes, so it can only contain characters between 0 and 255.\nUnicode is a string of unicode characters.\nYou can convert between str and unicode with the \"encode\" and \"decode\" methods:\n>>> \"thisisastring\".decode('ascii')\nu'thisisastring'\n\n>>> u\"This is ä string\".encode('utf8') \n'This is \\xc3\\xa4 string'\n\nNote the encodings. Encodings are ways of representing unicode text as only strings of bytes. \nIf you try to add str and unicode together, Python will try to convert one to the other. But by default it will use ASCII as a encoding, which means a-z, A-Z, and some extra characters like !\"#$%&/()=?'{[]]} etc. Anything else will fail.\nYou will at that point either get a encoding error or a decoding error, depending on if Python tries to convert the unicode to str or str to unicode. Usually it tries to decode, that is convert to unicode. But sometimes it decides not to but to coerce to string. I'm not entirely sure why.\nUpdate: \nThe reason you get an encode error and not a decode error above is that message in the above code is neither str nor unicode. It's another object, that has a str method. Python therefore does str(message) before passing it in, and that fails, since the internally stores message is a unicode object that can't be coerced to ascii.\nOr, more simply answered: It fails because warnings.warn() doesn't accept unicode messages.\nNow, the solution:\nDon't mix str and unicode. If you need to use unicode, and you apparently do, try to make sure all strings are unicode all the time. That's the only way to be sure you avoid this. This means that whenever you read in a string from disk, or a call to a function that may return anything else than pure ascii str, decode it to unicode as soon as possible.\nAnd when you need to save it to disk or send it over a network or pass it in to a method that do not understand unicode, encode it to str as late as possible.\nIn this specific case, the problem is that you pass unicode to warnings.warn() and you can't do that. Pass a string. If you don't know what it is (as seems to be the case here) because it comes from somewhere else, your try/except solutions with a repr works fine, although doing a encode would be a possibility to.\n",
"One of the operands you are passing is not suitable for ASCII encoding - perhaps it contains either Unicode or Latin-1 characters. Change the format string to Unicode and see what happens.\n"
] |
[
8,
8,
1
] |
[] |
[] |
[
"encoding",
"python",
"warnings"
] |
stackoverflow_0001524262_encoding_python_warnings.txt
|
Q:
Is an applet a proper solution for hardware detection and driver install?
Can I use applets to inform me which hardware is installed on client system (fingerprint reader)? And if it is installed, can it tell me its version, so that it can download the proper plugin from a site? So that after everything is OK, the user can use his fingerprint reader to authenticate himself?
A:
An applet should be better if your application is not dependent on IE.
If it is for IE only, then use COM/ActiveX.
|
Is an applet a proper solution for hardware detection and driver install?
|
Can I use applets to inform me which hardware is installed on client system (fingerprint reader)? And if it is installed, can it tell me its version, so that it can download the proper plugin from a site? So that after everything is OK, the user can use his fingerprint reader to authenticate himself?
|
[
"Applet should be better if your application is not depended on IE.\nIf for IE only then use COM/ActiveX\n"
] |
[
0
] |
[] |
[] |
[
"c#",
"python",
"ruby"
] |
stackoverflow_0001524606_c#_python_ruby.txt
|
Q:
How do I display images at different times on webpage
I'm supposed to display images at certain times of the day on the webpage. Please can anyone tell me how to go about it?
A:
You'll need a template tag which checks the time of day and outputs the relevant HTML. How you do that depends on how you want to determine which image is for what time of day. For example, you could have a model which records a change_time and an image path, and the tag would get the image with the most recent change_time.
For instance:
import datetime

@register.simple_tag
def image_change_by_time():
    now = datetime.datetime.now().time()
    image = Images.objects.filter(change_time__lte=now).order_by('-change_time')[0]
    return mark_safe('<img src="%s">' % image.url.path)
A:
The safest way is to do this on the server side; do not do this on the client side using JavaScript, because clients may have different timezones.
for instance in PHP:
<style type="text/css"> #someimg{ position:absolute;left:200px;top:100px; }</style>

<?php
  $h = intval(date('h'));
  $m = intval(date('i'));  // 'i' gives minutes; 'm' would give the month
  if($h==12 && $m==0){
    echo '<img src="someimage.gif" id="someimg"/>';
  }
?>
in python:
#!/usr/bin/python
import time
localtime = time.localtime(time.time())
h = localtime[3]
m = localtime[4]
if h==12 and m==0: print "<img src='some.gif' id='someimg'/>"
A:
You could make a Date object in JavaScript. Check the current time and, depending on the time, set the img src to whatever image you want for that time of day :) or hide the image through myimg.style.visibility = "hidden" if you don't want to display an image at that moment.
A:
If you need to change the image before a page refresh, you could use jquery ajax call to get the correct image. jquery has some interval functionality which would allow this.
|
How do I display images at different times on webpage
|
I'm supposed to display images at certain times of the day on the webpage. Please can anyone tell me how to go about it?
|
[
"You'll need a template tag which checks the time of day and outputs the relevant HTML. How you do that depends on how you want to determine which image is for what time of day. For example, you could have a model which records a change_time and an image path, and the tag would get the image with the most recent change_time.\nFor instance:\nimport datetime\n\[email protected]_tag\ndef image_change_by_time():\n now = datetime.datetime.now().time()\n image = Images.objects.filter(change_time__lte=now).order_by('-change_time')[0]\n return mark_safe('<img src=\"%s\">' % image.url.path)\n\n",
"The safest way is to do this on the server side, do not do this on the client side using javascript because clients may have different timezones.\nfor instance in PHP:\n<style type=\"text/css\"> #someimage{ position:absolute;left:200px;top:100px; }</style> \n\n<?php\n $h = intval(date('h'));\n $m = intval(date('m'));\n if($h==12 && $m==00){\n echo '<img src=\"someimage.gif\" id=\"someimg\"/>';\n }\n?> \n\nin python:\n#!/usr/bin/python\nimport time\n\nlocaltime = time.localtime(time.time())\nh = localtime[3]\nm = localtime[4]\nif h==12 and m==0: print \"<img src='some.gif' id='someimg'/>\"\n\n",
"You could make a Date object in javascript. Check the current time and depending on the time, you set the img src to whatever image you want for that time of day :) or hide the image through myimg.style.visibility = \"hidden\" if you dont want to display an image at that moment.\n",
"If you need to change the image before a page refresh, you could use jquery ajax call to get the correct image. jquery has some interval functionality which would allow this.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001524713_django_python.txt
|
Q:
An ALGORITHM to create one list from "X" nested lists in Python
What is the simplest way to create this list in Python?
First, suppose I have this nested list:
oldList = [ [{'letter':'a'}], [{'letter':'b'}], [{'letter':'c'}] ]
I want a function to spit out:
newList = [ {'letter':'a'}, {'letter':'b'}, {'letter':'c'} ]
Well, this could be done manually. However, what if there are three levels of nesting? ...or X levels?
Tricky? :)
A:
Recursive solutions are simplest, but limited to at most a few thousand levels of nesting before you get an exception about too-deep recursion. For real generality, you can eliminate recursion by keeping your own stack; iterators are good things to keep on said stack, and the whole function's best written as a generator (just call list(flatten(thelist)) if you really want a huge list result).
def flatten(alist):
    stack = [iter(alist)]
    while stack:
        current = stack.pop()
        for item in current:
            if isinstance(item, list):
                stack.append(current)
                stack.append(iter(item))
                break
            yield item
Now this should let you handle as many levels of nesting as you have virtual memory for;-).
A:
http://www.daniel-lemire.com/blog/archives/2006/05/10/flattening-lists-in-python/
from that link (with a couple minor changes:
def flatten(l):
    if isinstance(l, list):
        return sum(map(flatten, l), [])
    else:
        return [l]
A:
The simplest answer is this
Only you can prevent nested lists
Do not create a list of lists using append. Create a flat list using extend.
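A tiny illustration of the difference, using the data from the question:

flat, nested = [], []
for item in [[{'letter': 'a'}], [{'letter': 'b'}], [{'letter': 'c'}]]:
    nested.append(item)    # keeps the inner list: [[{...}], [{...}], [{...}]]
    flat.extend(item)      # unpacks it:           [{...}, {...}, {...}]

print nested
print flat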
A:
The Python Cookbook (Martelli, Ravenscroft and Asher, 2005, O'Reilly) also offers a solution to this flattening problem.
See 4.6 Flattening a nested sequence.
This solution uses generators which can be a good thing if the lists are long.
Also, this solution deals equally well with lists or tuples.
Note: Oops... I rushed a bit. I'm unsure of the legality of reproducing this snippet here...
let me look for policy / precedents in this area.
Edit: later found reference as a Google Books preview.
Here's a link to this section of the book in Google books
A:
I prefer Alex Martelli's post as usual. Just want to add a not recommended trick:
from Tkinter import _flatten
print _flatten(oldList)
|
An ALGORITHM to create one list from "X" nested lists in Python
|
What is the simplest way to create this list in Python?
First, suppose I have this nested list:
oldList = [ [{'letter':'a'}], [{'letter':'b'}], [{'letter':'c'}] ]
I want a function to spit out:
newList = [ {'letter':'a'}, {'letter':'b'}, {'letter':'c'} ]
Well, this could be done manually. However, what if the lists are nested three levels deep? ...or X levels deep?
Tricky? :)
|
[
"Recursive solutions are simplest, but limited to at most a few thousand levels of nesting before you get an exception about too-deep recursion. For real generality, you can eliminate recursion by keeping your own stack; iterators are good things to keep on said stack, and the whole function's best written as a generator (just call list(flatten(thelist)) if you really want a huge list result).\ndef flatten(alist):\n stack = [iter(alist)]\n while stack:\n current = stack.pop()\n for item in current:\n if isinstance(item, list):\n stack.append(current)\n stack.append(iter(item))\n break\n yield item\n\nNow this should let you handle as many levels of nesting as you have virtual memory for;-).\n",
"http://www.daniel-lemire.com/blog/archives/2006/05/10/flattening-lists-in-python/\nfrom that link (with a couple minor changes:\ndef flatten(l):\n if isinstance(l, list):\n return sum(map(flatten,l),[])\n else:\n return [l]\n\n",
"The simplest answer is this\nOnly you can prevent nested lists\nDo not create a list of lists using append. Create a flat list using extend. \n",
"The Python Cookbook Martelli, Ravenscroft and Asher 2005 O'Reilley also offers a solution to this flattening problem.\nSee 4.6 Flattening a nested sequence.\nThis solution uses generators which can be a good thing if the lists are long.\nAlso, this solution deals equaly well with lists or tuples.\nNote: Oops... I rushed a bit. I'm unsure of the legality of reproducing this snippet here...\nlet me look for policy / precedents in this area.\nEdit: later found reference as a Google Books preview.\nHere's a link to this section of the book in Google books\n",
"I prefer Alex Martelli's post as usual. Just want to add a not recommended trick: \nfrom Tkinter import _flatten\nprint _flatten(oldList)\n\n"
] |
[
4,
2,
1,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001523675_python.txt
|
Q:
Reasons to use distutils when packaging C/Python project
I have an open source project containing both Python and C code. I'm wondering whether there is any use for distutils for me, because I'm planning to do an Ubuntu/Debian package. The C code is not something that I could or would want to use as a Python extension. The C and Python programs communicate over TCP/IP through localhost.
So the bottom line here is: while I'm learning packaging, will using distutils-specific files only make me more confused, since I can't use my C code as Python extensions? Or should I split my C and Python functionality into separate projects to be able to understand the packaging concepts better?
A:
distutils can be used to install end user programs, but it's most useful when using it for Python libraries, as it can create source packages and also install them in the correct place. For that I would say it's more or less required.
But for an end user Python program you can also use make or whatever you like and are used to, as you don't need to install any code in the Python site-packages directory, and you don't need to put your code onto PyPI and it doesn't need to be accessible from other Python-code.
I don't think distutils will be either more or less complicated to use for installing an end-user program compared to other tools. All such install/packaging tools are hella-complex, as Cartman would have said.
A:
Because it uses a unified python setup.py install command? distutils, or setuptools? Whatever, just use one of those.
For development, it's also really useful because you don't have to care where to find such and such dependency. As long as it's standard Python/basic system library stuff, setup.py should find it for you. With setup.py, you no longer need ./configure stuff or ugly autotools to create huge Makefiles. It just works (tm)
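As a rough illustration (the project and package names below are made up), a minimal setup.py for a pure-Python program looks something like this:
from distutils.core import setup

setup(
    name='myproject',            # hypothetical project name
    version='0.1',
    packages=['myproject'],      # Python packages to install
    scripts=['bin/myproject'],   # the end-user entry-point script
)
Then python setup.py install installs it, and python setup.py sdist builds a source tarball that Debian/Ubuntu packaging tools can start from.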
|
Reasons to use distutils when packaging C/Python project
|
I have an open source project containing both Python and C code. I'm wondering whether there is any use for distutils for me, because I'm planning to do an Ubuntu/Debian package. The C code is not something that I could or would want to use as a Python extension. The C and Python programs communicate over TCP/IP through localhost.
So the bottom line here is: while I'm learning packaging, will using distutils-specific files only make me more confused, since I can't use my C code as Python extensions? Or should I split my C and Python functionality into separate projects to be able to understand the packaging concepts better?
|
[
"distutils can be used to install end user programs, but it's most useful when using it for Python libraries, as it can create source packages and also install them in the correct place. For that I would say it's more or less required.\nBut for an end user Python program you can also use make or whatever you like and are used to, as you don't need to install any code in the Python site-packages directory, and you don't need to put your code onto PyPI and it doesn't need to be accessible from other Python-code.\nI don't think distutils will be neither more or less complicated to use in installing an end-user program compared to other tools. All such install/packaging tools are hella-complex, as Cartman would have said.\n",
"Because it uses an unified python setup.py install command? distutils, or setuptools? Whatever, just use one of those.\nFor development, it's also really useful because you don't have to care where to find such and such dependency. As long as it's standard Python/basic system library stuff, setup.py should find it for you. With setup.py, you don't require anymore ./configure stuff or ugly autotools to create huge Makefiles. It just works (tm)\n"
] |
[
1,
1
] |
[] |
[] |
[
"c",
"distutils",
"packaging",
"python"
] |
stackoverflow_0001523874_c_distutils_packaging_python.txt
|
Q:
Boo vs C# vs Python?
Compared to C#, Boo feels a bit more Pythonic but it's also compiled down to .NET MSIL. I liked its syntax, even more than C#'s syntax. But I couldn't find a single book teaching Boo.
And I really don't know if learning Boo is better than C#, or learning C# is better than Boo. I just want to use some Python-like data types. Those are:
{key1:value1, key2:value2} → dictionary
[Value1,Value2,Value3] → List (can be edited/changed)
(Value1,Value2,Value3) → Tuple (can't be edited/changed)
I use dictionaries more than lists and tuples. I want to know: which one is better?
A:
I have found Boo to be very useful in creating simple one-off scripts, while retaining my Pythonic source style. And since it compiles to runnable EXE or DLL, I can package up a single EXE with all the needed DLLs (including Boo.Lang.dll) using ILMerge, and then send that off to a client, usually for some kind of quick troubleshooting or system diagnosis.
I also use Boo to support my C# development. I often fire up a Boo interpreter to try out variations of string or date formatting, then I can replicate the final version almost directly into C#.
But it is darned difficult to find docs for Boo. I had to Google quite a bit to find the syntax for generics, since they are a relatively new addition to Boo, and not yet mentioned in any tutorials, or even reference pages. And googling for "boo" generates quite a few unwanted hits, making the search even more difficult.
So in short, don't make this a choice between Boo and C# - they actually complement each other pretty well.
A:
My general opinion is that it would be better to go for C#, since it is, from my point of view, easier to find resources, documentation and tutorials for C#.
A:
Knowing C# will be very useful to you if you want a career in .NET development. But learning Boo would allow you to use the Python-like features you are after in a .NET environment. You should probably also look into IronPython, which does have books available (Iron Python in Action)
A:
You have lists and dictionaries in .Net: System.Collections.Generic.List and System.Collections.Generic.Dictionary.
As for the language: Just learn the one that is more fun for you. The choice of language is most often religious. Especially on the .Net platform, where each language has almost the same capabilities.
A:
I'm not sure what your end goal is, but before you give up on python please do check out the python/Qt combo for building a gui. You can build complex cross-platform guis and it's fairly easy to pick up. Qt, Python Bindings
|
Boo vs C# vs Python?
|
Compared to C#, Boo feels a bit more Pythonic but it's also compiled down to .NET MSIL. I liked its syntax, even more than C#'s syntax. But I couldn't find a single book teaching Boo.
And I really don't know if learning Boo is better than C#, or learning C# is better than Boo. I just want to use some Python-like data types. Those are:
{key1:value1, key2:value2} → dictionary
[Value1,Value2,Value3] → List (can be edited/changed)
(Value1,Value2,Value3) → Tuple (can't be edited/changed)
I use dictionaries more than lists and tuples. I want to know: which one is better?
|
[
"I have found Boo to be very useful in creating simple one-off scripts, while retaining my Pythonic source style. And since it compiles to runnable EXE or DLL, I can package up a single EXE with all the needed DLLs (including Boo.Lang.dll) using ILMerge, and then send that off to a client, usually for some kind of quick troubleshooting or system diagnosis.\nI also use Boo to support my C# development. I often fire up a Boo interpreter to try out variations of string or date formatting, then I can replicate the final version almost directly into C#.\nBut it is darned difficult to find docs for Boo. I had to Google quite a bit to find the syntax for generics, since they are a relatively new addition to Boo, and not yet mentioned in any tutorials, or even reference pages. And googling for \"boo\" generates quite a few unwanted hits, making the search even more difficult.\nSo in short, don't make this a choice between Boo and C# - they actually complement each other pretty well.\n",
"My general opinion is that it would be better to go for C# since it is from my point of view, easier to find resources, documentation and tutorials for C#.\n",
"Knowing C# will be very useful to you if you want a career in .NET development. But learning Boo would allow you to use the Python-like features you are after in a .NET environment. You should probably also look into IronPython, which does have books available (Iron Python in Action) \n",
"You have lists and dictionaries in .Net: System.Collections.Generic.List and System.collections.Generic.Dictionary.\nAs for the language: Just learn the one that is more fun for you. The choice of language is most often religious. Expecially on the .Net platform, where each language has almost the same capabilities.\n",
"I'm not sure what your end goal is, but before you give up on python please do check out the python/Qt combo for building a gui. You can build complex cross-platform guis and it's fairly easy to pick up. Qt, Python Bindings\n"
] |
[
15,
5,
3,
2,
0
] |
[] |
[] |
[
"boo",
"c#",
"programming_languages",
"python"
] |
stackoverflow_0001524609_boo_c#_programming_languages_python.txt
|
Q:
PyQt context menu
I'm adding a contextmenu to a QTableWidget dynamically:
playlistContenxt = QAction("Add to %s" % (currentItem.text()), self.musicTable)
playlistContenxt.setData(currentData)
self.connect(playlistContenxt, SIGNAL("triggered()"), self.addToPlaylistAction)
self.musicTable.addAction(playlistContenxt)
currentItem.text() is a playlist name that's being fetched from the db; as you can see, only one function (addToPlaylistAction) receives all triggers from the different actions. In my addToPlaylistAction function, how do I determine which menu item has been clicked?
A:
The correct way is to use signal mapper: You can assign data to each of the senders and get a signal with that data.
A:
You can use QAction.setData to set some data, so that the slot knows which playlist to add to. Then from the slot you can call self.sender() to get the action that triggered the signal, and use action.data() to get the data back.
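A rough sketch of what that slot could look like (names match the question; under PyQt4's default API the value from data() may come back as a QVariant that needs converting, e.g. with toString()):
def addToPlaylistAction(self):
    action = self.sender()          # the QAction that was triggered
    if action is None:
        return
    playlist_data = action.data()   # whatever was stored with setData()
    # ... add the selected tracks to the playlist identified by playlist_data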
|
PyQt context menu
|
I'm adding a contextmenu to a QTableWidget dynamically:
playlistContenxt = QAction("Add to %s" % (currentItem.text()), self.musicTable)
playlistContenxt.setData(currentData)
self.connect(playlistContenxt, SIGNAL("triggered()"), self.addToPlaylistAction)
self.musicTable.addAction(playlistContenxt)
currentItem.text() is a playlist name that's being fetched from the db; as you can see, only one function (addToPlaylistAction) receives all triggers from the different actions. In my addToPlaylistAction function, how do I determine which menu item has been clicked?
|
[
"The correct way is to use signal mapper: You can assign data to each of the senders and get a signal with that data.\n",
"You can use QAction.setData to set some data, so that the slot knows which playlist to add to. Then from the slot you can call self.sender() to get the action that triggered the signal, and use action.data() to get the data back.\n"
] |
[
5,
3
] |
[] |
[] |
[
"pyqt4",
"python",
"qt"
] |
stackoverflow_0001524349_pyqt4_python_qt.txt
|
Q:
Creating files on AppEngine
I'm trying to store some data in a binary file:
import os, pickle
o = some_object()
file = open('test', 'wb') #this causes the error
pickle.dump(o, file)
file.close()
I get this error: IOError: invalid mode: wb
It doesn't work (neither on the test server nor on GAE itself), apparently because it doesn't have write permission.
How can I create files on Appengine?
A:
GAE is readonly, you can only store stuff in the datastore
here is a link to the docs
A:
I think the short answer is you shouldn't. I suggest you store this info in BigTable as a Blob and serve it up that way.
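For example, a minimal sketch of that idea (the model name here is made up), pickling into a BlobProperty instead of a file:
import pickle
from google.appengine.ext import db

class PickleJar(db.Model):           # hypothetical datastore model
    payload = db.BlobProperty()

o = some_object()
PickleJar(key_name='test', payload=db.Blob(pickle.dumps(o))).put()   # store
o = pickle.loads(PickleJar.get_by_key_name('test').payload)          # load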
|
Creating files on AppEngine
|
I'm trying to store some data in a binary file:
import os, pickle
o = some_object()
file = open('test', 'wb') #this causes the error
pickle.dump(o, file)
file.close()
I get this error: IOError: invalid mode: wb
It doesn't work (neither on the test server nor on GAE itself), apparently because it doesn't have write permission.
How can I create files on Appengine?
|
[
"GAE is readonly, you can only store stuff in the datastore\nhere is a link to the docs\n",
"I think the short answer is you shouldn't. I suggest you store this info in BigTable as a Blob and serve it up that way.\n"
] |
[
5,
1
] |
[] |
[] |
[
"file",
"google_app_engine",
"io",
"python"
] |
stackoverflow_0001525625_file_google_app_engine_io_python.txt
|
Q:
How to fix encoding in Python Mechanize?
here is the sample code:
from mechanize import Browser
br = Browser()
page = br.open('http://hunters.tclans.ru/news.php?readmore=2')
br.form = br.forms().next()
print br.form
The problem is that the server returns an incorrect encoding (windows-cp1251). How can I manually set the encoding of the current page in mechanize?
Error:
Traceback (most recent call last):
File "/tmp/stackoverflow.py", line 5, in <module>
br.form = br.forms().next()
File "/usr/local/lib/python2.6/dist-packages/mechanize/_mechanize.py", line 426, in forms
return self._factory.forms()
File "/usr/local/lib/python2.6/dist-packages/mechanize/_html.py", line 559, in forms
self._forms_factory.forms())
File "/usr/local/lib/python2.6/dist-packages/mechanize/_html.py", line 225, in forms
_urlunparse=_rfc3986.urlunsplit,
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 967, in ParseResponseEx
_urlunparse=_urlunparse,
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 1104, in _ParseFileEx
fp.feed(data)
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 870, in feed
sgmllib.SGMLParser.feed(self, data)
File "/usr/lib/python2.6/sgmllib.py", line 104, in feed
self.goahead(0)
File "/usr/lib/python2.6/sgmllib.py", line 193, in goahead
self.handle_entityref(name)
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 751, in handle_entityref
'&%s;' % name, self._entitydefs, self._encoding))
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 238, in unescape
return re.sub(r"&#?[A-Za-z0-9]+?;", replace_entities, data)
File "/usr/lib/python2.6/re.py", line 151, in sub
return _compile(pattern, 0).sub(repl, string, count)
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 230, in replace_entities
repl = repl.encode(encoding)
LookupError: unknown encoding: windows-cp1251
A:
I don't know about Mechanize, but you could hack codecs to accept wrong encoding names that have both ‘windows’ and ‘cp’:
>>> import codecs
>>> def fixcp(name):
... if name.lower().startswith('windows-cp'):
... try:
... return codecs.lookup(name[:8]+name[10:])
... except LookupError:
... pass
... return None
...
>>> codecs.register(fixcp)
>>> '\xcd\xe0\xef\xee\xec\xe8\xed\xe0\xe5\xec'.decode('windows-cp1251')
u'Напоминаем'
A:
Fixed by setting
br._factory.encoding = enc
br._factory._forms_factory.encoding = enc
br._factory._links_factory._encoding = enc
(note the underscores) after br.open()
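Putting that together with the code from the question, a sketch of the full flow ('cp1251' is the codec name Python actually recognises for windows-1251):
from mechanize import Browser

br = Browser()
page = br.open('http://hunters.tclans.ru/news.php?readmore=2')

enc = 'cp1251'   # override the bogus "windows-cp1251" the server advertises
br._factory.encoding = enc
br._factory._forms_factory.encoding = enc
br._factory._links_factory._encoding = enc

br.form = br.forms().next()
print br.form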
|
How to fix encoding in Python Mechanize?
|
here is the sample code:
from mechanize import Browser
br = Browser()
page = br.open('http://hunters.tclans.ru/news.php?readmore=2')
br.form = br.forms().next()
print br.form
The problem is that the server returns an incorrect encoding (windows-cp1251). How can I manually set the encoding of the current page in mechanize?
Error:
Traceback (most recent call last):
File "/tmp/stackoverflow.py", line 5, in <module>
br.form = br.forms().next()
File "/usr/local/lib/python2.6/dist-packages/mechanize/_mechanize.py", line 426, in forms
return self._factory.forms()
File "/usr/local/lib/python2.6/dist-packages/mechanize/_html.py", line 559, in forms
self._forms_factory.forms())
File "/usr/local/lib/python2.6/dist-packages/mechanize/_html.py", line 225, in forms
_urlunparse=_rfc3986.urlunsplit,
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 967, in ParseResponseEx
_urlunparse=_urlunparse,
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 1104, in _ParseFileEx
fp.feed(data)
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 870, in feed
sgmllib.SGMLParser.feed(self, data)
File "/usr/lib/python2.6/sgmllib.py", line 104, in feed
self.goahead(0)
File "/usr/lib/python2.6/sgmllib.py", line 193, in goahead
self.handle_entityref(name)
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 751, in handle_entityref
'&%s;' % name, self._entitydefs, self._encoding))
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 238, in unescape
return re.sub(r"&#?[A-Za-z0-9]+?;", replace_entities, data)
File "/usr/lib/python2.6/re.py", line 151, in sub
return _compile(pattern, 0).sub(repl, string, count)
File "/usr/local/lib/python2.6/dist-packages/ClientForm.py", line 230, in replace_entities
repl = repl.encode(encoding)
LookupError: unknown encoding: windows-cp1251
|
[
"I don't know about Mechanize, but you could hack codecs to accept wrong encoding names that have both ‘windows’ and ‘cp’:\n>>> def fixcp(name):\n... if name.lower().startswith('windows-cp'):\n... try:\n... return codecs.lookup(name[:8]+name[10:])\n... except LookupError:\n... pass\n... return None\n... \n>>> codecs.register(fixcp)\n>>> '\\xcd\\xe0\\xef\\xee\\xec\\xe8\\xed\\xe0\\xe5\\xec'.decode('windows-cp1251')\nu'Напоминаем'\n\n",
"Fixed by setting\nbr._factory.encoding = enc\nbr._factory._forms_factory.encoding = enc\nbr._factory._links_factory._encoding = enc\n\n(note the underscores) after br.open()\n"
] |
[
3,
2
] |
[] |
[] |
[
"encoding",
"mechanize",
"python"
] |
stackoverflow_0001525295_encoding_mechanize_python.txt
|
Q:
(Python) Passing a threading.Thread object through a function?
I have a timer function:
# This is a class that schedules tasks. It will call it's ring() function
# when the timer starts, and call it's running() function when within the
# time limit, and call it's over() function when the time is up.
# This class uses SYSTEM time.
import time, threading
import settings
from object import Object
class Timer(Object, threading.Thread):
# INIT -------------------------------------------------------------
# Init vars
#
# If autotick is True (default) the timer will run in a seperate
# process. Other wise it will need to be updated automatically by
# calling tick()
def __init__(self, autotick=False):
# Call inherited __init__ first.
threading.Thread.__init__(self)
Object.__init__(self)
# Now our vars
self.startTimeString = "" # The time when the timer starts as a string
self.endTimeString = "" # The time when the timer stops as a string
self.timeFormat = "" # The string to use as the format for the string
self.set = False # The timer starts deactivated
self.process = autotick # Wether or not to run in a seperate process.
self.rung = False # Has the timer rang yet?
# ACTIVATE --------------------------------------------------------------
# Sets the timer
def activate(self, startTime, endTime, format):
# Set the timer.
self.startTimeString = startTime
self.endTimeString = endTime
self.timeFormat = format
# Conver the strings to time using format
try:
self.startTime = time.strptime(startTime, self.timeFormat)
self.endTime = time.strptime(endTime, self.timeFormat)
except ValueError:
# Error
print ("Error: Cannot convert time according to format")
return False
# Try and convert the time to seconds
try:
self.startTimeSecs = time.mktime(self.startTime)
self.endTimeSecs = time.mktime(self.endTime)
except OverflowError, ValueError:
# Error
print ("Error: Cannot convert time to seconds")
return False
# The timer is now set
self.set = True
# If self.process is true, we need to start calling tick in a
# seperate process.
if self.process:
self.deamon = True # We don't want python to hang if a timer
# is still running at exit.
self.start()
# RING -------------------------------------------------------------
# This function is called when the timer starts.
def ring(self):
pass
# RUNNING ----------------------------------------------------------
# Called when the the time is whithin the time limits.
def running(self):
pass
# OVER -------------------------------------------------------------
# Called when the time is up
def over(self):
pass
# TICK -------------------------------------------------------------
# Call this every loop (or in a seperate process)
def tick(self):
print time.time(), self.startTimeSecs, self.endTimeSecs, time.strftime("%A %H:%M", time.localtime(self.startTimeSecs))
# Check the time
if time.mktime(time.localtime()) > self.startTimeSecs and time.mktime(time.localtime()) < self.endTimeSecs and not self.rung:
# The time has come =)
# Call ring()
self.ring()
# Now set self.rung to True
self.rung = True
# If the time is up..
elif time.mktime(time.localtime()) > self.endTimeSecs and self.rung:
self.over()
# Unset the timer
self.set = False
self.rung = False
# If we are inbetween the starttime and endtime.
elif time.mktime(time.localtime()) > self.startTimeSecs and time.mktime(time.localtime()) < self.endTimeSecs and self.rung:
self.running()
# If any of those aren't true, then the timer hasn't started yet
else:
# Check if the endTime has already passed
if time.mktime(time.localtime()) > self.endTimeSecs:
# The time has already passed.
self.set = False
# THREADING STUFF --------------------------------------------------
# This is run by Threads start() method.
def run(self):
while self.set == True:
# Tick
self.tick()
# Sleep for a bit to save CPU
time.sleep(settings.TIMER_SLEEP)
And I am adding schedule blocks to a scheduler:
# LOAD -------------------------------------------------------------
# Loads schedule from a file (schedule_settings.py).
def load(self):
# Add blocks
for block in schedule_settings.BLOCKS:
# Calculate the day
start_day = str(getDate(block[1].split()[0]))
end_day = str(getDate(block[2].split()[0]))
self.scheduler.add(start_day + " " + block[1].split()[1], end_day + " " + block[2].split()[1], "%j %H:%M", block[0])
for block in self.scheduler.blocks:
block.timer.tick()
print len(self.scheduler.blocks)
# Start the scheduler (if it isn't already)
if not self.scheduler.running:
self.scheduler.start()
The add function looks like this:
# ADD --------------------------------------------------------------
# Add a scheduled time
#
# block should be a Block instance, describing what to do at this time.
def add(self, startTime, endTime, format, block):
# Add this block
newBlock = block
# Start a timer for this block
newBlock.timer = Timer()
# Figure out the time
year = time.strftime("%Y")
# Add the block timer
newBlock.timer.activate(year + " " + startTime, year + " " + endTime, "%Y " + format)
# Add this block to the list
self.blocks.append(newBlock)
return
Basically with my program you can make a week's schedule and play your videos as if it were a TV channel. A block is a period of time where the channel will play certain episodes, or certain series.
My problem is that the blocks get completely messed up after using the add function. Some get duplicated, they're in the wrong order, etc. However, before the add function they are completely fine. If I use a small number of blocks (2 or 3) it seems to work fine, though.
For example, if my schedule_settings.py (which sets up a week's schedule) looks like this:
# -*- coding: utf-8 -*-
# This file contains settings for a week's schedule
from block import Block
from series import Series
# MAIN BLOCK (All old episodes)
mainBlock = Block()
mainBlock.picker = 'random'
mainBlock.name = "Main Block"
mainBlock.auto(".")
mainBlock.old_episodes = True
# ONE PIECE
onepieceBlock = Block()
onepieceBlock.picker = 'latest'
onepieceBlock.name = "One Piece"
onepieceBlock.series = [
Series(auto="One Piece"),
]
# NEWISH STUFF
newishBlock = Block()
newishBlock.picker = 'random'
newishBlock.auto(".")
newishBlock.name = "NewishBlock"
newishBlock.exclude_series = [
#Series(auto="One Piece"),
#Series(auto="Nyan Koi!"),
]
# NEW STUFF
newBlock = Block()
newBlock.picker = 'latest'
newBlock.name = "New Stuff"
newBlock.series = [
Series(auto="Nyan Koi!"),
]
# ACTIVE BLOCKS
BLOCKS = (
# MONDAY
(mainBlock, "Monday 08:00", "Monday 22:20"),
(onepieceBlock, "Monday 22:20", "Monday 22:30"),
(newishBlock, "Monday 22:30", "Monday 23:00"),
# TUESDAY
(mainBlock, "Tuesday 08:00", "Tuesday 18:00"),
(newBlock, "Tuesday 18:00", "Tuesday 18:30"),
(newishBlock, "Tuesday 18:30", "Tuesday 22:00"),
# WEDNESDAY
(mainBlock, "Wednesday 08:00", "Wednesday 18:00"),
(newBlock, "Wednesday 18:00", "Wednesday 18:30"),
(newishBlock, "Wednesday 18:30", "Wednesday 22:00"),
# THURSDAY
(mainBlock, "Thursday 08:00", "Thursday 18:00"),
(newBlock, "Thursday 18:00", "Thursday 18:30"),
(newishBlock, "Thursday 18:30", "Thursday 22:00"),
# FRIDAY
(mainBlock, "Friday 08:00", "Friday 18:00"),
(newBlock, "Friday 18:00", "Friday 18:30"),
# WEEKEND
(newishBlock, "Saturday 08:00", "Saturday 23:00"),
(newishBlock, "Sunday 08:00", "Sunday 23:00"),
)
Before adding to the scheduler, the list produced looks fine, but after adding and then printing it out, I get:
1254810368.0 1255071600.0 1255107600.0 Friday 08:00
1254810368.0 1254777600.0 1254778200.0 Monday 22:20
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
1254810368.0 1255071600.0 1255107600.0 Friday 08:00
1254810368.0 1255107600.0 1255109400.0 Friday 18:00
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
1254810368.0 1255071600.0 1255107600.0 Friday 08:00
1254810368.0 1255107600.0 1255109400.0 Friday 18:00
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
1254810368.0 1255071600.0 1255107600.0 Friday 08:00
1254810368.0 1255107600.0 1255109400.0 Friday 18:00
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
1254810368.0 1255071600.0 1255107600.0 Friday 08:00
1254810368.0 1255107600.0 1255109400.0 Friday 18:00
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
I'm assuming this has something to do with the subclassing of threading.Thread I have done in my Timer class. Does passing it through a function and adding it to a list mess this up somehow?
(Edit) Sorry if that wasn't very concise, I was in a rush and forgot to post the most important code =( Using the timer to tick manually was just some debug code; normally I would have auto=True in the timer class.
You can find all my code at: http://github.com/bombpersons/MYOT
A:
You show us tons of code but not the key parts -- the Scheduler part (and what's that peculiar upper-case-O Object class...?). Anyway, to answer your question, passing instances of Thread subclasses through functions, adding them to lists, etc etc, is perfectly fine (though other things you're doing may not be -- e.g. you may not realize that, just because tick is a method of a Thread subclass, calling it from another thread does NOT mean it will execute in its own thread... rather, when called, it will execute in the calling thread).
May I suggest using the sched module in Python standard library to implement scheduling functionality, rather than running your own...?
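For the record, a bare-bones sched sketch (the delays and callbacks here are made up) looks like:
import sched, time

def ring():
    print "block starts"

def over():
    print "block ends"

s = sched.scheduler(time.time, time.sleep)
s.enter(10, 1, ring, ())   # call ring() 10 seconds from now
s.enter(20, 1, over, ())   # call over() 20 seconds from now
s.run()                    # blocks until all scheduled events have fired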
A:
Ugh.. I feel stupid now. The problem was that the blocks were being passed by reference to the scheduler, so every time I added a timer to a block I was overwriting the older timer.
I made a schedulerBlock class containing a timer and a block. It now works perfectly =)
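In other words, roughly (the class name here is only illustrative):
class SchedulerBlock(object):
    """Pairs one Block with its own Timer, so adding the same Block
    to several schedule slots no longer overwrites the earlier timer."""
    def __init__(self, block, timer):
        self.block = block
        self.timer = timer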
|
(Python) Passing a threading.Thread object through a function?
|
I have a timer function:
# This is a class that schedules tasks. It will call it's ring() function
# when the timer starts, and call it's running() function when within the
# time limit, and call it's over() function when the time is up.
# This class uses SYSTEM time.
import time, threading
import settings
from object import Object
class Timer(Object, threading.Thread):
# INIT -------------------------------------------------------------
# Init vars
#
# If autotick is True (default) the timer will run in a seperate
# process. Other wise it will need to be updated automatically by
# calling tick()
def __init__(self, autotick=False):
# Call inherited __init__ first.
threading.Thread.__init__(self)
Object.__init__(self)
# Now our vars
self.startTimeString = "" # The time when the timer starts as a string
self.endTimeString = "" # The time when the timer stops as a string
self.timeFormat = "" # The string to use as the format for the string
self.set = False # The timer starts deactivated
self.process = autotick # Wether or not to run in a seperate process.
self.rung = False # Has the timer rang yet?
# ACTIVATE --------------------------------------------------------------
# Sets the timer
def activate(self, startTime, endTime, format):
# Set the timer.
self.startTimeString = startTime
self.endTimeString = endTime
self.timeFormat = format
# Conver the strings to time using format
try:
self.startTime = time.strptime(startTime, self.timeFormat)
self.endTime = time.strptime(endTime, self.timeFormat)
except ValueError:
# Error
print ("Error: Cannot convert time according to format")
return False
# Try and convert the time to seconds
try:
self.startTimeSecs = time.mktime(self.startTime)
self.endTimeSecs = time.mktime(self.endTime)
except OverflowError, ValueError:
# Error
print ("Error: Cannot convert time to seconds")
return False
# The timer is now set
self.set = True
# If self.process is true, we need to start calling tick in a
# seperate process.
if self.process:
self.deamon = True # We don't want python to hang if a timer
# is still running at exit.
self.start()
# RING -------------------------------------------------------------
# This function is called when the timer starts.
def ring(self):
pass
# RUNNING ----------------------------------------------------------
# Called when the the time is whithin the time limits.
def running(self):
pass
# OVER -------------------------------------------------------------
# Called when the time is up
def over(self):
pass
# TICK -------------------------------------------------------------
# Call this every loop (or in a seperate process)
def tick(self):
print time.time(), self.startTimeSecs, self.endTimeSecs, time.strftime("%A %H:%M", time.localtime(self.startTimeSecs))
# Check the time
if time.mktime(time.localtime()) > self.startTimeSecs and time.mktime(time.localtime()) < self.endTimeSecs and not self.rung:
# The time has come =)
# Call ring()
self.ring()
# Now set self.rung to True
self.rung = True
# If the time is up..
elif time.mktime(time.localtime()) > self.endTimeSecs and self.rung:
self.over()
# Unset the timer
self.set = False
self.rung = False
# If we are inbetween the starttime and endtime.
elif time.mktime(time.localtime()) > self.startTimeSecs and time.mktime(time.localtime()) < self.endTimeSecs and self.rung:
self.running()
# If any of those aren't true, then the timer hasn't started yet
else:
# Check if the endTime has already passed
if time.mktime(time.localtime()) > self.endTimeSecs:
# The time has already passed.
self.set = False
# THREADING STUFF --------------------------------------------------
# This is run by Threads start() method.
def run(self):
while self.set == True:
# Tick
self.tick()
# Sleep for a bit to save CPU
time.sleep(settings.TIMER_SLEEP)
And I am adding schedule blocks to a scheduler:
# LOAD -------------------------------------------------------------
# Loads schedule from a file (schedule_settings.py).
def load(self):
# Add blocks
for block in schedule_settings.BLOCKS:
# Calculate the day
start_day = str(getDate(block[1].split()[0]))
end_day = str(getDate(block[2].split()[0]))
self.scheduler.add(start_day + " " + block[1].split()[1], end_day + " " + block[2].split()[1], "%j %H:%M", block[0])
for block in self.scheduler.blocks:
block.timer.tick()
print len(self.scheduler.blocks)
# Start the scheduler (if it isn't already)
if not self.scheduler.running:
self.scheduler.start()
The add function looks like this:
# ADD --------------------------------------------------------------
# Add a scheduled time
#
# block should be a Block instance, describing what to do at this time.
def add(self, startTime, endTime, format, block):
# Add this block
newBlock = block
# Start a timer for this block
newBlock.timer = Timer()
# Figure out the time
year = time.strftime("%Y")
# Add the block timer
newBlock.timer.activate(year + " " + startTime, year + " " + endTime, "%Y " + format)
# Add this block to the list
self.blocks.append(newBlock)
return
Basically with my program you can make a week's schedule and play your videos as if it were a TV channel. A block is a period of time where the channel will play certain episodes, or certain series.
My problem is that the blocks get completely messed up after using the add function. Some get duplicated, they're in the wrong order, etc. However, before the add function they are completely fine. If I use a small number of blocks (2 or 3) it seems to work fine, though.
For example, if my schedule_settings.py (which sets up a week's schedule) looks like this:
# -*- coding: utf-8 -*-
# This file contains settings for a week's schedule
from block import Block
from series import Series
# MAIN BLOCK (All old episodes)
mainBlock = Block()
mainBlock.picker = 'random'
mainBlock.name = "Main Block"
mainBlock.auto(".")
mainBlock.old_episodes = True
# ONE PIECE
onepieceBlock = Block()
onepieceBlock.picker = 'latest'
onepieceBlock.name = "One Piece"
onepieceBlock.series = [
Series(auto="One Piece"),
]
# NEWISH STUFF
newishBlock = Block()
newishBlock.picker = 'random'
newishBlock.auto(".")
newishBlock.name = "NewishBlock"
newishBlock.exclude_series = [
#Series(auto="One Piece"),
#Series(auto="Nyan Koi!"),
]
# NEW STUFF
newBlock = Block()
newBlock.picker = 'latest'
newBlock.name = "New Stuff"
newBlock.series = [
Series(auto="Nyan Koi!"),
]
# ACTIVE BLOCKS
BLOCKS = (
# MONDAY
(mainBlock, "Monday 08:00", "Monday 22:20"),
(onepieceBlock, "Monday 22:20", "Monday 22:30"),
(newishBlock, "Monday 22:30", "Monday 23:00"),
# TUESDAY
(mainBlock, "Tuesday 08:00", "Tuesday 18:00"),
(newBlock, "Tuesday 18:00", "Tuesday 18:30"),
(newishBlock, "Tuesday 18:30", "Tuesday 22:00"),
# WEDNESDAY
(mainBlock, "Wednesday 08:00", "Wednesday 18:00"),
(newBlock, "Wednesday 18:00", "Wednesday 18:30"),
(newishBlock, "Wednesday 18:30", "Wednesday 22:00"),
# THURSDAY
(mainBlock, "Thursday 08:00", "Thursday 18:00"),
(newBlock, "Thursday 18:00", "Thursday 18:30"),
(newishBlock, "Thursday 18:30", "Thursday 22:00"),
# FRIDAY
(mainBlock, "Friday 08:00", "Friday 18:00"),
(newBlock, "Friday 18:00", "Friday 18:30"),
# WEEKEND
(newishBlock, "Saturday 08:00", "Saturday 23:00"),
(newishBlock, "Sunday 08:00", "Sunday 23:00"),
)
Before adding to the scheduler, the list produced looks fine, but after adding and then printing it out, I get:
1254810368.0 1255071600.0 1255107600.0 Friday 08:00
1254810368.0 1254777600.0 1254778200.0 Monday 22:20
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
1254810368.0 1255071600.0 1255107600.0 Friday 08:00
1254810368.0 1255107600.0 1255109400.0 Friday 18:00
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
1254810368.0 1255071600.0 1255107600.0 Friday 08:00
1254810368.0 1255107600.0 1255109400.0 Friday 18:00
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
1254810368.0 1255071600.0 1255107600.0 Friday 08:00
1254810368.0 1255107600.0 1255109400.0 Friday 18:00
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
1254810368.0 1255071600.0 1255107600.0 Friday 08:00
1254810368.0 1255107600.0 1255109400.0 Friday 18:00
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
1254810368.0 1255244400.0 1255298400.0 Sunday 08:00
I'm assuming this has something to do with the subclassing of threading.Thread I have done in my Timer class. Does passing it through a function and adding it to a list mess this up somehow?
(Edit) Sorry if that wasn't very concise, I was in a rush and forgot to post the most important code =( Using the timer to tick manually was just some debug code; normally I would have auto=True in the timer class.
You can find all my code at: http://github.com/bombpersons/MYOT
|
[
"You show us tons of code but not the key parts -- the Scheduler part (and what's that peculiar upper-case-O Object class...?). Anyway, to answer your question, passing instances of Thread subclasses through functions, adding them to lists, etc etc, is perfectly fine (though other things you're doing may not be -- e.g. you may not realize that, just because tick is a method of a Thread subclass, calling it from another thread does NOT mean it will execute in its own thread... rather, when called, it will execute in the calling thread).\nMay I suggest using the sched module in Python standard library to implement scheduling functionality, rather than running your own...?\n",
"Ugh.. I feel stupid now. The problem was that, the blocks were being passed by reference to the scheduler, therefore everytime I added a timer to the block, I was overwriting the older timer. \nI made a schedulerBlock class containing a timer and a block. It now works perfectly =)\n"
] |
[
1,
0
] |
[] |
[] |
[
"class",
"inheritance",
"multithreading",
"python"
] |
stackoverflow_0001522409_class_inheritance_multithreading_python.txt
|
Q:
Python Table engine binding for Tokyo Cabinet
I am looking for Python bindings for the Table engine of Tokyo Cabinet. I tried Pytc but can only find Hash and B-tree engine support. Are there any other bindings available?
A:
Here is an implementation of search of table engine using PyTyrant:
http://github.com/ericflo/pytyrant/tree/master
A:
I was in contact with the author of tc and he told me the following:
Currently, the table (tdb) driver
exist in the master branch (unit
tests) and the fdb driver is
being developed in a separate branch.
I tried the table driver for a small test with success, am planning on trying it on larger tables soon.
A:
I've been monitoring (and sometimes improving) various Python bindings for TC for more than a year, so here's an updated list of best bindings matching your criteria.
For Tokyo Cabinet, including Tyrant: tokyo-python
For Tokyo Tyrant (pure-Python): pyrant
There are many stale and/or incomplete alternatives.
A:
My branch of pytc, called "tc", does have support for tables (TDB): http://github.com/rsms/tc
Basic example:
>>> import tc
>>> db = tc.TDB("slab.tdb", tc.TDBOWRITER | tc.TDBOCREAT)
>>> db.put('some key', {'name': 'John Doe', 'age': '45', 'city': u'Internets'})
>>> rec = db.get('some key')
>>> print rec['name']
John Doe
Performing queries:
>>> import tc
>>> db = tc.TDB("slab.tdb", tc.TDBOWRITER | tc.TDBOCREAT)
>>> db.put('torgny', {'name': 'Torgny Korv', 'age': '31', 'colors': 'red,blue,green'})
>>> db.put('rosa', {'name': 'Rosa Flying', 'age': '29', 'colors': 'pink,blue,green'})
>>> db.put('jdoe', {'name': 'John Doe', 'age': '45', 'colors': 'red,green,orange'})
>>> q = db.query()
>>> q.keys()
['torgny', 'rosa', 'jdoe']
>>> q.filter('age', tc.TDBQCNUMGE, '30')
>>> q.keys()
['torgny', 'jdoe']
>>> q.filter('colors', tc.TDBQCSTROR, 'blue')
>>> q.keys()
['torgny']
>>> # new query:
>>> q = db.query()
>>> q.order('name') # Ascending order by default
>>> q.keys()
['jdoe', 'rosa', 'torgny']
>>> q.order(type=tc.TDBQONUMASC, column='age')
>>> q.keys()
['jdoe', 'torgny', 'rosa']
More examples in the TDB unit test: http://github.com/rsms/tc/blob/master/lib/tc/test/tdb.py
A:
The only other one I know of is a fork of pytc but it looks like they have only done some refactoring and documentation work, so probably still only hash and b-tree support:
tc
If this doesn't work you are probably out of luck. I think all the tyrant bindings only use the hash engine.
|
Python Table engine binding for Tokyo Cabinet
|
I am looking for Python bindings for the Table engine of Tokyo Cabinet. I tried Pytc but can only find Hash and B-tree engine support. Are there any other bindings available?
|
[
"Here is an implementation of search of table engine using PyTyrant:\nhttp://github.com/ericflo/pytyrant/tree/master\n",
"I was in contact with the author of tc and he told me the following:\n\nCurrently, the table (tdb) driver\n exist in the master branch (unit\n tests) and the fdb driver is\n being developed in a separate branch.\n\nI tried the table driver for a small test with success, am planning on trying it on larger tables soon.\n",
"I've been monitoring (and sometimes improving) various Python bindings for TC for more than a year, so here's an updated list of best bindings matching your criteria.\n\nFor Tokyo Cabinet, including Tyrant: tokyo-python\nFor Tokyo Tyrant (pure-Python): pyrant\n\nThere are many stale and/or incomplete alternatives.\n",
"My branch of pytc called \"tc\" do have support for tables (TDB) http://github.com/rsms/tc\nBasic example:\n>>> import tc\n>>> db = tc.TDB(\"slab.tdb\", tc.TDBOWRITER | tc.TDBOCREAT)\n>>> db.put('some key', {'name': 'John Doe', 'age': '45', 'city': u'Internets'})\n>>> rec = db.get('some key')\n>>> print rec['name']\nJohn Doe\n\nPerforming queries:\n>>> import tc\n>>> db = tc.TDB(\"slab.tdb\", tc.TDBOWRITER | tc.TDBOCREAT)\n>>> db.put('torgny', {'name': 'Torgny Korv', 'age': '31', 'colors': 'red,blue,green'})\n>>> db.put('rosa', {'name': 'Rosa Flying', 'age': '29', 'colors': 'pink,blue,green'})\n>>> db.put('jdoe', {'name': 'John Doe', 'age': '45', 'colors': 'red,green,orange'})\n>>> q = db.query()\n>>> q.keys()\n['torgny', 'rosa', 'jdoe']\n>>> q.filter('age', tc.TDBQCNUMGE, '30')\n>>> q.keys()\n['torgny', 'jdoe']\n>>> q.filter('colors', tc.TDBQCSTROR, 'blue')\n>>> q.keys()\n['torgny']\n>>> # new query:\n>>> q = db.query()\n>>> q.order('name') # Ascending order by default\n>>> q.keys()\n['jdoe', 'rosa', 'torgny']\n>>> q.order(type=tc.TDBQONUMASC, column='age')\n>>> q.keys()\n['jdoe', 'torgny', 'rosa']\n\nMore examples in the TDB unit test: http://github.com/rsms/tc/blob/master/lib/tc/test/tdb.py\n",
"The only other one I know of is a fork of pytc but it looks like they have only done some refactoring and documentation work, so probably still only hash and b-tree support:\ntc\nIf this doesn't work you are probably out of luck. I think all the tyrant bindings only use the hash engine.\n"
] |
[
7,
4,
2,
2,
1
] |
[] |
[] |
[
"python",
"tokyo_cabinet"
] |
stackoverflow_0000601865_python_tokyo_cabinet.txt
|
Q:
How to disable URL redirection in Python when using M2Crypto SSL?
This is what my code looks like:
url_object = urlparse(url)
hostname = url_object.hostname
port = url_object.port
uri = url_object.path if url_object.path else '/'
ctx = SSL.Context()
if ctx.load_verify_locations(cafile='ca-bundle.crt') != 1: raise Exception("Could not load CA certificates.")
ctx.set_verify(SSL.verify_peer | SSL.verify_fail_if_no_peer_cert, depth=5)
c = httpslib.HTTPSConnection(hostname, port, ssl_context=ctx)
c.request('GET', uri)
data = c.getresponse().read()
c.close()
return data
How can I disable url redirection in this code? I am hoping there would be some way of setting this option on connection object 'c' in the above code.
Thanks in advance for the help.
A:
httpslib.HTTPSConnection does not redirect automatically. Like Lennart says in the comment, it subclasses from httplib.HTTPConnection which does not redirect either. It is easy to test with httplib:
import httplib
c = httplib.HTTPConnection('www.heikkitoivonen.net', 80)
c.request('GET', '/deadwinter/disclaimer.html')
data = c.getresponse().read()
c.close()
print data
This prints:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="http://www.heikkitoivonen.net/deadwinter/copyright.html">here</a>.</p>
</body></html>
You have to handle redirects yourself with http(s)lib, see for example the request function in silmut (a simple test framework I wrote a while back).
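In practice, handling it yourself just means checking the status code before reading the body. A rough sketch, reusing the connection c from the question:
resp = c.getresponse()
if resp.status in (301, 302, 303, 307):
    # the server asked for a redirect, but we choose not to follow it
    location = resp.getheader('Location')
    print 'Not following redirect to', location
else:
    data = resp.read()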
|
How to disable URL redirection in Python when using M2Crypto SSL?
|
This is what my code looks like:
url_object = urlparse(url)
hostname = url_object.hostname
port = url_object.port
uri = url_object.path if url_object.path else '/'
ctx = SSL.Context()
if ctx.load_verify_locations(cafile='ca-bundle.crt') != 1: raise Exception("Could not load CA certificates.")
ctx.set_verify(SSL.verify_peer | SSL.verify_fail_if_no_peer_cert, depth=5)
c = httpslib.HTTPSConnection(hostname, port, ssl_context=ctx)
c.request('GET', uri)
data = c.getresponse().read()
c.close()
return data
How can I disable url redirection in this code? I am hoping there would be some way of setting this option on connection object 'c' in the above code.
Thanks in advance for the help.
|
[
"httpslib.HTTPSConnection does not redirect automatically. Like Lennart says in the comment, it subclasses from httplib.HTTPConnection which does not redirect either. It is easy to test with httplib:\nimport httplib\n\nc = httplib.HTTPConnection('www.heikkitoivonen.net', 80)\nc.request('GET', '/deadwinter/disclaimer.html')\ndata = c.getresponse().read()\nc.close()\nprint data\n\nThis prints:\n<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\">\n<html><head>\n<title>301 Moved Permanently</title>\n</head><body>\n<h1>Moved Permanently</h1>\n<p>The document has moved <a href=\"http://www.heikkitoivonen.net/deadwinter/copyright.html\">here</a>.</p>\n</body></html>\n\nYou have to handle redirects yourself with http(s)lib, see for example the request function in silmut (a simple test framework I wrote a while back).\n"
] |
[
0
] |
[] |
[] |
[
"m2crypto",
"python",
"redirect",
"url"
] |
stackoverflow_0001523654_m2crypto_python_redirect_url.txt
|
Q:
Extracting data from a CSV file in Python
I just got my data and it is given to me as a csv file.
It looks like this in data studio(where the file was taken).
Counts frequency
300 1
302 5
303 7
Excel can't handle the computations so that's why I'm trying to load it in python(it has scipy :D).
I want to load the data in an array:
Counts = [300, 302, 303]
frequency = [1, 5, 7]
How will I code this?
A:
Use the Python csv module.
A:
import csv
counts = []
frequencies = []
for d in csv.DictReader(open('yourfile.csv'), delimiter='\t'):
counts.append(int(d['Counts']))
frequencies.append(int(d['frequency']))
print 'Counts = ', counts
print 'frequency = ', frequencies
|
Extracting data from a CSV file in Python
|
I just got my data and it is given to me as a csv file.
It looks like this in data studio(where the file was taken).
Counts frequency
300 1
302 5
303 7
Excel can't handle the computations so that's why I'm trying to load it in python(it has scipy :D).
I want to load the data in an array:
Counts = [300, 302, 303]
frequency = [1, 5, 7]
How will I code this?
|
[
"Use the Python csv module.\n",
"import csv\n\ncounts = []\nfrequencies = []\n\nfor d in csv.DictReader(open('yourfile.csv'), delimiter='\\t'):\n counts.append(int(d['Counts']))\n frequencies.append(int(d['frequency']))\n\nprint 'Counts = ', counts\nprint 'frequency = ', frequencies\n\n"
] |
[
9,
7
] |
[] |
[] |
[
"csv",
"python"
] |
stackoverflow_0001526607_csv_python.txt
|
Q:
redirect() Not Actually Redirecting (Routes for Pylons)
I am constructing a table of routes in my Pylons app. I can't get redirect() to work the way that it should work. I'm not sure what I'm doing wrong.
Here is an example of redirect() usage from the Routes documentation:
map.redirect("/home/index", "/", _redirect_code="301 Moved Permanently")
Here is what appears in my routing.py file:
map.redirect("/view", "/", _redirect_code="301 Moved Permanently")
Here is a route using redirect() that appears at the end of my routing.py file:
map.redirect('/*(url)/', '/{url}', _redirect_code="301 Moved Permanently")
This route works just fine, so I know that redirect() is present and functional. Therefore, I am doing something wrong in the /view redirect. I know this because when I point my browser at /view, I get the 404 page instead of getting redirected. Going to / works just fine, so I don't think that the problem is there, either. I think that redirect() in Routes is a great idea and I would like to be able to use it as intended, but I'm not sure what I'm doing wrong here.
ETA @jasonjs: I believe that no other route matches. When I try to access /view and then look at Paster's output, here's what I get:
21:22:26,276 DEBUG [routes.middleware] No route matched for GET /view
which seems pretty conclusive to me. I should mention that there is a route that matches POST requests to /view:
map.connect('/view', controller='view', action='search', conditions=dict(method=['POST']))
That route also works correctly, and I have it listed earlier in routing.py so that it tries to match that before it tries to match the /view[GET] route.
A:
That syntax works, so there must be something else going on.
Routes is sensitive to order. Is there another route above the redirect that would match /view to a controller that would return a 404?
Routes is also sensitive to trailing slashes. Did you accidentally type a trailing slash in the browser but not in the route, or vice versa?
Finally, in development.ini, if you set level = DEBUG under [logger_routes], you can then check the log messages after visiting /view to see what was matched.
ETA: I just tried putting a POST-matching rule before a redirect for the same path, and it worked as expected. Are you on the latest Routes version (1.11)? Otherwise I don't have anything else for you, not being able to see code. It may just be a matter of starting with a barebones test case and building up until it breaks, or taking things away until it works...
|
redirect() Not Actually Redirecting (Routes for Pylons)
|
I am constructing a table of routes in my Pylons app. I can't get redirect() to work the way that it should work. I'm not sure what I'm doing wrong.
Here is an example of redirect() usage from the Routes documentation:
map.redirect("/home/index", "/", _redirect_code="301 Moved Permanently")
Here is what appears in my routing.py file:
map.redirect("/view", "/", _redirect_code="301 Moved Permanently")
Here is a route using redirect() that appears at the end of my routing.py file:
map.redirect('/*(url)/', '/{url}', _redirect_code="301 Moved Permanently")
This route works just fine, so I know that redirect() is present and functional. Therefore, I am doing something wrong in the /view redirect. I know this because when I point my browser at /view, I get the 404 page instead of getting redirected. Going to / works just fine, so I don't think that the problem is there, either. I think that redirect() in Routes is a great idea and I would like to be able to use it as intended, but I'm not sure what I'm doing wrong here.
ETA @jasonjs: I believe that no other route matches. When I try to access /view and then look at Paster's output, here's what I get:
21:22:26,276 DEBUG [routes.middleware] No route matched for GET /view
which seems pretty conclusive to me. I should mention that there is a route that matches POST requests to /view:
map.connect('/view', controller='view', action='search', conditions=dict(method=['POST']))
That route also works correctly, and I have it listed earlier in routing.py so that it tries to match that before it tries to match the /view[GET] route.
|
[
"That syntax works, so there must be something else going on.\nRoutes is sensitive to order. Is there another route above the redirect that would match /view to a controller that would return a 404?\nRoutes is also sensitive to trailing slashes. Did you accidentally type a trailing slash in the browser but not in the route, or vice versa?\nFinally, in development.ini, if you set level = DEBUG under [logger_routes], you can then check the log messages after visiting /view to see what was matched.\nETA: I just tried putting a POST-matching rule before a redirect for the same path, and it worked as expected. Are you on the latest Routes version (1.11)? Otherwise I don't have anything else for you, not being able to see code. It may just be a matter of starting with a barebones test case and building up until it breaks, or taking things away until it works...\n"
] |
[
1
] |
[] |
[] |
[
"pylons",
"python",
"routes"
] |
stackoverflow_0001523613_pylons_python_routes.txt
|
Q:
I'm trying to figure out how to get python to simply post the lyrics to a song
I'm a beginner in python programming and on one of my assignments I am to use python to write a code that will say these lyrics.
This old man, he played one
He played knick-knack on my thumb
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played two
He played knick-knack on my shoe
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played three
He played knick-knack on my knee
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played four
He played knick-knack on my door
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played five
He played knick-knack on my hive
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played six
He played knick-knack on my sticks
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played seven
He played knick-knack up in heaven
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played eight
He played knick-knack on my gate
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played nine
He played knick-knack on my spine
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played ten
He played knick-knack once again
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This is what I have so far, but it doesn't do what I want it to do. I'm not sure how to phrase the while loop or get it to choose a single word from the list in order.
num = ['one','two','three','four','five','six','nine','ten']
end = ['on my thumb','on my shoe','on my knee','on my door','on my hive','on my sticks','up in heaven','on my gate','on my spine','once again']
z=1
print "This old man, he played",(num)
print "He played knick-knack", (end)
print "Knick-knack paddywhack, give your dog a bone"
print "This old man came rolling home"
A:
Hint: You will need to use a loop
I would look at the Python Flow Control documentation. Also note the range() function.
You can grab the n'th element from an array like this:
val = some_array[n]
And remember that in Python, arrays start counting at 0, not 1.
A:
There is in fact a more Pythonic answer to your homework. Again, I can't give it to you directly, but as well as looking at the loop documentation, you might also want to look at zip. You can do a lot in Python without directly using index variables.
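For illustration, a minimal sketch of the zip idea with made-up lists (my own example, deliberately not the full homework answer):
nums = ['one', 'two', 'three']
ends = ['on my thumb', 'on my shoe', 'on my knee']
for n, e in zip(nums, ends):
    # n and e walk through the two lists in step
    print "This old man, he played", n
    print "He played knick-knack", e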
A:
First, read up on tuples and lists in Python; then read up on the for loop.
I suggest that you use a tuple to store things that go together.
# define a list of tuples
lst = [ ("eggs", "an omelet"), ("bread", "a sandwich"), ("sugar", "cookies") ]
for ingredient, food in lst:
print "I need", ingredient, "to make", food + "."
If you run the above code, here is the output you will get:
I need eggs to make an omelet.
I need bread to make a sandwich.
I need sugar to make cookies.
This is the Pythonic way to solve this problem. Here is another way, which I don't like as well:
ingredients = ["eggs", "bread", "sugar"]
foods = ["an omelet", "a sandwich", "cookies"]
for i in range(len(ingredients)):
print "I need", ingredients[i], "to make", foods[i] + "."
This will print the same output as the previous example, but it's harder to work with. You need to make sure that the two lists stay synchronized. The whole "list of tuples" thing may seem weird, but it's actually much easier once you are used to it.
I suggest you get the book Learning Python and study that; it will teach you a lot and it is very clear.
A:
I am not sure if I want to do your homework, but I will give you some hints: To me, this looks like a good place for a for loop. Use range, xrange, or enumerate.
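As a small illustration of enumerate (a toy example of mine, not the homework solution):
words = ['alpha', 'beta', 'gamma']
for i, w in enumerate(words):
    print i, w   # prints 0 alpha, 1 beta, 2 gamma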
A:
First, you probably want to say z = 0 because arrays in Python (and most programming languages) are zero-indexed - the first element of num is num[0], the second is num[1], and so on. Though we don't really even need z. More on that later.
Second, I would change num and end to nums and endings. If we give them a plural name it's clearer that we're using a list. Also, lists aren't individual values, but a collection of values. We need to get an element of that collection in this case. We wouldn't say (nums) (or (num) if you keep it your way) - that gets the entire list. We would say nums[x] (or num[x]) which gets element x (remember that arrays are zero-indexed!), where x can be a number, a variable holding a number, or any arbitrary expression that evaluates to a number.
Third, you could use a while loop, but even better would be a for loop and the range() and len() functions. The syntax for for loops is:
for x in y:
Where x is a temporary variable, and y is a list of items. The loop iterates over all the items in y, setting each one to x for the body of the loop. The range() function creates a list of numbers (which can be easily used as array indices, hint hint). The len() function takes a list and returns the length of the list. Combine these in the appropriate order to create a loop.
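A minimal sketch of combining range() and len() as just described (the list contents are placeholders):
items = ['a', 'b', 'c']
for x in range(len(items)):
    # x runs over the valid indices 0 .. len(items)-1
    print x, items[x]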
A:
Your rhyme encourages violence against the Irish. Nevertheless, here's some help.
Since the rhymes themselves have two parts, I've put each rhyme as a two part list, put together into a list of all those rhymes.
# List of rhymes, with each part of the rhyme as another list.
rhymes = ['one','on my thumb'],['two','on my shoe']
# The total amount of rhymes len(rhymes) lets us know when to stop...
count = 0
while count < len(rhymes):
print "This old man, he played "+rhymes[count][0]
print "He played knick-knack "+rhymes[count][1]
print "Knick-knack paddywhack, give your dog a bone"
print "This old man came rolling home \n"
count += 1
A:
first half shamelessly copied from nailer's answer:
# List of rhymes, with each part of the rhyme as another list.
rhymes = ['one','on my thumb'],['two','on my shoe']
for number, position in rhymes:
# print your output in terms of number and position
|
I'm trying to figure out how to get python to simply post the lyrics to a song
|
I'm a beginner in python programming and on one of my assignments I am to use python to write a code that will say these lyrics.
This old man, he played one
He played knick-knack on my thumb
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played two
He played knick-knack on my shoe
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played three
He played knick-knack on my knee
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played four
He played knick-knack on my door
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played five
He played knick-knack on my hive
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played six
He played knick-knack on my sticks
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played seven
He played knick-knack up in heaven
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played eight
He played knick-knack on my gate
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played nine
He played knick-knack on my spine
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This old man, he played ten
He played knick-knack once again
Knick-knack paddywhack, give your dog a bone
This old man came rolling home
This is what I have so far, but it doesn't do what I want it to do. I'm not sure how to phrase the while loop or get it to choose a single word from the list in order.
num = ['one','two','three','four','five','six','nine','ten']
end = ['on my thumb','on my shoe','on my knee','on my door','on my hive','on my sticks','up in heaven','on my gate','on my spine','once again']
z=1
print "This old man, he played",(num)
print "He played knick-knack", (end)
print "Knick-knack paddywhack, give your dog a bone"
print "This old man came rolling home"
|
[
"Hint: You will need to use a loop\nI would look at the Python Flow Control documentation. Also note the range() function.\nYou can grab the n'th element from an array like this:\nval = some_array[n]\n\nAnd remember that in Python, arrays start counting at 0, not 1.\n",
"There is in fact a more Pythonic answer to your homework. Again, I can't give it to you directly, but as well as looking at the loop documentation, you might also want to look at zip. You can do a lot in Python without directly using index variables.\n",
"First, read up on tuples and lists in Python; then read up on the for loop.\nI suggest that you use a tuple to store things that go together.\n# define a list of tuples\nlst = [ (\"eggs\", \"an omelet\"), (\"bread\", \"a sandwich\"), (\"sugar\", \"cookies\") ]\n\nfor ingredient, food in lst:\n print \"I need\", ingredient, \"to make\", food + \".\"\n\nIf you run the above code, here is the output you will get:\nI need eggs to make an omelet.\nI need bread to make a sandwich.\nI need sugar to make cookies.\n\nThis is the Pythonic way to solve this problem. Here is another way, which I don't like as well:\ningredients = [\"eggs\", \"bread\", \"sugar\"]\nfoods = [\"an omelet\", \"a sandwich\", \"cookies\"]\n\nfor i in range(len(ingredients)):\n print \"I need\", ingredients[i], \"to make\", foods[i] + \".\"\n\nThis will print the same output as the previous example, but it's harder to work with. You need to make sure that the two lists stay synchronized. The whole \"list of tuples\" thing may seem weird, but it's actually much easier once you are used to it.\nI suggest you get the book Learning Python and study that; it will teach you a lot and it is very clear.\n",
"I am not sure if I want to do your homework, but I will give you some hints: To me, this looks like a good place for a for loop. Use range, xrange, or enumerate.\n",
"First, you probably want to say z = 0 because arrays in Python (and most programming languages) are zero-indexed - the first element of num is num[0], the second is num[1], and so on. Though we don't really even need z. More on that later.\nSecond, I would change num and end to nums and endings. If we give them a plural name it's clearer that we're using a list. Also, lists aren't individual values, but a collection of values. We need to get an element of that collection in this case. We wouldn't say (nums) (or (num) if you keep it your way) - that gets the entire list. We would say nums[x] (or num[x]) which gets element x (remember that arrays are zero-indexed!), where x can be a number, a variable holding a number, or any arbitrary expression that evaluates to a number.\nThird, you could use a while loop, but even better would be a for loop and the range() and len() functions. The syntax for for loops is:\nfor x in y:\n\nWhere x is a temporary variable, and y is a list of items. The loop iterates over all the items in y, setting each one to x for the body of the loop. The range() function creates a list of numbers (which can be easily used as array indices, hint hint). The len() function takes a list and returns the length of the list. Combine these in the appropriate order to create a loop.\n",
"Your rhyme encourages violence against the Irish. Nevertheless, here's some help. \nSince the rhymes themselves have two parts, I've put each rhyme as a two part list, put together into a list of all those rhymes. \n# List of rhymes, with each part of the rhyme as another list.\nrhymes = ['one','on my thumb'],['two','on my shoe']\n\n# The total amount of rhymes len(rhymes) lets us know when to stop...\ncount = 0\nwhile count < len(rhymes):\n print \"This old man, he played \"+rhymes[count][0]\n print \"He played knick-knack \"+rhymes[count][1]\n print \"Knick-knack paddywhack, give your dog a bone\"\n print \"This old man came rolling home \\n\"\n count += 1\n\n",
"first half shamelessly copied from from nailer's answer:\n# List of rhymes, with each part of the rhyme as another list.\nrhymes = ['one','on my thumb'],['two','on my shoe']\n\nfor number, position in rhymes:\n # print your output in terms of number and position\n\n"
] |
[
5,
2,
2,
0,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001527097_python.txt
|
Q:
How to do a Post Request in Python?
Hi I'm sitting in a Greyhound Bus with Wifi and want to connect a second Device to the Network. But I have to accept an onscreen contract and the device does not have a browser.
To accept the contract, the following form has to be submitted. The device has no cURL but all the standard Python 2.6 libraries.
<form method="POST" name="wifi" id="wifi" action="http://192.168.100.1:5280/">
<input type="image" name="mode_login" value="Agree" src="btn_accept.gif" />
<input type="hidden" name="redirect" value="http://stackoverflow.com/">
</form>
How would I write a quick python script to accept the contract?
A:
I think this should do the trick:
import urllib
data = urllib.urlencode({"mode_login":"Agree","redirect":"http://stackoverflow.com"})
result = urllib.urlopen("http://192.168.100.1:5280/",data).read()
print result
|
How to do a Post Request in Python?
|
Hi I'm sitting in a Greyhound Bus with Wifi and want to connect a second Device to the Network. But I have to accept an onscreen contract and the device does not have a browser.
To accept the contract, the following form has to be submitted. The device has no cURL but all the standard Python 2.6 libraries.
<form method="POST" name="wifi" id="wifi" action="http://192.168.100.1:5280/">
<input type="image" name="mode_login" value="Agree" src="btn_accept.gif" />
<input type="hidden" name="redirect" value="http://stackoverflow.com/">
</form>
How would I write a quick python script to accept the contract?
|
[
"I think this should do the trick:\nimport urllib\ndata = urllib.urlencode({\"mode_login\":\"Agree\",\"redirect\":\"http://stackoverflow.com\"})\nresult = urllib.urlopen(\"http://192.168.100.1:5280/\",data).read()\nprint result\n\n"
] |
[
2
] |
[] |
[] |
[
"http_post",
"networking",
"python"
] |
stackoverflow_0001527337_http_post_networking_python.txt
|
Q:
How does Google Books work? Are there any open source alternatives?
I have been asked to publish a complete book online in a similar way to how Google Books does it, i.e. it's viewable and printable but not downloadable.
Is the process basically "high quality scanning"? Are there any open source solutions for "mass generation" of a "watermark" on those high quality images? Suppose you have an original image, and when the user views it online, I re-create the image and add a watermark and some other text on top of it "on-the-fly". Does such a library exist, in Python of course :)
Any tips? If you have done this before please share.
Thanks
A:
Unfortunately Google uses a patented technique for scanning its books, so you will probably have to stick to traditional methods.
Google created some seriously nifty
infrared camera technology that
detects the three-dimensional shape
and angle of book pages when the book
is placed in the scanner. This
information is transmitted to the OCR
software, which adjusts for the
distortions and allows the OCR
software to read text more accurately.
No more broken bindings, no more
inefficient glass plates.
Basically you will need to scan the book using an OCR application (tesseract is good), then I would generate a PDF/image from the scanned text, and finally add the watermark on top. The Python Imaging Library would seem to be the best tool for this.
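For illustration, a minimal watermarking sketch with the Python Imaging Library; the file names are hypothetical and the watermark image is assumed to have an alpha channel:
import Image  # classic PIL import; newer installs use `from PIL import Image`

page = Image.open("page_scan.png").convert("RGBA")
mark = Image.open("watermark.png").convert("RGBA")

# paste the watermark near the lower-right corner, using its alpha channel as the mask
x = page.size[0] - mark.size[0] - 10
y = page.size[1] - mark.size[1] - 10
page.paste(mark, (x, y), mark)
page.convert("RGB").save("page_watermarked.jpg", "JPEG")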
A:
Don't know much about Google Books, but Python Imaging Library can do watermarking (there's ASPN recipe for that).
A:
See the slashdot question on reproducing Google's photo + laser grid technique.
|
How does Google Books work? Are there any open source alternatives?
|
I have been asked to publish a complete book online in a similar way to how Google Books does it, i.e. it's viewable and printable but not downloadable.
Is the process basically "high quality scanning"? Are there any open source solutions for "mass generation" of a "watermark" on those high quality images? Suppose you have an original image, and when the user views it online, I re-create the image and add a watermark and some other text on top of it "on-the-fly". Does such a library exist, in Python of course :)
Any tips? If you have done this before please share.
Thanks
|
[
"Unfortunately Google uses a patented technique for scanning it's books, so you will probably have to stick to traditional methods.\n\nGoogle created some seriously nifty\n infrared camera technology that\n detects the three-dimensional shape\n and angle of book pages when the book\n is placed in the scanner. This\n information is transmitted to the OCR\n software, which adjusts for the\n distortions and allows the OCR\n software to read text more accurately.\n No more broken bindings, no more\n inefficient glass plates.\n\nBasically you will need to scan the book using an OCR application (tesseract is good), then I would generate a PDF/image from the scanned text, and finally add the watermark on top. The Python Imaging Library would seem to be the best tool for this.\n",
"Don't know much about Google Books, but Python Imaging Library can do watermarking (there's ASPN recipe for that).\n",
"See the slashdot question on reproducing Google's photo + laser grid technique.\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"image_processing",
"python",
"watermark"
] |
stackoverflow_0001527549_image_processing_python_watermark.txt
|
Q:
How do I convert Perl's pack 'Nc*' format to struct.pack for Python?
I'm trying to convert a Perl script to python, and it uses quite a few different packs. I've been able to figure out the lettering differences in the "templates" for each one, but I'm having an issue with understanding how to handle Perl's lack of length declaration.
example:
pack('Nc*',$some_integer,$long_array_of_integers);
I don't see an analog for this "*" feature in struct.pack, on Python. Any ideas on how to convert this to Python?
A:
How about this?
struct.pack('>I', some_integer) + struct.pack('b'*len(long_array), *long_array)
A:
Perl's pack is using the '*' character similar to in regular expressions--meaning a wildcard for more of the same. Here, of course, it means more signed ints.
In Python, you'd just loop through the string and concat the pieces:
result = struct.pack('>L', some_integer)
for c in long_array_of_integers:
result += struct.pack('b',c)
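As a variation (my own sketch, not from either answer), the whole structure can also be packed in one call by building the format string dynamically:
import struct

some_integer = 1234
long_array_of_integers = [1, 2, 3]

# Perl's 'N' is a big-endian unsigned 32-bit int ('>L'); 'c*' is a run of signed chars ('b')
fmt = '>L%db' % len(long_array_of_integers)
packed = struct.pack(fmt, some_integer, *long_array_of_integers)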
|
How do I convert Perl's pack 'Nc*' format to struct.pack for Python?
|
I'm trying to convert a Perl script to python, and it uses quite a few different packs. I've been able to figure out the lettering differences in the "templates" for each one, but I'm having an issue with understanding how to handle Perl's lack of length declaration.
example:
pack('Nc*',$some_integer,$long_array_of_integers);
I don't see an analog for this "*" feature in struct.pack, on Python. Any ideas on how to convert this to Python?
|
[
"How about this?\nstruct.pack('>I', some_integer) + struct.pack('b'*len(long_array), *long_array)\n\n",
"Perl's pack is using the '*' character similar to in regular expressions--meaning a wildcard for more of the same. Here, of course, it means more signed ints.\nIn Python, you'd just loop through the string and concat the pieces:\nresult = struct.pack('>L', some_integer)\nfor c in long_array_of_integers:\n result += struct.pack('b',c)\n\n"
] |
[
7,
1
] |
[] |
[] |
[
"pack",
"perl",
"python",
"struct"
] |
stackoverflow_0001527534_pack_perl_python_struct.txt
|
Q:
Get records before and after current selection in Django query
It sounds like an odd one but it's a really simple idea. I'm trying to make a simple Flickr for a website I'm building. This specific problem comes when I want to show a single photo (from my Photo model) on the page but I also want to show the image before it in the stream and the image after it.
If I were only sorting these streams by date, or was only sorting by ID, that might be simpler... But I'm not. I want to allow the user to sort and filter by a whole variety of methods. The sorting is simple. I've done that and I have a result-set, containing 0-many Photos.
If I want a single Photo, I start off with that filtered/sorted/etc stream. From it I need to get the current Photo, the Photo before it and the Photo after it.
Here's what I'm looking at, at the moment.
prev = None
next = None
photo = None
for i in range(1, filtered_queryset.count()):
    if filtered_queryset[i].pk == desired_pk:
if i>1: prev = filtered_queryset[i-1]
if i<filtered_queryset.count(): next = filtered_queryset[i+1]
photo = filtered_queryset[i]
break
It just seems disgustingly messy. And inefficient. Oh my lord, so inefficient. Can anybody improve on it though?
Django queries are late-binding, so it would be nice to make use of that though I guess that might be impossible given my horrible restrictions.
Edit: it occurs to me that I can just chuck in some SQL to re-filter queryset. If there's a way of selecting something with its two (or one, or zero) closest neighbours with SQL, I'd love to know!
A:
I see the following possibilities:
Your URL query parameters contain the sort/filtering information and some kind of 'item number', which is the item number within your filtered queryset. This is the simple case - previous and next are item number minus one and plus one respectively (plus some bounds checking)
You want the URL to be a permalink, and contain the photo primary key (or some unique ID). In this case, you are presumably storing the sorting/filtering in:
in the URL as query parameters. In this case you don't have true permalinks, and so you may as well stick the item number in the URL as well, getting you back to option 1.
hidden fields in the page, and using POSTs for links instead of normal links. In this case, stick the item number in the hidden fields as well.
session data/cookies. This will break if the user has two tabs open with different sorts/filtering applied, but that might be a limitation you don't mind - after all, you have envisaged that they will probably just be using one tab and clicking through the list. In this case, store the item number in the session as well. You might be able to do something clever to "namespace" the item number for the case where they have multiple tabs open.
In short, store the item number wherever you are storing the filtering/sorting information.
A:
You could try the following:
Evaluate the filtered/sorted queryset and get the list of photo ids, which you hold in the session. These ids all match the filter/sort criteria.
Keep the current index into this list in the session too, and update it when the user moves to the previous/next photo. Use this index to get the prev/current/next ids to use in showing the photos.
When the filtering/sorting criteria change, re-evaluate the list and set the current index to a suitable value (e.g. 0 for the first photo in the new list).
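A minimal sketch of step 2 above (the names photo_ids and current_pk are my placeholders, not Django API):
def neighbours(photo_ids, current_pk):
    i = photo_ids.index(current_pk)
    prev_pk = photo_ids[i - 1] if i > 0 else None
    next_pk = photo_ids[i + 1] if i < len(photo_ids) - 1 else None
    return prev_pk, next_pk

# the two or three resulting pks can then be fetched in one query with Photo.objects.filter(pk__in=[...])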
|
Get records before and after current selection in Django query
|
It sounds like an odd one but it's a really simple idea. I'm trying to make a simple Flickr for a website I'm building. This specific problem comes when I want to show a single photo (from my Photo model) on the page but I also want to show the image before it in the stream and the image after it.
If I were only sorting these streams by date, or was only sorting by ID, that might be simpler... But I'm not. I want to allow the user to sort and filter by a whole variety of methods. The sorting is simple. I've done that and I have a result-set, containing 0-many Photos.
If I want a single Photo, I start off with that filtered/sorted/etc stream. From it I need to get the current Photo, the Photo before it and the Photo after it.
Here's what I'm looking at, at the moment.
prev = None
next = None
photo = None
for i in range(1, filtered_queryset.count()):
    if filtered_queryset[i].pk == desired_pk:
if i>1: prev = filtered_queryset[i-1]
if i<filtered_queryset.count(): next = filtered_queryset[i+1]
photo = filtered_queryset[i]
break
It just seems disgustingly messy. And inefficient. Oh my lord, so inefficient. Can anybody improve on it though?
Django queries are late-binding, so it would be nice to make use of that though I guess that might be impossible given my horrible restrictions.
Edit: it occurs to me that I can just chuck in some SQL to re-filter queryset. If there's a way of selecting something with its two (or one, or zero) closest neighbours with SQL, I'd love to know!
|
[
"I see the following possibilities:\n\nYour URL query parameters contain the sort/filtering information and some kind of 'item number', which is the item number within your filtered queryset. This is the simple case - previous and next are item number minus one and plus one respectively (plus some bounds checking)\nYou want the URL to be a permalink, and contain the photo primary key (or some unique ID). In this case, you are presumably storing the sorting/filtering in:\n\nin the URL as query parameters. In this case you don't have true permalinks, and so you may as well stick the item number in the URL as well, getting you back to option 1.\nhidden fields in the page, and using POSTs for links instead of normal links. In this case, stick the item number in the hidden fields as well.\nsession data/cookies. This will break if the user has two tabs open with different sorts/filtering applied, but that might be a limitation you don't mind - after all, you have envisaged that they will probably just be using one tab and clicking through the list. In this case, store the item number in the session as well. You might be able to do something clever to \"namespace\" the item number for the case where they have multiple tabs open.\n\n\nIn short, store the item number wherever you are storing the filtering/sorting information.\n",
"You could try the following:\n\nEvaluate the filtered/sorted queryset and get the list of photo ids, which you hold in the session. These ids all match the filter/sort criteria.\nKeep the current index into this list in the session too, and update it when the user moves to the previous/next photo. Use this index to get the prev/current/next ids to use in showing the photos.\nWhen the filtering/sorting criteria change, re-evaluate the list and set the current index to a suitable value (e.g. 0 for the first photo in the new list).\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"django",
"django_models",
"django_queryset",
"python",
"sql"
] |
stackoverflow_0001527710_django_django_models_django_queryset_python_sql.txt
|
Q:
Creating tables with pylons and SQLAlchemy
I'm using SQLAlchemy and I can create tables that I have defined in /model/__init__.py but I have defined my classes, tables and their mappings in other files found in the /model directory.
For example I have a profile class and a profile table which are defined and mapped in /model/profile.py
To create the tables I run:
paster setup-app development.ini
But my problem is that the tables that I have defined in /model/__init__.py are created properly but the table definitions found in /model/profile.py are not created. How can I execute the table definitions found in the /model/profile.py so that all my tables can be created?
Thanks for the help!
A:
I ran into the same problem with my first real Pylons project. The solution that worked for me was this:
Define tables and classes in your profile.py file
In your __init__.py add from profile import * after your def init_model
I then added all of my mapper definitions afterwards. Keeping them all in the init file solved some problems I was having relating between tables defined in different files.
Also, I've since created projects using the declarative method and didn't need to define the mapping in the init file.
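A rough sketch of what that __init__.py ends up looking like ("myapp" is a placeholder project name, and the init_model body is just the stock Pylons one, which may differ in your project):
# model/__init__.py
from myapp.model import meta

def init_model(engine):
    """Call me before using any of the tables or classes in the model."""
    meta.Session.configure(bind=engine)
    meta.engine = engine

# imported after init_model so the tables defined there register with meta.metadata
from profile import *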
A:
Just import your other tables' modules in your __init__.py, and use the metadata object from model.meta in the other files. Pylons' default setup_app function creates all tables found in the metadata object from model.meta after importing it.
A:
If you are using declarative style, be sure to use Base.meta for tables generation.
|
Creating tables with pylons and SQLAlchemy
|
I'm using SQLAlchemy and I can create tables that I have defined in /model/__init__.py but I have defined my classes, tables and their mappings in other files found in the /model directory.
For example I have a profile class and a profile table which are defined and mapped in /model/profile.py
To create the tables I run:
paster setup-app development.ini
But my problem is that the tables that I have defined in /model/__init__.py are created properly but the table definitions found in /model/profile.py are not created. How can I execute the table definitions found in the /model/profile.py so that all my tables can be created?
Thanks for the help!
|
[
"I ran into the same problem with my first real Pylons project. The solution that worked for me was this:\n\nDefine tables and classes in your profile.py file\nIn your __init__.py add from profile import * after your def init_model\nI then added all of my mapper definitions afterwards. Keeping them all in the init file solved some problems I was having relating between tables defined in different files.\n\nAlso, I've since created projects using the declarative method and didn't need to define the mapping in the init file.\n",
"Just import your other table's modules in your init.py, and use metadata object from models.meta in other files. Pylons default setup_app function creates all tables found in metadata object from model.meta after importing it.\n",
"If you are using declarative style, be sure to use Base.meta for tables generation.\n"
] |
[
5,
0,
0
] |
[] |
[] |
[
"pylons",
"python",
"sqlalchemy"
] |
stackoverflow_0001482627_pylons_python_sqlalchemy.txt
|
Q:
Constant instance variables?
I use @property to ensure that changes to an object's instance variables are wrapped by methods where I need to.
What about when an instance has a variable that logically should not be changed? E.g., if I'm making a class for a Process, each Process instance should have a PID attribute that will frequently be accessed but should not be changed.
What's the most Pythonic way to handle someone attempting to modify that instance variable?
Simply trust the user not to try and change
something they shouldn't?
Use property but raise an
exception if the instance variable is
changed?
Something else?
A:
Prepend the name of the variable with __ and create a read-only property; Python will take care of the exceptions, and the variable itself will be protected from accidental overwrites.
class foo(object):
def __init__(self, bar):
self.__bar = bar
@property
def bar(self):
return self.__bar
f = foo('bar')
f.bar # => bar
f.bar = 'baz' # AttributeError; would have to use f._foo__bar
A:
Simply trusting the user is not necessarily a bad thing; if you are just writing a quick Python program to be used once and thrown away, you might very well just trust that the user not alter the pid field.
IMHO the most Pythonic way to enforce the read-only field is to use a property that raises an exception on an attempt to set the field.
So, IMHO you have good instincts about this stuff.
A:
Maybe you can override __setattr__? Something along the lines of:
def __setattr__(self, name, value):
if self.__dict__.has_key(name):
raise TypeError, 'value is read only'
self.__dict__[name] = value
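For illustration, how that guard behaves with the Process/pid example from the question (the class name and values are mine):
class Process(object):
    def __setattr__(self, name, value):
        if name in self.__dict__:
            raise TypeError('value is read only')
        self.__dict__[name] = value

p = Process()
p.pid = 1234   # first assignment is allowed
p.pid = 5678   # raises TypeError: value is read only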
A:
Simply use a property and a hidden attribute (prefixed with one underscore).
Simple properties are read-only!
>>> class Test (object):
... @property
... def bar(self):
... return self._bar
...
>>> t = Test()
>>> t._bar = 2
>>> t.bar
2
>>> t.bar = 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
Hiding with double underscore is not used to hide the implementation, but to make sure you don't collide with a subclass' attributes; consider a mixin for example, it has to be very careful!
If you just want to hide the attribute, use a single underscore. And as you see there is no extra magic to add -- if you don't define a set function, your property is just as read-only as the return value of a method.
|
Constant instance variables?
|
I use @property to ensure that changes to an object's instance variables are wrapped by methods where I need to.
What about when an instance has a variable that logically should not be changed? E.g., if I'm making a class for a Process, each Process instance should have a PID attribute that will frequently be accessed but should not be changed.
What's the most Pythonic way to handle someone attempting to modify that instance variable?
Simply trust the user not to try and change
something they shouldn't?
Use property but raise an
exception if the instance variable is
changed?
Something else?
|
[
"Prepend name of the variable with __, and create read-only property, Python will take care of exceptions, and variable itself will be protected from accidental overwrite.\nclass foo(object):\n def __init__(self, bar):\n self.__bar = bar\n\n @property\n def bar(self):\n return self.__bar\n\nf = foo('bar')\nf.bar # => bar\nf.bar = 'baz' # AttributeError; would have to use f._foo__bar\n\n",
"Simply trusting the user is not necessarily a bad thing; if you are just writing a quick Python program to be used once and thrown away, you might very well just trust that the user not alter the pid field.\nIMHO the most Pythonic way to enforce the read-only field is to use a property that raises an exception on an attempt to set the field.\nSo, IMHO you have good instincts about this stuff.\n",
"Maybe you can override __setattr__? In the line of,\ndef __setattr__(self, name, value):\n if self.__dict__.has_key(name):\n raise TypeError, 'value is read only'\n self.__dict__[name] = value\n\n",
"Simply use a property and a hidden attribute (prefixed with one underscore).\nSimple properties are read-only!\n>>> class Test (object):\n... @property\n... def bar(self):\n... return self._bar\n... \n>>> t = Test()\n>>> t._bar = 2\n>>> t.bar\n2\n>>> t.bar = 2\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: can't set attribute\n\nHiding with double underscore is not used to hide the implementation, but to make sure you don't collide with a subclass' attributes; consider a mixin for example, it has to be very careful!\nIf you just want to hide the attribute, use a single underscore. And as you see there is no extra magic to add -- if you don't define a set function, your property is just as read-only as the return value of a method.\n"
] |
[
9,
3,
1,
1
] |
[] |
[] |
[
"instance",
"instance_variables",
"properties",
"python",
"setter"
] |
stackoverflow_0001527395_instance_instance_variables_properties_python_setter.txt
|
Q:
Does Python have a "compile only" switch like Perl's -c?
Perl has the -c switch to compile the code without running it. This is convenient for debugging compile errors in Perl.
Does Python have a similar switch?
A:
You can say
python -m py_compile script_to_check.py
However, this will have the side effect of creating a compiled script_to_check.pyc file in the same directory as your script. This feature is designed to speed up later uses of a module rather than to make sure that your syntax is correct, though you could certainly use it for that.
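A related option (my addition, not from the answer above): the compileall module does the same syntax check for a whole directory tree, with the same .pyc side effect:
python -m compileall your_package/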
A:
Even better is to run pyflakes, pychecker, or maybe pylint on the code. They catch some common errors that compiling won't.
A:
Through 2.6, there's the compiler package. That page doesn't say if there is a replacement in 3.0, or if you just can't do that any more.
|
Does Python have a "compile only" switch like Perl's -c?
|
Perl has the -c switch to compile the code without running it. This is convenient for debugging compile errors in Perl.
Does Python have a similar switch?
|
[
"You can say\npython -m py_compile script_to_check.py\n\nHowever, this will have the side effect of creating a compiled script_to_check.pyc file in the same directory as your script. This feature is designed to speed up later uses of a module rather than to make sure that your syntax is correct, though you could certainly use it for that.\n",
"Even better is to run pyflakes, pychecker or maybe pylint at the code. They catch some common errors that compiling won't.\n",
"Through 2.6, there's the compiler package. That page doesn't say if there is a replacement in 3.0, or if you just can't do that any more.\n"
] |
[
22,
9,
0
] |
[] |
[] |
[
"compiler_construction",
"python"
] |
stackoverflow_0001528346_compiler_construction_python.txt
|
Q:
Regex for extraction in Python
I have a string like this:
"a word {{bla|123|456}} another {{bli|789|123}} some more text {{blu|789}} and more".
I would like to get this as an output:
(("bla", 123, 456), ("bli", 789, 123), ("blu", 789))
I haven't been able to find the proper python regex to achieve that.
A:
You need a lot of escapes in your regular expression since {, } and | are special characters in them. A first step to extract the relevant parts of the string would be this:
regex = re.compile(r'\{\{(.*?)\|(.*?)(?:\|(.*?))?\}\}')
regex.findall(line)
For the example this gives:
[('bla', '123', '456'), ('bli', '789', '123'), ('blu', '789', '')]
Then you can continue with converting strings with digits into integers and removing empty strings like for the last match.
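One way to finish that conversion step (my sketch; line is the same input string as above):
cleaned = tuple(tuple(int(p) if p.isdigit() else p for p in match if p)
                for match in regex.findall(line))
# -> (('bla', 123, 456), ('bli', 789, 123), ('blu', 789))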
A:
>>> re.findall(' {{(\w+)\|(\w+)(?:\|(\w+))?}} ', s)
[('bla', '123', '456'), ('bli', '789', '123'), ('blu', '789', '')]
if you still want number there you'd need to iterate over the output and convert it to the integer with int.
A:
[re.split('\|', i) for i in re.findall("{{(.*?)}}", str)]
Returns:
[['bla', '123', '456'], ['bli', '789', '123'], ['blu', '789']]
This method works regardless of the number of elements in the {{ }} blocks.
A:
To get the exact output you wrote, you need a regex and a split:
import re
map(lambda s: s.split("|"), re.findall(r"\{\{([^}]*)\}\}", s))
To get it with the numbers converted, do this:
toint = lambda x: int(x) if x.isdigit() else x
[map(toint, p.split("|")) for p in re.findall(r"\{\{([^}]*)\}\}", s)]
A:
We might be able to get fancy and do everything in a single complicated regular expression, but that way lies madness. Let's do one regexp that grabs the groups, and then split the groups up. We could use a regexp to split the groups, but we can just use str.split(), so let's do that.
import re
pat_group = re.compile("{{([^}]*)}}")
def mixed_tuple(iterable):
lst = []
for x in iterable:
try:
lst.append(int(x))
except ValueError:
lst.append(x)
return tuple(lst)
s = "a word {{bla|123|456}} another {{bli|789|123}} some more text {{blu|789}} and more"
lst_groups = re.findall(pat_group, s)
lst = [mixed_tuple(x.split("|")) for x in lst_groups]
In pat_group, "{{" just matches literal "{{". "(" starts a group. "[^}]" is a character class that matches any character except for "}", and '*' allows it to match zero or more such characters. ")" closes out the group and "}}" matches literal characters. Thus, we match the "{{...}}" patterns, and can extract everything between the curly braces as a group.
re.findall() returns a list of groups matched from the pattern.
Finally, a list comprehension splits each string and returns the result as a tuple.
A:
Assuming your actual format is {{[a-z]+|[0-9]+|[0-9]+}}, here's a complete program with conversion to ints.
import re
s = "a word {{bla|123|456}} another {{bli|789|123}} some more text {{blu|789}} and more"
result = []
for match in re.finditer('{{.*?}}', s):
# Split on pipe (|) and filter out non-alphanumerics
parts = [filter(str.isalnum, part) for part in match.group().split('|')]
# Convert to int when possible
for index, part in enumerate(parts):
try:
parts[index] = int(part)
except ValueError:
pass
result.append(tuple(parts))
A:
Is pyparsing overkill for this? Maybe, but without too much suffering, it does deliver the desired output, without a thicket of backslashes to escape the '{', '|', or '}' characters. Plus, there's no need for post-parse conversions of integers and whatnot - the parse actions take care of this kind of stuff at parse time.
from pyparsing import Word, Suppress, alphas, alphanums, nums, delimitedList
LBRACE,RBRACE,VERT = map(Suppress,"{}|")
word = Word(alphas,alphanums)
integer = Word(nums)
integer.setParseAction(lambda t: int(t[0]))
patt = (LBRACE*2 + delimitedList(word|integer, VERT) + RBRACE*2)
patt.setParseAction(lambda toks:tuple(toks.asList()))
s = "a word {{bla|123|456}} another {{bli|789|123}} some more text {{blu|789}} and more"
print tuple(p[0] for p in patt.searchString(s))
Prints:
(('bla', 123, 456), ('bli', 789, 123), ('blu', 789))
|
Regex for extraction in Python
|
I have a string like this:
"a word {{bla|123|456}} another {{bli|789|123}} some more text {{blu|789}} and more".
I would like to get this as an output:
(("bla", 123, 456), ("bli", 789, 123), ("blu", 789))
I haven't been able to find the proper python regex to achieve that.
|
[
"You need a lot of escapes in your regular expression since {, } and | are special characters in them. A first step to extract the relevant parts of the string would be this:\nregex = re.compile(r'\\{\\{(.*?)\\|(.*?)(?:\\|(.*?))?\\}\\}')\nregex.findall(line)\n\nFor the example this gives:\n[('bla', '123', '456'), ('bli', '789', '123'), ('blu', '789', '')]\n\nThen you can continue with converting strings with digits into integers and removing empty strings like for the last match.\n",
">>> re.findall(' {{(\\w+)\\|(\\w+)(?:\\|(\\w+))?}} ', s)\n[('bla', '123', '456'), ('bli', '789', '123'), ('blu', '789', '')]\n\nif you still want number there you'd need to iterate over the output and convert it to the integer with int.\n",
"[re.split('\\|', i) for i in re.findall(\"{{(.*?)}}\", str)]\n\nReturns:\n[['bla', '123', '456'], ['bli', '789', '123'], ['blu', '789']]\n\nThis method works regardless of the number of elements in the {{ }} blocks.\n",
"To get the exact output you wrote, you need a regex and a split:\nimport re\nmap(lambda s: s.split(\"|\"), re.findall(r\"\\{\\{([^}]*)\\}\\}\", s))\n\nTo get it with the numbers converted, do this:\ntoint = lambda x: int(x) if x.isdigit() else x\n[map(toint, p.split(\"|\")) for p in re.findall(r\"\\{\\{([^}]*)\\}\\}\", s)]\n\n",
"We might be able to get fancy and do everything in a single complicated regular expression, but that way lies madness. Let's do one regexp that grabs the groups, and then split the groups up. We could use a regexp to split the groups, but we can just use str.split(), so let's do that.\nimport re\npat_group = re.compile(\"{{([^}]*)}}\")\ndef mixed_tuple(iterable):\n lst = []\n for x in iterable:\n try:\n lst.append(int(x))\n except ValueError:\n lst.append(x)\n return tuple(lst)\n\ns = \"a word {{bla|123|456}} another {{bli|789|123}} some more text {{blu|789}} and more\"\n\nlst_groups = re.findall(pat_group, s)\nlst = [mixed_tuple(x.split(\"|\")) for x in lst_groups]\n\nIn pat_group, \"{{\" just matches literal \"{{\". \"(\" starts a group. \"[^}]\" is a character class that matches any character except for \"}\", and '*' allows it to match zero or more such characters. \")\" closes out the group and \"}}\" matches literal characters. Thus, we match the \"{{...}}\" patterns, and can extract everything between the curly braces as a group.\nre.findall() returns a list of groups matched from the pattern.\nFinally, a list comprehension splits each string and returns the result as a tuple.\n",
"Assuming your actual format is {{[a-z]+|[0-9]+|[0-9]+}}, here's a complete program with conversion to ints.\nimport re\n\ns = \"a word {{bla|123|456}} another {{bli|789|123}} some more text {{blu|789}} and more\"\nresult = []\n\nfor match in re.finditer('{{.*?}}', s):\n\n # Split on pipe (|) and filter out non-alphanumerics\n parts = [filter(str.isalnum, part) for part in match.group().split('|')]\n\n # Convert to int when possible\n for index, part in enumerate(parts): \n try:\n parts[index] = int(part)\n except ValueError:\n pass\n\n result.append(tuple(parts))\n\n",
"Is pyparsing overkill for this? Maybe, but without too much suffering, it does deliver the desired output, without a thicket of backslashes to escape the '{', '|', or '}' characters. Plus, there's no need for post-parse conversions of integers and whatnot - the parse actions take care of this kind of stuff at parse time.\nfrom pyparsing import Word, Suppress, alphas, alphanums, nums, delimitedList\n\nLBRACE,RBRACE,VERT = map(Suppress,\"{}|\")\nword = Word(alphas,alphanums)\ninteger = Word(nums)\ninteger.setParseAction(lambda t: int(t[0]))\n\npatt = (LBRACE*2 + delimitedList(word|integer, VERT) + RBRACE*2)\npatt.setParseAction(lambda toks:tuple(toks.asList()))\n\n\ns = \"a word {{bla|123|456}} another {{bli|789|123}} some more text {{blu|789}} and more\"\n\nprint tuple(p[0] for p in patt.searchString(s))\n\nPrints:\n(('bla', 123, 456), ('bli', 789, 123), ('blu', 789))\n\n"
] |
[
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001528016_python_regex.txt
|
Q:
Best Visual-Studio Like tool for Linux Development
I need to write a few programs for a project I'm currently working on. I'm very much used to Visual Studio 2008 and I do not mind programming in Python, but I need comfortable GUI programming on top of the language itself, and it has to be well integrated and fast. I know that's a lot to ask for, but is there any such thing out there for Linux?
I know of Mono but I have found it to not be fully compatible or capable of what I want and, to be frank, the programs look like shit running in Linux
edit: I didn't discard GTK#, only Mono Winform
A:
Since you have already discarded MonoDevelop the next thing that comes to my mind is: Netbeans it has a very nice GUI builder
alt text http://www.netbeans.org/images/v6/7/screenshots/swing-gui-builder-cut.png
The look & feel for Linux is quite accurate for most apps. Java Swing is in itself a bit harder to grasp than other programming toolkits, though.
EDIT
MonoDevelop has:
GTK# Visual Designer
Easily build GTK# applications
(source: monodevelop.com)
I don't think you can get any closer to VS than that.
A:
For python, I've found that Eclipse has been the best option for Python development. I've never used it on Linux, though. Only Windows and Mac OS X.
I've also tried NetBeans with some success with Python but felt that Eclipse was a bit more polished. NetBeans also seems to direct one to Jython which I, personally, didn't want.
A:
Consider me in the text-editor camp for most Linux development (though I gladly and happily use Visual Studio on the Windows side). That being said, this looks interesting: http://eric-ide.python-projects.org/ And there are a number of other options here: http://wiki.python.org/moin/IntegratedDevelopmentEnvironments
A:
If you need a GUI designer, Glade maybe is what you're looking for. It isn't an IDE but saves designs as XML you can easily load by using a GTK+ object in your application. (GTK+ is a cross-platform GUI toolkit with C, C++, Python, Mono, etc. bindings)
A:
KDevelop used to be very much like Visual Studio, but the last time I looked was some 8 years ago so things may have changed.
Personally I would recommend something cross-platform that you could use both on Windows and Linux (or even Mac). My tool of choice is Eclipse, which has a great Python plugin available: Pydev. Eclipse has a pretty steep learning curve, but once you are past that it is generally pretty good and there are tons of plugins for all kinds of languages and things you might want. Eclipse does require pretty beefy machine for it to not feel sluggish. It works fine for me on my Dell Latitude D820 and D830.
A:
Wing IDE is very much like a Visual Studio for Python. About a year ago I worked on a team developing in Python, and this was the standard IDE for the team.
|
Best Visual-Studio Like tool for Linux Development
|
I need to write a few programs for a project I'm currently working on. I'm very much used to Visual Studio 2008 and I do not mind programming in Python, but I need comfortable GUI programming on top of the language itself, and it has to be well integrated and fast. I know that's a lot to ask for, but is there any such thing out there for Linux?
I know of Mono but I have found it to not be fully compatible or capable of what I want and, to be frank, the programs look like shit running in Linux
edit: I didn't discard GTK#, only Mono Winform
|
[
"Since you have already discarded MonoDevelop the next thing that comes to my mind is: Netbeans it has a very nice GUI builder\nalt text http://www.netbeans.org/images/v6/7/screenshots/swing-gui-builder-cut.png\nThe l&f for linux is quite accurate for most apps. Java Swing it in it self a bit harder to grasp than other programming toolkis though.\nEDIT\nMonoDevelop has:\n\nGTK# Visual Designer\nEasily build GTK# applications\n\n\n(source: monodevelop.com)\nI don't think you cant get any closer to VS than that.\n",
"For python, I've found that Eclipse has been the best option for Python development. I've never used it on Linux, though. Only Windows and Mac OS X.\nI've also tried NetBeans with some success with Python but felt that Eclipse was a bit more polished. NetBeans also seems to direct one to Jython which I, personally, didn't want.\n",
"Consider me in the text-editor camp for most Linux development (though I gladly and happily use Visual Studio on the Windows side). That being said, this looks interesting: http://eric-ide.python-projects.org/ And there are a number of other options here: http://wiki.python.org/moin/IntegratedDevelopmentEnvironments\n",
"If you need a GUI designer, Glade maybe is what you're looking for. It isn't an IDE but saves designs as XML you can easily load by using a GTK+ object in your application. (GTK+ is a cross-platform GUI toolkit with C, C++, Python, Mono, etc. bindings)\n",
"KDevelop used to be very much like Visual Studio, but the last time I looked was some 8 years ago so things may have changed.\nPersonally I would recommend something cross-platform that you could use both on Windows and Linux (or even Mac). My tool of choice is Eclipse, which has a great Python plugin available: Pydev. Eclipse has a pretty steep learning curve, but once you are past that it is generally pretty good and there are tons of plugins for all kinds of languages and things you might want. Eclipse does require pretty beefy machine for it to not feel sluggish. It works fine for me on my Dell Latitude D820 and D830.\n",
"Wing IDE is very much like a Visual Studio for Python. About a year ago I worked on a team developing in Python, and this was the standard IDE for the team.\n"
] |
[
7,
2,
1,
1,
1,
1
] |
[] |
[] |
[
".net",
"ide",
"linux",
"python",
"visual_studio"
] |
stackoverflow_0001528535_.net_ide_linux_python_visual_studio.txt
|