Q:
Python: Stopping miniDOM from expanding escape sequences
When xml.dom.minidom parses a piece of xml, it automagically converts escape characters for greater than and less than into their visual representation. For example:
>>> import xml.dom.minidom
>>> s = "<example>4 &lt; 5</example>"
>>> x = xml.dom.minidom.parseString(s)
>>> x.firstChild.firstChild.data
u'4 < 5'
Does anyone know how to stop minidom from doing this?
A:
>>> import xml.dom.minidom
>>> s = "<example>4 &lt; 5</example>"
>>> x = xml.dom.minidom.parseString(s)
>>> x.firstChild.firstChild.toxml()
u'4 &lt; 5'
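If what you actually want is the escaped form of the character data itself (rather than the node's XML serialization), one option — a small sketch using only the standard library — is to re-escape the parsed text:
>>> from xml.sax.saxutils import escape
>>> escape(x.firstChild.firstChild.data)
u'4 &lt; 5'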
Q:
How does setuptools decide which files to keep for sdist/bdist?
I'm working on a Python package that uses namespace_packages and find_packages() like so in setup.py:
from setuptools import setup, find_packages
setup(name="package",
      version="1.3.3.7",
      packages=find_packages(),
      namespace_packages=['package'], ...)
It isn't in source control because it is a bundle of upstream components. There is no MANIFEST.
When I run python setup.py sdist I get a tarball of most of the files under the package/ directory but any directories that don't contain .py files are left out.
What are the default rules for what setup.py includes and excludes from built distributions? I've fixed my problem by adding a MANIFEST.in with
recursive-include package *
but I would like to understand what setuptools and distutils are doing by default.
A:
You need to add a package_data directive. For example, if you want to include files with .txt or .rst extensions:
from setuptools import setup, find_packages
setup(name="package",
      version="1.3.3.7",
      packages=find_packages(),
      include_package_data=True,
      namespace_packages=['package'],
      package_data={
          # If any package contains *.txt or *.rst files, include them:
          '': ['*.txt', '*.rst'],
      },
      ...)
Q:
Multi-server monitor/auto restarter in python
I have 2 server programs that must be started with the use of GNU Screen. I'd like to harden these servers against crashes with a Python based program that kicks off each screen session then monitors the server process. If the server process crashes, I need the python code to kill the extraneous screen session and restart the server with screen again.
I'm very new to python but I'm using this opportunity to teach myself. I'm aware this can be done in bash scripting. But I want to build on this code for future features, so it needs to be just python.
The pseudocode is as follows:
thread-one {
While 1:
start server 1 using screen
wait for server to end
end while
}
thread-two {
While 1:
start server 2 using screen
wait for server to end
end while
}
A:
"need to be multi-threaded to handle the restarting of two separate programs"
Don't see why.
import subprocess
import time

commands = [ ["p1"], ["p2"] ]
programs = [ subprocess.Popen(c) for c in commands ]
while True:
    for i in range(len(programs)):
        if programs[i].poll() is None:  # poll() refreshes and returns returncode
            continue  # still running
        else:
            # restart this one
            programs[i] = subprocess.Popen(commands[i])
    time.sleep(1.0)
A:
You really shouldn't run production software inside a screen session. If the server gets rebooted, how will you start it up? Manually?
Also, I think you are trying to re-invent the wheel. There are already pretty good tools that do the thing you need.
launchtool lets you run a
user-supplied command supervising its
execution in many ways, such as
controlling its environment, blocking
signals, logging its output, changing
user and group permissions, limiting
resource usage, restarting it if it
fails, running it continuously,
turning it into a daemon, and more.
Monit is a free open source
utility for managing and monitoring,
processes, files, directories and
filesystems on a UNIX system. Monit
conducts automatic maintenance and
repair and can execute meaningful
causal actions in error situations.
Q:
Has Python changed to more object oriented?
I remember that at one point, it was said that Python is less object oriented than Ruby, since in Ruby, everything is an object. Has this changed for Python as well? Is the latest Python more object oriented than the previous version?
A:
Jian Lin — the answer is "Yes", Python is more object-oriented than when Matz decided he wanted to create Ruby, and both languages now feature "everything is an object". Back when Python was younger, "types" like strings and numbers lacked methods, whereas "objects" were built with the "class" statement (or by deliberately building a class in a C extension module) and were a bit less efficient but did support methods and inheritance. For the very early 1990s, when a fast 386 was a pretty nice machine, this compromise made sense. But types and classes were unified in Python 2.2 (released in 2001), and strings got methods and, in more recent Python versions, users can even subclass from them.
So: Python was certainly less object oriented at one time; but, so far as I know, every one of those old barriers is now gone.
Here's the guide to the unification that took place:
http://www.python.org/download/releases/2.2/descrintro/
Clarification: perhaps I can put it even more simply: in Python, everything has always been an object; but some basic kinds of object (ints, strings) once played by "different rules" that prevented OO programming techniques (like inheritance) from being used with them. That has now been fixed. The len() function, described in another response here, is probably the only thing left that I wish Guido had changed in the upgrade to Python 3.0. But at least he gave me dictionary comprehensions, so I won't complain too loudly. :-)
A:
I'm not sure that I buy the argument that Ruby is more object-oriented than Python. There's more to being object-oriented than just using objects and dot syntax. A common argument that I see is that in Python to get the length of a list, you do something like this:
len(some_list)
I see this as a bikeshed argument. What this really translates to (almost directly) is this:
some_list.__len__()
which is perfectly object oriented. I think Rubyists may get a bit confused because typically being object-oriented involves using the dot syntax (for example object.method()). However, if I misunderstand Rubyists' arguments, feel free to let me know.
Regardless of the object-orientation of this, there is one advantage to using len this way. One thing that's always annoyed me about some languages is having to remember whether to use some_list.size() or some_list.length() or some_list.len for a particular object. Python's way means there is just one function to remember.
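A small sketch (the class name is made up) of the point above — len() simply defers to __len__():
class Playlist(object):
    def __init__(self, songs):
        self.songs = songs
    def __len__(self):
        return len(self.songs)

p = Playlist(['a', 'b', 'c'])
print len(p)        # 3 -- len() just calls p.__len__()
print p.__len__()   # 3 -- same thing, spelled explicitly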
A:
Although this is not properly an answer... Why do you care about Python being more or less OO? The cool thing about Python is that it's pythonic, not object oriented or functional or whichever paradigm is fashionable at the moment! :-)
I learnt to program with Java and Object Orientation, but now I don't give a sh.t about it because I know that OOP is not the solution to all the problems (indeed, no single paradigm is).
see:
The Zen of Python
What is Pythonic
Be Pythonic
A:
Hold on, both Ruby and Python are object oriented. Objects are objects. There isn't some "more object oriented" comparison function that will lead you to the better one. Syntax is not the only thing that makes a language look object oriented; the data model matters too.
Objects are Python’s abstraction for data. All data in a Python program is represented by objects or by relations between objects. (In a sense, and in conformance to Von Neumann’s model of a “stored program computer,” code is also represented by objects.) http://docs.python.org/reference/datamodel.html
A:
This is an incorrect belief.
See my previous answer here for more in-depth explanation:
Is everything an object in python like ruby?
Why not expose .len() directly off of list then? I think you can't completely divorce OO design from the syntax, because the syntax, to a large extent, defines your code paradigm. some_list.len() is OO because you are thinking about the list as an object that will be able to tell you what its length is. len(some_list)
.len() is available directly off the list. It is available as __len__(). len() is a function object. You can see all its methods with dir(len). While I do not know why Guido decided to make the __len__() method longer, it does not change the fact that all of those are still objects.
A:
I have the same "perception" perhaps derived from this:
Why was python created in the first place:
It occurred to me that a scripting language with a syntax like ABC [...] would fill the need
An Interview with the Creator of Ruby:
"I wanted a scripting language that was more powerful than Perl, and more object-oriented than Python
I know that perception is not the same as reality. Both Python and Ruby are great programming languages and both are very OO.
Q:
More Pythonic conversion to binary?
Here is a contrived example of how a lot of our classes return binary representations (to be read by C++) of themselves.
def to_binary(self):
    'Return the binary representation as a string.'
    data = []
    # Binary version number.
    data.append(struct.pack('<I', 2))
    # Image size.
    data.append(struct.pack('<II', *self.image.size))
    # Attribute count.
    data.append(struct.pack('<I', len(self.attributes)))
    # Attributes.
    for attribute in self.attributes:
        # Id.
        data.append(struct.pack('<I', attribute.id))
        # Type.
        data.append(struct.pack('<H', attribute.type))
        # Extra Type.
        if attribute.type == 0:
            data.append(struct.pack('<I', attribute.typeEx))
    return ''.join(data)
What I dislike:
Every line starts with data.append(struct.pack(, distracting from the unique part of the line.
The byte order ('<') is repeated over and over again.
You have to remember to return the boilerplate ''.join(data).
What I like:
The format specifiers appear near the attribute name. E.g., it's easy to see that self.image.size is written out as two unsigned ints.
The lines are (mostly) independent. E.g., To remove the Id field from an 'attribute', you don't have to touch more than one line of code.
Is there a more readable/pythonic way to do this?
A:
You can try to implement some sort of declarative syntax for your data.
Which may result in something like:
class Image(SomeClassWithMetamagic):
    type = PackedValue(2)
    attribute = PackedValue('attributes') # accessed via self.__dict__

#or using decorators
    @pack("<II")
    def get_size():
        pass

#and a generic function in the Superclass
    def get_packed():
        stuff
etc...
Other examples would be SQLAlchemy's declarative_base, ToscaWidgets and sprox
A:
from StringIO import StringIO
import struct

class BinaryIO(StringIO):
    def writepack(self, fmt, *values):
        self.write(struct.pack('<' + fmt, *values))

def to_binary_example():
    data = BinaryIO()
    data.writepack('I', 42)
    data.writepack('II', 1, 2)
    return data.getvalue()
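Applied to the original method, that might look something like this (a sketch; it assumes the BinaryIO class above and the same self.image/self.attributes layout as in the question):
def to_binary(self):
    'Return the binary representation as a string.'
    data = BinaryIO()
    data.writepack('I', 2)                       # binary version number
    data.writepack('II', *self.image.size)       # image size
    data.writepack('I', len(self.attributes))    # attribute count
    for attribute in self.attributes:
        data.writepack('I', attribute.id)
        data.writepack('H', attribute.type)
        if attribute.type == 0:
            data.writepack('I', attribute.typeEx)
    return data.getvalue()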
A:
If you just want nicer syntax, you can abuse generators/decorators:
import struct
from functools import wraps

def packed(g):
    '''a decorator that packs the list data items
    that is generated by the decorated function
    '''
    @wraps(g)
    def wrapper(*p, **kw):
        data = []
        for params in g(*p, **kw):
            fmt = params[0]
            fields = params[1:]
            data.append(struct.pack('<'+fmt, *fields))
        return ''.join(data)
    return wrapper

@packed
def as_binary(self):
    '''just |yield|s the data items that should be packed
    by the decorator
    '''
    yield 'I', 2
    yield 'II', self.image.size[0], self.image.size[1]
    yield 'I', len(self.attributes)

    for attribute in self.attributes:
        yield 'I', attribute.id
        yield 'H', attribute.type
        if attribute.type == 0:
            yield 'I', attribute.typeEx
Basically this uses the generator to implement a "monad", an abstraction usually found in functional languages like Haskell. It separates the generation of some values from the code that decides how to combine these values together. It's more a functional programming approach than "pythonic", but I think it improves readability.
A:
How about Protocol Buffers, Google's extensible cross-language format and protocol for sharing data?
A:
def to_binary(self):
    struct_i_pack = struct.Struct('<I').pack
    struct_ii_pack = struct.Struct('<II').pack
    struct_h_pack = struct.Struct('<H').pack
    struct_ih_pack = struct.Struct('<IH').pack
    struct_ihi_pack = struct.Struct('<IHI').pack

    return ''.join([
        struct_i_pack(2),
        struct_ii_pack(*self.image.size),
        struct_i_pack(len(self.attributes)),
        ''.join([
            struct_ih_pack(a.id, a.type) if a.type else struct_ihi_pack(a.id, a.type, a.typeEx)
            for a in self.attributes
        ])
    ])
A:
You could refactor your code to wrap boilerplate in a class. Something like:
def to_binary(self):
    'Return the binary representation as a string.'
    binary = BinaryWrapper()

    # Binary version number.
    binary.pack('<I', 2)

    # alternatively, you can pass an array
    stuff = [
        ('<II', self.image.size[0], self.image.size[1]),  # Image size.
        ('<I', len(self.attributes)),                     # Attribute count
    ]
    binary.pack_all(stuff)

    return binary.get_packed()
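BinaryWrapper is hypothetical in the answer above; a minimal sketch of what such a class might look like (the names are assumptions, not an existing library):
import struct

class BinaryWrapper(object):
    'Hypothetical helper that hides the data.append(struct.pack(...)) boilerplate.'
    def __init__(self):
        self.chunks = []
    def pack(self, fmt, *values):
        self.chunks.append(struct.pack(fmt, *values))
    def pack_all(self, items):
        # items is a sequence of (fmt, value, value, ...) tuples
        for item in items:
            self.pack(item[0], *item[1:])
    def get_packed(self):
        return ''.join(self.chunks)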
A:
The worst problem is that you need corresponding code in C++ to read the output. Can you reasonably arrange to have both the reading and writing code mechanically derive from or use a common specification? How to go about that depends on your C++ needs as much as Python.
A:
You can get rid of the repetition while keeping it just as readable, like this:
def to_binary(self):
    output = struct.pack(
        '<IIII', 2, self.image.size[0], self.image.size[1], len(self.attributes)
    )
    return output + ''.join(
        struct.pack('<IHI', attribute.id, attribute.type, attribute.typeEx)
        for attribute in self.attributes
    )
Q:
Compiled Python CGI
Assuming the webserver is configured to handle .exe, can I compile a Python CGI file into an exe for speed? What would some pros and cons be of such a decision?
A:
There is py2exe (and a tutorial on how to use it), but there is no guarantee that it will make your script any faster. Really it's more of an executable interpreter that wraps the bytecode. There are other exe compilers that do varying degrees of things to the Python code, so you might want to do a general Google search.
However, you can always use psyco to boost speed of many of the most intensive operations that a python script can perform, namely looping.
A:
You probably don't want to run Python as a CGI if you want it fast. Look at proxies, mod_python, WSGI or FastCGI, as those techniques avoid re-loading the Python runtime and your app on each request.
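For comparison, a minimal WSGI application (a sketch, independent of any particular framework) that a long-running host such as mod_wsgi or a FastCGI bridge could serve without restarting the interpreter per request:
def application(environ, start_response):
    'Minimal WSGI app: the process stays alive between requests.'
    body = 'Hello from a persistent Python process\n'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]

if __name__ == '__main__':
    # Quick local test with the standard library's reference server.
    from wsgiref.simple_server import make_server
    make_server('', 8000, application).serve_forever()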
A:
Since the RDBMS and the network are the bottlenecks, I see no value in fussing around creating an EXE.
On average, most of a web site's transfers are static content (images, .CSS, .JS, etc.) which is best handled by Apache without any Python in the loop. This has huge impact.
Reserve Python for the "interesting" and "complex" parts of creating the dynamic HTML. Use a framework.
Q:
How do I get the current file, current class, and current method with Python?
Name of the file from where code is running
Name of the class from where code is running
Name of the method (attribute of the class) where code is running
A:
Here is an example of each:
from inspect import stack

class Foo:
    def __init__(self):
        print __file__
        print self.__class__.__name__
        print stack()[0][3]

f = Foo()
A:
import sys

class A:
    def __init__(self):
        print __file__
        print self.__class__.__name__
        print sys._getframe().f_code.co_name

a = A()
A:
self.__class__.__name__ # name of class i'm in
For the rest, see the sys and trace modules:
http://docs.python.org/library/sys.html
http://docs.python.org/library/trace.html
Some more info:
https://mail.python.org/pipermail/python-list/2001-August/096499.html
and
http://www.dalkescientific.com/writings/diary/archive/2005/04/20/tracing_python_code.html
did you want it for error reporting because the traceback module can handle that:
http://docs.python.org/library/traceback.html
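For the error-reporting case, a small sketch of pulling the file, function, and line out of a traceback with the standard traceback module:
import traceback, sys

def failing():
    raise ValueError("boom")

try:
    failing()
except ValueError:
    tb_entries = traceback.extract_tb(sys.exc_info()[2])
    filename, line_no, func_name, text = tb_entries[-1]
    print filename, func_name, line_no   # file, function and line that raised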
A:
Be very careful. Consider:
class A:
    pass

B = A
b = B()
What is the 'class name' of b here? Is it A, or B? Why?
The point is, you shouldn't need to know or care. An object is what it is: its name is very rarely useful.
Q:
Minimal Linux For a Pylons Web App?
I am going to be building a Pylons-based web application. For this purpose, I'd like to build a minimal Linux platform, upon which I would then install the necessary packages such as Python and Pylons, and other necessary dependencies. The other reason to keep it minimal is because this machine will be virtual, probably over KVM, and will eventually be replicated in some cloud environment.
What would you use to do this? I am thinking of using Fedora 10's AOS iso, but would love to understand all my options.
A:
I really like JeOS "Just enough OS" which is a minimal distribution of the Ubuntu Server Edition.
A:
If you want to be able to remove all the cruft but still be using a ‘mainstream’ distro rather than one cut down to aim at tiny devices, look at Slackware. You can happily remove stuff as low-level as sysvinit, cron and so on, without collapsing into dependency hell. And nothing in it relies on Perl or Python, so you can easily remove them (and install whichever version of Python your app prefers to use).
A:
For this purpose, I'd like to build a minimal Linux platform...
So why not try ArchLinux (www.archlinux.org)?
Also you can use virtualenv with Pylons in it.
A:
debootstrap is your friend.
A:
Damn Small Linux? Slax?
A:
If you want to go serious about the virtual appliance idea, take a look at the newly released VMware Studio. It was built exactly for trimming down a system (only Linux for now afaik) so it provides only enough base to run your application.
VMware is going (a bit more) open by pushing an open virtual appliance format (OVF) so, at some point in the future, you might be able to run the result on other virtualization platforms too.
A:
Debootstrap, or use kickstart to strap your FC domains. However, other methods of strapping an RPM based distro exist, such as Steve Kemp's rinse utility that replaces rpmstrap.
Or, you could just grab something at jailtime to use as a base.
If that fails, download everything you need from source, build / install it with a /mydist prefix (including libc, etc) and test it via chroot.
I've been building templates for Xen for years .. its actually turned into a very fun hobby :)
Q:
Can anyone recommend a Python PDF generator with OpenType (.OTF) support?
After asking this question back in November, I've been very happy with ReportLab for all of my python pdf-generation needs.
However, it turns out that while ReportLab will use regular TrueType (TTF) fonts, it does not support OpenType (OTF) fonts.
One of the current widgets I'm working on is going to need to use some OpenType fonts, and so sadly, ReportLab just removed itself from the running.
Can anyone recommend an OpenType-compatible PDF generator for Python?
It doesn't need to be fancy - I just need to be able to drop UTF-8 text onto a page.
Update: OpenType fonts, roughly, come in two flavors: TrueType-style and PostScript-style, based on how they store glyph outlines. ReportLab just supports the TrueType-style. On Windows, it turns out, you can tell the difference by the extension: TrueType and OpenType of the TrueType-style are .TTF, OpenType with the PostScript style are .OTF.
So, my real question is, can anyone recommend a Python PDF generator that supports .otf fonts?
A:
That sort of depends... OpenType was intended to extend TrueType (and uses the general structure of TrueType internally) - so much so that some folks have reported success using OpenType fonts in reportlab; I suppose it all depends on whether or not there are any special OTF characteristics that your use of the font requires.
In fact, some comments in the TTFontFile class source for reportlab mention OpenType by name, so it's probably worth a shot.
EDIT: The comments reference an error message that pretty much summarizes the case where reportlab can't support an OTF font. OTF fonts can store outline data in several formats (see the wikipedia link above). In this case, the font appears to be using the CFF format, for which reportlab specifically checks in its font parser, and which reportlab specifically rejects with the error message "postscript outlines are not supported".
That pretty much ends my font and PDF-generator expertise. Sorry! Looking forward to seeing any suggestions of alternatives.
EDIT 2: Ok, looking at the docs for Django, I see they reference another full PDF api: pdflib. I have no direct experience with PDFlib, and it's not free (neither price nor license). I also find their docs annoying as I couldn't just see the English API without downloading the whole bloomin package (don't know if there's a free trial or what). I did look at the German docs, though, which ARE mysteriously available as a free, separate download. My second-language-in-university German did allow me to discern that they claim support for unicode and 8-bit OpenType fonts with postscript outlines.
Do I sound enthused about them? Nope :-) Hopefully someone who uses and loves them will correct me, as I repeat I have no first-hand experience with them. It may be an option if your budget allows and all else fails.
A:
It would be nice if reportlab had native OTF support but all most people really need is a TrueType version of a particular OpenType font. I used this fontforge script to convert the font I needed to TrueType with perfect results.
From http://www.se.eecs.uni-kassel.de/~thm/OpenOffice.org/bugs.html :
#!/usr/bin/fontforge
# Quick and dirty hack: converts a font to truetype (.ttf)
Print("Opening "+$1);
Open($1);
Print("Saving "+$1:r+".ttf");
Generate($1:r+".ttf");
Quit(0);
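Once the font has been converted to .ttf, registering and using it with ReportLab might look like this (a sketch; the font name and file names are made up):
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
from reportlab.pdfgen import canvas

pdfmetrics.registerFont(TTFont('MyFont', 'MyFont.ttf'))  # converted from the .otf

c = canvas.Canvas('out.pdf')
c.setFont('MyFont', 12)
c.drawString(72, 720, u'UTF-8 text rendered with the converted font')
c.save()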
Q:
Using virtualenv on Mac OS X
I've been using virtualenv on Ubuntu and it rocks, so I'm trying to use it on my Mac and I'm having trouble.
The virtualenv command successfully creates the directory, and easy_install gladly installs packages in it, but I can't import anything I install. It seems like sys.path isn't being set correctly: it doesn't include the virtual site-packages, even if I use the --no-site-packages option. Am I doing something wrong?
I'm using Python 2.5.1 and virtualenv 1.3.3 on Mac OS 10.5.6
Edit: Here's what happens when I try to use virtualenv:
$ virtualenv test
New python executable in test/bin/python
Installing setuptools............done.
$ source test/bin/activate
(test)$ which python
/Users/Justin/test/bin/python
(test)$ which easy_install
/Users/Justin/test/bin/easy_install
(test)$ easy_install webcolors
[...]
Installed /Users/Justin/test/lib/python2.5/site-packages/webcolors-1.3-py2.5.egg
Processing dependencies for webcolors
Finished processing dependencies for webcolors
(test)$ python
[...]
>>> import webcolors
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named webcolors
>>> import sys
>>> print sys.path
['',
'/Library/Python/2.5/site-packages/SQLObject-0.10.2-py2.5.egg',
'/Library/Python/2.5/site-packages/FormEncode-1.0.1-py2.5.egg',
...,
'/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5',
'/Users/Justin/test/lib/python25.zip',
'/Users/Justin/test/lib/python2.5',
'/Users/Justin/test/lib/python2.5/plat-darwin',
'/Users/Justin/test/lib/python2.5/plat-mac',
'/Users/Justin/test/lib/python2.5/plat-mac/lib-scriptpackages',
'/Users/Justin/test/Extras/lib/python',
'/Users/Justin/test/lib/python2.5/lib-tk',
'/Users/Justin/test/lib/python2.5/lib-dynload',
'/Library/Python/2.5/site-packages',
'/Library/Python/2.5/site-packages/PIL']
Edit 2: Using the activate_this.py script works, but running source bin/activate does not. Hopefully that helps narrow down the problem?
A:
I've not had any problems with the same OS X/Python/virtualenv version (OS X 10.5.6, Python 2.5.1, virtualenv 1.3.1)
$ virtualenv test
New python executable in test/bin/python
Installing setuptools............done.
$ source test/bin/activate
(test)$ which python
/Users/dbr/test/bin/python
$ echo $PATH
/Users/dbr/test/bin:/usr/bin:[...]
$ python
[...]
>>> import sys
>>> print sys.path
['', '/Users/dbr/test/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg',
One thing to check - in a clean shell, run the following:
$ virtualenv test
$ python
[...]
>>> import sys
>>> sys.path
['', '/Library/Python/2.5/site-packages/elementtree-1.2.7_20070827_preview-py2.5.egg'[...]
>>> sys.path.append("test/bin/")
>>> import activate_this
>>> sys.path
['/Users/dbr/test/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg'
Or from the virtualenv docs:
activate_this = '/path/to/env/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
That should force the current Python shell into the virtualenv
Also, after running source test/bin/activate try running python with the -v flag (verbose), it may produce something useful.
A:
It turns out that my problems with virtualenv were my own fault: I had configured my .bash_profile to muck with the PYTHONPATH environment variable, which caused the import problems.
Thank you to everyone who took the time to answer; sorry for not investigating the problem further on my own.
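A quick way to check for that kind of interference (a small sketch) is to see whether PYTHONPATH is set and what the interpreter actually has on its path:
import os, sys
print os.environ.get('PYTHONPATH')   # directories listed here are added to sys.path at startup
print sys.path[:5]                   # first few entries actually in effect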
Q:
How can I launch a python script on windows?
I have run a few using batch jobs, but I am wondering what would be the most appropriate approach. Maybe using time.strftime?
A:
If you're looking to do recurring scheduled tasks, then the Task Scheduler (Vista) or Scheduled Tasks (XP and, I think, earlier) is the appropriate method on Windows.
A:
I'd second using the Task Scheduler.
I have also read about a 'cron-like' python based application PyCron - http://www.bigbluehost.com/article4.html . If you're from the Unix end of town you might find it more familiar than the Windows scheduler. Never used it myself but it might be of interest.
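If you do want to stay in pure Python rather than the Task Scheduler, a naive sketch of a polling loop (the script path and the 09:00 trigger time are just examples) would be:
import subprocess
import time

def run_daily_at(hour, minute, command):
    'Very naive scheduler: sleeps in a loop and fires the command once per day.'
    fired_on = None
    while True:
        now = time.localtime()
        today = time.strftime('%Y-%m-%d', now)
        if (now.tm_hour, now.tm_min) >= (hour, minute) and fired_on != today:
            subprocess.call(command)
            fired_on = today
        time.sleep(30)

run_daily_at(9, 0, ['python', r'C:\scripts\job.py'])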
Q:
How can I start an interactive program (like gdb) from Python?
I am going to start up gdb from python.
For example:
prog.shell.py:
#do lots of things
#
#
p.subprocess.Popen("gdb --args myprog", shell=True, stdin=sys.stdin, stdout=sys.stdout)
But gdb is not invoked as I expected; the interaction with gdb is broken. I have also tried os.system(), but it still doesn't work. What might I be doing wrong?
A:
I think you meant
p = subprocess.Popen(...)
You probably need to wait for p to finish:
p.wait()
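Putting the two fixes together, a minimal sketch that keeps gdb interactive (stdin/stdout are inherited from the parent by default, so there is no need to pass them explicitly):
import subprocess

# Pass the command as a list to avoid shell=True quoting issues.
p = subprocess.Popen(["gdb", "--args", "myprog"])
p.wait()   # block until the interactive gdb session ends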
Q:
Can reading a list from a disk be better than loading a dictionary?
I am building an application where I am trying to allow users to submit a list of company and date pairs and find out whether or not there was a news event on that date. The news events are stored in a dictionary with a company identifier and a date as a key.
newsDict[('identifier','MM/DD/YYYY')] = [list of news events for that date]
The dictionary turned out to be much larger than I thought-too big even to build it in memory so I broke it down into three pieces, each piece is limited to a particular range of company identifiers.
My plan was to take the user submitted list and using a dictionary group the user list of company identifiers to match the particular newsDict that the company events would be expected to be found and then load the newsDicts one after another to get the values.
Well now I am wondering if it would not be better to keep the news events in a list with each item of the list being a sublist list of a tuple and another list
[('identifier','MM/DD/YYYY'),[list of news events for that date]]
my thought then is that I would have a dictionary that would have the range of the list for each company identifier
companyDict['identifier']=(begofRangeinListforComp,endofRangeinListforComp)
I would use the user input to look up the ranges I needed and construct a list of the identifiers and ranges sorted by the ranges. Then I would just read the appropriate section of the list to get the data and construct the output.
The biggest reason I see for this is that even with the dictionary broken into thirds, each section takes about two minutes to load on my machine and the dictionary ends up taking about 600 to 750 MB of RAM.
I was surprised to note that a list of eight million lines took only about 15 seconds to load and used about 1/3 of the memory of the dictionary that had 1/3 the entries.
Further, since I can discard the lines in the list as I work through the list I will be freeing memory as I work down the user list.
I am surprised as I thought a dictionary would be the most efficient way to do this. but my poking at it suggests that the dictionary requires significantly more memory than a list. My reading of other posts on SO and elsewhere suggests that any other structure is going to require pointer allocations that are more expensive than list pointers. Am I missing something here and is there a better way to do this?
After reading Alberto's answer and response to my comment I spent some time trying to figure out how to write the function if I were to use a db. Now I might be hobbled here because I don't know much about db programming but
I think the code to implement using a db would be much more complicated than:
outList=[]
massiveFile=open('theFile','r')
for identifier in sortedUserList
# I get the list and sort it by the key of the dictionary
identifierList=massiveFile[theDict[identifier]['beginPosit']:theDict[identifier]['endPosit']+1]
for item in identifierList:
if item.startswith(manipulation of the identifier)
outList.append(item)
I have to wrap this in a function; I didn't see anything that would be comparably simple if I converted the list to a db.
Of course, simpler was not the reason to bring me to this forum. I still don't see that using another structure will cost less memory. I have 30000 company identifiers and approximately 3600 dates. Each item in my list is an object in the parlance of OOD. That is where I am struggling: I spent six hours this morning organizing the data for a dictionary before I gave up. Spending that amount of time to implement a database, and then finding that I am using half a gig or more of someone else's memory to load it, seems problematic.
A:
With such a large amount of data, you should be using a database. This would be far better than looking at a list, and would be the most appropriate way of storing your data anyway. If you're using Python, it has SQLite built in I believe.
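For example, a rough sketch with the built-in sqlite3 module (the table and column names are illustrative):
import sqlite3

conn = sqlite3.connect('news.db')
conn.execute('CREATE TABLE IF NOT EXISTS news (identifier TEXT, day TEXT, event TEXT)')
conn.execute('CREATE INDEX IF NOT EXISTS idx_news ON news (identifier, day)')

def events_for(identifier, day):
    # returns the list of news events for one company/date pair
    cur = conn.execute('SELECT event FROM news WHERE identifier=? AND day=?',
                       (identifier, day))
    return [row[0] for row in cur]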
A:
The dictionary will take more memory because it is effectively a hash.
You don't have to go so far as using a database, since your lookup requirements are so simple. Just use the file system.
Create a directory structure based on the company name (or ticker), with subdirectories for each date. To find whether data exists and load it up, just form the name of the subdirectory where the data would be, and see if it exists.
E.g., IBM news for May 21 would be in C:\db\IBM\20090521\news.txt, if in fact there were news for that day. You just check if the file exists; no searches.
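A rough sketch of that lookup (the base directory and file name are illustrative):
import os

def news_for(ticker, yyyymmdd, base=r'C:\db'):
    # returns the path to the news file if it exists for that company/date, else None
    path = os.path.join(base, ticker, yyyymmdd, 'news.txt')
    return path if os.path.exists(path) else None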
If you want to try and boost speed from there, come up with a scheme to cache a limited amount of results that are likely to be frequently requested (assuming you're operating a server). For that, you'd use a hash.
|
Can reading a list from a disk be better than loading a dictionary?
|
I am building an application where I am trying to allow users to submit a list of company and date pairs and find out whether or not there was a news event on that date. The news events are stored in a dictionary with a company identifier and a date as a key.
newsDict('identifier','MM/DD/YYYY')=[list of news events for that date]
The dictionary turned out to be much larger than I thought - too big even to build in memory - so I broke it down into three pieces; each piece is limited to a particular range of company identifiers.
My plan was to take the user-submitted list and, using a dictionary, group the user's company identifiers to match the particular newsDict in which the company's events would be expected to be found, and then load the newsDicts one after another to get the values.
Well, now I am wondering if it would not be better to keep the news events in a list, with each item of the list being a sublist consisting of a tuple and another list:
[('identifier','MM/DD/YYYY'),[list of news events for that date]]
my thought then is that I would have a dictionary that would have the range of the list for each company identifier
companyDict['identifier']=(begofRangeinListforComp,endofRangeinListforComp)
I would use the user input to look up the ranges I needed and construct a list of the identifiers and ranges sorted by the ranges. Then I would just read the appropriate section of the list to get the data and construct the output.
The biggest reason I see for this is that even with the dictionary broken into thirds, each section takes about two minutes to load on my machine and the dictionary ends up taking about 600 to 750 MB of RAM.
I was surprised to note that a list of eight million lines took only about 15 seconds to load and used about 1/3 of the memory of the dictionary that had 1/3 the entries.
Further, since I can discard the lines in the list as I work through the list I will be freeing memory as I work down the user list.
I am surprised, as I thought a dictionary would be the most efficient way to do this, but my poking at it suggests that the dictionary requires significantly more memory than a list. My reading of other posts on SO and elsewhere suggests that any other structure is going to require pointer allocations that are more expensive than list pointers. Am I missing something here, and is there a better way to do this?
After reading Alberto's answer and response to my comment I spent some time trying to figure out how to write the function if I were to use a db. Now I might be hobbled here because I don't know much about db programming but
I think the code to implement using a db would be much more complicated than:
outList=[]
massiveFile=open('theFile','r')
for identifier in sortedUserList
# I get the list and sort it by the key of the dictionary
identifierList=massiveFile[theDict[identifier]['beginPosit']:theDict[identifier]['endPosit']+1]
for item in identifierList:
if item.startswith(manipulation of the identifier)
outList.append(item)
I have to wrap this in a function; I didn't see anything that would be comparably simple if I converted the list to a db.
Of course, simpler was not the reason to bring me to this forum. I still don't see that using another structure will cost less memory. I have 30000 company identifiers and approximately 3600 dates. Each item in my list is an object in the parlance of OOD. That is where I am struggling: I spent six hours this morning organizing the data for a dictionary before I gave up. Spending that amount of time to implement a database, and then finding that I am using half a gig or more of someone else's memory to load it, seems problematic.
|
[
"With such a large amount of data, you should be using a database. This would be far better than looking at a list, and would be the most appropriate way of storing your data anyway. If you're using Python, it has SQLite built in I believe.\n",
"The dictionary will take more memory because it is effectively a hash.\nYou don't have to go so far as using a database, since your lookup requirements are so simple. Just use the file system.\nCreate a directory structure based on the company name (or ticker), with subdirectories for each date. To find whether data exists and load it up, just form the name of the subdirectory where the data would be, and see if it exists.\nE.g., IBM news for May 21 would be in C:\\db\\IBM\\20090521\\news.txt, if in fact there were news for that day. You just check if the file exists; no searches.\nIf you want to try and boost speed from there, come up with a scheme to cache a limited amount of results that are likely to be frequently requested (assuming you're operating a server). For that, you'd use a hash.\n"
] |
[
5,
1
] |
[] |
[] |
[
"dictionary",
"list",
"performance",
"python"
] |
stackoverflow_0000895500_dictionary_list_performance_python.txt
|
Q:
How to convert a Pyglet image to a PIL image?
I want to convert a pyglet.AbstractImage object to a PIL image for further manipulation.
Here is my code:
from pyglet import image
from PIL import Image
pic = image.load('pic.jpg')
data = pic.get_data('RGB', pic.pitch)
im = Image.fromstring('RGB', (pic.width, pic.height), data)
im.show()
but the image shown is wrong.
So how do I convert an image from pyglet to PIL properly?
A:
I think I found the solution.
The pitch in a pyglet.AbstractImage instance is not compatible with PIL.
I found in pyglet 1.1 there is a codec function to encode the Pyglet image to PIL
here is the link to the source
so the code above should be modified to this
from pyglet import image
from PIL import Image
pic = image.load('pic.jpg')
pitch = -(pic.width * len('RGB'))
data = pic.get_data('RGB', pitch) # using the new pitch
im = Image.fromstring('RGB', (pic.width, pic.height), data)
im.show()
I'm using a 461x288 image in this case and find that pic.pitch is -1384
but the new pitch is -1383
A:
This is an open wishlist item:
AbstractImage to/from PIL image.
|
How to convert a Pyglet image to a PIL image?
|
I want to convert a pyglet.AbstractImage object to a PIL image for further manipulation.
Here is my code:
from pyglet import image
from PIL import Image
pic = image.load('pic.jpg')
data = pic.get_data('RGB', pic.pitch)
im = Image.fromstring('RGB', (pic.width, pic.height), data)
im.show()
but the image shown is wrong.
So how do I convert an image from pyglet to PIL properly?
|
[
"I think I find the solution\nthe pitch in Pyglet.AbstractImage instance is not compatible with PIL\nI found in pyglet 1.1 there is a codec function to encode the Pyglet image to PIL\nhere is the link to the source\nso the code above should be modified to this\nfrom pyglet import image\nfrom PIL import Image\npic = image.load('pic.jpg')\npitch = -(pic.width * len('RGB'))\ndata = pic.get_data('RGB', pitch) # using the new pitch\nim = Image.fromstring('RGB', (pic.width, pic.height), data)\nim.show()\n\nI'm using a 461x288 image in this case and find that pic.pitch is -1384\nbut the new pitch is -1383\n",
"This is an open wishlist item:\n\nAbstractImage to/from PIL image. \n\n"
] |
[
2,
0
] |
[] |
[] |
[
"image",
"pyglet",
"python",
"python_imaging_library"
] |
stackoverflow_0000896548_image_pyglet_python_python_imaging_library.txt
|
Q:
Display all file names from a specific folder
Say there is a folder XYZ which contains files with different formats,
let's say a .txt file, an Excel file, a .py file, etc.
I want to display all the file names in the output using Python.
A:
import glob
glob.glob('XYZ/*')
See the documentation for more
A:
Here is an example that might also help show some of the handy basics of python -- dicts {} , lists [] , little string techniques (split), a module like os, etc.:
bvm@bvm:~/example$ ls
deal.xls five.xls france.py guido.py make.py thing.mp3 work2.doc
example.py four.xls fun.mp3 letter.doc thing2.xlsx what.docx work45.doc
bvm@bvm:~/example$ python
>>> import os
>>> files = {}
>>> for item in os.listdir('.'):
... try:
... files[item.split('.')[1]].append(item)
... except KeyError:
... files[item.split('.')[1]] = [item]
...
>>> files
{'xlsx': ['thing2.xlsx'], 'docx': ['what.docx'], 'doc': ['letter.doc',
'work45.doc', 'work2.doc'], 'py': ['example.py', 'guido.py', 'make.py',
'france.py'], 'mp3': ['thing.mp3', 'fun.mp3'], 'xls': ['five.xls',
'deal.xls', 'four.xls']}
>>> files['doc']
['letter.doc', 'work45.doc', 'work2.doc']
>>> files['py']
['example.py', 'guido.py', 'make.py', 'france.py']
For your update question, you might try something like:
>>> for item in enumerate(os.listdir('.')):
... print item
...
(0, 'thing.mp3')
(1, 'fun.mp3')
(2, 'example.py')
(3, 'letter.doc')
(4, 'five.xls')
(5, 'guido.py')
(6, 'what.docx')
(7, 'work45.doc')
(8, 'deal.xls')
(9, 'four.xls')
(10, 'make.py')
(11, 'thing2.xlsx')
(12, 'france.py')
(13, 'work2.doc')
>>>
A:
import os
XYZ = '.'
for item in enumerate(sorted(os.listdir(XYZ))):
print item
|
Display all file names from a specific folder
|
Say there is a folder XYZ which contains files with different formats,
let's say a .txt file, an Excel file, a .py file, etc.
I want to display all the file names in the output using Python.
|
[
"import glob\nglob.glob('XYZ/*')\n\nSee the documentation for more\n",
"Here is an example that might also help show some of the handy basics of python -- dicts {} , lists [] , little string techniques (split), a module like os, etc.:\nbvm@bvm:~/example$ ls\ndeal.xls five.xls france.py guido.py make.py thing.mp3 work2.doc\nexample.py four.xls fun.mp3 letter.doc thing2.xlsx what.docx work45.doc\nbvm@bvm:~/example$ python\n>>> import os\n>>> files = {}\n>>> for item in os.listdir('.'):\n... try:\n... files[item.split('.')[1]].append(item)\n... except KeyError:\n... files[item.split('.')[1]] = [item]\n... \n>>> files\n{'xlsx': ['thing2.xlsx'], 'docx': ['what.docx'], 'doc': ['letter.doc', \n'work45.doc', 'work2.doc'], 'py': ['example.py', 'guido.py', 'make.py', \n'france.py'], 'mp3': ['thing.mp3', 'fun.mp3'], 'xls': ['five.xls',\n'deal.xls', 'four.xls']}\n>>> files['doc']\n['letter.doc', 'work45.doc', 'work2.doc']\n>>> files['py']\n['example.py', 'guido.py', 'make.py', 'france.py']\n\nFor your update question, you might try something like:\n>>> for item in enumerate(os.listdir('.')):\n... print item\n... \n(0, 'thing.mp3')\n(1, 'fun.mp3')\n(2, 'example.py')\n(3, 'letter.doc')\n(4, 'five.xls')\n(5, 'guido.py')\n(6, 'what.docx')\n(7, 'work45.doc')\n(8, 'deal.xls')\n(9, 'four.xls')\n(10, 'make.py')\n(11, 'thing2.xlsx')\n(12, 'france.py')\n(13, 'work2.doc')\n>>>\n\n",
"import os\n\nXYZ = '.'\n\nfor item in enumerate(sorted(os.listdir(XYZ))):\n print item\n\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"directory",
"ls",
"python"
] |
stackoverflow_0000896595_directory_ls_python.txt
|
Q:
Properly importing modules in Python
How do I set up module imports so that each module can access the objects of all the others?
I have a medium size Python application with modules files in various subdirectories. I have created modules that append these subdirectories to sys.path and imports a group of modules, using import thisModule as tm. Module objects are referred to with that qualification. I then import that module into the others with from moduleImports import *. The code is sloppy right now and has several of these things, which are often duplicative.
First, the application is failing because some module references aren't assigned. This same code does run when unit tested.
Second, I'm worried that I'm causing a problem with recursive module imports. Importing moduleImports imports thisModule, which imports moduleImports . . . .
What is the right way to do this?
A:
"I have a medium size Python application with modules files in various subdirectories."
Good. Make absolutely sure that each directory include a __init__.py file, so that it's a package.
"I have created modules that append these subdirectories to sys.path"
Bad. Use PYTHONPATH or install the whole structure Lib/site-packages. Don't update sys.path dynamically. It's a bad thing. Hard to manage and maintain.
"imports a group of modules, using import thisModule as tm."
Doesn't make sense. Perhaps you have one import thisModule as tm for each module in your structure. This is typical, standard practice: import just the modules you need, no others.
"I then import that module into the others with from moduleImports import *"
Bad. Don't blanket import a bunch of random stuff.
Each module should have a longish list of the specific things it needs.
import this
import that
import package.module
Explicit list. No magic. No dynamic change to sys.path.
My current project has 100's of modules, a dozen or so packages. Each module imports just what it needs. No magic.
A:
Few pointers
You may have already split functionality into various modules. If done correctly, most of the time you will not fall into circular import problems (e.g. if module a depends on b and b on a, you can make a third module c to remove such a circular dependency). As a last resort, in a import b, but in b import a at the point where a is needed, e.g. inside a function.
Once functionality is properly in modules, group them in packages under a subdir and add a __init__.py file to it so that you can import the package. Keep such packages in a folder, e.g. lib, and then either add it to sys.path or set the PYTHONPATH env variable.
from module import * may not be a good idea. Instead, import whatever is needed. It may be fully qualified. It doesn't hurt to be verbose, e.g. from packageA.moduleB import CoolClass.
A:
The way to do this is to avoid magic. In other words, if your module requires something from another module, it should import it explicitly. You shouldn't rely on things being imported automatically.
As the Zen of Python (import this) has it, explicit is better than implicit.
A:
You won't get recursion on imports because Python caches each module and won't reload one it already has.
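You can see the cache in sys.modules; a quick illustration:
import sys
import os

print 'os' in sys.modules        # True - os is cached after the first import
import os                        # no re-execution; the cached module is reused
print sys.modules['os'] is os    # True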
|
Properly importing modules in Python
|
How do I set up module imports so that each module can access the objects of all the others?
I have a medium size Python application with modules files in various subdirectories. I have created modules that append these subdirectories to sys.path and imports a group of modules, using import thisModule as tm. Module objects are referred to with that qualification. I then import that module into the others with from moduleImports import *. The code is sloppy right now and has several of these things, which are often duplicative.
First, the application is failing because some module references aren't assigned. This same code does run when unit tested.
Second, I'm worried that I'm causing a problem with recursive module imports. Importing moduleImports imports thisModule, which imports moduleImports . . . .
What is the right way to do this?
|
[
"\"I have a medium size Python application with modules files in various subdirectories.\"\nGood. Make absolutely sure that each directory include a __init__.py file, so that it's a package.\n\"I have created modules that append these subdirectories to sys.path\"\nBad. Use PYTHONPATH or install the whole structure Lib/site-packages. Don't update sys.path dynamically. It's a bad thing. Hard to manage and maintain.\n\"imports a group of modules, using import thisModule as tm.\"\nDoesn't make sense. Perhaps you have one import thisModule as tm for each module in your structure. This is typical, standard practice: import just the modules you need, no others.\n\"I then import that module into the others with from moduleImports import *\"\nBad. Don't blanket import a bunch of random stuff.\nEach module should have a longish list of the specific things it needs. \nimport this\nimport that\nimport package.module\n\nExplicit list. No magic. No dynamic change to sys.path.\nMy current project has 100's of modules, a dozen or so packages. Each module imports just what it needs. No magic.\n",
"Few pointers\n\nYou may have already split\nfunctionality in various module. If\ncorrectly done most of the time you\nwill not fall into circular import\nproblems (e.g. if module a depends\non b and b on a you can make a third\nmodule c to remove such circular\ndependency). As last resort, in a\nimport b but in b import a at the\npoint where a is needed e.g. inside\nfunction.\nOnce functionality is properly in\nmodules group them in packages under\na subdir and add a __init__.py file\nto it so that you can import the\npackage. Keep such pakages in a\nfolder e.g. lib and then either add\nto sys.path or set PYTHONPATH env\nvariable\nfrom module import * may not\nbe good idea. Instead, import whatever\nis needed. It may be fully qualified. It\ndoesn't hurt to be verbose. e.g.\nfrom pakageA.moduleB import\nCoolClass.\n\n",
"The way to do this is to avoid magic. In other words, if your module requires something from another module, it should import it explicitly. You shouldn't rely on things being imported automatically.\nAs the Zen of Python (import this) has it, explicit is better than implicit.\n",
"You won't get recursion on imports because Python caches each module and won't reload one it already has.\n"
] |
[
25,
6,
4,
3
] |
[] |
[] |
[
"python",
"python_import"
] |
stackoverflow_0000896112_python_python_import.txt
|
Q:
Running multiple processes and capturing the output in python with pygtk
I'd like to write a simple application that runs multiple programs and displays their output in multiple terminal (style) windows. In addition, I want to be able to read the stdout/stderr of these processes and search for keywords in the output.
I've tried implementing this two ways in python, the first using subprocess.Popen and the second using vte (python-vte).
I've only gotten Popen to work w/ polling. I have to constantly check to see if the processes have data to be read, read the data, and then send it to my TextArea. It's been recommended to use gobject.io_add_watch() instead, but whenever I try that my program hangs on the second call to io_add_watch--it's like it can only handle one file descriptor at a time.
vte works great but I haven't found a reliable way to capture the output. You can get a callback when the cursor moves and then screen scrape w/ get_text(), but I've already run into cases where these programs I'm viewing generate an obscene amount of tty output in one go and then it's off the screen. There doesn't appear to be a callback that contains new text to be added to the window.
Any ideas?
A:
I did something similar to this using the subprocess.Popen. For each process I actually ended up redirecting the stdout and stderr to a temporary file, then periodically checking the file for updates and dumping the output into a TextView.
The reason for not using a pipe to the process was that the processes themselves were volatile and prone to segfaults. When that happened I sometimes lost data between the last read and the segfault (which was the most needed data to determine the cause of the segfault).
As it turned out, sometimes I'd want to save the output from a specific process, so this method worked well for me.
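A bare-bones sketch of that approach (the command, the polling interval, and text_buffer - your TextView's buffer - are assumptions):
import subprocess
import tempfile

log = tempfile.NamedTemporaryFile(delete=False)
proc = subprocess.Popen(['some_command'], stdout=log, stderr=log)
offset = [0]   # how much of the log we have already displayed

def poll_output():
    # call periodically, e.g. gobject.timeout_add(500, poll_output)
    f = open(log.name)
    f.seek(offset[0])
    new_text = f.read()
    offset[0] = f.tell()
    f.close()
    if new_text:
        text_buffer.insert(text_buffer.get_end_iter(), new_text)
    return True    # keep the timeout alive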
A:
If you go with igkuk's suggestion, I got some good advice on watching files for changes in a related question. That worked pretty well for me (I was watching a log file for changes).
A:
You want to use select to monitor the pipes from your subprocesses. It's better than polling.
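A rough select-based sketch (POSIX only; the commands are placeholders):
import os
import select
import subprocess

procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
         for cmd in (['cmd_a'], ['cmd_b'])]
pipes = [p.stdout for p in procs]

while pipes:
    readable, _, _ = select.select(pipes, [], [], 0.5)
    for pipe in readable:
        chunk = os.read(pipe.fileno(), 4096)   # read whatever is available
        if not chunk:                          # EOF - that process is done
            pipes.remove(pipe)
        else:
            print chunk,                       # or route it to the right TextView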
|
Running multiple processes and capturing the output in python with pygtk
|
I'd like to write a simple application that runs multiple programs and displays their output in multiple terminal (style) windows. In addition, I want to be able to read the stdout/stderr of these processes and search for keywords in the output.
I've tried implementing this two ways in python, the first using subprocess.Popen and the second using vte (python-vte).
I've only gotten Popen to work w/ polling. I have to constantly check to see if the processes have data to be read, read the data, and then send it to my TextArea. It's been recommended to use gobject.io_add_watch() instead, but whenever I try that my program hangs on the second call to io_add_watch--it's like it can only handle one file descriptor at a time.
vte works great but I haven't found a reliable way to capture the output. You can get a callback when the cursor moves and then screen scrape w/ get_text(), but I've already run into cases where these programs I'm viewing generate an obscene amount of tty output in one go and then it's off the screen. There doesn't appear to be a callback that contains new text to be added to the window.
Any ideas?
|
[
"I did something similar to this using the subprocess.Popen. For each process I actually ended up redirecting the stdout and stderr to a temporary file, then periodically checking the file for updates and dumping the output into a TextView. \nThe reason for not using a pipe to the process was that the processes themselves were volatile and prone to segfaults. When that happened I sometimes lost data between the last read and the segfault (which was the most needed data to determine the cause of the segfault).\nAs it turned out, sometimes I'd want to save the output from a specific process, so this method worked well for me.\n",
"If you go with igkuk's suggestion, I got some good advice on watching files for changes in a related question. That worked pretty well for me (I was watching a log file for changes).\n",
"You want to use select to monitor the pipes from your subprocesses. It's better than polling.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000896874_python.txt
|
Q:
Python cgi FieldStorage slow, alternatives?
I have a python cgi script that receives files uploaded via an HTTP POST. The files can be large (300+ MB). The thing is, cgi.FieldStorage() is incredibly slow for getting the file (a 300 MB file took 6 minutes to be "received"). Doing the same by just reading stdin took around 15 seconds. The problem with the latter is that I would have to parse the data myself if there are multiple fields that are posted.
Are there any faster alternatives to FieldStorage()?
A:
"[I] would have to parse the data myself"
Why? CGI has a parser you can call explicitly.
Read the uploaded stream and save it in a local disk file.
For blazing speed, use a StringIO in-memory file. Just be aware of the amount of memory the upload will take.
Use cgi.parse(mylocalfile).
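A minimal sketch of that approach (the form field name is illustrative):
import cgi
import shutil
import sys
import tempfile

# spool the raw POST body to disk (or a StringIO), then let cgi parse it
spool = tempfile.TemporaryFile()
shutil.copyfileobj(sys.stdin, spool)
spool.seek(0)

form = cgi.parse(spool)                 # dict mapping field names to value lists
upload = form.get('file', [None])[0]    # 'file' is whatever your form field is called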
|
Python cgi FieldStorage slow, alternatives?
|
I have a python cgi script that receives files uploaded via an HTTP POST. The files can be large (300+ MB). The thing is, cgi.FieldStorage() is incredibly slow for getting the file (a 300 MB file took 6 minutes to be "received"). Doing the same by just reading stdin took around 15 seconds. The problem with the latter is that I would have to parse the data myself if there are multiple fields that are posted.
Are there any faster alternatives to FieldStorage()?
|
[
"\"[I] would have to parse the data myself\"\nWhy? CGI has a parser you can call explicitly.\nRead the uploaded stream and save it in a local disk file. \nFor blazing speed, use a StringIO in-memory file. Just be aware of the amount of memory the upload will take.\nUse cgi.parse(mylocalfile).\n"
] |
[
2
] |
[] |
[] |
[
"cgi",
"python"
] |
stackoverflow_0000897206_cgi_python.txt
|
Q:
CreateDatabase often fails on the google data api
The following test program is supposed to create a new spreadsheet:
#!/usr/bin/python
import gdata.spreadsheet.text_db
import getpass
import atom
import gdata.contacts
import gdata.contacts.service
import smtplib
import time
password = getpass.getpass()
client = gdata.spreadsheet.text_db.DatabaseClient(username='[email protected]',password=password)
database = client.CreateDatabase('My Test Database')
table = database.CreateTable('addresses', ['name','email',
'phonenumber', 'mailingaddress'])
record = table.AddRecord({'name':'Bob', 'email':'[email protected]',
'phonenumber':'555-555-1234', 'mailingaddress':'900 Imaginary St.'})
# Edit a record
record.content['email'] = '[email protected]'
record.Push()
which it does, but only on about every 1 out of 5 runs. On the other 4 out of 5 runs I get:
Password:
Traceback (most recent call last):
File "./test.py", line 13, in <module>
database = client.CreateDatabase('My Test Database')
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/spreadsheet/text_db.py", line 146, in CreateDatabase
db_entry = self.__docs_client.UploadSpreadsheet(virtual_media_source, name)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/docs/service.py", line 304, in UploadSpreadsheet
return self._UploadFile(media_source, title, category, folder_or_uri)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/docs/service.py", line 144, in _UploadFile
converter=gdata.docs.DocumentListEntryFromString)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/service.py", line 1151, in Post
media_source=media_source, converter=converter)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/service.py", line 1271, in PostOrPut
'reason': server_response.reason, 'body': result_body}
gdata.service.RequestError: {'status': 404, 'body': '<HTML>\n<HEAD>\n<TITLE>Not Found</TITLE>\n</HEAD>\n<BODY BGCOLOR="#FFFFFF" TEXT="#000000">\n<H1>Not Found</H1>\n<H2>Error 404</H2>\n</BODY>\n</HTML>\n', 'reason': 'Not Found'}
The same thing happens when I run similar code on the appengine, so I don't think this problem is due to a slow connection (also, I have a cable modem, which works at about 1Mbps).
So, why a 404? And, more importantly, is there any way to improve the chances that my CreateDatabase call will succeed?
A:
Someone clued me in that this is a known bug in gdata.
|
CreateDatabase often fails on the google data api
|
The following test program is supposed to create a new spreadsheet:
#!/usr/bin/python
import gdata.spreadsheet.text_db
import getpass
import atom
import gdata.contacts
import gdata.contacts.service
import smtplib
import time
password = getpass.getpass()
client = gdata.spreadsheet.text_db.DatabaseClient(username='[email protected]',password=password)
database = client.CreateDatabase('My Test Database')
table = database.CreateTable('addresses', ['name','email',
'phonenumber', 'mailingaddress'])
record = table.AddRecord({'name':'Bob', 'email':'[email protected]',
'phonenumber':'555-555-1234', 'mailingaddress':'900 Imaginary St.'})
# Edit a record
record.content['email'] = '[email protected]'
record.Push()
which it does, but only on about every 1 out of 5 runs. On the other 4 out of 5 runs I get:
Password:
Traceback (most recent call last):
File "./test.py", line 13, in <module>
database = client.CreateDatabase('My Test Database')
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/spreadsheet/text_db.py", line 146, in CreateDatabase
db_entry = self.__docs_client.UploadSpreadsheet(virtual_media_source, name)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/docs/service.py", line 304, in UploadSpreadsheet
return self._UploadFile(media_source, title, category, folder_or_uri)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/docs/service.py", line 144, in _UploadFile
converter=gdata.docs.DocumentListEntryFromString)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/service.py", line 1151, in Post
media_source=media_source, converter=converter)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/service.py", line 1271, in PostOrPut
'reason': server_response.reason, 'body': result_body}
gdata.service.RequestError: {'status': 404, 'body': '<HTML>\n<HEAD>\n<TITLE>Not Found</TITLE>\n</HEAD>\n<BODY BGCOLOR="#FFFFFF" TEXT="#000000">\n<H1>Not Found</H1>\n<H2>Error 404</H2>\n</BODY>\n</HTML>\n', 'reason': 'Not Found'}
The same thing happens when I run similar code on the appengine, so I don't think this problem is due to a slow connection (also, I have a cable modem, which works at about 1Mbps).
So, why a 404? And, more importantly, is there any way to improve the chances that my CreateDatabase call will succeed?
|
[
"Someone clued me in that this is a known bug in gdata.\n"
] |
[
1
] |
[] |
[] |
[
"gdata_api",
"python"
] |
stackoverflow_0000894881_gdata_api_python.txt
|
Q:
Checking folder/file ntfs permissions using python
As the question title might suggest, I would very much like to know of a way to check the NTFS permissions of a given file or folder (hint: those are the ones you see in the "security" tab). Basically, what I need is to take a path to a file or directory (on a local machine, or, preferably, on a share on a remote machine) and get the list of users/groups and the corresponding permissions for this file/folder. Ultimately, the application is going to traverse a directory tree, reading permissions for each object and processing them accordingly.
Now, I can think of a number of ways to do that:
parse cacls.exe output -- easily done, BUT, unless I'm missing something, cacls.exe only gives the permissions in the form of R|W|C|F (read/write/change/full), which is insufficient (I need to get the permissions like "List folder contents", extended permissions too)
xcacls.exe or xcacls.vbs output -- yes, they give me all the permissions I need, but they work dreadfully slow, it takes xcacls.vbs about ONE SECOND to get permissions on a local system file. Such speed is unacceptable
win32security (it wraps around winapi, right?) -- I am sure it can be handled like this, but I'd rather not reinvent the wheel
Is there anything else I am missing here?
A:
Unless you fancy rolling your own, win32security is the way to go. There's the beginnings of an example here:
http://timgolden.me.uk/python/win32_how_do_i/get-the-owner-of-a-file.html
If you want to live slightly dangerously (!) my in-progress winsys package is designed to do exactly what you're after. You can get an MSI of the dev version here:
http://timgolden.me.uk/python/downloads/WinSys-0.4.win32-py2.6.msi
or you can just checkout the svn trunk:
svn co http://winsys.googlecode.com/svn/trunk winsys
To do what you describe (guessing slightly at the exact requirements) you could do this:
import codecs
from winsys import fs
base = "c:/temp"
with codecs.open ("permissions.log", "wb", encoding="utf8") as log:
for f in fs.flat (base):
log.write ("\n" + f.filepath.relative_to (base) + "\n")
for ace in f.security ().dacl:
access_flags = fs.FILE_ACCESS.names_from_value (ace.access)
log.write (u" %s => %s\n" % (ace.trustee, ", ".join (access_flags)))
TJG
|
Checking folder/file ntfs permissions using python
|
As the question title might suggest, I would very much like to know of a way to check the NTFS permissions of a given file or folder (hint: those are the ones you see in the "security" tab). Basically, what I need is to take a path to a file or directory (on a local machine, or, preferably, on a share on a remote machine) and get the list of users/groups and the corresponding permissions for this file/folder. Ultimately, the application is going to traverse a directory tree, reading permissions for each object and processing them accordingly.
Now, I can think of a number of ways to do that:
parse cacls.exe output -- easily done, BUT, unless I'm missing something, cacls.exe only gives the permissions in the form of R|W|C|F (read/write/change/full), which is insufficient (I need to get the permissions like "List folder contents", extended permissions too)
xcacls.exe or xcacls.vbs output -- yes, they give me all the permissions I need, but they work dreadfully slow, it takes xcacls.vbs about ONE SECOND to get permissions on a local system file. Such speed is unacceptable
win32security (it wraps around winapi, right?) -- I am sure it can be handled like this, but I'd rather not reinvent the wheel
Is there anything else I am missing here?
|
[
"Unless you fancy rolling your own, win32security is the way to go. There's the beginnings of an example here:\nhttp://timgolden.me.uk/python/win32_how_do_i/get-the-owner-of-a-file.html\nIf you want to live slightly dangerously (!) my in-progress winsys package is designed to do exactly what you're after. You can get an MSI of the dev version here:\nhttp://timgolden.me.uk/python/downloads/WinSys-0.4.win32-py2.6.msi\nor you can just checkout the svn trunk:\nsvn co http://winsys.googlecode.com/svn/trunk winsys\nTo do what you describe (guessing slightly at the exact requirements) you could do this:\nimport codecs\nfrom winsys import fs\n\nbase = \"c:/temp\"\nwith codecs.open (\"permissions.log\", \"wb\", encoding=\"utf8\") as log:\n for f in fs.flat (base):\n log.write (\"\\n\" + f.filepath.relative_to (base) + \"\\n\")\n for ace in f.security ().dacl:\n access_flags = fs.FILE_ACCESS.names_from_value (ace.access)\n log.write (u\" %s => %s\\n\" % (ace.trustee, \", \".join (access_flags)))\n\nTJG\n"
] |
[
17
] |
[] |
[] |
[
"acl",
"ntfs",
"permissions",
"python",
"winapi"
] |
stackoverflow_0000896638_acl_ntfs_permissions_python_winapi.txt
|
Q:
Convert list of lists to delimited string
How do I do the following using built-in modules only?
I have a list of lists like this:
[['dog', 1], ['cat', 2, 'a'], ['rat', 3, 4], ['bat', 5]]
And from it, I'd like to produce a string representation of a table like this where the columns are delimited by tabs and the rows by newlines.
dog 1
cat 2 a
rat 3 4
bat 5
i.e.
'dog\t1\ncat\t2\ta\nrat\t3\t4\nbat\t5'
A:
Like this, perhaps:
lists = [['dog', 1], ['cat', 2, 'a'], ['rat', 3, 4], ['bat', 5]]
result = "\n".join("\t".join(map(str,l)) for l in lists)
This joins all the inner lists using tabs, and concatenates the resulting list of strings using newlines.
It uses a feature called list comprehension to process the outer list.
A:
# rows contains the list of lists
lines = []
for row in rows:
lines.append('\t'.join(map(str, row)))
result = '\n'.join(lines)
|
Convert list of lists to delimited string
|
How do I do the following using built-in modules only?
I have a list of lists like this:
[['dog', 1], ['cat', 2, 'a'], ['rat', 3, 4], ['bat', 5]]
And from it, I'd like to produce a string representation of a table like this where the columns are delimited by tabs and the rows by newlines.
dog 1
cat 2 a
rat 3 4
bat 5
i.e.
'dog\t1\ncat\t2\ta\nrat\t3\t4\nbat\t5'
|
[
"Like this, perhaps:\nlists = [['dog', 1], ['cat', 2, 'a'], ['rat', 3, 4], ['bat', 5]]\nresult = \"\\n\".join(\"\\t\".join(map(str,l)) for l in lists)\n\nThis joins all the inner lists using tabs, and concatenates the resulting list of strings using newlines.\nIt uses a feature called list comprehension to process the outer list.\n",
"# rows contains the list of lists\nlines = []\nfor row in rows:\n lines.append('\\t'.join(map(str, row)))\nresult = '\\n'.join(lines)\n\n"
] |
[
19,
4
] |
[] |
[] |
[
"list",
"python",
"string"
] |
stackoverflow_0000898391_list_python_string.txt
|
Q:
Priority issue in Sitemaps
I am trying to use Django sitemaps.
class BlogSiteMap(Sitemap):
"""A simple class to get sitemaps for blog"""
changefreq = 'hourly'
priority = 0.5
def items(self):
return Blog.objects.order_by('-pubDate')
def lastmod(self, obj):
return obj.pubDate
My problem is: I wanted to set the priority of the first 3 blog objects to 1.0 and the rest of them to 0.5.
I read the documentation but couldn't find a way to do it.
Any help would be appreciated. Thanks in advance.
A:
I think you can alter each object with its priority. Like that for example:
def items(self):
for i, obj in enumerate(Blog.objects.order_by('-pubDate')):
obj.priority = i < 3 and 1 or 0.5
yield obj
def priority(self, obj):
return obj.priority
A:
Something like that might work:
def priority(self, obj):
if obj.id in list(Blog.objects.all()[:3].values_list('id'))
return 1.0
else:
return 0.5
|
Priority issue in Sitemaps
|
I am trying to use Django sitemaps.
class BlogSiteMap(Sitemap):
"""A simple class to get sitemaps for blog"""
changefreq = 'hourly'
priority = 0.5
def items(self):
return Blog.objects.order_by('-pubDate')
def lastmod(self, obj):
return obj.pubDate
My problem is: I wanted to set the priority of the first 3 blog objects to 1.0 and the rest of them to 0.5.
I read the documentation but couldn't find a way to do it.
Any help would be appreciated. Thanks in advance.
|
[
"I think you can alter each object with its priority. Like that for example:\ndef items(self):\n for i, obj in enumerate(Blog.objects.order_by('-pubDate')):\n obj.priority = i < 3 and 1 or 0.5\n yield obj\n\ndef priority(self, obj):\n return obj.priority\n\n",
"Something like that might work:\ndef priority(self, obj):\n if obj.id in list(Blog.objects.all()[:3].values_list('id'))\n return 1.0\n else:\n return 0.5\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"python",
"sitemap"
] |
stackoverflow_0000763485_django_python_sitemap.txt
|
Q:
How do I refer to a class method outside a function body in Python?
I want to do a one-time callback registration within Observer. I don't want to do the registration inside __init__ or another function. I don't know if there is a class-level equivalent of __init__.
class Observer:
@classmethod
def on_new_user_registration(new_user):
#body of handler...
# first I try
NewUserRegistered().subscribe \
(Observer.on_new_user_registration) #gives NameError for Observer
#so I try
NewUserRegistered().subscribe(on_new_user_registration) #says not callable
#neither does this work
NewUserRegistered().subscribe(__metaclass__.on_new_user_registration)
class BaseEvent(object):
_subscriptions = {}
def __init__(self, event_info = None):
self.info = event_info
def fire(self):
for callback in self._subscriptions[event_type]:
callback(event_info)
def subscribe(self, callback):
if not callable(callback):
raise Exception(str(callback) + 'is not callable')
existing = self._subscriptions.get(self.__class__, None)
if not existing:
existing = set()
self._subscriptions[self.__class__] = existing
existing.add(callback)
class NewUserRegistered(BaseEvent):
pass
A:
I suggest to cut down on the number of classes -- remember that Python isn't Java. Every time you use @classmethod or @staticmethod you should stop and think about it since these keywords are quite rare in Python.
Doing it like this works:
class BaseEvent(object):
def __init__(self, event_info=None):
self._subscriptions = set()
self.info = event_info
def fire(self, data):
for callback in self._subscriptions:
callback(self.info, data)
def subscribe(self, callback):
if not callable(callback):
raise ValueError("%r is not callable" % callback)
self._subscriptions.add(callback)
return callback
new_user = BaseEvent()
@new_user.subscribe
def on_new_user_registration(info, username):
print "new user: %s" % username
new_user.fire("Martin")
If you want an Observer class, then you can do it like this:
class Observer:
@staticmethod
@new_user.subscribe
def on_new_user_registration(info, username):
print "new user: %s" % username
But note that the static method does not have access to the protocol instance, so this is probably not very useful. You can not subscribe a method bound to an object instance like this since the object wont exist when the class definition is executed.
But you can of course do this:
class Observer:
def on_new_user_registration(self, info, username):
print "new user: %s" % username
o = Observer()
new_user.subscribe(o.on_new_user_registration)
where we use the bound o.on_new_user_registration as argument to subscribe.
A:
I've come to accept that python isn't very intuitive when it comes to functional programming within class definitions. See this question. The problem with the first method is that Observer doesn't exist as a namespace until the class has been built. The problem with the second is that you've made a class method that doesn't really do what it's supposed to until after the namespace has been created. (I have no idea why you're trying the third.) In both case neither of these things occurs until after the class definition of Observer has been populated.
This might sound like a sad constraint, but it's really not so bad. Just register after the class definition. Once you realize that it's not bad style to perform certain initialization routines on classes in the body of the module but outside the body of the class, python becomes a lot friendlier. Try:
class Observer:
# Define the other classes first
class Observer:
@classmethod
def on_new_user_registration(new_user):
#body of handler...
NewUserRegistered().subscribe(Observer.on_new_user_registration)
Because of the way modules work in python, you are guaranteed that this registration will be performed once and only once (barring process forking and maybe some other irrelevant boundary cases) wherever Observer is imported.
A:
oops. sorry about that.
All I had to do was to move the subscription outside the class definition
class Observer:
@classmethod
def on_new_user_registration(new_user):
#body of handler...
#after end of class
NewUserRegistered().subscribe(Observer.on_new_user_registration)
Guess it is a side-effect of too much Java that one doesn't immediately think of this.
A:
What you're doing should work:
>>> class foo:
... @classmethod
... def func(cls):
... print 'func called!'
...
>>> foo.func()
func called!
>>> class foo:
... @classmethod
... def func(cls):
... print 'func called!'
... foo.func()
...
func called!
One thing to note though, class methods take a cls argument instead of a self argument. Thus, your class definition should look like this:
class Observer:
@classmethod
def on_new_user_registration(cls, new_user):
#body of handler...
|
How do I refer to a class method outside a function body in Python?
|
I want to do a one-time callback registration within Observer. I don't want to do the registration inside __init__ or another function. I don't know if there is a class-level equivalent of __init__.
class Observer:
@classmethod
def on_new_user_registration(new_user):
#body of handler...
# first I try
NewUserRegistered().subscribe \
(Observer.on_new_user_registration) #gives NameError for Observer
#so I try
NewUserRegistered().subscribe(on_new_user_registration) #says not callable
#neither does this work
NewUserRegistered().subscribe(__metaclass__.on_new_user_registration)
class BaseEvent(object):
_subscriptions = {}
def __init__(self, event_info = None):
self.info = event_info
def fire(self):
for callback in self._subscriptions[event_type]:
callback(event_info)
def subscribe(self, callback):
if not callable(callback):
raise Exception(str(callback) + 'is not callable')
existing = self._subscriptions.get(self.__class__, None)
if not existing:
existing = set()
self._subscriptions[self.__class__] = existing
existing.add(callback)
class NewUserRegistered(BaseEvent):
pass
|
[
"I suggest to cut down on the number of classes -- remember that Python isn't Java. Every time you use @classmethod or @staticmethod you should stop and think about it since these keywords are quite rare in Python.\nDoing it like this works:\nclass BaseEvent(object):\n def __init__(self, event_info=None):\n self._subscriptions = set()\n self.info = event_info\n\n def fire(self, data):\n for callback in self._subscriptions:\n callback(self.info, data)\n\n def subscribe(self, callback):\n if not callable(callback):\n raise ValueError(\"%r is not callable\" % callback)\n self._subscriptions.add(callback)\n return callback\n\nnew_user = BaseEvent()\n\n@new_user.subscribe\ndef on_new_user_registration(info, username):\n print \"new user: %s\" % username\n\nnew_user.fire(\"Martin\")\n\nIf you want an Observer class, then you can do it like this:\nclass Observer:\n@staticmethod\n@new_user.subscribe\ndef on_new_user_registration(info, username):\n print \"new user: %s\" % username\n\nBut note that the static method does not have access to the protocol instance, so this is probably not very useful. You can not subscribe a method bound to an object instance like this since the object wont exist when the class definition is executed.\nBut you can of course do this:\nclass Observer:\n def on_new_user_registration(self, info, username):\n print \"new user: %s\" % username\n\no = Observer()\nnew_user.subscribe(o.on_new_user_registration)\n\nwhere we use the bound o.on_new_user_registration as argument to subscribe.\n",
"I've come to accept that python isn't very intuitive when it comes to functional programming within class definitions. See this question. The problem with the first method is that Observer doesn't exist as a namespace until the class has been built. The problem with the second is that you've made a class method that doesn't really do what it's supposed to until after the namespace has been created. (I have no idea why you're trying the third.) In both case neither of these things occurs until after the class definition of Observer has been populated.\nThis might sound like a sad constraint, but it's really not so bad. Just register after the class definition. Once you realize that it's not bad style to perform certain initialization routines on classes in the body of the module but outside the body of the class, python becomes a lot friendlier. Try:\nclass Observer:\n# Define the other classes first\n\nclass Observer:\n @classmethod\n def on_new_user_registration(new_user):\n #body of handler...\nNewUserRegistered().subscribe(Observer.on_new_user_registration)\n\nBecause of the way modules work in python, you are guaranteed that this registration will be performed once and only once (barring process forking and maybe some other irrelevant boundary cases) wherever Observer is imported.\n",
"oops. sorry about that.\nAll I had to do was to move the subscription outside the class definition\nclass Observer:\n\n @classmethod\n def on_new_user_registration(new_user):\n #body of handler...\n\n#after end of class\n\nNewUserRegistered().subscribe(Observer.on_new_user_registration)\n\nGuess it is a side-effect of too much Java that one doesn't immediately think of this.\n",
"What you're doing should work:\n>>> class foo:\n... @classmethod\n... def func(cls):\n... print 'func called!'\n...\n>>> foo.func()\nfunc called!\n>>> class foo:\n... @classmethod\n... def func(cls):\n... print 'func called!'\n... foo.func()\n...\nfunc called!\n\nOne thing to note though, class methods take a cls argument instead of a self argument. Thus, your class definition should look like this:\nclass Observer:\n\n @classmethod\n def on_new_user_registration(cls, new_user):\n #body of handler...\n\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000897739_python.txt
|
Q:
In Python how do I sort a list of dictionaries by a certain value of the dictionary + alphabetically?
Ok, here's what I'm trying to do... I know that an itemgetter() sort makes alphabetical sorting easy, but if I have something like this:
[{'Name':'TOTAL', 'Rank':100},
{'Name':'Woo Company', 'Rank':15},
{'Name':'ABC Company', 'Rank':20}]
And I want it sorted alphabetically (by Name) + include the condition that the one with Name:'TOTAL' should be listed last in the sequence, like this:
[{'Name':'ABC Company', 'Rank':20},
{'Name':'Woo Company', 'Rank':15},
{'Name':'TOTAL', 'Rank':100}]
How would I do that?
A:
The best approach here is to decorate the sort key... Python will sort a tuple by the tuple components in order, so build a tuple key with your sorting criteria:
sorted(list_of_dicts, key=lambda d: (d['Name'] == 'TOTAL', d['Name'].lower()))
This results in a sort key of:
(True, 'total') for {'Name': 'TOTAL', 'Rank': 100}
(False, 'woo company') for {'Name': 'Woo Company', 'Rank': 15}
(False, 'abc company') for {'Name': 'ABC Company', 'Rank': 20}
Since False sorts earlier than True, the ones whose names aren't TOTAL will end up together, then be sorted alphabetically, and TOTAL will end up at the end.
A:
>>> lst = [{'Name':'TOTAL', 'Rank':100}, {'Name':'Woo Company', 'Rank':15}, {'Name':'ABC Company', 'Rank':20}]
>>> lst.sort(key=lambda d: (d['Name']=='TOTAL',d['Name'].lower()))
>>> print lst
[{'Name': 'ABC Company', 'Rank': 20}, {'Name': 'Woo Company', 'Rank': 15}, {'Name': 'TOTAL', 'Rank': 100}]
A:
Use the key parameter of sort or sorted.
For example:
dicts = [
{'Name':'TOTAL', 'Rank':100},
{'Name':'Woo Company', 'Rank':15},
{'Name':'ABC Company', 'Rank':20}
]
def total_last(d):
if d['Name'] == 'TOTAL':
return '\xff\xff\xff\xff\xff\xff'
return d['Name'].lower()
import pprint
pprint.pprint(sorted(dicts, key = total_last))
>python sort_dict.py
[{'Name': 'ABC Company', 'Rank': 20},
{'Name': 'Woo Company', 'Rank': 15},
{'Name': 'TOTAL', 'Rank': 100}]
|
In Python how do I sort a list of dictionaries by a certain value of the dictionary + alphabetically?
|
Ok, here's what I'm trying to do... I know that an itemgetter() sort makes alphabetical sorting easy, but if I have something like this:
[{'Name':'TOTAL', 'Rank':100},
{'Name':'Woo Company', 'Rank':15},
{'Name':'ABC Company', 'Rank':20}]
And I want it sorted alphabetically (by Name) + include the condition that the one with Name:'TOTAL' should be listed last in the sequence, like this:
[{'Name':'ABC Company', 'Rank':20},
{'Name':'Woo Company', 'Rank':15},
{'Name':'TOTAL', 'Rank':100}]
How would I do that?
|
[
"The best approach here is to decorate the sort key... Python will sort a tuple by the tuple components in order, so build a tuple key with your sorting criteria:\nsorted(list_of_dicts, key=lambda d: (d['Name'] == 'TOTAL', d['Name'].lower()))\n\nThis results in a sort key of:\n\n(True, 'total') for {'Name': 'TOTAL', 'Rank': 100}\n(False, 'woo company') for {'Name': 'Woo Company', 'Rank': 15}\n(False, 'abc company') for {'Name': 'ABC Company', 'Rank': 20}\n\nSince False sorts earlier than True, the ones whose names aren't TOTAL will end up together, then be sorted alphabetically, and TOTAL will end up at the end. \n",
">>> lst = [{'Name':'TOTAL', 'Rank':100}, {'Name':'Woo Company', 'Rank':15}, {'Name':'ABC Company', 'Rank':20}]\n>>> lst.sort(key=lambda d: (d['Name']=='TOTAL',d['Name'].lower()))\n>>> print lst\n[{'Name': 'ABC Company', 'Rank': 20}, {'Name': 'Woo Company', 'Rank': 15}, {'Name': 'TOTAL', 'Rank': 100}]\n\n",
"Use the key parameter of sort or sorted.\nFor example:\ndicts = [\n {'Name':'TOTAL', 'Rank':100}, \n {'Name':'Woo Company', 'Rank':15},\n {'Name':'ABC Company', 'Rank':20}\n]\n\ndef total_last(d):\n if d['Name'] == 'TOTAL':\n return '\\xff\\xff\\xff\\xff\\xff\\xff'\n return d['Name'].lower()\n\nimport pprint\npprint.pprint(sorted(dicts, key = total_last))\n\n>python sort_dict.py\n[{'Name': 'ABC Company', 'Rank': 20},\n {'Name': 'Woo Company', 'Rank': 15},\n {'Name': 'TOTAL', 'Rank': 100}]\n\n"
] |
[
10,
1,
0
] |
[
"Well, I would sort it in multiple passes, using list's sort method.\nlist = [{'Name':'TOTAL', 'Rank':100}, {'Name':'Woo Company', 'Rank':15}, {'Name':'ABC Company', 'Rank':20}]\n\nlist.sort(key = lambda x: x['Name']) # Sorted by Name, alphabetically\n\nlist.sort(key = lambda x: 'b' if x['Name'] == 'TOTAL' else 'a')\n\n"
] |
[
-1
] |
[
"dictionary",
"list",
"python",
"sorting"
] |
stackoverflow_0000898773_dictionary_list_python_sorting.txt
|
Q:
Django shopping cart/basket solution (or should I DIM)?
I'm about to build a site that has about half a dozen fairly similar products. They're all DVDs so they fit into a very "fixed" database very well. I was going to make a DVD model. Tag them up. All very simple. All very easy.
But we need to be able to sell them. The current site outsources the whole purchasing system but that's not going to fly on the new site. We want to integrate everything right up until the payment (for both UX reasons plus we get to customise the process a lot more).
The other problem with the outsourced system is that it doesn't account for people that don't need to pay VAT (sales tax), or for the fact that you get discounts if you buy more than one of the same thing, or more than one SKU at the same time.
So I've been looking around.
Satchmo looks like a whole mini-framework. It has listing options that I just don't need with the quantities of SKUs I'm dealing with.
django-cart has been re-hashed as of March but it looks pretty abandoned since then.
I'm looking for something that will let me:
pass it a model instance, a price and a quantity
apply a quantities formula based on the number of unique SKUs and copies in the same title
list what's in the cart on every page
That's about it (but it's quite fiddly, nevertheless). I can handle the final order processing nonsense.
Or am I just being silly?
Should I just get on and Do It Myself? If that's your vote, I've never built a cart before so are there any considerations that are not obvious to somebody who has only used shopping carts before?
A:
Since you asked: if your needs are that limited, it does sound like a DIY situation to me. I don't see what's so fiddly about it; what complexity there is is all in the pricing formula, and you're planning to supply that either way. Add in Django's built-in session support and you're most of the way there.
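For what it's worth, a bare-bones sketch of a session-backed cart (the URL and the pricing hook are assumptions - the discount formula is yours to supply):
from django.http import HttpResponseRedirect

def add_to_cart(request, sku, quantity=1):
    cart = request.session.get('cart', {})
    cart[sku] = cart.get(sku, 0) + int(quantity)
    request.session['cart'] = cart          # reassign so the session is saved
    return HttpResponseRedirect('/cart/')   # illustrative URL

def cart_total(cart, price_for):
    # price_for(sku, qty, cart) applies your per-title / multi-SKU discounts
    return sum(price_for(sku, qty, cart) for sku, qty in cart.items())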
A:
There is an open source solution available: http://www.getlfs.com
I don't know if you could tweak it to suit you but it's based on the technologies you mention. The license is very liberal and it is heavily maintained.
|
Django shopping cart/basket solution (or should I DIM)?
|
I'm about to build a site that has about half a dozen fairly similar products. They're all DVDs so they fit into a very "fixed" database very well. I was going to make a DVD model. Tag them up. All very simple. All very easy.
But we need to be able to sell them. The current site outsources the whole purchasing system but that's not going to fly on the new site. We want to integrate everything right up until the payment (for both UX reasons plus we get to customise the process a lot more).
The other problem with the outsourced system is that it doesn't account for people that don't need to pay VAT (sales tax), or for the fact that you get discounts if you buy more than one of the same thing, or more than one SKU at the same time.
So I've been looking around.
Satchmo looks like a whole mini-framework. It has listing options that I just don't need with the quantities of SKUs I'm dealing with.
django-cart has been re-hashed as of March but it looks pretty abandoned since then.
I'm looking for something that will let me:
pass it a model instance, a price and a quantity
apply a quantities formula based on the number of unique SKUs and copies in the same title
list what's in the cart on every page
That's about it (but it's quite fiddly, nevertheless). I can handle the final order processing nonsense.
Or am I just being silly?
Should I just get on and Do It Myself? If that's your vote, I've never built a cart before so are there any considerations that are not obvious to somebody who has only used shopping carts before?
|
[
"Since you asked: if your needs are that limited, it does sound like a DIY situation to me. I don't see what's so fiddly about it; what complexity there is is all in the pricing formula, and you're planning to supply that either way. Add in Django's built-in session support and you're most of the way there.\n",
"There is an open source solution available: http://www.getlfs.com\nI don't know if you could tweak it to suit you but it's based on the technologies you mention. The license is very liberal and it is heavily maintained.\n"
] |
[
3,
2
] |
[] |
[] |
[
"django",
"e_commerce",
"python",
"shopping_cart"
] |
stackoverflow_0000898426_django_e_commerce_python_shopping_cart.txt
|
Q:
How do you store an app engine Image object in the db?
I'm a bit stuck with my code:
def setVenueImage(img):
img = images.Image(img.read())
x, y = photo_utils.getIdealResolution(img.width, img.height)
img.resize(x, y)
img.execute_transforms()
venue_obj = getVenueSingletonObject()
if venue_obj is None:
venue_obj = Venue(images = [img])
else:
venue_obj.images.append(img)
db.put(venue_obj)
I'm using django with app engine - so img.read() works fine.
In fact all of this code works fine up until I try to store img into the database. My model expects a Blob, so when I put in the image as img, then it throws a fit, and I get:
BadValueError at /admin/venue/
Items in the images list must all be Blob instances
Ok, so an Image must not be a Blob, but then how do I make it a blob? Blobs take in a byte string, but how do I make my image a byte string?
I haven't seen in the docs anywhere where they actually use image objects, so I'm not sure how this is all supposed to work, but I do want to use image objects to resize my image (I know you can do it in PIL, but I'd like to know how to do it with google's Image class).
Thanks for any pointers :)
A:
I'm not happy with this solution as it doesn't convert an Image object to a blob, but it will do for the time being:
def setVenueImage(img):
original = img.read()
img = images.Image(original)
x, y = photo_utils.getIdealResolution(img.width, img.height)
img = images.resize(original, x, y)
venue_obj = getVenueSingletonObject()
if venue_obj is None:
venue_obj = Venue(images = [db.Blob(img)])
else:
venue_obj.images.append(db.Blob(img))
db.put(venue_obj)
A:
This will probably work:
def setVenueImage(img):
img = images.Image(img.read())
x, y = photo_utils.getIdealResolution(img.width, img.height)
img.resize(x, y)
img_bytes = img.execute_transforms() # Converts to PNG
venue_obj = getVenueSingletonObject()
if venue_obj is None:
venue_obj = Venue(images = [img_bytes])
else:
venue_obj.images.append(img_bytes)
db.put(venue_obj)
I'm assuming that Venue.images is a ListProperty(db.Blob), correct? This is probably the wrong thing to do. Define a VenueImage model with a simple blob property and store its key into the Venue. If you put the images in there directly you'll hit the 1MB row limit on the datastore.
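A rough sketch of that split, assuming the standard google.appengine.ext.db API (the VenueImage name and image_keys field are made up for illustration):
from google.appengine.ext import db

class VenueImage(db.Model):
    data = db.BlobProperty()  # holds the PNG bytes returned by execute_transforms()

class Venue(db.Model):
    image_keys = db.ListProperty(db.Key)  # only keys live on the Venue row, keeping it small

# inside setVenueImage, roughly:
#   img_entity = VenueImage(data=db.Blob(img_bytes))
#   img_entity.put()
#   venue_obj.image_keys.append(img_entity.key())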
A:
http://code.google.com/appengine/docs/python/images/usingimages.html
I think that link should help. Good luck.
|
How do you store an app engine Image object in the db?
|
I'm a bit stuck with my code:
def setVenueImage(img):
img = images.Image(img.read())
x, y = photo_utils.getIdealResolution(img.width, img.height)
img.resize(x, y)
img.execute_transforms()
venue_obj = getVenueSingletonObject()
if venue_obj is None:
venue_obj = Venue(images = [img])
else:
venue_obj.images.append(img)
db.put(venue_obj)
I'm using django with app engine - so img.read() works fine.
In fact all of this code works fine up until I try to store img into the database. My model expects a Blob, so when I put in the image as img, then it throws a fit, and I get:
BadValueError at /admin/venue/
Items in the images list must all be Blob instances
Ok, so an Image must not be a Blob, but then how do I make it a blob? Blobs take in a byte string, but how do I make my image a byte string?
I haven't seen in the docs anywhere where they actually use image objects, so I'm not sure how this is all supposed to work, but I do want to use image objects to resize my image (I know you can do it in PIL, but I'd like to know how to do it with google's Image class).
Thanks for any pointers :)
|
[
"I'm not happy with this solution as it doesn't convert an Image object to a blob, but it will do for the time being:\ndef setVenueImage(img):\n original = img.read()\n img = images.Image(original)\n x, y = photo_utils.getIdealResolution(img.width, img.height)\n img = images.resize(original, x, y)\n venue_obj = getVenueSingletonObject()\n if venue_obj is None:\n venue_obj = Venue(images = [db.Blob(img)])\n else:\n venue_obj.images.append(db.Blob(img))\n db.put(venue_obj)\n\n",
"This will probably work:\ndef setVenueImage(img):\n img = images.Image(img.read())\n x, y = photo_utils.getIdealResolution(img.width, img.height)\n img.resize(x, y)\n img_bytes = img.execute_transforms() # Converts to PNG\n venue_obj = getVenueSingletonObject()\n if venue_obj is None:\n venue_obj = Venue(images = [img_bytes])\n else:\n venue_obj.images.append(img_bytes)\n db.put(venue_obj)\n\nI'm assuming that Venue.images is a ListProperty(db.Blob), correct? This is probably the wrong thing to do. Define a VenueImage model with a simple blob property and store its key into the Venue. If you put the images in there directly you'll hit the 1MB row limit on the datastore.\n",
"http://code.google.com/appengine/docs/python/images/usingimages.html\nI think that link should help. Good luck.\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
"google_app_engine",
"image",
"python"
] |
stackoverflow_0000762764_google_app_engine_image_python.txt
|
Q:
Pad an integer using a regular expression
I'm using regular expressions with a python framework to pad a specific number in a version number:
10.2.11
I want to transform the second element to be padded with a zero, so it looks like this:
10.02.11
My regular expression looks like this:
^(\d{2}\.)(\d{1})([\.].*)
If I just regurgitate back the matching groups, I use this string:
\1\2\3
When I use my favorite regular expression test harness (http://kodos.sourceforge.net/), I can't get it to pad the second group. I tried \1\20\3, but that interprets the second reference as 20, and not 2.
Because of the library I'm using this with, I need it to be a one liner. The library takes a regular expression string, and then a string for what should be used to replace it with.
I'm assuming I just need to escape the matching groups string, but I can't figure it out. Thanks in advance for any help.
A:
How about a completely different approach?
nums = version_string.split('.')
print ".".join("%02d" % int(n) for n in nums)
A:
Try this:
(^\d(?=\.)|(?<=\.)\d(?=\.)|(?<=\.)\d$)
And replace the match by 0\1. This will make any number at least two digits long.
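For example, a quick check of the pattern (it pads every single-digit component):
import re

pattern = r"(^\d(?=\.)|(?<=\.)\d(?=\.)|(?<=\.)\d$)"
print re.sub(pattern, r"0\1", "10.2.11")   # prints 10.02.11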
A:
What about removing the . from the regex?
^(\d{2})\.(\d{1})[\.](.*)
replace with:
\1.0\2.\3
A:
Does your library support named groups? That might solve your problem.
|
Pad an integer using a regular expression
|
I'm using regular expressions with a python framework to pad a specific number in a version number:
10.2.11
I want to transform the second element to be padded with a zero, so it looks like this:
10.02.11
My regular expression looks like this:
^(\d{2}\.)(\d{1})([\.].*)
If I just regurgitate back the matching groups, I use this string:
\1\2\3
When I use my favorite regular expression test harness (http://kodos.sourceforge.net/), I can't get it to pad the second group. I tried \1\20\3, but that interprets the second reference as 20, and not 2.
Because of the library I'm using this with, I need it to be a one liner. The library takes a regular expression string, and then a string for what should be used to replace it with.
I'm assuming I just need to escape the matching groups string, but I can't figure it out. Thanks in advance for any help.
|
[
"How about a completely different approach?\nnums = version_string.split('.')\nprint \".\".join(\"%02d\" % int(n) for n in nums)\n\n",
"Try this:\n(^\\d(?=\\.)|(?<=\\.)\\d(?=\\.)|(?<=\\.)\\d$)\n\nAnd replace the match by 0\\1. This will make any number at least two digits long.\n",
"What about removing the . from the regex?\n^(\\d{2})\\.(\\d{1})[\\.](.*)\n\nreplace with:\n\\1.0\\2.\\3\n\n",
"Does your library support named groups? That might solve your problem.\n"
] |
[
3,
1,
1,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0000899804_python_regex.txt
|
Q:
Producing documentation for Python classes
I'm about to start a project where I will be the only one doing actual code and two less experienced programmers (scary to think of myself as experienced!) will be watching and making suggestions on the program in general.
Is there a good (free) system that I can use to provide documentation for classes and functions based on the code I've written? It'd likely help them a lot in getting to grips with the structure of the data.
A:
I have used epydoc to generate documentation for Python modules from embedded docstrings. It's pretty easy to use and generates nice looking output in multiple formats.
A:
python.org is now using sphinx for its documentation.
I personally like the output of sphinx over epydoc. I also feel the restructured text is easier to read in the docstrings than the epydoc markup.
A:
Sphinx can be useful for generating very verbose and informative docs that go above and beyond what simple API docs give you. However in many cases you'll be better served to use a wiki for these kind of documents. Also consider writing functional tests which demonstrate the usage of your code instead of documenting in words how to use your code.
Epydoc is very good at scanning your docstrings and looking at your code to generate API documents but is not necessarily good at giving much more in-depth information.
There is virtue to having both types of documentation for a project. However if you get in a time crunch it is always more beneficial to have good test coverage and meaningful tests than documentation.
A:
I use Sphinx for my project, not only because it looks good, but also because Sphinx encourages writing documentation for humans to read, not just computers.
I find the JavaDoc-style documentation produced by tools like epydoc quite sad to read. It happens all too often that the programmer is mindlessly "documenting" arguments and return types simply because there would otherwise be a gap in the API docs. So you end up with code like this (which is supposed to look like Java, but it's been a while since I wrote Java, so it might not compile...)
/**
* Set the name.
*
* @param firstName the first name.
* @param lastName the last name.
*/
public void setName(String firstName, String lastName) {
this.firstName = firstName;
this.lastName = lastName;
}
There is a very small amount of information in this so-called "documentation". I much prefer the Sphinx way where you (using the autodoc plugin) simply write
.. autofunction:: set_name
and Sphinx will then add a line to your documentation that says
set_name(first_name, last_name)
and from that every Python programmer should know what is going on.
A:
See answers to can-i-document-python-code-with-doxygen-and-does-it-make-sense, especially those mentioning Epydoc and pydoctor.
|
Producing documentation for Python classes
|
I'm about to start a project where I will be the only one doing actual code and two less experienced programmers (scary to think of myself as experienced!) will be watching and making suggestions on the program in general.
Is there a good (free) system that I can use to provide documentation for classes and functions based on the code I've written? It'd likely help them a lot in getting to grips with the structure of the data.
|
[
"I have used epydoc to generate documentation for Python modules from embedded docstrings. It's pretty easy to use and generates nice looking output in multiple formats.\n",
"python.org is now using sphinx for it's documentation.\nI personally like the output of sphinx over epydoc. I also feel the restructured text is easier to read in the docstrings than the epydoc markup.\n",
"Sphinx can be useful for generating very verbose and informative docs that go above and beyond what simple API docs give you. However in many cases you'll be better served to use a wiki for these kind of documents. Also consider writing functional tests which demonstrate the usage of your code instead of documenting in words how to use your code. \nEpydoc is very good at scanning your docstrings and looking at your code to generate API documents but is not necessarily good at giving much more in-depth information.\nThere is virtue to having both types of documentation for a project. However if you get in a time crunch it is always more beneficial to have good test coverage and meaningful tests than documentation.\n",
"I use Sphinx for my project, not only because it looks good, but also because Sphinx encourages writing documentation for humans to read, not just computers.\nI find the JavaDoc-style documentation produced by tools like epydoc quite sad to read. It happens all too often that the programmer is mindlessly \"documenting\" arguments and return types simply because there would otherwise be a gap in the API docs. So you end up with code line this (which is supposed to look like Java, but it's been a while since I wrote Java, so it might not compile...)\n/**\n * Set the name.\n *\n * @param firstName the first name.\n * @param lastName the last name.\n */\npublic void setName(String firstName, String lastName) {\n this.firstName = firstName;\n this.lastName = lastName;\n}\n\nThere is a very small amount of information in this so-called \"documentation\". I much prefer the Sphinx way where you (using the autodoc plugin) simply write\n.. autofunction:: set_name\n\nand Sphinx will then add a line to your documentation that says\n\nset_name(first_name, last_name)\n\nand from that every Python programmer should know what is going on.\n",
"See answers to can-i-document-python-code-with-doxygen-and-does-it-make-sense, especially those mentioning Epydoc and pydoctor.\n"
] |
[
12,
11,
4,
3,
2
] |
[] |
[] |
[
"data_structures",
"documentation",
"python"
] |
stackoverflow_0000389688_data_structures_documentation_python.txt
|
Q:
pywikipedia name wikiquote is not defined?
I'm writing a bot for Wikipedia but have a problem. When I want to get stuff from another Wikimedia site I get the error: name 'wikiquote' is not defined.
This is when I start the code off like this-
import wikipedia
site = wikiquote.getSite()
Yet if I was to start it with wikipedia written instead of wikiquote, it works. From what I can understand it should work on other Mediawiki sites?
Help gratefully appreciated!
Thanks!
A:
wikiquote is not defined or imported anywhere in your script. So it is understandable that your code does not work.
According to documentation of pywikipedia, you need to use this instead:
import wikipedia
site = wikipedia.getSite('en', 'wikiquote')
A:
If you're only running this for yourself, it doesn't matter, but pywikipedia bots should let the user control which site they're run against (and which account is used). Users specify these settings in the user-config.py file, as described here. In this case they'd set:
family = 'wikiquote'
which your bot should process. You can look at the login.py file to see an example of how to consume these configuration settings.
|
pywikipedia name wikiquote is not defined?
|
I'm writing a bot for Wikipedia but have a problem. When I want to get stuff from another Wikimedia site I get the error: name 'wikiquote' is not defined.
This is when I start the code off like this-
import wikipedia
site = wikiquote.getSite()
Yet if I was to start it with wikipedia written instead of wikiquote, it works. From what I can understand it should work on other Mediawiki sites?
Help gratefully appreciated!
Thanks!
|
[
"wikiquote is not defined or imported anywhere in your script. So it is understandable that your code does not work.\nAccording to documentation of pywikipedia, you need to use this instead:\nimport wikipedia\nsite = wikipedia.getSite('en', 'wikiquote')\n\n",
"If you're only running this for yourself, it doesn't matter, but pywikipedia bots should let the user control which site they're run against (and which account is used). Users specify these settings in the user-config.py file, as described here. In this case they'd set:\nfamily = 'wikiquote'\n\nwhich your bot should process. You can look at the login.py file to see an example of how to consume these configuration settings.\n"
] |
[
2,
0
] |
[] |
[] |
[
"python",
"python_2.6",
"pywikibot"
] |
stackoverflow_0000900306_python_python_2.6_pywikibot.txt
|
Q:
Is there a high-level way to read in lines from an output file and have the types recognized by the structure of the contents?
Suppose I have an output file that I want to read and each line was created by joining several types together, prepending and appending the list braces,
[('tupleValueA','tupleValueB'), 'someString', ('anotherTupleA','anotherTupleB')]
I want to read the lines in. Now I can read them in, and operate on the string to assign values and types but I was wondering if Python had a higher level method for this.
After building a function to do this I tried to find a higher level approach but didn't find one.
A:
What you are looking for is eval. But please keep in mind that this function will evaluate and execute the lines. So don't run it on untrusted input ever!
>>> print eval("[('tupleValueA', 1), 'someString']")
[('tupleValueA', 1), 'someString']
If you have control over the script that generate the output file, then I would suggest you use json encoding. JSON format is very similar to the python string representation of lists and dictionaries. And will be much more secure and robust to read from.
>>> import json
>>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
'["foo", {"bar": ["baz", null, 1.0, 2]}]'
>>> json.loads('["foo", {"bar": ["baz", null, 1.0, 2]}]')
["foo", {"bar": ["baz", null, 1.0, 2]}]
A:
The problem you describe is conventionally referred to as serialization. JavaScript Object Notation (JSON) is one popular serialization protocol.
A:
Probably it would be better to store the data with a module like pickle in the first place, instead of using normal strings. This way you avoid a lot of problems that come with eval and get a more robust solution.
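A minimal illustration of that pickle round trip (the file name and data are just examples):
import pickle

record = [('tupleValueA', 'tupleValueB'), 'someString', ('anotherTupleA', 'anotherTupleB')]

with open('records.pkl', 'wb') as f:
    pickle.dump(record, f)

with open('records.pkl', 'rb') as f:
    print pickle.load(f)   # the tuples and strings come back with their types intact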
|
Is there a high-level way to read in lines from an output file and have the types recognized by the structure of the contents?
|
Suppose I have an output file that I want to read and each line was created by joining several types together, prepending and appending the list braces,
[('tupleValueA','tupleValueB'), 'someString', ('anotherTupleA','anotherTupleB')]
I want to read the lines in. Now I can read them in, and operate on the string to assign values and types but I was wondering if Python had a higher level method for this.
After building a function to do this I tried to find a higher level approach but didn't find one.
|
[
"What you are looking for is eval. But please keep in mind that this function will evaluate and execute the lines. So don't run it on untrusted input ever!\n>>> print eval(\"[('tupleValueA', 1), 'someString']\")\n[('tupleValueA', 1), 'someString']\n\nIf you have control over the script that generate the output file, then I would suggest you use json encoding. JSON format is very similar to the python string representation of lists and dictionaries. And will be much more secure and robust to read from.\n>>> import json\n>>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])\n'[\"foo\", {\"bar\": [\"baz\", null, 1.0, 2]}]'\n>>> json.loads('[\"foo\", {\"bar\": [\"baz\", null, 1.0, 2]}]')\n[\"foo\", {\"bar\": [\"baz\", null, 1.0, 2]}]\n\n",
"The problem you describe is conventionally referred to as serialization. JavaScript Object Notation (JSON) is one popular serialization protocol.\n",
"Probably it would be better to store the data with a module like pickle in the first place, instead of using normal strings. This way you avoid a lot of problems that come with eval and get a more robust solution.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"python",
"string",
"tuples"
] |
stackoverflow_0000900396_python_string_tuples.txt
|
Q:
Passing Variables to Django Comment Views
Alright, I know I've asked similar questions, but I feel this is hopefully a bit different. I'm integrating django.comments into my application, and the more I play with it, the more I realize it may not even be worth my while at the end of the day. That aside, I've managed to add Captcha to my comments, and I've learned that customizing the form is a terrible idea (hiding that honeypot is stupidly difficult, and from what I can tell requires JS to hide. Pity.). That's alright though, I've managed to work with it. However, the templates for the comments (preview and posted) are frustrating.
When a user is sent to the preview or posted templates, I'd like my sidebars that have dynamic data to still be functional; however, they're not. Do I have to override/rewrite the comments views to push data to these views? At that point it seems like I'm rewriting a major chunk of the comment system anyway, and it'd almost be beneficial to just write my own in that case. I'm more than willing to do that, and totally understand that I'm not entitled to a perfect comments system from Django. I just want to make sure I'm thinking right, and that if I want more than what I get from the comment views, that rewriting them is my only path.
Surely someone's found a healthier way though, so I thought I'd poll the audience. Any thoughts? If you need more info, just lemme know!
A:
Dynamic data in sidebars is what template tags are for.
There's absolutely no need to muck around with the built-in views - just define the tags and add them to your templates.
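For example, a hypothetical sidebar tag (the app, model and template names are made up) in yourapp/templatetags/sidebar_tags.py:
from django import template
from yourapp.models import Entry

register = template.Library()

@register.inclusion_tag('sidebar/recent_entries.html')
def recent_entries(count=5):
    # evaluated on every render, so the sidebar stays dynamic even on the
    # built-in comment preview/posted templates
    return {'entries': Entry.objects.order_by('-created')[:count]}

Then {% load sidebar_tags %} and {% recent_entries 5 %} in your base template are enough; no changes to the comment views are needed.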
A:
I use template tags as well. Templates in Django are truly for displaying data only.
I think Django believes in separation between the Designers and the Developers. So, they are enforcing the idea that templates should be simple enough for a web designer (the Photoshop guys) to work with.
So, as long as you don't need complicated functionality, just pass the info to a filter and have it do the data manipulation and return the final string that you need.
|
Passing Variables to Django Comment Views
|
Alright, I know I've asked similar questions, but I feel this is hopefully a bit different. I'm integrating django.comments into my application, and the more I play with it, the more I realize it may not even be worth my while at the end of the day. That aside, I've managed to add Captcha to my comments, and I've learned that customizing the form is a terrible idea (hiding that honeypot is stupidly difficult, and from what I can tell requires JS to hide. Pity.). That's alright though, I've managed to work with it. However, the templates for the comments (preview and posted) are frustrating.
When a user is sent to the preview or posted templates, I'd like my sidebars that have dynamic data to still be functional; however, they're not. Do I have to override/rewrite the comments views to push data to these views? At that point it seems like I'm rewriting a major chunk of the comment system anyway, and it'd almost be beneficial to just write my own in that case. I'm more than willing to do that, and totally understand that I'm not entitled to a perfect comments system from Django. I just want to make sure I'm thinking right, and that if I want more than what I get from the comment views, that rewriting them is my only path.
Surely someone's found a healthier way though, so I thought I'd poll the audience. Any thoughts? If you need more info, just lemme know!
|
[
"Dynamic data in sidebars is what template tags are for.\nThere's absolutely no need to muck around with the built-in views - just define the tags add them to your templates.\n",
"I user template tags as well. Templates in Django are truly for displaying data only.\nI think Django believes in separation between the Designers and the Developers. So, they are enforcing the idea of templates should be simple enough for web designer to work with. (the photoshop guys)\nSo, as long as you don't need complected functionality, just pass the info to a filter and have it do the data manipulation and return the final string that you need.\n"
] |
[
3,
0
] |
[] |
[] |
[
"django",
"django_comments",
"python"
] |
stackoverflow_0000896532_django_django_comments_python.txt
|
Q:
Understanding python imports
In the process of learning Django and Python, I can't make sense of this.
(Example Notes:'helloworld' is the name of my project. It has 1 app called 'app'.)
from helloworld.views import * # <<-- this works
from helloworld import views # <<-- this doesn't work
from helloworld.app import views # <<-- but this works. why?
It seems like line #2 and #3 are practically the same. Why does line #2 not work?
Edit -- Added the source of the two files.
You might recognize this code from the Django Book project (http://www.djangobook.com/en/2.0)
helloworld/views.py
from django.shortcuts import render_to_response
from django.http import HttpResponse, Http404
import datetime
def hello(request):
return HttpResponse("Hello world")
def current_datetime(request):
current_date = datetime.datetime.now()
return render_to_response('current_datetime.html', locals())
def offset_datetime(request, offset):
try:
offset = int(offset)
except ValueError:
raise Http404()
next_time = datetime.datetime.now() + datetime.timedelta(hours=offset)
return render_to_response('offset_datetime.html', locals())
def display_meta(request):
values = request.META.items()
values.sort()
path = request.path
return render_to_response('metavalues.html', locals())
helloworld/app/views.py
from django.shortcuts import render_to_response
def search_form(request):
return render_to_response('search_form.html')
def search(request):
if 'q' in request.GET:
message = 'You searched for: %r' % request.GET['q']
else:
message = 'You searched for nothing.'
return render_to_response('search_results.html', locals())
A:
Python imports can import two different kinds of things: modules and objects.
import x
Imports an entire module named x.
import x.y
Imports a module named y and its container x. You refer to x.y.
When you created it, however, you created this directory structure
x
__init__.py
y.py
When you add to the import statement, you identify specific objects to pull from the module and move into the global namespace
import x # the module as a whole
x.a # Must pick items out of the module
x.b
from x import a, b # two things lifted out of the module
a # items are global
b
If helloworld is a package (a directory, with an __init__.py file), it typically doesn't contain any objects.
from x import y # isn't sensible
import x.y # importing a whole module.
Sometimes, you will have objects defined in the __init__.py file.
Generally, use "from module import x" to pick specific objects out of a module.
Use import module to import an entire module.
A:
from helloworld.views import * # <<-- this works
from helloworld import views # <<-- this doesn't work
from helloworld.app import views # <<-- but this works. why?
#2 and #3 are not the same.
The second one imports views from the package helloworld. The third one imports views from the package helloworld.app, which is a subpackage of helloworld. What it means is that views are specific to your django apps, and not your projects. If you had separate apps, how would you import views from each one? You have to specify the name of the app you want to import from, hence the syntax helloworld.app.
A:
As sykora alluded, helloworld is not in-and-of-itself a package, so #2 won't work. You would need a helloworld.py, appropriately set up.
I asked about import a couple of days ago, this might help:
Lay out import pathing in Python, straight and simple?
|
Understanding python imports
|
In the process of learning Django and Python, I can't make sense of this.
(Example Notes:'helloworld' is the name of my project. It has 1 app called 'app'.)
from helloworld.views import * # <<-- this works
from helloworld import views # <<-- this doesn't work
from helloworld.app import views # <<-- but this works. why?
It seems like line #2 and #3 are practically the same. Why does line #2 not work?
Edit -- Added the source of the two files.
You might recognize this code from the Django Book project (http://www.djangobook.com/en/2.0)
helloworld/views.py
from django.shortcuts import render_to_response
from django.http import HttpResponse, Http404
import datetime
def hello(request):
return HttpResponse("Hello world")
def current_datetime(request):
current_date = datetime.datetime.now()
return render_to_response('current_datetime.html', locals())
def offset_datetime(request, offset):
try:
offset = int(offset)
except ValueError:
raise Http404()
next_time = datetime.datetime.now() + datetime.timedelta(hours=offset)
return render_to_response('offset_datetime.html', locals())
def display_meta(request):
values = request.META.items()
values.sort()
path = request.path
return render_to_response('metavalues.html', locals())
helloworld/app/views.py
from django.shortcuts import render_to_response
def search_form(request):
return render_to_response('search_form.html')
def search(request):
if 'q' in request.GET:
message = 'You searched for: %r' % request.GET['q']
else:
message = 'You searched for nothing.'
return render_to_response('search_results.html', locals())
|
[
"Python imports can import two different kinds of things: modules and objects.\nimport x\n\nImports an entire module named x.\nimport x.y\n\nImports a module named y and it's container x. You refer to x.y. \nWhen you created it, however, you created this directory structure\nx\n __init__.py\n y.py\n\nWhen you add to the import statement, you identify specific objects to pull from the module and move into the global namespace\nimport x # the module as a whole\nx.a # Must pick items out of the module\nx.b\n\nfrom x import a, b # two things lifted out of the module\na # items are global\nb\n\nIf helloworld is a package (a directory, with an __init__.py file), it typically doesn't contain any objects.\nfrom x import y # isn't sensible\nimport x.y # importing a whole module.\n\nSometimes, you will have objects defined in the __init__.py file.\nGenerally, use \"from module import x\" to pick specific objects out of a module.\nUse import module to import an entire module.\n",
"from helloworld.views import * # <<-- this works\nfrom helloworld import views # <<-- this doesn't work\nfrom helloworld.app import views # <<-- but this works. why?\n\n#2 and #3 are not the same.\nThe second one imports views from the package helloworld. The third one imports views from the package helloworld.app, which is a subpackage of helloworld. What it means is that views are specific to your django apps, and not your projects. If you had separate apps, how would you import views from each one? You have to specify the name of the app you want to import from, hence the syntax helloworld.app.\n",
"As sykora alluded, helloworld is not in-and-of-itself a package, so #2 won't work. You would need a helloworld.py, appropriately set up.\nI asked about import a couple of days ago, this might help:\nLay out import pathing in Python, straight and simple?\n"
] |
[
11,
4,
1
] |
[] |
[] |
[
"django",
"import",
"python"
] |
stackoverflow_0000900591_django_import_python.txt
|
Q:
Creating a python win32 service
I am currently trying to create a win32 service using pywin32. My main point of reference has been this tutorial:
http://code.activestate.com/recipes/551780/
What I don't understand is the initialization process, since the Daemon is never initialized directly by Daemon(); instead, from my understanding, it's initialized by the following:
mydaemon = Daemon
__svc_regClass__(mydaemon, "foo", "foo display", "foo description")
__svc_install__(mydaemon)
Where svc_install handles the initialization by calling Daemon.init() and passing some arguments to it.
But how can I initialize the daemon object without initializing the service? I want to do a few things before I init the service. Does anyone have any ideas?
class Daemon(win32serviceutil.ServiceFramework):
def __init__(self, args):
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
def SvcDoRun(self):
self.run()
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
win32event.SetEvent(self.hWaitStop)
def start(self):
pass
def stop(self):
self.SvcStop()
def run(self):
pass
def __svc_install__(cls):
win32api.SetConsoleCtrlHandler(lambda x: True, True)
try:
win32serviceutil.InstallService(
cls._svc_reg_class_,
cls._svc_name_,
cls._svc_display_name_,
startType = win32service.SERVICE_AUTO_START
)
print "Installed"
except Exception, err:
print str(err)
def __svc_regClass__(cls, name, display_name, description):
#Bind the values to the service name
cls._svc_name_ = name
cls._svc_display_name_ = display_name
cls._svc_description_ = description
try:
module_path = sys.modules[cls.__module__].__file__
except AttributeError:
from sys import executable
module_path = executable
module_file = os.path.splitext(os.path.abspath(module_path))[0]
cls._svc_reg_class_ = '%s.%s' % (module_file, cls.__name__)
A:
I just created a simple "how to" where the program is in one module and the service is in another; it uses py2exe to create the win32 service, which I believe is the best you can do for users who don't want to mess with the Python interpreter or other dependencies.
You can check my tutorial here: Create win32 services using Python and py2exe
A:
I've never used these APIs, but digging through the code, it looks like the class passed in is used to register the name of the class in the registry, so you can't do any initialization of your own. But there's a method called GetServiceCustomOption that may help:
http://mail.python.org/pipermail/python-win32/2006-April/004518.html
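A sketch of how that could look, assuming the win32serviceutil helpers behave as documented ("config_path" is a made-up option name):
import win32serviceutil

# at install/configuration time, outside the service:
win32serviceutil.SetServiceCustomOption("foo", "config_path", r"C:\foo\settings.ini")

# inside Daemon.SvcDoRun, before calling self.run():
config_path = win32serviceutil.GetServiceCustomOption("foo", "config_path", None)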
|
Creating a python win32 service
|
I am currently trying to create a win32 service using pywin32. My main point of reference has been this tutorial:
http://code.activestate.com/recipes/551780/
What I don't understand is the initialization process, since the Daemon is never initialized directly by Daemon(); instead, from my understanding, it's initialized by the following:
mydaemon = Daemon
__svc_regClass__(mydaemon, "foo", "foo display", "foo description")
__svc_install__(mydaemon)
Where svc_install handles the initialization by calling Daemon.init() and passing some arguments to it.
But how can I initialize the daemon object without initializing the service? I want to do a few things before I init the service. Does anyone have any ideas?
class Daemon(win32serviceutil.ServiceFramework):
def __init__(self, args):
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
def SvcDoRun(self):
self.run()
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
win32event.SetEvent(self.hWaitStop)
def start(self):
pass
def stop(self):
self.SvcStop()
def run(self):
pass
def __svc_install__(cls):
win32api.SetConsoleCtrlHandler(lambda x: True, True)
try:
win32serviceutil.InstallService(
cls._svc_reg_class_,
cls._svc_name_,
cls._svc_display_name_,
startType = win32service.SERVICE_AUTO_START
)
print "Installed"
except Exception, err:
print str(err)
def __svc_regClass__(cls, name, display_name, description):
#Bind the values to the service name
cls._svc_name_ = name
cls._svc_display_name_ = display_name
cls._svc_description_ = description
try:
module_path = sys.modules[cls.__module__].__file__
except AttributeError:
from sys import executable
module_path = executable
module_file = os.path.splitext(os.path.abspath(module_path))[0]
cls._svc_reg_class_ = '%s.%s' % (module_file, cls.__name__)
|
[
"I just create a simple \"how to\" where the program is in one module and the service is in another place, it uses py2exe to create the win32 service, which I believe is the best you can do for your users that don't want to mess with the python interpreter or other dependencies.\nYou can check my tutorial here: Create win32 services using Python and py2exe\n",
"I've never used these APIs, but digging through the code, it looks like the class passed in is used to register the name of the class in the registry, so you can't do any initialization of your own. But there's a method called GetServiceCustomOption that may help:\nhttp://mail.python.org/pipermail/python-win32/2006-April/004518.html\n"
] |
[
10,
6
] |
[] |
[] |
[
"python",
"pywin32",
"winapi"
] |
stackoverflow_0000263296_python_pywin32_winapi.txt
|
Q:
How to work with threads in pygtk
I have a problem with threads in pygtk. My application consists of a program that downloads pictures off the internet and then displays them with pygtk. The problem is that in order to do this and keep the GUI responsive, I need to use threads.
So I got into a callback after the user clicked on the button "Download pictures" and I call the method to download the pictures that is within that same class.
thread.start_new_thread(self.images_download, (path, pages))
This won't work. The only way I get my program to get into the thread is by using
gtk.threads_init()
Before starting any thread. Now it downloads the pictures but the GUI remains unresponsive.
I googled this and I tried putting gtk.threads_enter and gtk.threads_leave around the threads but it just doesn't work.
A:
Your question is a bit vague, and without a reference to your actual code it's hard to speculate what you're doing wrong.
So I'll give you some pointers to read, then speculate wildly based on experience.
First of all, you seem to think that you can only keep the GUI responsive by using threads. This is not true. You can also write your code asynchronously, and do everything in a single-threaded application. Twisted is built on this programming model. I recently made a blog post that explains how I created an asynchronous task interface, and example runners both for CLI and GTK+. You can look at those examples to see how tasks can be implemented asynchronously, and the UI still gets updated.
Second, if you prefer to use threads for some reason, you will need to understand the GTK+ threading model a little.
You should start by reading The PyGTK FAQ entry on the subject, and you might find this blog post easy to understand too.
Now, on to speculation. I am guessing that you are trying to update your GTK UI from the thread, and not handling the locking properly. If this is the case, you are better off for now deferring all your UI updates you want to do from threads to the main thread by using gobject.idle_add() This way, all UI calls will be made from the main thread. It is an easier mental model to follow in your programming.
Once you feel you really understand the threading and locking models, you could consider updating the UI from your threads, but it's easy to miss a threads_enter()/threads_leave()
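A rough illustration of that idle_add pattern (download_one and add_image_to_ui stand in for your own methods, and gobject.threads_init() should be called once at startup):
import threading
import gobject

def download_worker(app, path, pages):
    # runs in a background thread and never touches GTK directly
    for page in pages:
        pixbuf = app.download_one(path, page)           # slow network work (assumed method)
        gobject.idle_add(app.add_image_to_ui, pixbuf)   # UI update runs in the GTK main loop

# from the "Download pictures" callback:
#   threading.Thread(target=download_worker, args=(self, path, pages)).start()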
A:
You can use gtk.gdk.threads_init() in order to allow any thread to modify the UI with the corresponding gtk.gdk.threads_enter() and gtk.gdk.threads_leave() lock, but the problem with this is that it doesn't work well on Windows. I have tested it on Linux and it performs quite well, but I had no luck making this work over win32.
=== Edit ===
I have been browsing about this; you could make use of gobject.io_add_watch to check if there is something in your socket, grab it, and then update the GUI. Check my post about this:
Sockets (and some other files) and PyGTK without threads.
|
How to work with threads in pygtk
|
I have a problem with threads in pygtk. My application consists of a program that downloads pictures off the internet and then displays them with pygtk. The problem is that in order to do this and keep the GUI responsive, I need to use threads.
So I got into a callback after the user clicked on the button "Download pictures" and I call the method to download the pictures that is within that same class.
thread.start_new_thread(self.images_download, (path, pages))
This won't work. The only way I get my program to get into the thread is by using
gtk.threads_init()
Before starting any thread. Now it downloads the pictures but the GUI remains unresponsive.
I googled this and I tried putting gtk.threads_enter and gtk.threads_leave around the threads but it just doesn't work.
|
[
"Your question is a bit vague, and without a reference to your actual code it's hard to speculate what you're doing wrong.\nSo I'll give you some pointers to read, then speculate wildly based on experience.\nFirst of all, you seem to think that you can only keep the GUI responsive by using threads. This is not true. You can also write your code asynchronously, and do everything in a single-threaded application. Twisted is built on this programming model. I recently made a blog post that explains how I created an asynchronous task interface, and example runners both for CLI and GTK+. You can look at those examples to see how tasks can be implemented asynchronously, and the UI still gets updated.\nSecond, if you prefer to use threads for some reason, you will need to understand the GTK+ threading model a little.\nYou should start by reading The PyGTK FAQ entry on the subject, and you might find this blog post easy to understand too.\nNow, on to speculation. I am guessing that you are trying to update your GTK UI from the thread, and not handling the locking properly. If this is the case, you are better off for now deferring all your UI updates you want to do from threads to the main thread by using gobject.idle_add() This way, all UI calls will be made from the main thread. It is an easier mental model to follow in your programming.\nOnce you feel you really understand the threading and locking models, you could consider updating the UI from your threads, but it's easy to miss a threads_enter()/threads_leave()\n",
"You can use gtk.gdk.threads_init() in order to allow any thread modify the UI with the respecting gtk.gdk.threads_enter() and gtk.gdk.theads_leave() lock, but, the problem with this is that doesn't work well on windows. I have tested it on Linux and performs quite well, but I had no luck making this to work over win32.\n=== Edit ===\nI have been browsing about this, you could make use of the gobject.io_add_watch to check if there is something in your socket, grab it and then update the GUI. check my post about this: \nSockets (and some other files) and PyGTK without threads.\n"
] |
[
12,
1
] |
[] |
[] |
[
"multithreading",
"pygtk",
"python"
] |
stackoverflow_0000809818_multithreading_pygtk_python.txt
|
Q:
drawing a pixbuf onto a drawing area using pygtk and glade
I'm trying to make a GTK application in Python where I can just draw a loaded image onto the screen where I click on it. The way I am trying to do this is by loading the image into a pixbuf, and then drawing that pixbuf onto a drawing area.
the main line of code is here:
def drawing_refresh(self, widget, event):
#clear the screen
widget.window.draw_rectangle(widget.get_style().white_gc, True, 0, 0, 400, 400)
for n in self.nodes:
widget.window.draw_pixbuf(widget.get_style().fg_gc[gtk.STATE_NORMAL],
self.node_image, 0, 0, 0, 0)
This should just draw the pixbuf onto the image in the top left corner, but nothing shows but the white image. I have tested that the pixbuf loads by putting it into a gtk image. What am I doing wrong here?
A:
I found out I just need to get the function to call another expose event with widget.queue_draw() at the end of the function. The function was only being called once at the start, and there were no nodes available at this point so nothing was being drawn.
A:
You can make use of cairo to do this. First, create a gtk.DrawingArea based class, and connect the expose-event to your expose func.
class draw(gtk.gdk.DrawingArea):
def __init__(self):
self.connect('expose-event', self._do_expose)
self.pixbuf = self.gen_pixbuf_from_file(PATH_TO_THE_FILE)
def _do_expose(self, widget, event):
cr = self.window.cairo_create()
cr.set_operator(cairo.OPERATOR_SOURCE)
cr.set_source_rgb(1,1,1)
cr.paint()
cr.set_source_pixbuf(self.pixbuf, 0, 0)
cr.paint()
This will draw the image every time the expose-event is emitted.
|
drawing a pixbuf onto a drawing area using pygtk and glade
|
I'm trying to make a GTK application in Python where I can just draw a loaded image onto the screen where I click on it. The way I am trying to do this is by loading the image into a pixbuf, and then drawing that pixbuf onto a drawing area.
the main line of code is here:
def drawing_refresh(self, widget, event):
#clear the screen
widget.window.draw_rectangle(widget.get_style().white_gc, True, 0, 0, 400, 400)
for n in self.nodes:
widget.window.draw_pixbuf(widget.get_style().fg_gc[gtk.STATE_NORMAL],
self.node_image, 0, 0, 0, 0)
This should just draw the pixbuf onto the image in the top left corner, but nothing shows but the white image. I have tested that the pixbuf loads by putting it into a gtk image. What am I doing wrong here?
|
[
"I found out I just need to get the function to call another expose event with widget.queue_draw() at the end of the function. The function was only being called once at the start, and there were no nodes available at this point so nothing was being drawn.\n",
"You can make use of cairo to do this. First, create a gtk.DrawingArea based class, and connect the expose-event to your expose func.\nclass draw(gtk.gdk.DrawingArea):\n def __init__(self):\n self.connect('expose-event', self._do_expose)\n self.pixbuf = self.gen_pixbuf_from_file(PATH_TO_THE_FILE)\n\n def _do_expose(self, widget, event):\n cr = self.window.cairo_create()\n cr.set_operator(cairo.OPERATOR_SOURCE)\n cr.set_source_rgb(1,1,1)\n cr.paint()\n cr.set_source_pixbuf(self.pixbuf, 0, 0)\n cr.paint()\n\nThis will draw the image every time the expose-event is emited.\n"
] |
[
3,
3
] |
[] |
[] |
[
"drawing",
"glade",
"pygtk",
"python"
] |
stackoverflow_0000775528_drawing_glade_pygtk_python.txt
|
Q:
File I/O in the Python 3 C API
The C API in Python 3.0 has changed (deprecated) many of the functions for File Objects.
Before, in 2.X, you could use
PyObject* PyFile_FromString(char *filename, char *mode)
to create a Python file object, e.g:
PyObject *myFile = PyFile_FromString("test.txt", "r");
...but such function no longer exists in Python 3.0.
What would be the Python 3.0 equivalent to such call?
A:
You can do it the old(new?)-fashioned way, by just calling the io module.
This code works, but it does no error checking. See the docs for explanation.
PyObject *ioMod, *openedFile;
PyGILState_STATE gilState = PyGILState_Ensure();
ioMod = PyImport_ImportModule("io");
openedFile = PyObject_CallMethod(ioMod, "open", "ss", "foo.txt", "wb");
Py_DECREF(ioMod);
PyObject_CallMethod(openedFile, "write", "y", "Written from Python C API!\n");
PyObject_CallMethod(openedFile, "flush", NULL);
PyObject_CallMethod(openedFile, "close", NULL);
Py_DECREF(openedFile);
PyGILState_Release(gilState);
Py_Finalize();
A:
This page claims the API is:
PyFile_FromFd(int fd, char *name, char *mode, int buffering, char *encoding, char *newline, int closefd);
Not sure if that means it's not possible to have Python open the file from the filename, but that should be trivial to do yourself, in C.
|
File I/O in the Python 3 C API
|
The C API in Python 3.0 has changed (deprecated) many of the functions for File Objects.
Before, in 2.X, you could use
PyObject* PyFile_FromString(char *filename, char *mode)
to create a Python file object, e.g:
PyObject *myFile = PyFile_FromString("test.txt", "r");
...but such function no longer exists in Python 3.0.
What would be the Python 3.0 equivalent to such call?
|
[
"You can do it the old(new?)-fashioned way, by just calling the io module.\nThis code works, but it does no error checking. See the docs for explanation.\nPyObject *ioMod, *openedFile;\n\nPyGILState_STATE gilState = PyGILState_Ensure();\n\nioMod = PyImport_ImportModule(\"io\");\n\nopenedFile = PyObject_CallMethod(ioMod, \"open\", \"ss\", \"foo.txt\", \"wb\");\nPy_DECREF(ioMod);\n\nPyObject_CallMethod(openedFile, \"write\", \"y\", \"Written from Python C API!\\n\");\nPyObject_CallMethod(openedFile, \"flush\", NULL);\nPyObject_CallMethod(openedFile, \"close\", NULL);\nPy_DECREF(openedFile);\n\nPyGILState_Release(gilState);\nPy_Finalize();\n\n",
"This page claims the API is:\nPyFile_FromFd(int fd, char *name, char *mode, int buffering, char *encoding, char *newline, int closefd);\n\nNot sure if that means it's not possible to have Python open the file from the filename, but that should be trivial to do yourself, in C.\n"
] |
[
10,
4
] |
[] |
[] |
[
"python",
"python_3.x",
"python_c_api"
] |
stackoverflow_0000898136_python_python_3.x_python_c_api.txt
|
Q:
Satchmo donations
Can anyone share some pointers on building a Donations module for Satchmo? I'm comfortable customizing Satchmo's product models etc but unable to find anything related to Donations
I realize it's possible to create a Donations virtual product but as far as I can tell this still requires setting the amount beforehand ($5, $10 etc). I want users to be able to donate arbitrary amounts
A:
It looks like the satchmo_cart_details_query signal is the way to go about doing this. It allows you to add a price change value (in my case, donation amount) to a cart item
I'll post the full solution if anyone is interested
|
Satchmo donations
|
Can anyone share some pointers on building a Donations module for Satchmo? I'm comfortable customizing Satchmo's product models etc but unable to find anything related to Donations
I realize it's possible to create a Donations virtual product but as far as I can tell this still requires setting the amount beforehand ($5, $10 etc). I want users to be able to donate arbitrary amounts
|
[
"It looks like the satchmo_cart_details_query signal is the way to go about doing this. It allows you to add a price change value (in my case, donation amount) to a cart item\nI'll post the full solution if anyone is interested\n"
] |
[
3
] |
[] |
[] |
[
"django",
"e_commerce",
"python",
"satchmo"
] |
stackoverflow_0000891934_django_e_commerce_python_satchmo.txt
|
Q:
Replace in Python-* equivalent?
If I am finding & replacing some text, how can I get it to replace text that will change each day, i.e. anything between (( and )), whatever it is?
Cheers!
A:
Use regular expressions (http://docs.python.org/library/re.html)?
Could you please be more specific, I don't think I fully understand what you are trying to accomplish.
EDIT:
Ok, now I see. This may be done even easier, but here goes:
>>> import re
>>> s = "foo(bar)whatever"
>>> r = re.compile(r"(\()(.+?)(\))")
>>> r.sub(r"\1baz\3",s)
'foo(baz)whatever'
For multiple levels of parentheses this will not work, or rather it WILL work, but will do something you probably don't want it to do.
Oh hey, as a bonus here's the same regular expression, only now it will replace the string in the innermost parentheses:
r1 = re.compile(r"(\()([^)^(]+?)(\))")
|
Replace in Python-* equivalent?
|
If I am finding & replacing some text, how can I get it to replace text that will change each day, i.e. anything between (( and )), whatever it is?
Cheers!
|
[
"Use regular expressions (http://docs.python.org/library/re.html)?\nCould you please be more specific, I don't think I fully understand what you are trying to accomplish.\nEDIT:\nOk, now I see. This may be done even easier, but here goes:\n>>> import re\n\n>>> s = \"foo(bar)whatever\"\n>>> r = re.compile(r\"(\\()(.+?)(\\))\")\n>>> r.sub(r\"\\1baz\\3\",s)\n'foo(baz)whatever'\n\nFor multiple levels of parentheses this will not work, or rather it WILL work, but will do something you probably don't want it to do.\nOh hey, as a bonus here's the same regular expression, only now it will replace the string in the innermost parentheses:\nr1 = re.compile(r\"(\\()([^)^(]+?)(\\))\")\n\n"
] |
[
4
] |
[] |
[] |
[
"python",
"replace"
] |
stackoverflow_0000901074_python_replace.txt
|
Q:
Need help with the class and instance concept in Python
I have read several pieces of documentation already, but the definitions of "class" and "instance" still aren't really clear to me.
It looks like a "class" is a combination of functions or methods that return some result; is that correct? And how about the instance? I read that you work with the class you create through the instance, but wouldn't it be easier to just work directly with the class?
Sometimes getting the concepts of the language is harder than working with it.
A:
Your question is really rather broad as classes and instances/objects are vital parts of object-oriented programming, so this is not really Python specific. I recommend you buy some books on this as, while initially basic, it can get pretty in-depth. In essense, however:
The most popular and developed model of OOP is a class-based model, as opposed to an object-based model. In this model, objects are entities that combine state (i.e., data), behavior (i.e., procedures, or methods) and identity (unique existence among all other objects). The structure and behavior of an object are defined by a class, which is a definition, or blueprint, of all objects of a specific type. An object must be explicitly created based on a class and an object thus created is considered to be an instance of that class. An object is similar to a structure, with the addition of method pointers, member access control, and an implicit data member which locates instances of the class (i.e. actual objects of that class) in the class hierarchy (essential for runtime inheritance features).
So you would, for example, define a Dog class, and create instances of particular dogs:
>>> class Dog():
... def __init__(self, name, breed):
... self.name = name
... self.breed = breed
... def talk(self):
... print "Hi, my name is " + self.name + ", I am a " + self.breed
...
>>> skip = Dog('Skip','Bulldog')
>>> spot = Dog('Spot','Dalmatian')
>>> spot.talk()
Hi, my name is Spot, I am a Dalmatian
>>> skip.talk()
Hi, my name is Skip, I am a Bulldog
While this example is silly, you can then start seeing how you might define a Client class that sets a blueprint for what a Client is, has methods to perform actions on a particular client, then manipulate a particular instance of a client by creating an object and calling these methods in that context.
Sometimes, however, you have methods of a class that don't really make sense being accessed through an instance of the class, but more from the class itself. These are known as static methods.
A:
I am not sure of what level of knowledge you have, so I apologize if this answer is too simplified (then just ignore it).
A class is a template for an object. Like a blueprint for a car. The instance of a class is like an actual car. So you have one blueprint, but you can have several different instances of cars. The blueprint and the car are different things.
So you make a class that describes what an instance of that class can do and what properties it should have. Then you "build" the instance and get an object that you can work with.
A:
It's fairly simple actually. You know how in python they say "everything is an object". Well in simplistic terms you can think of any object as being an 'instance' and the instructions to create an object as the class. Or in biological terms DNA is the class and you are an instance of DNA.
class HumanDNA(): # class
... class attributes ...
you = HumanDNA() # instance
A:
See http://homepage.mac.com/s_lott/books/python/htmlchunks/ch21.html
Object-oriented programming permits us
to organize our programs around the
interactions of objects. A class
provides the definition of the
structure and behavior of the objects;
each object is an instance of a class.
Objects ("instances") are things which interact, do work, persist in the file system, etc.
Classes are the definitions for the object's behavior.
Also, a class creates new objects that are members of that class (share common structure and behavior)
A:
In part it is confusing due to the dynamically typed nature of Python, which allows you to operate on a class and an instance in essentially the same way. In other languages, the difference is more concrete in that a class provides a template by which to create an object (instance) and cannot be as directly manipulated as in Python. The benefit of operating on the instance rather than the class is that the class can provide a prototype upon which instances are created.
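For instance (purely illustrative):
class Car(object):
    wheels = 4              # class attribute, shared by every instance

mine = Car()                # an instance built from the class
mine.colour = 'red'         # instance attribute, exists only on this object
print Car.wheels, mine.wheels, mine.colour   # 4 4 red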
|
Need help with the class and instance concept in Python
|
I have read several pieces of documentation already, but the definitions of "class" and "instance" still aren't really clear to me.
It looks like a "class" is a combination of functions or methods that return some result; is that correct? And how about the instance? I read that you work with the class you create through the instance, but wouldn't it be easier to just work directly with the class?
Sometimes getting the concepts of the language is harder than working with it.
|
[
"Your question is really rather broad as classes and instances/objects are vital parts of object-oriented programming, so this is not really Python specific. I recommend you buy some books on this as, while initially basic, it can get pretty in-depth. In essense, however:\n\nThe most popular and developed model of OOP is a class-based model, as opposed to an object-based model. In this model, objects are entities that combine state (i.e., data), behavior (i.e., procedures, or methods) and identity (unique existence among all other objects). The structure and behavior of an object are defined by a class, which is a definition, or blueprint, of all objects of a specific type. An object must be explicitly created based on a class and an object thus created is considered to be an instance of that class. An object is similar to a structure, with the addition of method pointers, member access control, and an implicit data member which locates instances of the class (i.e. actual objects of that class) in the class hierarchy (essential for runtime inheritance features).\n\nSo you would, for example, define a Dog class, and create instances of particular dogs:\n>>> class Dog():\n... def __init__(self, name, breed):\n... self.name = name\n... self.breed = breed\n... def talk(self):\n... print \"Hi, my name is \" + self.name + \", I am a \" + self.breed\n...\n>>> skip = Dog('Skip','Bulldog')\n>>> spot = Dog('Spot','Dalmatian')\n>>> spot.talk()\nHi, my name is Spot, I am a Dalmatian\n>>> skip.talk()\nHi, my name is Skip, I am a Bulldog\n\nWhile this example is silly, you can then start seeing how you might define a Client class that sets a blueprint for what a Client is, has methods to perform actions on a particular client, then manipulate a particular instance of a client by creating an object and calling these methods in that context.\nSometimes, however, you have methods of a class that don't really make sense being accessed through an instance of the class, but more from the class itself. These are known as static methods. \n",
"I am not sure of what level of knowledge you have, so I apologize if this answer is too simplified (then just ignore it).\nA class is a template for an object. Like a blueprint for a car. The instance of a class is like an actual car. So you have one blueprint, but you can have several different instances of cars. The blueprint and the car are different things.\nSo you make a class that describes what an instance of that class can do and what properties it should have. Then you \"build\" the instance and get an object that you can work with.\n",
"It's fairly simple actually. You know how in python they say \"everything is an object\". Well in simplistic terms you can think of any object as being an 'instance' and the instructions to create an object as the class. Or in biological terms DNA is the class and you are an instance of DNA.\nclass HumanDNA(): # class\n ... class attributes ...\n\nyou = HumanDNA() # instance\n\n",
"See http://homepage.mac.com/s_lott/books/python/htmlchunks/ch21.html\n\nObject-oriented programming permits us\n to organize our programs around the\n interactions of objects. A class\n provides the definition of the\n structure and behavior of the objects;\n each object is an instance of a class.\n\nObjects (\"instances\") are things which interact, do work, persist in the file system, etc.\nClasses are the definitions for the object's behavior.\nAlso, a class creates new objects that are members of that class (share common structure and behavior)\n",
"In part it is confusing due to the dynamically typed nature of Python, which allows you to operate on a class and an instance in essentially the same way. In other languages, the difference is more concrete in that a class provides a template by which to create an object (instance) and cannot be as directly manipulated as in Python. The benefit of operating on the instance rather than the class is that the class can provide a prototype upon which instances are created. \n"
] |
[
7,
3,
2,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000900929_python.txt
|
Q:
Dynamic function calls in Python using XMLRPC
I'm writing a class which I intend to use to create subroutines, constructor as following:
def __init__(self,menuText,RPC_params,RPC_call):
#Treat the params
#Call the given RPC_call with the treated params
The problem is that I want to call the function on the pattern "rpc.serve.(function name here)(params)",
where rpc is a serverProxy object that I'm using to call XMLRPC functions, and serve.-function name- is the method I'm calling on the XMLRPC-server.
I've looked at Calling a function from a string with the function's name in Python, but seeing how my serverProxy object doesn't know which "remote attributes" it has, I can't use the getattr() function to retrieve the method.
I've seen an example of making a dictionary to call a given function, but is there no way to make the function truly dynamic by creating the function call as you would create a String?
Like running a String as a function?
A:
You can use getattr to get the function name from the server proxy, so calling the function like this will work:
getattr(rpc, function_name)(*params)
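For a slightly fuller picture, here is a rough sketch using xmlrpclib; the server URL, method name and arguments are placeholders, not taken from the question:
import xmlrpclib

rpc = xmlrpclib.ServerProxy("http://localhost:8000")

def call_remote(proxy, function_name, params):
    # builds proxy.<function_name>(*params) dynamically from a string;
    # a dotted name like "serve.foo" is passed through as the XML-RPC method name
    return getattr(proxy, function_name)(*params)

result = call_remote(rpc, "serve.some_method", ("arg1", 42))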
|
Dynamic function calls in Python using XMLRPC
|
I'm writing a class which I intend to use to create subroutines, constructor as following:
def __init__(self,menuText,RPC_params,RPC_call):
#Treat the params
#Call the given RPC_call with the treated params
The problem is that I want to call the function on the pattern "rpc.serve.(function name here)(params)",
where rpc is a serverProxy object that I'm using to call XMLRPC functions, and serve.-function name- is the method I'm calling on the XMLRPC-server.
I've looked at Calling a function from a string with the function's name in Python, but seeing how my serverProxy object doesn't know which "remote attributes" it has, I can't use the getattr() function to retrieve the method.
I've seen an example of making a dictionary to call a given function, but is there no way to make the function truly dynamic by creating the function call as you would create a String?
Like running a String as a function?
|
[
"You can use getattr to get the function name from the server proxy, so calling the function like this will work:\ngetattr(rpc, function_name)(*params)\n\n"
] |
[
2
] |
[] |
[] |
[
"function",
"python",
"rpc",
"xml_rpc"
] |
stackoverflow_0000901391_function_python_rpc_xml_rpc.txt
|
Q:
finding substring
Thanks in advance. I want to find all the substrings that occur between K and N, even though K and N may occur in between any number of times.
for example
a='KANNKAAN'
OUTPUT;
[KANNKAAN, KANN , KAN ,KAAN]
A:
import re
def occurences(ch_searched, str_input):
return [i.start() for i in re.finditer(ch_searched, str_input)]
def betweeners(str_input, ch_from, ch_to):
starts = occurences(ch_from, str_input)
ends = occurences(ch_to, str_input)
result = []
for start in starts:
for end in ends:
if start<end:
result.append( str_input[start:end+1] )
return result
print betweeners('KANNKAAN', "K", "N")
Is that what You need?
A:
Another way:
def findbetween(text, begin, end):
for match in re.findall(begin + '.*' +end, text):
yield match
for m in findbetween(match[1:], begin, end):
yield m
for m in findbetween(match[:-1], begin, end):
yield m
>>> list(findbetween('KANNKAAN', 'K', 'N'))
['KANNKAAN', 'KAAN', 'KANN', 'KAN']
|
finding substring
|
Thanks in advance. I want to find all the substrings that occur between K and N, even though K and N may occur in between any number of times.
for example
a='KANNKAAN'
OUTPUT;
[KANNKAAN, KANN , KAN ,KAAN]
|
[
"import re\n\ndef occurences(ch_searched, str_input):\n return [i.start() for i in re.finditer(ch_searched, str_input)]\n\ndef betweeners(str_input, ch_from, ch_to):\n starts = occurences(ch_from, str_input)\n ends = occurences(ch_to, str_input)\n result = []\n for start in starts:\n for end in ends:\n if start<end:\n result.append( str_input[start:end+1] )\n return result\n\nprint betweeners('KANNKAAN', \"K\", \"N\")\n\nIs that what You need?\n",
"Another way:\ndef findbetween(text, begin, end):\n for match in re.findall(begin + '.*' +end, text):\n yield match\n for m in findbetween(match[1:], begin, end):\n yield m\n for m in findbetween(match[:-1], begin, end):\n yield m\n\n>>> list(findbetween('KANNKAAN', 'K', 'N'))\n['KANNKAAN', 'KAAN', 'KANN', 'KAN']\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000901070_python.txt
|
Q:
Python: Joining Multiple Lists to one single Sentence
Howdy, I've got multiple lists. For example:
[u'This/ABC']
[u'is/ABC']
[u'not/ABC']
[u'even/ABC']
[u'close/ABC']
[u'to/ABC']
[u'funny/ABC']
[u'./ABC']
[u'O/ABC']
[u'noez/ABC']
[u'!/ABC']
I need to join this List to
This/ABC is/ABC not/ABC even/ABC close/ABC to/ABC funny/ABC ./ABC
O/ABC noez/ABC !/ABC
How do I do that please? Yes, with the empty space in between!
A:
If you put them all in a list, for example like this:
a = [
[u'This/ABC'],
[u'is/ABC'],
...
]
You can get your result by adding all the lists and using a regular join on the result:
result = ' '.join(sum(a, []))
After re-reading the question a couple of times, I suppose you also want that empty line. This is just more of the same. Add:
b = [
[u'O/ABC'],
[u'HAI/ABC'],
...
]
lines = [a, b]
result = '\n\n'.join([' '.join(sum(line, [])) for line in lines])
A:
To join lists, try the chain function in the itertools module. For example, you can try
import itertools
print ' '.join(itertools.chain(*mylist))
if the new line between the two lists is intentional, then add '\n' at the end of the first list
import itertools
a = [[u'This/ABZ'], [u'is/ABZ'], ....]
b = [[u'O/ABZ'], [u'O/noez'], ...]
a.append('\n')
print ' '.join(itertools.chain(*(a + b)))
A:
Easy:
x = [[u'O/ABC'], [u'noez/ABC'], [u'!/ABC']]
print ' '.join(y[0] for y in x)
A:
If you put all your lists in one list, you can do it like this:
' '.join(e[0] for e in [[u'This/ABC'], [u'is/ABC']])
|
Python: Joining Multiple Lists to one single Sentence
|
Howdy, I've got multiple lists. For example:
[u'This/ABC']
[u'is/ABC']
[u'not/ABC']
[u'even/ABC']
[u'close/ABC']
[u'to/ABC']
[u'funny/ABC']
[u'./ABC']
[u'O/ABC']
[u'noez/ABC']
[u'!/ABC']
I need to join this List to
This/ABC is/ABC not/ABC even/ABC close/ABC to/ABC funny/ABC ./ABC
O/ABC noez/ABC !/ABC
How do I do that please? Yes, with the empty space in between!
|
[
"If you put them all in a list, for example like this:\na = [\n [u'This/ABC'],\n [u'is/ABC'],\n ...\n]\n\nYou can get your result by adding all the lists and using a regular join on the result:\nresult = ' '.join(sum(a, []))\n\n\nAfter re-reading the question a couple of times, I suppose you also want that empty line. This is just more of the same. Add:\nb = [\n [u'O/ABC'],\n [u'HAI/ABC'],\n ...\n]\n\nlines = [a, b]\n\nresult = '\\n\\n'.join([' '.join(sum(line, [])) for line in lines])\n\n",
"To join lists, try the chain function in the module itertools, For example, you can try\nimport itertools\nprint ' '.join(itertools.chain(mylist))\n\nif the new line between the two lists are intentional, then add '\\n' at the end of the first list\nimport itertools\na = [[u'This/ABZ'], [u'is/ABZ'], ....]\nb = [[u'O/ABZ'], [u'O/noez'], ...]\na.append('\\n')\n\nprint ' '.join(itertools.chain(a + b))\n\n",
"Easy:\nx = [[u'O/ABC'], [u'noez/ABC'], [u'!/ABC']] \nprint ' '.join(y[0] for y in x)\n\n",
"If you put all your lists in one list, you can do it like this:\n' '.join(e[0] for e in [[u'This/ABC'], [u'is/ABC']])\n\n"
] |
[
6,
3,
1,
0
] |
[] |
[] |
[
"join",
"list",
"python"
] |
stackoverflow_0000901412_join_list_python.txt
|
Q:
Executing *nix binaries in Python
I need to run the following command:
screen -dmS RealmD top
Essentially invoking GNU screen in the background with the session title 'RealmD' with the top command being run inside screen. The command MUST be invoked this way so there can't be a substitute for screen at this time until the server is re-tooled. (Another project for another time)
I've subbed in the top command for the server binary that needs to run. But top is a decent substitute while the code is being debugged for this python module.
What I really need is a way to execute screen with the above parameters in Python.
A:
os.system is the simplest way, but, for many more possibilities and degrees of freedom, also look at the standard library subprocess module (unless Stephan202's wonderfully simple use of os.system meets all your needs, of course;-).
Edit Here's the standard replacement for os.system()
from subprocess import Popen
p = Popen("screen -dmS RealmD top", shell=True)
sts = p.wait()
http://docs.python.org/library/subprocess.html#replacing-os-system
A:
Use os.system:
os.system("screen -dmS RealmD top")
Then in a separate shell you can have a look at top by running screen -rd RealmD.
|
Executing *nix binaries in Python
|
I need to run the following command:
screen -dmS RealmD top
Essentially invoking GNU screen in the background with the session title 'RealmD' with the top command being run inside screen. The command MUST be invoked this way so there can't be a substitute for screen at this time until the server is re-tooled. (Another project for another time)
I've subbed in the top command for the server binary that needs to run. But top is a decent substitute while the code is being debugged for this python module.
What I really need is a way to execute screen with the above parameters in Python.
|
[
"os.system is the simplest way, but, for many more possibilities and degrees of freedom, also look at the standard library subprocess module (unless Stephan202's wonderfully simple use of os.system meets all your needs, of course;-).\nEdit Here's the standard replacement for os.system()\np = Popen(\"screen -dmS RealmD top\", shell=True)\nsts = p.wait()\n\nhttp://docs.python.org/library/subprocess.html#replacing-os-system\n",
"Use os.system:\nos.system(\"screen -dmS RealmD top\")\n\nThen in a separate shell you can have a look at top by running screen -rd RealmD.\n"
] |
[
11,
7
] |
[] |
[] |
[
"gnu_screen",
"python"
] |
stackoverflow_0000901829_gnu_screen_python.txt
|
Q:
Give Wxwidget Grid rows an ID
I posted this in the mailing list, but the reply I got wasn't too clear, so maybe I'll have better luck here.
I currently have a grid with data in it.
I would like to know if there is a way to give each generated row an
ID, or at least, associate each row with an object.
It may make it more clear if I clarify what i'm doing. It is described
below.
I pull data from an SQL table and display them in the grid.
I am allowing for the user to add/delete rows and edit cells.
Say the user is viewing a grid that has 3 rows(which is, in turn, a
mysql table with 3 rows).
If he is on the last row and presses the down arrow key, a new row is
created and he can enter data into it and it will be inserted in the
database when he presses enter.
However, I need a way to find out which rows will use "insert" query
and which will use "update" query.
So ideally, when the user creates a new row by pressing the down
arrow, I would give that row an ID and store it in a list(or, if rows
already have IDs, just store it in a list) and when the user finishes
entering data in the cells and presses enter, I would check if that
row's ID is in the list. If it is, I would insert all of that
row's cell values into the table; if not, I would update MySQL with
the values.
Hope I made this clear.
A:
What I did when I encountered such a case was to create a column for IDs and set its width to 0.
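A rough sketch of that approach with wx.grid (the column layout and sample values are invented):
import wx
import wx.grid

class ClientGridFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "Clients")
        grid = wx.grid.Grid(self)
        grid.CreateGrid(3, 3)
        grid.SetColLabelValue(0, "id")     # hidden primary-key column
        grid.SetColLabelValue(1, "name")
        grid.SetColLabelValue(2, "email")
        grid.SetColSize(0, 0)              # width 0 -> the ID column is invisible
        # a row whose id cell is empty is new (INSERT), otherwise it is an UPDATE
        grid.SetCellValue(0, 0, "17")
        grid.SetCellValue(0, 1, "Alice")

if __name__ == "__main__":
    app = wx.App(False)
    ClientGridFrame().Show()
    app.MainLoop()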
A:
You could make your own GridTableBase that implements this, for a simple example to get you started see my answer to this question.
|
Give Wxwidget Grid rows an ID
|
I posted this in the mailing list, but the reply I got wasn't too clear, so maybe I'll have better luck here.
I currently have a grid with data in it.
I would like to know if there is a way to give each generated row an
ID, or at least, associate each row with an object.
It may make it more clear if I clarify what i'm doing. It is described
below.
I pull data from an SQL table and display them in the grid.
I am allowing for the user to add/delete rows and edit cells.
Say the user is viewing a grid that has 3 rows(which is, in turn, a
mysql table with 3 rows).
If he is on the last row and presses the down arrow key, a new row is
created and he can enter data into it and it will be inserted in the
database when he presses enter.
However, I need a way to find out which rows will use "insert" query
and which will use "update" query.
So ideally, when the user creates a new row by pressing the down
arrow, I would give that row an ID and store it in a list(or, if rows
already have IDs, just store it in a list) and when the user finishes
entering data in the cells and presses enter, I would check if that
row's ID is in the list. If it is, I would insert all of that
row's cell values into the table; if not, I would update MySQL with
the values.
Hope I made this clear.
|
[
"What I did when I encountered such a case was to create a column for IDs and set its width to 0.\n",
"You could make your own GridTableBase that implements this, for a simple example to get you started see my answer to this question.\n"
] |
[
3,
2
] |
[] |
[] |
[
"python",
"wxpython",
"wxwidgets"
] |
stackoverflow_0000901704_python_wxpython_wxwidgets.txt
|
Q:
Django not picking up changes to INSTALLED_APPS in settings.py
I'm trying to get South to work - it worked fine on my PC, but I'm struggling to deploy it on my webhost.
Right now it seems that any changes I make to add/remove items from INSTALLED_APPS aren't being picked up by syncdb or diffsettings. I've added south to my list of INSTALLED_APPS, but the tables it needs aren't being created when I run syncdb. If I change other settings, they are picked up, it just seems to be INSTALLED_APPS that doesn't work.
If I run
from south.db import db
from the shell I get with manage.py shell, I don't get any import errors, so I don't think it's a problem with where south is. I've tried removing all my other applications (other than the Django standard ones), and tables for them still get created when I run syncdb.
Even if I delete INSTALLED_APPS completely, I still get the old list of INSTALLED_APPS when I run manage.py diffsettings.
Any ideas what I've done wrong?
Thanks,
Dom
A:
If you write a migration for an application, syncdb wont work.
You have to use
manage.py migrate
syncdb wont work for applications which are hooked under migration using south. Those applications model change will be noticed only depending on south migration history.
South Migration Docs
A:
The answer, it turns out, is that I'm a moron. I'd done this:
In settings.py:
...
INSTALLED_APPS = (
...
)
...
from localsettings import *
In localsettings.py
...
INSTALLED_APPS = (
...
)
...
I'd created localsettings.py from settings.py, to contain things only relevant to the current location of the project (like database settings), and forgot to delete the INSTALLED_APPS section.
Apologies for doing such a flagrantly stupid thing.
|
Django not picking up changes to INSTALLED_APPS in settings.py
|
I'm trying to get South to work - it worked fine on my PC, but I'm struggling to deploy it on my webhost.
Right now it seems that any changes I make to add/remove items from INSTALLED_APPS aren't being picked up by syncdb or diffsettings. I've added south to my list of INSTALLED_APPS, but the tables it needs aren't being created when I run syncdb. If I change other settings, they are picked up, it just seems to be INSTALLED_APPS that doesn't work.
If I run
from south.db import db
from the shell I get with manage.py shell, I don't get any import errors, so I don't think it's a problem with where south is. I've tried removing all my other applications (other than the Django standard ones), and tables for them still get created when I run syncdb.
Even if I delete INSTALLED_APPS completely, I still get the old list of INSTALLED_APPS when I run manage.py diffsettings.
Any ideas what I've done wrong?
Thanks,
Dom
|
[
"If you write a migration for an application, syncdb wont work.\nYou have to use \nmanage.py migrate\n\nsyncdb wont work for applications which are hooked under migration using south. Those applications model change will be noticed only depending on south migration history.\nSouth Migration Docs\n",
"The answer, it turns out, is that I'm a moron. I'd done this:\nIn settings.py:\n...\nINSTALLED_APPS = (\n ...\n)\n...\n\nfrom localsettings import *\n\nIn localsettings.py\n...\nINSTALLED_APPS = (\n ...\n)\n...\n\nI'd created localsettings.py from settings.py, to contain things only relevant to the current location of the project (like database settings), and forgot to delete the INSTALLED_APPS section.\nApologies for doing such a flagrantly stupid thing.\n"
] |
[
3,
2
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000901061_django_python.txt
|
Q:
Issue with adding new properties to existing Google AppEngine data models / entities
In GAE, I have a model called Foo, with existing entities, and attempt to add a new property called memcached to Foo that takes datetime values for the last time this value was set to memcache. If I try to query and sort on this property, or even filter for entities that do not have a value for memcached, entities that haven't had a value set for this property yet are not returned. Is there something I'm missing here, or as an alternative, is there a quick way to set a value for a new property on every entity of a given model?
I have created a bunch of entities of the following model,
class Foo(db.Model):
name = db.StringProperty(required=True)
and then add a property to this model,
class Foo(db.Model):
name = db.StringProperty(required=True)
memcached = db.DateTimeProperty(required=True, auto_now=True, auto_now_add=True, default=datetime.min)
the default value of the new property is not considered when I do a sort or filter on a query.
A:
There's nothing for it but to go through each of your existing entities and add the property, here is the official documentation which walks you through the process.
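In outline, that backfill amounts to re-saving every old entity; here is a rough sketch (Foo is the model from the question, and the key-based paging is an assumption about how you might batch it):
from google.appengine.ext import db

def backfill_foo(batch_size=100):
    # Re-save every existing Foo so the new 'memcached' property (and its
    # default value) actually gets written and indexed on the old entities.
    last_key = None
    while True:
        query = Foo.all().order('__key__')
        if last_key is not None:
            query.filter('__key__ >', last_key)
        batch = query.fetch(batch_size)
        if not batch:
            break
        db.put(batch)
        last_key = batch[-1].key()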
|
Issue with adding new properties to existing Google AppEngine data models / entities
|
In GAE, I have a model called Foo, with existing entities, and attempt to add a new property called memcached to Foo that takes datetime values for the last time this value was set to memcache. If I try to query and sort on this property, or even filter for entities that do not have a value for memcached, entities that haven't had a value set for this property yet are not returned. Is there something I'm missing here, or as an alternative, is there a quick way to set a value for a new property on every entity of a given model?
I have created a bunch of entities of the following model,
class Foo(db.Model):
name = db.StringProperty(required=True)
and then add a property to this model,
class Foo(db.Model):
name = db.StringProperty(required=True)
memcached = db.DateTimeProperty(required=True, auto_now=True, auto_now_add=True, default=datetime.min)
the default value of the new property is not considered when I do a sort or filter on a query.
|
[
"There's nothing for it but to go through each of your existing entities and add the property, here is the official documentation which walks you through the process.\n"
] |
[
8
] |
[] |
[] |
[
"bigtable",
"google_app_engine",
"properties",
"python"
] |
stackoverflow_0000902633_bigtable_google_app_engine_properties_python.txt
|
Q:
python factory functions compared to class
Simple example of a nested function:
def maker(N):
def action(X):
return X * N
return action
Do factory functions have advantages over creating a class? Performance, memory, clean up?
A:
What I like most about nested functions is that it is less verbose than classes. The equivalent class definition to your maker function is:
class clsmaker(object):
def __init__(self, N):
self.N = N
def __call__(self, X):
return X * self.N
That doesn't seem so bad until you start adding more arguments to the constructor. Then doing it the class way takes an extra line for each argument, while the function just gets the extra args.
It turns out that there is a speed advantage to the nested functions as well:
>>> T1 = timeit.Timer('maker(3)(4)', 'from __main__ import maker')
>>> T1.timeit()
1.2818338871002197
>>> T2 = timeit.Timer('clsmaker(3)(4)', 'from __main__ import clsmaker')
>>> T2.timeit()
2.2137160301208496
This may be due to there being fewer opcodes involved in the nested functions version:
>>> dis(clsmaker.__call__)
5 0 LOAD_FAST 1 (X)
3 LOAD_FAST 0 (self)
6 LOAD_ATTR 0 (N)
9 BINARY_MULTIPLY
10 RETURN_VALUE
>>> act = maker(3)
>>> dis(act)
3 0 LOAD_FAST 0 (X)
3 LOAD_DEREF 0 (N)
6 BINARY_MULTIPLY
7 RETURN_VALUE
A:
Comparing a function factory to a class is comparing apples and oranges. Use a class if you have a cohesive collection of data and functions, together called an object. Use a function factory if you need a function, and want to parameterize its creation.
Your choice of the two techniques should depend on the meaning of the code.
A:
Nesting functions allows one to create custom functions on the fly.
Have a look at e.g. decorators. The resulting functions depend on variables that are bound at creation time and do not need to be changed afterwards. So using a class for this purpose would make less sense.
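For instance, a minimal decorator built out of nested functions (a made-up example, not from any library):
def repeat(times):
    # 'times' is bound when the decorator is created and never changes afterwards
    def decorator(func):
        def wrapper(*args, **kwargs):
            return [func(*args, **kwargs) for _ in range(times)]
        return wrapper
    return decorator

@repeat(3)
def greet(name):
    return "hello %s" % name

print greet("world")   # ['hello world', 'hello world', 'hello world']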
|
python factory functions compared to class
|
Simple example of a nested function:
def maker(N):
def action(X):
return X * N
return action
Do factory functions have advantages over creating a class? Performance, memory, clean up?
|
[
"What I like most about nested functions is that it is less verbose than classes. The equivalent class definition to your maker function is:\nclass clsmaker(object):\n def __init__(self, N):\n self.N = N\n def __call__(self, X):\n return X * self.N\n\nThat doesn't seem so bad until you start adding more arguments to the constructor. Then doing it the class way takes an extra line for each argument, while the function just gets the extra args.\nIt turns out that there is a speed advantage to the nested functions as well:\n>>> T1 = timeit.Timer('maker(3)(4)', 'from __main__ import maker')\n>>> T1.timeit()\n1.2818338871002197\n>>> T2 = timeit.Timer('clsmaker(3)(4)', 'from __main__ import clsmaker')\n>>> T2.timeit()\n2.2137160301208496\n\nThis may be due to there being fewer opcodes involved in the nested functions version:\n>>> dis(clsmaker.__call__)\n 5 0 LOAD_FAST 1 (X)\n 3 LOAD_FAST 0 (self)\n 6 LOAD_ATTR 0 (N)\n 9 BINARY_MULTIPLY \n 10 RETURN_VALUE \n>>> act = maker(3)\n>>> dis(act)\n 3 0 LOAD_FAST 0 (X)\n 3 LOAD_DEREF 0 (N)\n 6 BINARY_MULTIPLY \n 7 RETURN_VALUE \n\n",
"Comparing a function factory to a class is comparing apples and oranges. Use a class if you have a cohesive collection of data and functions, together called an object. Use a function factory if you need a function, and want to parameterize its creation.\nYour choice of the two techniques should depend on the meaning of the code.\n",
"Nesting functions allows one to create custom functions on the fly.\nHave a look at e.g. decorators. The resulting functions depend on variables that are bound at creation time and do not need to be changed afterwards. So using a class for this purpose would make less sense.\n"
] |
[
29,
18,
6
] |
[] |
[] |
[
"function",
"python"
] |
stackoverflow_0000901892_function_python.txt
|
Q:
Django, custom template filters - regex problems
I'm trying to implement a WikiLink template filter in Django that queries the database model to give different responses depending on Page existence, identical to Wikipedia's red links. The filter does not raise an Error but instead doesn't do anything to the input.
WikiLink is defined as: [[ThisIsAWikiLink | This is the alt text]]
Here's a working example that does not query the database:
from django import template
from django.template.defaultfilters import stringfilter
from sites.wiki.models import Page
import re
register = template.Library()
@register.filter
@stringfilter
def wikilink(value):
return re.sub(r'\[\[ ?(.*?) ?\| ?(.*?) ?\]\]', r'<a href="/Sites/wiki/\1">\2</a>', value)
wikilink.is_safe = True
The input (value) is a multi-line string, containing HTML and many WikiLinks.
The expected output is substituting [[ThisIsAWikiLink | This is the alt text]] with
<a href="/Sites/wiki/ThisIsAWikiLink">This is the alt text</a>
or if "ThisIsAWikiLink" doesn't exist in the database:
<a href="/Sites/wiki/ThisIsAWikiLink/edit" class="redlink">This is the alt text</a>
and returning value.
Here's the non-working code (edited in response to comments/answers):
from django import template
from django.template.defaultfilters import stringfilter
from sites.wiki.models import Page
import re
register = template.Library()
@register.filter
@stringfilter
def wikilink(value):
m = re.match(r'\[\[ ?(.*?) ?\| ?(.*?) ?\]\]', value)
if(m):
page_alias = m.group(2)
page_title = m.group(3)
try:
page = Page.objects.get(alias=page_alias)
return re.sub(r'(\[\[)(.*)\|(.*)(\]\])', r'<a href="Sites\/wiki\/\2">\3</a>', value)
except Page.DoesNotExist:
return re.sub(r'(\[\[)(.*)\|(.*)(\]\])', r'<a href="Sites\/wiki\/\2\/edit" class="redlink">\3</a>', value)
else:
return value
wikilink.is_safe = True
What the code needs to do is:
extract all the WikiLinks in value
query the Page model to see if the page exists
substitute all the WikiLinks with normal links, styled dependent on each wikipage existence.
return the altered value
The updated question is:
What regular expression (method) can return a python List of WikiLinks, which can be altered and used to substitute the original matches (after being altered).
Edit:
I'd like to do something like this:
def wikilink(value):
regex = re.magic_method(r'\[\[ ?(.*?) ?\| ?(.*?) ?\]\]', value)
foreach wikilink in regex:
alias = wikilink.group(0)
text = wikilink.group(1)
if(alias exists in Page):
regex.sub("<a href="+alias+">"+ text +"</a>")
else:
regex.sub("<a href="+alias+" class='redlink'>"+ text +"</a>")
return value
A:
If your string contains other text in addition to the wiki-link, your filter won't work because you are using re.match instead of re.search. re.match matches at the beginning of the string. re.search matches anywhere in the string. See matching vs. searching.
Also, your regex uses the greedy *, so it won't work if one line contains multiple wiki-links. Use *? instead to make it non-greedy:
re.search(r'\[\[(.*?)\|(.*?)\]\]', value)
Edit:
As for tips on how to fix your code, I suggest that you use re.sub with a callback. The advantages are:
It works correctly if you have multiple wiki-links in the same line.
One pass over the string is enough. You don't need a pass to find wiki-links, and another one to do the replacement.
Here is a sketch of the implmentation:
import re
WIKILINK_RE = re.compile(r'\[\[(.*?)\|(.*?)\]\]')
def wikilink(value):
def wikilink_sub_callback(match_obj):
alias = match_obj.group(1).strip()
text = match_obj.group(2).strip()
if(alias exists in Page):
class_attr = ''
else:
class_attr = ' class="redlink"'
return '<a href="%s"%s>%s</a>' % (alias, class_attr, text)
return WIKILINK_RE.sub(wikilink_sub_callback, value)
A:
This is the type of problem that falls quickly to a small set of unit tests.
Pieces of the filter that can be tested in isolation (with a bit of code restructuring):
Determining whether or not value contains the pattern you're looking for
What string gets generated if there is a matching Page
What string gets generated is there isn't a matching Page
That would help you isolate where things are going wrong. You'll probably find that you'll need to rewire the regexps to account for optional spaces around the |.
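A sketch of what such tests might look like (the import path and expected strings are assumptions about your project layout, not taken from the question):
import unittest
from sites.wiki.templatetags.wikifilters import wikilink   # hypothetical module path

class WikilinkTests(unittest.TestCase):
    def test_plain_text_is_untouched(self):
        self.assertEqual(wikilink('no links here'), 'no links here')

    def test_existing_page_gets_plain_link(self):
        html = wikilink('[[Existing | alt text]]')
        self.assertTrue('class="redlink"' not in html)

    def test_missing_page_gets_redlink(self):
        html = wikilink('[[Missing | alt text]]')
        self.assertTrue('class="redlink"' in html)

if __name__ == '__main__':
    unittest.main()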
Also, on first glance it looks like your filter is exploitable. You're claiming the result is safe, but you haven't filtered the alt text for nasties like script tags.
A:
Code:
import re
def page_exists(alias):
if alias == 'ThisIsAWikiLink':
return True
return False
def wikilink(value):
if value == None:
return None
for alias, text in re.findall('\[\[\s*(.*?)\s*\|\s*(.*?)\s*\]\]',value):
if page_exists(alias):
value = re.sub('\[\[\s*%s\s*\|\s*%s\s*\]\]' % (alias,text), '<a href="/Sites/wiki/%s">%s</a>' % (alias, text),value)
else:
value = re.sub('\[\[\s*%s\s*\|\s*%s\s*\]\]' % (alias,text), '<a href="/Sites/wiki/%s/edit/" class="redtext">%s</a>' % (alias, text), value)
return value
Sample results:
>>> import wikilink
>>> wikilink.wikilink(None)
>>> wikilink.wikilink('')
''
>>> wikilink.wikilink('Test')
'Test'
>>> wikilink.wikilink('[[ThisIsAWikiLink | This is the alt text]]')
'<a href="/Sites/wiki/ThisIsAWikiLink">This is the alt text</a>'
>>> wikilink.wikilink('[[ThisIsABadWikiLink | This is the alt text]]')
'<a href="/Sites/wiki/ThisIsABadWikiLink/edit/" class="redtext">This is the alt text</a>'
>>> wikilink.wikilink('[[ThisIsAWikiLink | This is the alt text]]\n[[ThisIsAWikiLink | This is another instance]]')
'<a href="/Sites/wiki/ThisIsAWikiLink">This is the alt text</a>\n<a href="/Sites/wiki/ThisIsAWikiLink">This is another instance</a>'
>>> wikilink.wikilink('[[ThisIsAWikiLink | This is the alt text]]\n[[ThisIsAWikiLink | This is another instance]]')
General comments:
findall is the magic re function you're looking for
Change page_exists to run whatever query you want
Vulnerable to HTML injection (as mentioned by Dave W. Smith above)
Having to recompile the regex on each iteration is inefficient
Querying the database each time is inefficient
I think you'd run into performance issues pretty quickly with this approach.
A:
This is the working code in case someone needs it:
from django import template
from django.template.defaultfilters import stringfilter
from sites.wiki.models import Page
import re
register = template.Library()
@register.filter
@stringfilter
def wikilink(value):
WIKILINK_RE = re.compile(r'\[\[ ?(.*?) ?\| ?(.*?) ?\]\]')
def wikilink_sub_callback(match_obj):
alias = match_obj.group(1).strip()
text = match_obj.group(2).strip()
class_attr = ''
try:
Page.objects.get(alias=alias)
except Page.DoesNotExist:
class_attr = ' class="redlink"'
return '<a href="%s"%s>%s</a>' % (alias, class_attr, text)
return WIKILINK_RE.sub(wikilink_sub_callback, value)
wikilink.is_safe = True
Many thanks for all the answers!
|
Django, custom template filters - regex problems
|
I'm trying to implement a WikiLink template filter in Django that queries the database model to give different responses depending on Page existence, identical to Wikipedia's red links. The filter does not raise an Error but instead doesn't do anything to the input.
WikiLink is defined as: [[ThisIsAWikiLink | This is the alt text]]
Here's a working example that does not query the database:
from django import template
from django.template.defaultfilters import stringfilter
from sites.wiki.models import Page
import re
register = template.Library()
@register.filter
@stringfilter
def wikilink(value):
return re.sub(r'\[\[ ?(.*?) ?\| ?(.*?) ?\]\]', r'<a href="/Sites/wiki/\1">\2</a>', value)
wikilink.is_safe = True
The input (value) is a multi-line string, containing HTML and many WikiLinks.
The expected output is substituting [[ThisIsAWikiLink | This is the alt text]] with
<a href="/Sites/wiki/ThisIsAWikiLink">This is the alt text</a>
or if "ThisIsAWikiLink" doesn't exist in the database:
<a href="/Sites/wiki/ThisIsAWikiLink/edit" class="redlink">This is the alt text</a>
and returning value.
Here's the non-working code (edited in response to comments/answers):
from django import template
from django.template.defaultfilters import stringfilter
from sites.wiki.models import Page
import re
register = template.Library()
@register.filter
@stringfilter
def wikilink(value):
m = re.match(r'\[\[ ?(.*?) ?\| ?(.*?) ?\]\]', value)
if(m):
page_alias = m.group(2)
page_title = m.group(3)
try:
page = Page.objects.get(alias=page_alias)
return re.sub(r'(\[\[)(.*)\|(.*)(\]\])', r'<a href="Sites\/wiki\/\2">\3</a>', value)
except Page.DoesNotExist:
return re.sub(r'(\[\[)(.*)\|(.*)(\]\])', r'<a href="Sites\/wiki\/\2\/edit" class="redlink">\3</a>', value)
else:
return value
wikilink.is_safe = True
What the code needs to do is:
extract all the WikiLinks in value
query the Page model to see if the page exists
substitute all the WikiLinks with normal links, styled dependent on each wikipage existence.
return the altered value
The updated question is:
What regular expression (method) can return a python List of WikiLinks, which can be altered and used to substitute the original matches (after being altered).
Edit:
I'd like to do something like this:
def wikilink(value):
regex = re.magic_method(r'\[\[ ?(.*?) ?\| ?(.*?) ?\]\]', value)
foreach wikilink in regex:
alias = wikilink.group(0)
text = wikilink.group(1)
if(alias exists in Page):
regex.sub("<a href="+alias+">"+ text +"</a>")
else:
regex.sub("<a href="+alias+" class='redlink'>"+ text +"</a>")
return value
|
[
"If your string contains other text in addition to the wiki-link, your filter won't work because you are using re.match instead of re.search. re.match matches at the beginning of the string. re.search matches anywhere in the string. See matching vs. searching.\nAlso, your regex uses the greedy *, so it won't work if one line contains multiple wiki-links. Use *? instead to make it non-greedy:\nre.search(r'\\[\\[(.*?)\\|(.*?)\\]\\]', value)\n\nEdit:\nAs for tips on how to fix your code, I suggest that you use re.sub with a callback. The advantages are:\n\nIt works correctly if you have multiple wiki-links in the same line.\nOne pass over the string is enough. You don't need a pass to find wiki-links, and another one to do the replacement.\n\nHere is a sketch of the implmentation:\nimport re\n\nWIKILINK_RE = re.compile(r'\\[\\[(.*?)\\|(.*?)\\]\\]')\n\ndef wikilink(value):\n def wikilink_sub_callback(match_obj):\n alias = match_obj.group(1).strip()\n text = match_obj.group(2).strip()\n if(alias exists in Page):\n class_attr = ''\n else:\n class_attr = ' class=\"redlink\"'\n return '<a href=\"%s\"%s>%s</a>' % (alias, class_attr, text)\n\n return WIKILINK_RE.sub(wikilink_sub_callback, value)\n\n",
"This is the type of problem that falls quickly to a small set of unit tests.\nPieces of the filter that can be tested in isolation (with a bit of code restructuring):\n\nDetermining whether or not value contains the pattern you're looking for\nWhat string gets generated if there is a matching Page\nWhat string gets generated is there isn't a matching Page\n\nThat would help you isolate where things are going wrong. You'll probably find that you'll need to rewire the regexps to account for optional spaces around the |.\nAlso, on first glance it looks like your filter is exploitable. You're claiming the result is safe, but you haven't filtered the alt text for nasties like script tags.\n",
"Code:\nimport re\n\ndef page_exists(alias):\n if alias == 'ThisIsAWikiLink':\n return True\n\n return False\n\ndef wikilink(value):\n if value == None:\n return None\n\n for alias, text in re.findall('\\[\\[\\s*(.*?)\\s*\\|\\s*(.*?)\\s*\\]\\]',value):\n if page_exists(alias):\n value = re.sub('\\[\\[\\s*%s\\s*\\|\\s*%s\\s*\\]\\]' % (alias,text), '<a href=\"/Sites/wiki/%s\">%s</a>' % (alias, text),value) \n else:\n value = re.sub('\\[\\[\\s*%s\\s*\\|\\s*%s\\s*\\]\\]' % (alias,text), '<a href=\"/Sites/wiki/%s/edit/\" class=\"redtext\">%s</a>' % (alias, text), value)\n\n return value\n\nSample results:\n>>> import wikilink\n>>> wikilink.wikilink(None)\n>>> wikilink.wikilink('')\n''\n>>> wikilink.wikilink('Test')\n'Test'\n>>> wikilink.wikilink('[[ThisIsAWikiLink | This is the alt text]]')\n'<a href=\"/Sites/wiki/ThisIsAWikiLink\">This is the alt text</a>'\n>>> wikilink.wikilink('[[ThisIsABadWikiLink | This is the alt text]]')\n'<a href=\"/Sites/wiki/ThisIsABadWikiLink/edit/\" class=\"redtext\">This is the alt text</a>'\n>>> wikilink.wikilink('[[ThisIsAWikiLink | This is the alt text]]\\n[[ThisIsAWikiLink | This is another instance]]')\n'<a href=\"/Sites/wiki/ThisIsAWikiLink\">This is the alt text</a>\\n<a href=\"/Sites/wiki/ThisIsAWikiLink\">This is another instance</a>'\n>>> wikilink.wikilink('[[ThisIsAWikiLink | This is the alt text]]\\n[[ThisIsAWikiLink | This is another instance]]')\n\nGeneral comments:\n\nfindall is the magic re function you're looking for\nChange page_exists to run whatever query you want\nVulnerable to HTML injection (as mentioned by Dave W. Smith above)\nHaving to recompile the regex on each iteration is inefficient\nQuerying the database each time is inefficient\n\nI think you'd run into performance issues pretty quickly with this approach.\n",
"This is the working code in case someone needs it:\nfrom django import template\nfrom django.template.defaultfilters import stringfilter\nfrom sites.wiki.models import Page\nimport re\n\nregister = template.Library()\n\[email protected]\n@stringfilter\ndef wikilink(value):\n WIKILINK_RE = re.compile(r'\\[\\[ ?(.*?) ?\\| ?(.*?) ?\\]\\]')\n\n def wikilink_sub_callback(match_obj):\n alias = match_obj.group(1).strip()\n text = match_obj.group(2).strip()\n\n class_attr = ''\n try:\n Page.objects.get(alias=alias)\n except Page.DoesNotExist:\n class_attr = ' class=\"redlink\"'\n return '<a href=\"%s\"%s>%s</a>' % (alias, class_attr, text)\n\n return WIKILINK_RE.sub(wikilink_sub_callback, value)\nwikilink.is_safe = True\n\nMany thanks for all the answers!\n"
] |
[
4,
3,
1,
0
] |
[] |
[] |
[
"django",
"django_templates",
"python",
"regex"
] |
stackoverflow_0000902184_django_django_templates_python_regex.txt
|
Q:
How does the win32com python.Interpreter work?
Ok, so I'm trying to google the win32com python package and the python.Interpreter COM server. Unfortunately, python.Interpreter ends up as "python Interpreter" and not giving me any COM server results.
I'm trying to make a pluggable program that has a plugin to allow python code to run, and it seems like the python.Interpreter would be a good way to go. But I haven't used it before and I'm not sure how to make objects created from it available through COM.
Any advice or pointers to documentation/examples would be appreciated.
Also, would a user need to install a python package to use the COM server, or is the interpreter built into the server dll?
Thanks
Brett
A:
See http://books.google.com/books?id=ns1WMyLVnRMC&pg=PA232&lpg=PA232&dq=win32com+%22python.interpreter%22&source=bl&ots=NVpe-E8eGg&sig=imGi73WQyOmP4rJC6-jpz4stb9M&hl=en&ei=xrAYSsTHBZH0tAORqeCSDw&sa=X&oi=book_result&ct=result&resnum=6#PPA232,M1 for excellent docs on python.interpreter -- as for your second question, normally win32com is installed as an add-on to an existing Python install, but of course you can pick a Python distro including the win32 extensions, such as Activestate's at http://www.activestate.com/activepython/ .
|
How does the win32com python.Interpreter work?
|
Ok, so I'm trying to google the win32com python package and the python.Interpreter COM server. Unfortunately, python.Interpreter ends up as "python Interpreter" and not giving me any COM server results.
I'm trying to make a pluggable program that has a plugin to allow python code to run, and it seems like the python.Interpreter would be a good way to go. But I haven't used it before and I'm not sure how to make objects created from it available through COM.
Any advice or pointers to documentation/examples would be appreciated.
Also, would a user need to install a python package to use the COM server, or is the interpreter built into the server dll?
Thanks
Brett
|
[
"See http://books.google.com/books?id=ns1WMyLVnRMC&pg=PA232&lpg=PA232&dq=win32com+%22python.interpreter%22&source=bl&ots=NVpe-E8eGg&sig=imGi73WQyOmP4rJC6-jpz4stb9M&hl=en&ei=xrAYSsTHBZH0tAORqeCSDw&sa=X&oi=book_result&ct=result&resnum=6#PPA232,M1 for excellent docs on python.interpreter -- as for your second question, normally win32com is installed as an add-on to an existing Python install, but of course you can pick a Python distro including the win32 extensions, such as Activestate's at http://www.activestate.com/activepython/ .\n"
] |
[
1
] |
[] |
[] |
[
"com",
"python"
] |
stackoverflow_0000902895_com_python.txt
|
Q:
Data storage to ease data interpolation in Python
I have 20+ tables similar to table 1. Where all letters represent actual values.
Table 1:
$ / cars |<1 | 2 | 3 | 4+
<10,000 | a | b | c | d
20,000 | e | f | g | h
30,000 | i | j | k | l
40,000+ | m | n | o | p
A user input could be for example, (2.4, 24594) which is a value between f, g, j, and k.
My Python function definition and pseudo-code to calculate this bilinear interpolation is as follows.
def bilinear_interpolation( x_in, y_in, x_high, x_low, y_low, y_high ):
# interpolate with respect to x
# interpolate with respect to y
# return result
How should I store the data from table 1 (a file, a dict, tuple of tuples, or dict of lists), so I can perform the bilinear interpolation most efficiently and correctly?
A:
If you want the most computationally efficient solution I can think of and are not restricted to the standard library, then I would recommend scipy/numpy. First, store the a..p array as a 2D numpy array and then both the $4k-10k and 1-4 arrays as 1D numpy arrays. Use scipy's interpolate.interp1d if both 1D arrays are monotonically increasing, or interpolate.bisplrep (bivariate spline representation) if not and your arrays are as small as your example. Or simply write your own and not bother with scipy. Here are some examples:
# this follows your pseudocode most closely, but it is *not*
# the most efficient since it creates the interpolation
# functions on each call to bilinterp
from scipy import interpolate
import numpy
data = numpy.arange(0., 16.).reshape((4,4)) #2D array
prices = numpy.arange(10000., 50000., 10000.)
cars = numpy.arange(1., 5.)
def bilinterp(price,car):
    return interpolate.interp1d(cars, interpolate.interp1d(prices, data)(price))(car)
print bilinterp(22000,2)
The last time I checked (a version of scipy from 2007-ish), it only worked for monotonically increasing arrays of x and y.
for small arrays like this 4x4 array, I think you want to use this:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.bisplrep.html#scipy.interpolate.bisplrep
which will handle more interestingly shaped surfaces and the function only needs to be created once. For larger arrays, I think you want this (not sure if this has the same restrictions as interp1d):
http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html#scipy.interpolate.interp2d
but they both require a different and more verbose data structure than the three arrays in the example above.
A:
I'd keep a sorted list of the first column, and use the bisect module in the standard library to look for the values -- it's the best way to get the immediately-lower and immediately-higher indices. Every other column can be kept as another list parallel to this one.
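A small sketch of that lookup (the column values below are invented, not taken from the question's table):
import bisect

prices = [10000, 20000, 30000, 40000]   # sorted first column
cars = [1, 2, 3, 4]                     # sorted header row

def bracket(sorted_values, x):
    # indices of the entries immediately below and above x, clamped to the table
    hi = bisect.bisect_right(sorted_values, x)
    hi = min(max(hi, 1), len(sorted_values) - 1)
    return hi - 1, hi

lo_i, hi_i = bracket(prices, 24594)   # rows for 20,000 and 30,000
lo_j, hi_j = bracket(cars, 2.4)       # columns for 2 and 3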
A:
There's nothing special about bilinear interpolation that makes your use case particularly odd; you just have to do two lookups (for storage units of full rows/columns) or four lookups (for array-type storage). The most efficient method depends on your access patterns and the structure of the data.
If your example is truly representative, with 16 total entries, you can store it however you want and it'll be fast enough for any kind of sane loads.
|
Data storage to ease data interpolation in Python
|
I have 20+ tables similar to table 1. Where all letters represent actual values.
Table 1:
$ / cars |<1 | 2 | 3 | 4+
<10,000 | a | b | c | d
20,000 | e | f | g | h
30,000 | i | j | k | l
40,000+ | m | n | o | p
A user input could be for example, (2.4, 24594) which is a value between f, g, j, and k.
My Python function definition and pseudo-code to calculate this bilinear interpolation is as follows.
def bilinear_interpolation( x_in, y_in, x_high, x_low, y_low, y_high ):
# interpolate with respect to x
# interpolate with respect to y
# return result
How should I store the data from table 1 (a file, a dict, tuple of tuples, or dict of lists), so I can perform the bilinear interpolation most efficiently and correctly?
|
[
"If you want the most computationally efficient solution I can think of and are not restricted to the standard library, then I would recommend scipy/numpy. First, store the a..p array as a 2D numpy array and then both the $4k-10k and 1-4 arrays as 1D numpy arrays. Use scipy's interpolate.interp1d if both 1D arrays are monotonically increasing, or interpolate.bsplrep (bivariate spline representation) if not and your example arrays are as small as your example. Or simply write your own and not bother with scipy. Here are some examples:\n# this follows your pseudocode most closely, but it is *not*\n# the most efficient since it creates the interpolation \n# functions on each call to bilinterp\nfrom scipy import interpolate\nimport numpy\ndata = numpy.arange(0., 16.).reshape((4,4)) #2D array\nprices = numpy.arange(10000., 50000., 10000.)\ncars = numpy.arange(1., 5.)\ndef bilinterp(price,car):\n return interpolate.interp1d(cars, interpolate.interp1d(prices, a)(price))(car)\nprint bilinterp(22000,2)\n\nThe last time I checked (a version of scipy from 2007-ish) it only worked for monotonically increasing arrays of x and y)\nfor small arrays like this 4x4 array, I think you want to use this:\nhttp://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.bisplrep.html#scipy.interpolate.bisplrep\nwhich will handle more interestingly shaped surfaces and the function only needs to be created once. For larger arrays, I think you want this (not sure if this has the same restrictions as interp1d):\nhttp://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html#scipy.interpolate.interp2d\nbut they both require a different and more verbose data structure than the three arrays in the example above.\n",
"I'd keep a sorted list of the first column, and use the bisect module in the standard library to look for the values -- it's the best way to get the immediately-lower and immediately-higher indices. Every other column can be kept as another list parallel to this one. \n",
"There's nothing special about bilinear interpolation that makes your use case particularly odd; you just have to do two lookups (for storage units of full rows/columns) or four lookups (for array-type storage). The most efficient method depends on your access patterns and the structure of the data.\nIf your example is truly representative, with 16 total entries, you can store it however you want and it'll be fast enough for any kind of sane loads.\n"
] |
[
7,
3,
0
] |
[] |
[] |
[
"interpolation",
"python"
] |
stackoverflow_0000902910_interpolation_python.txt
|
Q:
How to embed a Poll in a Web Page
I want to create a simple online poll application. I have created a backend in python that handles vote tracking, poll display, results display and admin setup. However, if I wanted a third party to be able to embed the poll in their website, what would be the recommended way of doing so? I would love to be able to provide a little javascript to drop into the third parties web page, but I can't use javascript because it would require a cross-domain access. What approach would provide an easy general solution for third parties?
A:
Make your app into a Google Gadget, Open Social gadget, or other kind of gadgets -- these are all designed to be embeddable into third-party pages with as little fuss as possible.
A:
IFrame is the easiest no muss no fuss solution if you want to allow postbacks.
Or, this is a bit left field and oldschool, but could you use a 1x1 transparent gif as your vote submission? They click the link (radio/span/whatever), you set the src of an image to something that lives on your server. Something like
document.getElementById('voteImage').src='http://your.server/vote.html?pollidentifier=123&vote=4'
where vote.html is your server side code for processing a vote, poll identifier is the way of telling what poll it is, and vote is the vote they've chosen. All vote.html has to do is return something that smells like an image and the browser will be happy. Obviously you'll need to put stuff in place to stop people faking up votes, but an image + cookies is what the old timers used to use before that new fandangled xhr came along.
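On the server side the vote endpoint only has to record the vote and answer with something image-shaped; a rough CGI-style sketch (record_vote and pixel.gif are placeholders for your own vote-tracking code and a real 1x1 transparent gif file):
#!/usr/bin/env python
import cgi
import sys

form = cgi.FieldStorage()
poll_id = form.getfirst('pollidentifier')
vote = form.getfirst('vote')

record_vote(poll_id, vote)   # hypothetical: your existing vote-tracking code

# reply with a real 1x1 transparent gif kept next to the script
sys.stdout.write("Content-Type: image/gif\r\n\r\n")
sys.stdout.write(open('pixel.gif', 'rb').read())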
|
How to embed a Poll in a Web Page
|
I want to create a simple online poll application. I have created a backend in python that handles vote tracking, poll display, results display and admin setup. However, if I wanted a third party to be able to embed the poll in their website, what would be the recommended way of doing so? I would love to be able to provide a little javascript to drop into the third parties web page, but I can't use javascript because it would require a cross-domain access. What approach would provide an easy general solution for third parties?
|
[
"Make your app into a Google Gadget, Open Social gadget, or other kind of gadgets -- these are all designed to be embeddable into third-party pages with as little fuss as possible.\n",
"IFrame is the easiest no muss no fuss solution if you want to allow postbacks.\nOr, this is a bit left field and oldschool, but could you use a 1x1 transparent gif as your vote submission? They click the link (radio/span/whatever), you set the src of an image to something that lives on your server. Something like\ndocument.getElementById('voteImage').src='http://your.server/vote.html?pollidentifier=123&vote=4'\n\nwhere vote.html is your server side code for processing a vote, poll identifier is the way of telling what poll it is, and vote is the vote they've chosen. All vote.html has to do is return something that smells like an image and the browser will be happy. Obviously you'll need to put stuff in place to stop people faking up votes, but an image + cookies is what the old timers used to use before that new fandangled xhr came along.\n"
] |
[
1,
1
] |
[] |
[] |
[
"cross_domain",
"javascript",
"python"
] |
stackoverflow_0000903104_cross_domain_javascript_python.txt
|
Q:
How to implement hotlinking prevention in Google App Engine
My application is on GAE and I'm trying to figure out how to prevent hotlinking of images dynamically served (e.g. /image?id=E23432E) in Python. Please advise.
A:
In Google webapp framework, you can extract the referer from the Request class:
def get(self):
referer = self.request.headers.get("Referer")
# Will be None if no referer given in header.
Note that's referer, not referrer (see this dictionary entry).
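Building on that, a sketch of a handler that only serves the image when the referer matches your own site (the allowed hosts and load_image lookup are placeholders, not part of the webapp API):
import urlparse
from google.appengine.ext import webapp

ALLOWED_HOSTS = ('example.com', 'www.example.com')   # your own domains

class ImageHandler(webapp.RequestHandler):
    def get(self):
        referer = self.request.headers.get("Referer")
        if referer:
            host = urlparse.urlparse(referer)[1]
            if host not in ALLOWED_HOSTS:
                self.error(403)   # request came from someone else's page
                return
        # a missing referer is usually allowed (direct visits, privacy proxies)
        image = load_image(self.request.get("id"))   # hypothetical datastore lookup
        self.response.headers['Content-Type'] = 'image/png'
        self.response.out.write(image)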
|
How to implement hotlinking prevention in Google App Engine
|
My application is on GAE and I'm trying to figure out how to prevent hotlinking of images dynamically served (e.g. /image?id=E23432E) in Python. Please advise.
|
[
"In Google webapp framework, you can extract the referer from the Request class:\ndef get(self):\n referer = self.request.headers.get(\"Referer\")\n # Will be None if no referer given in header.\n\nNote that's referer, not referrer (see this dictionary entry).\n"
] |
[
11
] |
[] |
[] |
[
"google_app_engine",
"hotlinking",
"python"
] |
stackoverflow_0000903144_google_app_engine_hotlinking_python.txt
|
Q:
Why is the PyObjC documentation so bad?
For example, http://developer.apple.com/cocoa/pyobjc.html is still for OS X 10.4 Tiger, not 10.5 Leopard.. And that's the official Apple documentation for it..
The official PyObjC page is equally bad, http://pyobjc.sourceforge.net/
It's so bad it's baffling.. I'm considering learning Ruby primarily because the RubyCocoa stuff is so much better documented, and there's lots of decent tutorials (http://www.rubycocoa.com/ for example), and because of the Shoes GUI toolkit..
Even this badly-auto-translated Japanese tutorial is more useful than the rest of the documentation I could find..
All I want to do is create fairly simple Python applications with Cocoa GUI's..
Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None) does..?
A:
The main reason for the lack of documentation for PyObjC is that there is one developer (me), and as most developers I don't particularly like writing documentation. Because PyObjC is a side project for me I tend to focus on working on features and bugfixes, because that's more interesting for me.
The best way to improve the documentation is to volunteer to help on the pyobjc-dev mailing list.
As an aside: the pythonmac-sig mailinglist (see google) is an excelent resource for getting help on Python on MacOSX (not just PyObjC).
A:
I agree that that tutorial is flawed, throwing random, unexplained code right in front of your eyes. It introduces concepts such as the autorelease pool and user defaults without explaining why you would want them ("Autorelease pool for memory management" is hardly an explanation).
That said…
basically all I want to do is write Cocoa applications without having to learn ObjC.
I'm afraid that for the time being, you will need a basic grasp of ObjC in order to benefit from any language that uses Cocoa. PyObjC, RubyCocoa, Nu and others are niches at best, and all of them were developed by people intimately familiar with the ins and outs of ObjC and Cocoa.
For now, you will benefit the most if you realistically see those bridges as useful where scripting languages truly shine, rather than trying to build a whole application with them. While this has been done (with LimeChat, I'm using a RubyCocoa-written app right now), it is rare and likely will be for a while.
A:
To be blunt:
If you want to be an effective Cocoa programmer, you must learn Objective-C. End of story.
Neither Python or Ruby are a substitute for Objective-C via their respective bridges. You still have to understand the Objective-C APIs, the behaviors inherent to NSObject derived classes, and many other details of Cocoa.
PyObjC and RubyCocoa are a great way to access Python or Ruby functionality from a Cocoa application, including building a Cocoa application mostly -- if not entirely -- in Python or Ruby. But success therein is founded upon a thorough understanding of Cocoa and the Objective-C APIs it is composed of.
A:
Tom's and Martin's response are definitely true (in just about any open source project, you'll find that most contributors are particularly interested in, well, developing; not so much in semi-related matters such as documentation), but I don't think your particular question at the end would fit well inside PyObjC documentation.
NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None)
NSThread is part of the Cocoa API, and as such documented over at Apple, including the particular method + detachNewThreadSelector:toTarget:withObject: (I'd link there, but apparently stackoverflow has bugs with parsing it). The CocoaDev wiki also has an article.
I don't think it would be a good idea for PyObjC to attempt to document Cocoa, other than a few basic examples of how to use it from within Python. Explaining selectors is also likely outside the scope of PyObjC, as those, too, are a feature of Objective-C, not PyObjC specifically.
A:
I stumbled across a good tutorial on PyObjC/Cocoa:
http://lethain.com/entry/2008/aug/22/an-epic-introduction-to-pyobjc-and-cocoa/
A:
All I want to do is create fairly simple Python applications with Cocoa GUI's.. Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None) does..?
[...]
basically all I want to do is write Cocoa applications without having to learn ObjC.
Although I basically agree with Soeren's response, I'd take it even further:
It will be a long time, if ever, before you can use Cocoa without some understanding of Objective C. Cocoa isn't an abstraction built independently from Objective C, it is explicitly tied to it. You can see this in the example line of code you quoted above:
NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None)
This is the Python way of writing the Objective C line:
[NSThread detachNewThreadSelector:@selector(queryController:) toTarget:self withObject:nil];
Now, it's important to notice here that this line can be seen in two ways: (1) as a line of Objective C, or (2) as an invocation of the Cocoa frameworks. We see it as (1) by the syntax. We see it as (2) by recognizing that NSThread is a Cocoa framework which provides a set of handy features. In this case, this particular Cocoa framework is making it easy for us to have an object start doing something on a new thread.
But the kicker is this: The Cocoa framework here (NSThread) is providing us this handy service in a way that is explicitly tied to the language the framework has been written in. Namely, NSThread gave us a feature that explicitly refers to "selectors". Selectors are, in point of fact, the name for something fundamental about how Objective C works.
So there's the rub. Cocoa is fundamentally an Objective-C creation, and its creators have built it with Objective C in mind. I'm not claiming that it's impossible to translate the interface to the Cocoa features into a form more natural for other languages. It's just that as soon as you change the Cocoa framework to stop referring to "selectors", it's not really the Cocoa framework any more. It's a translated version. And once you start going down that road, I'm guessing things get really messy. You're trying to keep up with Apple as they update Cocoa, maybe you hit some parts of Cocoa that just don't translate well into the new language, whatever. So instead, things like PyObjC opt to expose Cocoa directly, in a way that has a very clear and simple correlation. As they say in the documentation:
In order to have a lossless and unambiguous translation between Objective-C messages and Python methods, the Python method name equivalent is simply the selector with colons replaced by underscores.
Sure, it's a bit ugly, and it does mean you need to know something about Objective-C, but that's because the alternative, if one truly exists, is not necessarily better.
A:
I didn't know anything at all about Objective C or Cocoa (but plenty about Python), but I am now writing a rather complex application in PyObjc. How did I learn? I picked up Cocoa Programming for OSX and went through the whole book (a pretty quick process) using PyObjC. Just ignore anything about memory management and you'll pretty much be fine. The only caveat is that very occasionally you have to use a decorator like endSheetMethod (actually I think that's the only one I've hit):
@PyObjcTools.AppHelper.endSheetMethod
def alertEnded_code_context_(self, alert, choice, context):
pass
A:
This answer isn't going to be very helpful, but as a developer I hate doing documentation. This being an open-source project, it's hard to find people to do documentation.
A:
Tom says it all really. Lots of open source projects have dedicated developers and few who are interested in documenting. It isn't helped by the fact that goalposts can shift on a daily basis which means documentation not only has to be created, but maintained.
|
Why is the PyObjC documentation so bad?
|
For example, http://developer.apple.com/cocoa/pyobjc.html is still for OS X 10.4 Tiger, not 10.5 Leopard.. And that's the official Apple documentation for it..
The official PyObjC page is equally bad, http://pyobjc.sourceforge.net/
It's so bad it's baffling.. I'm considering learning Ruby primarily because the RubyCocoa stuff is so much better documented, and there's lots of decent tutorials (http://www.rubycocoa.com/ for example), and because of the Shoes GUI toolkit..
Even this badly-auto-translated Japanese tutorial is more useful than the rest of the documentation I could find..
All I want to do is create fairly simple Python applications with Cocoa GUI's..
Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None) does..?
|
[
"The main reason for the lack of documentation for PyObjC is that there is one developer (me), and as most developers I don't particularly like writing documentation. Because PyObjC is a side project for me I tend to focus on working on features and bugfixes, because that's more interesting for me.\nThe best way to improve the documentation is to volunteer to help on the pyobjc-dev mailing list. \nAs an aside: the pythonmac-sig mailinglist (see google) is an excelent resource for getting help on Python on MacOSX (not just PyObjC).\n",
"I agree that that tutorial is flawed, throwing random, unexplained code right in front of your eyes. It introduces concepts such as the autorelease pool and user defaults without explaining why you would want them (\"Autorelease pool for memory management\" is hardly an explanation).\nThat said…\n\nbasically all I want to do is write Cocoa applications without having to learn ObjC.\n\nI'm afraid that for the time being, you will need a basic grasp of ObjC in order to benefit from any language that uses Cocoa. PyObjC, RubyCocoa, Nu and others are niches at best, and all of them were developed by people intimately familiar with the ins and outs of ObjC and Cocoa.\nFor now, you will benefit the most if you realistically see those bridges as useful where scripting languages truly shine, rather than trying to build a whole application with them. While this has been done (with LimeChat, I'm using a RubyCocoa-written app right now), it is rare and likely will be for a while.\n",
"To be blunt:\nIf you want to be an effective Cocoa programmer, you must learn Objective-C. End of story.\nNeither Python or Ruby are a substitute for Objective-C via their respective bridges. You still have to understand the Objective-C APIs, the behaviors inherent to NSObject derived classes, and many other details of Cocoa.\nPyObjC and RubyCocoa are a great way to access Python or Ruby functionality from a Cocoa application, including building a Cocoa application mostly -- if not entirely -- in Python or Ruby. But success therein is founded upon a thorough understanding of Cocoa and the Objective-C APIs it is composed of.\n",
"Tom's and Martin's response are definitely true (in just about any open source project, you'll find that most contributors are particularly interested in, well, developing; not so much in semi-related matters such as documentation), but I don't think your particular question at the end would fit well inside PyObjC documentation.\nNSThread.detachNewThreadSelector_toTarget_withObject_(\"queryController\", self, None)\n\nNSThread is part of the Cocoa API, and as such documented over at Apple, including the particular method + detachNewThreadSelector:toTarget:withObject: (I'd link there, but apparently stackoverflow has bugs with parsing it). The CocoaDev wiki also has an article.\nI don't think it would be a good idea for PyObjC to attempt to document Cocoa, other than a few basic examples of how to use it from within Python. Explaining selectors is also likely outside the scope of PyObjC, as those, too, are a feature of Objective-C, not PyObjC specifically.\n",
"I stumbled across a good tutorial on PyObjC/Cocoa:\nhttp://lethain.com/entry/2008/aug/22/an-epic-introduction-to-pyobjc-and-cocoa/\n",
"\nAll I want to do is create fairly simple Python applications with Cocoa GUI's.. Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what NSThread.detachNewThreadSelector_toTarget_withObject_(\"queryController\", self, None) does..?\n[...]\nbasically all I want to do is write Cocoa applications without having to learn ObjC.\n\nAlthough I basically agree with Soeren's response, I'd take it even further:\nIt will be a long time, if ever, before you can use Cocoa without some understanding of Objective C. Cocoa isn't an abstraction built independently from Objective C, it is explicitly tied to it. You can see this in the example line of code you quoted above:\nNSThread.detachNewThreadSelector_toTarget_withObject_(\"queryController\", self, None) \n\nThis is the Python way of writing the Objective C line:\n[NSThread detachNewThreadSelector:@selector(queryController:) toTarget:self withObject:nil];\n\nNow, it's important to notice here that this line can be seen in two ways: (1) as a line of Objective C, or (2) as an invocation of the Cocoa frameworks. We see it as (1) by the syntax. We see it as (2) by recognizing that NSThread is a Cocoa framework which provides a set of handy features. In this case, this particular Cocoa framework is making it easy for us to have an object start doing something on a new thread.\nBut the kicker is this: The Cocoa framework here (NSThread) is providing us this handy service in a way that is explicitly tied to the language the framework has been written in. Namely, NSThread gave us a feature that explicitly refers to \"selectors\". Selectors are, in point of fact, the name for something fundamental about how Objective C works.\nSo there's the rub. Cocoa is fundamentally an Objective-C creation, and its creators have built it with Objective C in mind. I'm not claiming that it's impossible to translate the interface to the Cocoa features into a form more natural for other languages. It's just that as soon as you change the Cocoa framework to stop referring to \"selectors\", it's not really the Cocoa framework any more. It's a translated version. And once you start going down that road, I'm guessing things get really messy. You're trying to keep up with Apple as they update Cocoa, maybe you hit some parts of Cocoa that just don't translate well into the new language, whatever. So instead, things like PyObjC opt to expose Cocoa directly, in a way that has a very clear and simple correlation. As they say in the documentation:\n\nIn order to have a lossless and unambiguous translation between Objective-C messages and Python methods, the Python method name equivalent is simply the selector with colons replaced by underscores.\n\nSure, it's a bit ugly, and it does mean you need to know something about Objective-C, but that's because the alternative, if one truly exists, is not necessarily better.\n",
"I didn't know anything at all about Objective C or Cocoa (but plenty about Python), but I am now writing a rather complex application in PyObjc. How did I learn? I picked up Cocoa Programming for OSX and went through the whole book (a pretty quick process) using PyObjC. Just ignore anything about memory management and you'll pretty much be fine. The only caveat is that very occasionally you have to use a decorator like endSheetMethod (actually I think that's the only one I've hit):\[email protected]\ndef alertEnded_code_context_(self, alert, choice, context):\n pass\n\n",
"This answer isn't going to be very helpful but, as a developer I hate doing documentation. This being a opensource project, it's hard to find people to do documentation.\n",
"Tom says it all really. Lots of open source projects have dedicated developers and few who are interested in documenting. It isn't helped by the fact that goalposts can shift on a daily basis which means documentation not only has to be created, but maintained.\n"
] |
[
31,
21,
21,
7,
7,
5,
5,
3,
3
] |
[] |
[] |
[
"cocoa",
"macos",
"pyobjc",
"python"
] |
stackoverflow_0000014422_cocoa_macos_pyobjc_python.txt
|
Q:
How do I represent a void pointer in a PyObjC selector?
I'm wanting to use an NSOpenPanel for an application I'm designing. Here's what I have so far:
@objc.IBAction
def ShowOpenPanel_(self, sender):
self.panel = NSOpenPanel.openPanel()
self.panel.setCanChooseFiles_(False)
self.panel.setCanChooseDirectories_(True)
NSLog(u'Starting OpenPanel')
self.panel.beginForDirectory_file_types_modelessDelegate_didEndSelector_contextInfo_(
self.defaults.objectForKey_(u'projpath'),
objc.nil,
objc.nil,
self,
objc.selector(self.OpenPanelDidEnd_returnCode_contextInfo_,
signature='v:@ii'),
objc.nil)
NSLog(u'OpenPanel was started.')
def OpenPanelDidEnd_returnCode_contextInfo_(self, panel, returnCode, context):
NSLog('Panel ended.')
if (returnCode == NSOKButton):
NSLog(u'User selected OK')
path = self.panel.filenames()[0]
self.defaults.setObject_forKey_(path, u'projpath')
del self.panel
The main two lines I'm concerned about are:
objc.selector(self.OpenPanelDidEnd_returnCode_contextInfo_,
signature='v:@ii'),
objc.nil) #this is the argument that gets passed as the void pointer
The third argument is supposed to be a void pointer. Since I don't intend to use that data, I'd rather just leave it empty. I've tried making the signature 'v:@iv' and tried using objc.NULL and python's None, and just about every combination of all these things. What is the best way to handle this?
A:
I think you don't need to use objc.selector at all; try this instead:
@objc.IBAction
def ShowOpenPanel_(self, sender):
self.panel = NSOpenPanel.openPanel()
self.panel.setCanChooseFiles_(False)
self.panel.setCanChooseDirectories_(True)
NSLog(u'Starting OpenPanel')
self.panel.beginForDirectory_file_types_modelessDelegate_didEndSelector_contextInfo_(
self.defaults.objectForKey_(u'projpath'),
objc.nil,
objc.nil,
self,
self.OpenPanelDidEnd_returnCode_contextInfo_,
objc.nil)
NSLog(u'OpenPanel was started.')
I've also found that I need to decorate the end-of-panel function with PyObjCTools.AppHelper.endSheetMethod:
@PyObjCTools.AppHelper.endSheetMethod
def OpenPanelDidEnd_returnCode_contextInfo_(self, panel, returnCode, context):
NSLog('Panel ended.')
if (returnCode == NSOKButton):
NSLog(u'User selected OK')
path = self.panel.filenames()[0]
self.defaults.setObject_forKey_(path, u'projpath')
del self.panel
Here's how I would write what you have:
@objc.IBAction
def showOpenPanel_(self, sender):
panel = NSOpenPanel.openPanel()
panel.setCanChooseFiles_(False)
panel.setCanChooseDirectories_(True)
NSLog(u'Starting openPanel')
panel.beginForDirectory_file_types_modelessDelegate_didEndSelector_contextInfo_(
self.defaults.objectForKey_(u'projpath'), #forDirectory
None, #file
None, #types
self, #modelessDelegate
self.openPanelDidEnd_returnCode_contextInfo_, #didEndSelector
None) #contextInfo
NSLog(u'openPanel started')
@PyObjCTools.AppHelper.endSheetMethod
def openPanelDidEnd_returnCode_contextInfo_(self, panel, returnCode, context):
NSLog(u'Panel ended')
if returnCode != NSOKButton:
return
NSLog(u'User selected OK')
path = panel.filenames()[0]
self.defaults.setObject_forKey_(path, u'projpath')
Explanation of changes: I always use None rather than objc.nil and it hasn't messed me up yet; I don't think your panel needs to be a property of self since you get it in your return function; objc convention is to have the first letter of your function in lower case.
A:
The right way to open the panel is:
@objc.IBAction
def showOpenPanel_(self, sender):
panel = NSOpenPanel.openPanel()
panel.setCanChooseFiles_(False)
panel.setCanChooseDirectories_(True)
NSLog(u'Starting openPanel')
panel.beginForDirectory_file_types_modelessDelegate_didEndSelector_contextInfo_(
self.defaults.objectForKey_(u'projpath'), #forDirectory
None, #file
None, #types
self, #modelessDelegate
'openPanelDidEnd:returnCode:contextInfo:', #didEndSelector
None) #contextInfo
NSLog(u'openPanel started')
Dan's code works as well, but IMHO my variant is slightly clearer: you don't pass the actual method but the name of the method that should be called.
|
How do I represent a void pointer in a PyObjC selector?
|
I'm wanting to use an NSOpenPanel for an application I'm designing. Here's what I have so far:
@objc.IBAction
def ShowOpenPanel_(self, sender):
self.panel = NSOpenPanel.openPanel()
self.panel.setCanChooseFiles_(False)
self.panel.setCanChooseDirectories_(True)
NSLog(u'Starting OpenPanel')
self.panel.beginForDirectory_file_types_modelessDelegate_didEndSelector_contextInfo_(
self.defaults.objectForKey_(u'projpath'),
objc.nil,
objc.nil,
self,
objc.selector(self.OpenPanelDidEnd_returnCode_contextInfo_,
signature='v:@ii'),
objc.nil)
NSLog(u'OpenPanel was started.')
def OpenPanelDidEnd_returnCode_contextInfo_(self, panel, returnCode, context):
NSLog('Panel ended.')
if (returnCode == NSOKButton):
NSLog(u'User selected OK')
path = self.panel.filenames()[0]
self.defaults.setObject_forKey_(path, u'projpath')
del self.panel
The main two lines I'm concerned about are:
objc.selector(self.OpenPanelDidEnd_returnCode_contextInfo_,
signature='v:@ii'),
objc.nil) #this is the argument that gets passed as the void pointer
The third argument is supposed to be a void pointer. Since I don't intend to use that data, I'd rather just leave it empty. I've tried making the signature 'v:@iv' and tried using objc.NULL and python's None, and just about every combination of all these things. What is the best way to handle this?
|
[
"I think you don't need to use objc.selector at all; try this instead:\[email protected]\ndef ShowOpenPanel_(self, sender):\n self.panel = NSOpenPanel.openPanel()\n self.panel.setCanChooseFiles_(False)\n self.panel.setCanChooseDirectories_(True)\n NSLog(u'Starting OpenPanel')\n self.panel.beginForDirectory_file_types_modelessDelegate_didEndSelector_contextInfo_(\n self.defaults.objectForKey_(u'projpath'), \n objc.nil, \n objc.nil, \n self, \n self.OpenPanelDidEnd_returnCode_contextInfo_,\n objc.nil)\n NSLog(u'OpenPanel was started.')\n\nI've also found that I need to decorate the end-of-panel function with PyObjCTools.AppHelper.endSheetMethod:\[email protected]\ndef OpenPanelDidEnd_returnCode_contextInfo_(self, panel, returnCode, context):\n NSLog('Panel ended.')\n if (returnCode == NSOKButton):\n NSLog(u'User selected OK')\n path = self.panel.filenames()[0]\n self.defaults.setObject_forKey_(path, u'projpath')\n del self.panel\n\nHere's how I would write what you have:\[email protected]\ndef showOpenPanel_(self, sender):\n panel = NSOpenPanel.openPanel()\n panel.setCanChooseFiles_(False)\n panel.setCanChooseDirectories_(True)\n NSLog(u'Starting openPanel')\n panel.beginForDirectory_file_types_modelessDelegate_didEndSelector_contextInfo_(\n self.defaults.objectForKey_(u'projpath'), #forDirectory\n None, #file\n None, #types\n self, #modelessDelegate\n self.openPanelDidEnd_returnCode_contextInfo_, #didEndSelector\n None) #contextInfo\n NSLog(u'openPanel started')\n\[email protected]\ndef openPanelDidEnd_returnCode_contextInfo_(self, panel, returnCode, context):\n NSLog(u'Panel ended')\n if returnCode != NSOKButton:\n return\n NSLog(u'User selected OK')\n path = panel.filenames()[0]\n self.defaults.setObject_forKey_(path, u'projpath')\n\nExplanation of changes: I always use None rather than objc.nil and it hasn't messed me up yet; I don't think your panel needs to be a property of self since you get it in your return function; objc convention is to have the first letter of your function in lower case.\n",
"The right way to open the panel is:\[email protected]\ndef showOpenPanel_(self, sender):\n panel = NSOpenPanel.openPanel()\n panel.setCanChooseFiles_(False)\n panel.setCanChooseDirectories_(True)\n NSLog(u'Starting openPanel')\n panel.beginForDirectory_file_types_modelessDelegate_didEndSelector_contextInfo_(\n self.defaults.objectForKey_(u'projpath'), #forDirectory\n None, #file\n None, #types\n self, #modelessDelegate\n 'openPanelDidEnd:returnCode:contextInfo:', #didEndSelector\n None) #contextInfo\n NSLog(u'openPanel started')\n\nDan's code works as well, but IMHO my variant is slighly clearer: you don't pass the actual method but the name of the method that should be called.\n"
] |
[
1,
1
] |
[] |
[] |
[
"macos",
"objective_c",
"pyobjc",
"python",
"void_pointers"
] |
stackoverflow_0000845970_macos_objective_c_pyobjc_python_void_pointers.txt
|
Q:
Why can I not view my Google App Engine cron admin page?
When I go to http://localhost:8080/_ah/admin/cron, as stated in Google's docs, I get the following:
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 501, in __call__
handler.get(*groups)
File "C:\Program Files\Google\google_appengine\google\appengine\ext\admin\__init__.py", line 239, in get
schedule = groctimespecification.GrocTimeSpecification(entry.schedule)
File "C:\Program Files\Google\google_appengine\google\appengine\cron\groctimespecification.py", line 71, in GrocTimeSpecification
parser.period_string)
File "C:\Program Files\Google\google_appengine\google\appengine\cron\groctimespecification.py", line 122, in __init__
super(IntervalTimeSpecification, self).__init__(self)
TypeError: object.__init__() takes no parameters
I have the latest SDK, and it looks like my config files are correct.
A:
This is definitely a bug in Google App Engine. If you check groctimespecification.py, you'll see that IntervalTimeSpecification inherits from TimeSpecification, which in turn inherits directly from object and doesn't override its __init__ method.
So the __init__ of IntervalTimeSpecification is incorrect:
class IntervalTimeSpecification(TimeSpecification):
def __init__(self, interval, period):
super(IntervalTimeSpecification, self).__init__(self)
My guess is, someone converted an old-style parent class init call:
TimeSpecification.__init__(self)
to the current one, but forgot that with super, self is passed implicitly. The correct line should look like this:
super(IntervalTimeSpecification, self).__init__()
A:
Congratulations! You've found a bug. Can you file a bug on the public issue tracker, please? If you want to fix it for yourself immediately, delete the 'self' argument in the line referenced at the end of that stacktrace.
|
Why can I not view my Google App Engine cron admin page?
|
When I go to http://localhost:8080/_ah/admin/cron, as stated in Google's docs, I get the following:
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 501, in __call__
handler.get(*groups)
File "C:\Program Files\Google\google_appengine\google\appengine\ext\admin\__init__.py", line 239, in get
schedule = groctimespecification.GrocTimeSpecification(entry.schedule)
File "C:\Program Files\Google\google_appengine\google\appengine\cron\groctimespecification.py", line 71, in GrocTimeSpecification
parser.period_string)
File "C:\Program Files\Google\google_appengine\google\appengine\cron\groctimespecification.py", line 122, in __init__
super(IntervalTimeSpecification, self).__init__(self)
TypeError: object.__init__() takes no parameters
I have the latest SDK, and it looks like my config files are correct.
|
[
"This is definitely a bug in Google App Engine. If you check groctimespecification.py, you'll see that IntervalTimeSpecification inherits from TimeSpecification, which in turn inherits directly from object and doesn't override its __init__ method.\nSo the __init__ of IntervalTimeSpecification is incorrect:\nclass IntervalTimeSpecification(TimeSpecification):\n def __init__(self, interval, period):\n super(IntervalTimeSpecification, self).__init__(self)\n\nMy guess is, someone converted an old-style parent class init call:\nTimeSpecification.__init__(self)\n\nto the current one, but forgot that with super, self is passed implicitly. The correct line should look like this:\nsuper(IntervalTimeSpecification, self).__init__()\n\n",
"Congratulations! You've found a bug. Can you file a bug on the public issue tracker, please? If you want to fix it for yourself immediately, delete the 'self' argument in the line referenced at the end of that stacktrace.\n"
] |
[
4,
3
] |
[] |
[] |
[
"cron",
"google_app_engine",
"python",
"stack_trace"
] |
stackoverflow_0000902039_cron_google_app_engine_python_stack_trace.txt
|
Q:
python, regular expressions, named groups and "logical or" operator
In Python regular expressions, named and unnamed groups are both defined with '(' and ')'. This leads to a weird behavior. Regexp
"(?P<a>1)=(?P<b>2)"
used with text "1=2" will find named group "a" with value "1" and named group "b" with value "2". But if i want to use "logical or" operator and concatenate multiple rules, the following regexp:
"((?P<a>1)=(?P<b>2))|(?P<c>3)"
used with the same text "1=2" will find an unnamed group with value "1=2". I understand that the regexp engine treats the "(" and ")" that enclose groups "a" and "b" as an unnamed group and reports that it is found. But I don't want unnamed groups to be reported; I just want to use "|" in order to "glue" multiple regexps together, without creating any parasitic unnamed groups. Is there a way to do so in Python?
A:
Use (?:) to get rid of the unnamed group:
r"(?:(?P<a>1)=(?P<b>2))|(?P<c>3)"
From the documentation of re:
(?:...) A non-grouping version of
regular parentheses. Matches whatever
regular expression is inside the
parentheses, but the substring matched
by the group cannot be retrieved after
performing a match or referenced later
in the pattern.
By the way, the alternation operator | has very low precedence in order to make parentheses unnecessary in cases like yours. You can drop the extra parentheses in your regex and it will continue to work as expected:
r"(?P<a>1)=(?P<b>2)|(?P<c>3)"
|
python, regular expressions, named groups and "logical or" operator
|
In Python regular expressions, named and unnamed groups are both defined with '(' and ')'. This leads to a weird behavior. Regexp
"(?P<a>1)=(?P<b>2)"
used with text "1=2" will find named group "a" with value "1" and named group "b" with value "2". But if i want to use "logical or" operator and concatenate multiple rules, the following regexp:
"((?P<a>1)=(?P<b>2))|(?P<c>3)"
used with the same text "1=2" will find an unnamed group with value "1=2". I understand that the regexp engine treats the "(" and ")" that enclose groups "a" and "b" as an unnamed group and reports that it is found. But I don't want unnamed groups to be reported; I just want to use "|" in order to "glue" multiple regexps together, without creating any parasitic unnamed groups. Is there a way to do so in Python?
|
[
"Use (?:) to get rid of the unnamed group:\nr\"(?:(?P<a>1)=(?P<b>2))|(?P<c>3)\"\n\nFrom the documentation of re:\n\n(?:...) A non-grouping version of\n regular parentheses. Matches whatever\n regular expression is inside the\n parentheses, but the substring matched\n by the group cannot be retrieved after\n performing a match or referenced later\n in the pattern.\n\nBy the way, the alternation operator | has very low precedence in order to make parentheses unnecessary in cases like yours. You can drop the extra parentheses in your regex and it will continue to work as expected:\nr\"(?P<a>1)=(?P<b>2)|(?P<c>3)\"\n\n"
] |
[
15
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0000903562_python_regex.txt
|
Q:
Python - get static information
I have to get static information from one 'module' to another. I'm trying to write a logger with information about the place in the code from which we're logging.
For example, in some file:
LogObject.Log('Describe error', STATIC_INFORMATION)
Static information is class name, file name and function name.
I get it from this:
__file__
self.__class__.__name__
sys._getframe().f_code.co_name
But I don't want to write these variables during logging. Can I create some function and call it? For example:
LogObject.Log('Describe error', someFunction())
How can i use it for getting static information?
A:
I don't think "static" is the world you're looking for. If I understand you correctly, you want to write a function that will return the filename, class name and method name of the caller.
Basically, you should use sys._getframe(1) to access the previous frame, and work from there.
Example:
def codeinfo():
import sys
f = sys._getframe(1)
filename = f.f_code.co_filename
classname = ''
if 'self' in f.f_locals:
classname = f.f_locals['self'].__class__.__name__
funcname = f.f_code.co_name
return "filename: %s\nclass: %s\nfunc: %s" % (filename, classname, funcname)
Then from a method somewhere you can write
logger.info("Some message \n %s" % codeinfo())
A:
First, please use lower-case names for objects and methods. Only use UpperCase Names for Class definitions.
More importantly, you want a clever introspective function in every class, it appears.
class Loggable( object ):
def identification( self ):
return self.__class__.__module__, self.__class__.__name__, sys._getframe().f_code.co_name
class ARealClass( Loggable ):
def someFunction( self ):
logger.info( "Some Message from %r", self. identification() )
If all of your classes are subclasses of Loggable, you'll inherit this identification function in all classes.
|
Python - get static information
|
I have to get static information from one 'module' to another. I'm trying to write a logger with information about the place in the code from which we're logging.
For example, in some file:
LogObject.Log('Describe error', STATIC_INFORMATION)
Static information is class name, file name and function name.
I get it from this:
__file__
self.__class__.__name__
sys._getframe().f_code.co_name
But I don't want to write these variables during logging. Can I create some function and call it? For example:
LogObject.Log('Describe error', someFunction())
How can i use it for getting static information?
|
[
"I don't think \"static\" is the world you're looking for. If I understand you correctly, you want to write a function that will return the filename, class name and method name of the caller.\nBasically, you should use sys._getframe(1) to access the previous frame, and work from there.\nExample:\ndef codeinfo():\n import sys\n f = sys._getframe(1)\n\n filename = f.f_code.co_filename\n classname = ''\n\n if 'self' in f.f_locals:\n classname = f.f_locals['self'].__class__.__name__\n\n funcname = f.f_code.co_name\n\n return \"filename: %s\\nclass: %s\\nfunc: %s\" % (filename, classname, funcname)\n\nThen from a method somewhere you can write\nlogger.info(\"Some message \\n %s\" % codeinfo())\n\n",
"First, please use lower-case names for objects and methods. Only use UpperCase Names for Class definitions. \nMore importantly, you want a clever introspective function in every class, it appears. \nclass Loggable( object ):\n def identification( self ):\n return self.__class__.__module__, self.__class__.__name__, sys._getframe().f_code.co_name\n\nclass ARealClass( Loggable ):\n def someFunction( self ):\n logger.info( \"Some Message from %r\", self. identification() )\n\nIf all of your classes are subclasses of Loggable, you'll inherit this identification function in all classes.\n"
] |
[
3,
2
] |
[] |
[] |
[
"python",
"static",
"variables"
] |
stackoverflow_0000903497_python_static_variables.txt
|
Q:
is it possible to call python methods from a C program?
I remember seeing somewhere that you could call python methods from inside C using
#include "python.h"
But I can't seem to find the source for this or any examples.
How can I call python methods from inside a C program?
A:
Here's a doc item from the python site about extending C with python functionality
Here's the start of the documentation (where it refers to python.h) where you can extend Python with C functionality.
A:
Check out http://docs.python.org/c-api
|
is it possible to call python methods from a C program?
|
I remember seeing somewhere that you could call python methods from inside C using
#include "python.h"
But I can't seem to find the source for this or any examples.
How can I call python methods from inside a C program?
|
[
"Here's a doc item from the python site about extending C with python functionality\nHere's the start of the documentation (where it refers to python.h) where you can extend Python with C functionality.\n",
"Check out http://docs.python.org/c-api\n"
] |
[
5,
2
] |
[] |
[] |
[
"c",
"embedding",
"python"
] |
stackoverflow_0000903596_c_embedding_python.txt
|
Q:
Prevent splitting Window when using pythoncomplete in Vim
I'm using VIM with pythoncomplete. When I'm making a completion, the current window is split and calltips are shown in the upper pane. I hate that! Is there a way to prevent that behavior, or at least limit the size of the upper pane automatically?
A:
You need to do something like:
set completeopt-=preview
This will prevent the opening of the preview window.
|
Prevent splitting Window when using pythoncomplete in Vim
|
I'm using VIM with pythoncomplete. When I'm making a completion, the current window is split and calltips are shown in the upper pane. I hate that! Is there a way to prevent that behavior, or at least limit the size of the upper pane automatically?
|
[
"You need to do something like:\nset completeopt-=preview\n\nThis will prevent the opening of the preview window. \n"
] |
[
5
] |
[] |
[] |
[
"autocomplete",
"python",
"vim"
] |
stackoverflow_0000903847_autocomplete_python_vim.txt
|
Q:
Cannot access Python server running as Windows service
I have written a Python TCP/IP server for internal use, using win32serviceutil/py2exe to create a Windows service.
I installed it on a computer running Windows XP Pro SP3. However, I can't connect to it when it's running as a service. I can confirm that it's binding to the address/port, because I get a conflict when I try to bind to that address/port with another application. Further, I have checked the Windows Firewall settings and have added appropriate exceptions. If I run the server as a simple console application, everything works as expected. However, when I run it as a service, it doesn't work.
I vaguely remember running into this problem before, but for the life of me can't remember any of the details.
Suggestions, anyone?
A:
Possibly the program may be terminated just after initialization. Please check whether it is continuously listening to the requests.
netstat -an |find /i "listening"
And analyze the command line passed to the program. You may use procexp to do that.
A:
First of all, whenever you implement a Windows service, be sure to add proper logging.
My worker threads were terminating because of the exception, "The socket operation could not complete without blocking."
The solution was to simply call sock.setblocking(1) after accepting the connection.
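For reference, the fix amounts to something like this (sketch only; server_socket stands for whatever listening socket the service sets up):
conn, addr = server_socket.accept()
conn.setblocking(1)  # force blocking mode so later recv/send calls don't fail with "would block"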
A:
Check to see that the service is running under the Network Service account and not the Local System account. The latter doesn't have network access and is the default user to run services under. You can check this by going to the services app under administrative tools in the start menu and looking for your service. If you right-click the service you can go to properties and change the user that it is run under.
|
Cannot access Python server running as Windows service
|
I have written a Python TCP/IP server for internal use, using win32serviceutil/py2exe to create a Windows service.
I installed it on a computer running Windows XP Pro SP3. However, I can't connect to it when it's running as a service. I can confirm that it's binding to the address/port, because I get a conflict when I try to bind to that address/port with another application. Further, I have checked the Windows Firewall settings and have added appropriate exceptions. If I run the server as a simple console application, everything works as expected. However, when I run it as a service, it doesn't work.
I vaguely remember running into this problem before, but for the life of me can't remember any of the details.
Suggestions, anyone?
|
[
"Possibly the program may be terminated just after initialization. Please check whether it is continuously listening to the requests.\nnetstat -an |find /i \"listening\"\n\nAnd analyze the command line parsed to the programs. You may use procexp to do that.\n",
"First of all, whenever you implement a Windows service, be sure to add proper logging.\nMy worker threads were terminating because of the exception, \"The socket operation could not complete without blocking.\"\nThe solution was to simply call sock.setblocking(1) after accepting the connection.\n",
"Check to see that the service is running under the Nertwork Service account and not the Local System account. The later doesn't have network access and is the default user to run services under. You can check this by going to the services app under administrative tool in the start menu and looking for your service. If you right-click the service you can go to properties and change the user that it is run under.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"python",
"tcp",
"windows_services"
] |
stackoverflow_0000833062_python_tcp_windows_services.txt
|
Q:
Python: Sending a large dictionary to a server
I have an application that should communicate status information to a server. This information is effectively a large dictionary with string keys.
The server will run a web application based on Turbogears, so the server-side method called accepts an arbitrary number of keyword arguments.
In addition to the actual data, some data related to authentication (id, password..) should be transmitted. One approach would be to simply urlencode a large dictionary containing all this and send it in a request to the server.
urllib2.urlencode(dataPlusId)
But actually, the method doing the authentication and accepting the data set does not have to know much about the data. The data could be transmitted and accepted transparently and handed over to another method working with the data.
So my question is: What is the best way to transmit a large dictionary of data to a server in general? And, in this specific case, what is the best way to deal with authentication here?
A:
I agree with all the answers about avoiding pickle, if safety is a concern (it might not be if the sender gets authenticated before the data's unpickled -- but, when security's at issue, two levels of defense may be better than one); JSON is often of help in such cases (or, XML, if nothing else will do...!-).
Authentication should ideally be left to the webserver, as SpliFF recommends, and SSL (i.e. HTTPS) is generally good for that. If that's unfeasible, but it's feasible to let client and server share a "secret", then sending the serialized string in encrypted form may be best.
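A rough sketch of the JSON-over-HTTP(S) idea (not from the answer above; the URL is a placeholder, and on Python 2.5 the third-party simplejson package provides the same dumps/loads API that the stdlib json module gained in 2.6):
import urllib2
import simplejson as json   # plain "import json" on Python 2.6+

status = {"host": "worker-01", "jobs_done": 42}   # the big dictionary to report
req = urllib2.Request("https://example.com/report",   # hypothetical endpoint
                      data=json.dumps(status),
                      headers={"Content-Type": "application/json"})
response = urllib2.urlopen(req)
print response.read()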
A:
I think the best way is to encode your data in an appropriate transfer format (you should not use pickle, as it's not safe, but it can be binary) and transfer it as a multipart POST request.
What I do not know is whether you can make it work with repoze.who. If it does not support sign-in and the function call in one step, you'll perhaps have to verify the credentials yourself.
If you can wrap your data in xml you could also use XML-RPC.
A:
Why don't you serialize the dictionary to a file, and upload the file? This way, the server can read the object back into a dictionary.
A:
Do a POST of your python data (use binary as suggested in other answers) and handle security using your webserver. Apache and Microsoft servers can both do authentication using a wide variety of methods (SSL client certs, Password, System accounts, etc...)
Serialising/Deserialising to text or XML is probably overkill if you're just going to turn it back into a dictionary again.
A:
I'd personally use SimpleJSON at both ends and just post the "file" (it would really just be a stream) over as multipart data.
But that's me. There are other options.
A:
Have you tried using pickle on the data ?
|
Python: Sending a large dictionary to a server
|
I have an application that should communicate status information to a server. This information is effectively a large dictionary with string keys.
The server will run a web application based on Turbogears, so the server-side method called accepts an arbitrary number of keyword arguments.
In addition to the actual data, some data related to authentication (id, password..) should be transmitted. One approach would be to simply urlencode a large dictionary containing all this and send it in a request to the server.
urllib2.urlencode(dataPlusId)
But actually, the method doing the authentication and accepting the data set does not have to know much about the data. The data could be transmitted and accepted transparently and handed over to another method working with the data.
So my question is: What is the best way to transmit a large dictionary of data to a server in general? And, in this specific case, what is the best way to deal with authentication here?
|
[
"I agree with all the answers about avoiding pickle, if safety is a concern (it might not be if the sender gets authenticated before the data's unpickled -- but, when security's at issue, two levels of defense may be better than one); JSON is often of help in such cases (or, XML, if nothing else will do...!-).\nAuthentication should ideally be left to the webserver, as SpliFF recommends, and SSL (i.e. HTTPS) is generally good for that. If that's unfeasible, but it's feasible to let client and server share a \"secret\", then sending the serialized string in encrypted form may be best.\n",
"I think the best way is to encode your data in an appropriate transfer format (you should not use pickle, as it's not save, but it can be binary) and transfer it as a multipart post request\nWhat I do not know if you can make it work with repoze.who. If it does not support sign in and function call in one step, you'll perhaps have to verify the credentials yourself.\nIf you can wrap your data in xml you could also use XML-RPC.\n",
"Why don't you serialize the dictionary to a file, and upload the file? This way, the server can read the object back into a dictionary .\n",
"Do a POST of your python data (use binary as suggested in other answers) and handle security using your webserver. Apache and Microsoft servers can both do authentication using a wide variety of methods (SSL client certs, Password, System accounts, etc...)\nSerialising/Deserialising to text or XML is probably overkill if you're just going to turn it back to dictionary again).\n",
"I'd personally use SimpleJSON at both ends and just post the \"file\" (it would really just be a stream) over as multipart data.\nBut that's me. There are other options.\n",
"Have you tried using pickle on the data ?\n"
] |
[
4,
3,
2,
2,
2,
1
] |
[] |
[] |
[
"python",
"turbogears"
] |
stackoverflow_0000903885_python_turbogears.txt
|
Q:
How to tell when a function in another class has been called
I have two Python classes, call them "C1" and "C2". Inside C1 is a function named "F1" and inside C2 is a function named "F2". Is there a way to execute F2 each time F1 is run without making a direct call to F2 from within F1? Is there some other mechanism by which to know when a function from inside another class has been called?
I welcome any suggestions, but would like to know of some way to achieve this without making an instance of C2 inside C1.
A:
You can write a little helper decorator that will make the call for you. The advantage is that it's easy to tell who is going to call what by looking at the code. And you can add as many function calls as you want. It works like registering a callback function:
from functools import wraps
def oncall(call):
def helper(fun):
@wraps(fun)
def wrapper(*args, **kwargs):
result = fun(*args, **kwargs)
call()
return result
return wrapper
return helper
class c1:
@classmethod
def f1(cls):
print 'f1'
class c2:
@classmethod
@oncall(c1.f1)
def f2(cls):
print 'f2'
>>> c2.f2()
f2
f1
>>> c1.f1()
f1
A:
I believe that what you are trying to do would fit into the realm of Aspect Oriented Programming. However I have never used this methodology and don't even know if it can/has been implemented in Python.
Edit I just took a look at the link I provided and saw that there are 8 Python implementations mentioned. So the hard work has already been done for you :-)
A:
You can do aspect-oriented programming with function and method decorators since Python 2.2:
@decorator(decorator_args)
def functionToBeDecorated(function_args) :
pass
A:
It really depends on why you don't want to call F2 directly from within F1. You could always create a third class (C3) which encapsulates both C1 and C2. When F3 is called, it will call both F1 and F2. This is known as the Mediator pattern - http://en.wikipedia.org/wiki/Mediator_pattern
A:
Not knowing what it is you are trying to achieve, I would suggest taking a look at pydispatcher.
It allows you to implement the Observer pattern
Basically, you register F2 with the dispatcher so that it will be called when a specific 'signal' is emitted. Your F1 'emits a signal' that says "I've been called". The dispatcher then calls F2 (or any number of functions that have registered themselves with that particular signal).
It's actually simpler than it sounds, easy to use, and decouples your code (F1 does not need to know about F2).
(arhh.. I'm a new user and not allowed to include hyperlinks, but pydispatcher is easy to google for)
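A minimal sketch of that idea with PyDispatcher (the signal name is arbitrary, and C1/C2 are the classes from the question re-imagined for the example):
from pydispatch import dispatcher

F1_CALLED = 'f1-called'   # any hashable object can serve as the signal

class C2(object):
    def F2(self):
        print 'F2 runs because F1 was called'

class C1(object):
    def F1(self):
        print 'F1'
        dispatcher.send(signal=F1_CALLED, sender=self)

c2 = C2()
dispatcher.connect(c2.F2, signal=F1_CALLED, sender=dispatcher.Any)

C1().F1()   # prints 'F1' then 'F2 runs because F1 was called'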
A:
You catch all accesses to F1 with __getattr__ . This will allow you to do extra processing or return your own function in place of F1
class C1:
def __getattr__(self,name):
if name == 'F1': C2.F2()
return self[name]
I should warn you that this will call C2.F2 even if F1 is only being accessed (not run). It's rare but not impossible that F1 might simply be accessed for another purpose like f = myC1.F1 . To run F2 only on a call of F1 you need to expand this example to combine F2 with the returned function object. in other words:
def F1F2():
self.F1()
C2.F2()
return F1F2
|
How to tell when a function in another class has been called
|
I have two Python classes, call them "C1" and "C2". Inside C1 is a function named "F1" and inside C2 is a function named "F2". Is there a way to execute F2 each time F1 is run without making a direct call to F2 from within F1? Is there some other mechanism by which to know when a function from inside another class has been called?
I welcome any suggestions, but would like to know of some way to achieve this without making an instance of C2 inside C1.
|
[
"You can write a little helper decorator that will make the call for you. The advantage is that it's easy to tell who is going to call what by looking at the code. And you can add as many function calls as you want. It works like registering a callback function:\nfrom functools import wraps\n\ndef oncall(call):\n def helper(fun):\n @wraps(fun)\n def wrapper(*args, **kwargs):\n result = fun(*args, **kwargs)\n call()\n return result\n return wrapper\n return helper\n\nclass c1:\n @classmethod\n def f1(cls):\n print 'f1'\n\nclass c2:\n @classmethod\n @oncall(c1.f1)\n def f2(cls):\n print 'f2'\n\n>>> c2.f2()\nf2\nf1\n>>> c1.f1()\nf1\n\n",
"I believe that what you are trying to do would fit into the realm of Aspect Oriented Programming. However I have never used this methodology and don't even know if it can/has been implemented in Python.\nEdit I just took a look at the link I provided and saw that there are 8 Python implementations mentioned. So the hard work has already been done for you :-)\n",
"You can do aspect-oriented programming with function and method decorators since Python 2.2:\n@decorator(decorator_args)\ndef functionToBeDecorated(function_args) :\n pass\n\n",
"It really depends on why you don't want to call F2 directly from within F1. You could always create a third class (C3) which encapsulates both C1 and C2. When F3 is called, it will call both F1 and F2. This is known as the Mediator pattern - http://en.wikipedia.org/wiki/Mediator_pattern\n",
"Not knowing what is you are trying to achieve, i would suggest taking a look at pydispatcher.\nIt allows you to implement the Observer pattern \nBasically, you register F2 with the dispatcher so that it will be called when a specific 'signal' is emitted. Your F1 'emits a signal' that says \"I've been called\". The dispatcher then calls F2 (or any number of functions that have registered themselves with that particular signal).\nIts actually really simpler than it sounds, easy to use, and de-couples your code (F1 does not need to know about F2).\n(arhh.. I'm a new user and not allowed to include hyperlinks, but pydispatcher is easy to google for)\n",
"You catch all accesses to F1 with __getattr__ . This will allow you to do extra processing or return your own function in place of F1\nclass C1:\n def __getattr__(self,name):\n if name == 'F1': C2.F2()\n return self[name]\n\nI should warn you that this will call C2.F2 even if F1 is only being accessed (not run). It's rare but not impossible that F1 might simply be accessed for another purpose like f = myC1.F1 . To run F2 only on a call of F1 you need to expand this example to combine F2 with the returned function object. in other words:\ndef F1F2():\n self.F1()\n C2.F2()\nreturn F1F2\n\n"
] |
[
3,
2,
2,
2,
1,
0
] |
[] |
[] |
[
"class",
"python"
] |
stackoverflow_0000903818_class_python.txt
|
Q:
Unit testing with nose: tests at compile time?
Is it possible for the nose unit testing framework to perform tests during the compilation phase of a module?
In fact, I'd like to test something with the following structure:
x = 123
# [x is used here...]
def test_x():
assert (x == 123)
del x # Deleted because I don't want to clutter the module with unnecessary attributes
nosetests tells me that x is undefined, as it apparently runs test_x() after importing the module. Is there a way of having nose perform tests during the compilation phase while having the module free unnecessary resources after using them?
A:
A simple way to handle this would be to have a TESTING flag, and write:
if not TESTING:
del x
However, you won't really be properly testing your modules as the tests will be running under different circumstances to your code.
The proper answer is that you shouldn't really be bothering with manually cleaning up variables, unless you have actually had some major performance problems because of them. Read up on Premature Optimization, it's an important concept. Fix the problems you have, not the ones you maybe could have one day.
A:
According to nose's main developer Jason Pellerin, the nose unit testing framework cannot run tests during compilation. This is a potential annoyance if both the module "construction" and the test routines need to access a certain variable (which would be deleted in the absence of tests).
One option is to discourage the user from using any of these unnecessarily saved variables by prepending "__" to their name (this works also for variables used in class construction: they can be one of these "private" globals).
Another, perhaps cleaner option is to dedicate a module to the task: this module would contain variables that are shared by the module "itself" (i.e. without tests) and its tests (and that would not have to be shared were it not for the tests).
The problem with these options is that variables that could be deleted if there were no tests are instead kept in memory, just because it is better for the test code to use them. At least, with the above two options, the user should not be tempted to use these variables, nor should he feel the need to wonder what they are!
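A sketch of the "dedicated module" option (the file names are invented for the example):
# shared_constants.py -- values needed both at import time and by the tests
X = 123

# mymodule.py
from shared_constants import X
# ... build the module using X; no need to del it here ...

# test_mymodule.py
from shared_constants import X

def test_x():
    assert X == 123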
|
Unit testing with nose: tests at compile time?
|
Is it possible for the nose unit testing framework to perform tests during the compilation phase of a module?
In fact, I'd like to test something with the following structure:
x = 123
# [x is used here...]
def test_x():
assert (x == 123)
del x # Deleted because I don't want to clutter the module with unnecessary attributes
nosetests tells me that x is undefined, as it apparently runs test_x() after importing the module. Is there a way of having nose perform tests during the compilation phase while having the module free unnecessary resources after using them?
|
[
"A simple way to handle this would be to have a TESTING flag, and write:\nif not TESTING:\n del x\n\nHowever, you won't really be properly testing your modules as the tests will be running under different circumstances to your code.\nThe proper answer is that you shouldn't really be bothering with manually cleaning up variables, unless you have actually had some major performance problems because of them. Read up on Premature Optimization, it's an important concept. Fix the problems you have, not the ones you maybe could have one day.\n",
"According to nose's main developer Jason Pellerin, the nose unit testing framework cannot run tests during compilation. This is a potential annoyance if both the module \"construction\" and the test routines need to access a certain variable (which would be deleted in the absence of tests).\nOne option is to discourage the user from using any of these unnecessarily saved variables by prepending \"__\" to their name (this works also for variables used in class construction: they can be one of these \"private\" globals).\nAnother, perhaps cleaner option is to dedicate a module to the task: this module would contain variables that are shared by the module \"itself\" (i.e. without tests) and its tests (and that would not have to be shared were it not for the tests).\nThe problem with these option is that variables that could be deleted if there were no tests are instead kept in memory, just because it is better for the test code to use them. At least, with the above two options, the user should not be tempted to use these variables, nor should he feel the need to wonder what they are!\n"
] |
[
2,
2
] |
[] |
[] |
[
"nose",
"python",
"unit_testing"
] |
stackoverflow_0000892297_nose_python_unit_testing.txt
|
Q:
Python's 'with' statement versus 'with .. as'
Having just pulled my hair out over a difference, I'd like to know what the difference really is in Python 2.5.
I had two blocks of code (dbao.getConnection() returns a MySQLdb connection).
conn = dbao.getConnection()
with conn:
# Do stuff
And
with dbao.getConnection() as conn:
# Do stuff
I thought these would have the same effect but apparently not as the conn object of the latter version was a Cursor. Where did the cursor come from and is there a way to combine the variable initialization and with statement somehow?
A:
It may be a little confusing at first glance, but
with babby() as b:
...
is not equivalent to
b = babby()
with b:
...
To see why, here's how the context manager would be implemented:
class babby(object):
def __enter__(self):
return 'frigth'
def __exit__(self, type, value, tb):
pass
In the first case, the name b will be bound to whatever is returned from the __enter__ method of the context manager. This is often the context manager itself (for example for file objects), but it doesn't have to be; in this case it's the string 'frigth', and in your case it's the database cursor.
In the second case, b is the context manager object itself.
A:
In general terms, the value assigned by the as part of a with statement is going to be whatever gets returned by the __enter__ method of the context manager.
A:
The with statement is there to allow for example making sure that transaction is started and stopped correctly.
In case of database connections in python, I think the natural thing to do is to create a cursor at the beginning of the with statement and then commit or rollback the transaction at the end of it.
The two blocks you gave are same from the with statement point of view. You can add the as to the first one just as well and get the cursor.
You need to check how the with support is implemented in the object you use it with.
See http://docs.python.org/whatsnew/2.5.html#pep-343-the-with-statement
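As a purely illustrative sketch (the class names here are made up; MySQLdb's real implementation may differ), this is roughly how a connection whose __enter__ returns a cursor behaves, which is why the as clause binds a cursor rather than the connection:
class FakeCursor(object):
    def execute(self, sql):
        print "executing:", sql

class FakeConnection(object):
    """Hypothetical connection: __enter__ hands back a cursor, not the connection."""
    def cursor(self):
        return FakeCursor()

    def commit(self):
        print "commit"

    def rollback(self):
        print "rollback"

    def __enter__(self):
        # 'with conn as x' binds x to this return value -- here, a fresh cursor
        return self.cursor()

    def __exit__(self, exc_type, exc_value, tb):
        if exc_type is None:
            self.commit()
        else:
            self.rollback()
        return False

with FakeConnection() as cur:
    cur.execute("SELECT 1")   # cur is a FakeCursor, not the FakeConnection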
|
Python's 'with' statement versus 'with .. as'
|
Having just pulled my hair off because of a difference, I'd like to know what the difference really is in Python 2.5.
I had two blocks of code (dbao.getConnection() returns a MySQLdb connection).
conn = dbao.getConnection()
with conn:
# Do stuff
And
with dbao.getConnection() as conn:
# Do stuff
I thought these would have the same effect but apparently not as the conn object of the latter version was a Cursor. Where did the cursor come from and is there a way to combine the variable initialization and with statement somehow?
|
[
"It may be a little confusing at first glance, but \nwith babby() as b:\n ...\n\nis not equivalent to\nb = babby()\nwith b:\n ...\n\nTo see why, here's how the context manager would be implemented:\nclass babby(object):\n def __enter__(self):\n return 'frigth'\n\n def __exit__(self, type, value, tb):\n pass\n\nIn the first case, the name b will be bound to whatever is returned from the __enter__ method of the context manager. This is often the context manager itself (for example for file objects), but it doesn't have to be; in this case it's the string 'frigth', and in your case it's the database cursor.\nIn the second case, b is the context manager object itself.\n",
"In general terms, the value assigned by the as part of a with statement is going to be whatever gets returned by the __enter__ method of the context manager.\n",
"The with statement is there to allow for example making sure that transaction is started and stopped correctly.\nIn case of database connections in python, I think the natural thing to do is to create a cursor at the beginning of the with statement and then commit or rollback the transaction at the end of it.\nThe two blocks you gave are same from the with statement point of view. You can add the as to the first one just as well and get the cursor.\nYou need to check how the with support is implemented in the object you use it with.\nSee http://docs.python.org/whatsnew/2.5.html#pep-343-the-with-statement\n"
] |
[
36,
21,
1
] |
[] |
[] |
[
"python",
"syntax"
] |
stackoverflow_0000903557_python_syntax.txt
|
Q:
How to parse malformed HTML in python
I need to browse the DOM tree of a parsed HTML document.
I'm using uTidyLib before parsing the string with lxml
a = tidy.parseString(html_code, options)
dom = etree.fromstring(str(a))
Sometimes I get an error; it seems that tidylib is not able to repair malformed html.
How can I parse every HTML file without getting an error (parsing only some parts of files that cannot be repaired)?
A:
Beautiful Soup does a good job with invalid/broken HTML
>>> from BeautifulSoup import BeautifulSoup
>>> soup = BeautifulSoup("<htm@)($*><body><table <tr><td>hi</tr></td></body><html")
>>> print soup.prettify()
<htm>
<body>
<table>
<tr>
<td>
hi
</td>
</tr>
</table>
</body>
</htm>
A:
Since you are already using lxml, have you tried lxml's ElementSoup module?
If ElementSoup can't repair the HTML then you'll probably need to apply your own filters first that are based on your own observations of how the data is broken.
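As a hedged fallback sketch (the broken markup below is just an example), lxml's own lenient HTML parser can often cope where tidy gives up:
import lxml.html

broken = "<html><body><table <tr><td>hi</tr></td></body>"
doc = lxml.html.fromstring(broken)            # lxml.html recovers from broken markup
print lxml.html.tostring(doc, pretty_print=True)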
|
How to parse malformed HTML in python
|
I need to browse the DOM tree of a parsed HTML document.
I'm using uTidyLib before parsing the string with lxml
a = tidy.parseString(html_code, options)
dom = etree.fromstring(str(a))
sometimes I get an error, it seems that tidylib is not able to repair malformed html.
how can I parse every HTML file without getting an error (parsing only some parts of files that can not be repaired)?
|
[
"Beautiful Soup does a good job with invalid/broken HTML\n>>> from BeautifulSoup import BeautifulSoup\n>>> soup = BeautifulSoup(\"<htm@)($*><body><table <tr><td>hi</tr></td></body><html\")\n>>> print soup.prettify()\n<htm>\n <body>\n <table>\n <tr>\n <td>\n hi\n </td>\n </tr>\n </table>\n </body>\n</htm>\n\n",
"Since you are already using lxml, have you tried lxml's ElementSoup module?\nIf ElementSoup can't repair the HTML then you'll probably need to apply your own filters first that are based on your own observations of how the data is broken.\n"
] |
[
27,
13
] |
[] |
[] |
[
"html",
"lxml",
"python"
] |
stackoverflow_0000904644_html_lxml_python.txt
|
Q:
Problem with datetime module-Python
How come this works:
import datetime
now = datetime.datetime.now()
month = '%d' % now.month
print month
But this doesn't?
import datetime
now = datetime.datetime.now()
month = '%m' % now.month
print month
Thanks!
A:
%m is not a supported format character for the % operator. Here is the list of supported formatting characters for this operator.
%m is valid when you are using the strftime function to build a date string.
A:
'%d' is a format character that inserts a "signed integer decimal"; '%m' has no such meaning. The possible format characters are listed here.
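A short sketch of the distinction (the printed values of course depend on the current date):
import datetime

now = datetime.datetime.now()
print '%d' % now.month        # %d formats an integer with the % operator, e.g. '5'
print now.strftime('%m')      # %m is a strftime() code, e.g. '05' (zero-padded)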
|
Problem with datetime module-Python
|
How come this works:
import datetime
now = datetime.datetime.now()
month = '%d' % now.month
print month
But this doesn't?
import datetime
now = datetime.datetime.now()
month = '%m' % now.month
print month
Thanks!
|
[
"%m is not a supported format character for the % operator. Here is the list of supported formating characters for this operator\n%m is valid when your are using strftime function to build a date string\n",
"'%d' is a format character that insert a \"signed integer decimal\", '%m' has no such meaning. The possible format characters are listed here.\n"
] |
[
3,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000904852_python.txt
|
Q:
How to do this with datetime & getpage in Python?
Currently having some problems-
now = datetime.datetime.now()
month = now.strftime("%B")
site = wikipedia.getSite('en', 'wikiquote')
page = wikipedia.Page(site, u"Wikiquote:Quote_of_the_day:abc")
I need to get abc to change into the name of the month before it then tries to get the page, yet everything I try gives an error of some sort.
How could I do it?
Thanks!
A:
The page URL format is actually Wikiquote:Quote_of_the_day/Month. Try this:
page = wikipedia.Page(site, u"Wikiquote:Quote_of_the_day/%s" % month)
A:
Would this work?
page = wikipedia.Page(site, u"Wikiquote:Quote_of_the_day:" + month)
A:
Did you try this:
page = wikipedia.Page(site, u"Wikiquote:Quote_of_the_day:%s" % month )
|
How to do this with datetime & getpage in Python?
|
Currently having some problems-
now = datetime.datetime.now()
month = now.strftime("%B")
site = wikipedia.getSite('en', 'wikiquote')
page = wikipedia.Page(site, u"Wikiquote:Quote_of_the_day:abc")
I need to get abc to change into the name of the month before it then tries to get the page, yet everything I try it gives an error of some sort.
How could I do it?
Thanks!
|
[
"The page URL format is actually Wikiquote:Quote_of_the_day/Month. Try this:\npage = wikipedia.Page(site, u\"Wikiquote:Quote_of_the_day/%s\" % month)\n\n",
"Would this work?\npage = wikipedia.Page(site, u\"Wikiquote:Quote_of_the_day:\" + month)\n\n",
"Did you try this:\npage = wikipedia.Page(site, u\"Wikiquote:Quote_of_the_day:%s\" % month )\n\n"
] |
[
3,
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000904894_python.txt
|
Q:
Python - how to implement Bridge (or Adapter) design pattern?
I'm struggling with implementing the Bridge design pattern (or an alternative such as Adapter) in Python
I want to be able to write code like this to dump database schemas based on a supplied URL:
urls = ['sqlite://c:\\temp\\test.db', 'oracle://user:password@tns_name'];
for url in urls:
db = Database(url);
schema = db.schema()
I've got classes defined as
class Database():
def __init__(self, url):
self.db_type = string.split(self.url, "://")[0]
class Oracle():
def schema(self):
# Code to return Oracle schema
class SQLite():
def schema(self):
# Code to return SQLite schema
How can I "glue" these 3 classes together so I can get the first code block to execute correctly? I've Googled around, but must be having a thick day as it's just not coming together in my mind...
Thanks in advance
A:
Use a Factory pattern instead:
class Oracle(object):
...
class SQLite(object):
...
dbkind = dict(sqlite=SQLite, oracle=Oracle)
def Database(url):
    db_type, rest = url.split("://", 1)
return dbkind[db_type](rest)
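A quick usage sketch of the factory above, assuming Oracle and SQLite each accept the rest of the URL in their constructors and expose a schema() method:
urls = ['sqlite://c:\\temp\\test.db', 'oracle://user:password@tns_name']
for url in urls:
    db = Database(url)        # returns an Oracle or SQLite instance
    schema = db.schema()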
|
Python - how to implement Bridge (or Adapter) design pattern?
|
I'm struggling with implementing the Bridge design pattern (or an alternative such as Adapter) in Python
I want to be able to write code like this to dump database schemas based on a supplied URL:
urls = ['sqlite://c:\\temp\\test.db', 'oracle://user:password@tns_name'];
for url in urls:
db = Database(url);
schema = db.schema()
I've got classes defined as
class Database():
def __init__(self, url):
self.db_type = string.split(self.url, "://")[0]
class Oracle():
def schema(self):
# Code to return Oracle schema
class SQLite():
def schema(self):
# Code to return SQLite schema
How can I "glue" these 3 classes together so I can get the first code block to execute correctly? I've Googled around, but must be having a thick day as it's just not coming together in my mind...
Thanks in advance
|
[
"Use a Factory pattern instead:\nclass Oracle(object):\n ...\n\nclass SQLite(object):\n ...\n\ndbkind = dict(sqlite=SQLite, oracle=Oracle)\n\ndef Database(url):\n db_type, rest = string.split(self.url, \"://\", 1)\n return dbkind[db_type](rest)\n\n"
] |
[
26
] |
[] |
[] |
[
"design_patterns",
"python"
] |
stackoverflow_0000905289_design_patterns_python.txt
|
Q:
How can I draw automatic graphs using dot in Python on a Mac?
I am producing graphs in a Python program, and now I need to visualize them.
I am using Tkinter as GUI to visualize all the other data, and I would like to have a small subwindow inside with the graph of the data.
At the moment I have the data being represented in a .dot file. And then I keep graphviz open, which shows the graph. But this is of course suboptimal. I need to get the graph inside the tk window.
I thought about using graphviz from the command line, but I always run into the same well known bug:
Desktop ibook$ dot -Tpng -O 1.dot
dyld: lazy symbol binding failed: Symbol not found: _pixman_image_create_bits
Referenced from: /usr/local/lib/graphviz/libgvplugin_pango.5.dylib
Expected in: flat namespace
dyld: Symbol not found: _pixman_image_create_bits
Referenced from: /usr/local/lib/graphviz/libgvplugin_pango.5.dylib
Expected in: flat namespace
Trace/BPT trap
The bug seems to be well known in the Graphviz community:
http://www.graphviz.org/bugs/b1479.html
http://www.graphviz.org/bugs/b1488.html
http://www.graphviz.org/bugs/b1498.html
So since it seems that I cannot use the command line utility I was wondering if anyone knew a direct way to draw a dot graph in Python, without using the command line, or doing something that would incur the same error?
I am programming on a Mac Leopard, python 2.5.2
A:
I do not have a mac to test it on, but the NetworkX package includes methods to read .dot files and draw graphs using matplotlib. You can embed a matplotlib figure in Tk (example 1, example 2).
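A minimal sketch of that route, assuming a reasonably recent NetworkX with pydot available ('1.dot' is the file from the question):
import networkx as nx
import matplotlib.pyplot as plt

g = nx.drawing.nx_pydot.read_dot("1.dot")   # parse the existing .dot file
nx.draw(g, with_labels=True)                # lay out and render with matplotlib
plt.show()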
A:
Quick Google pulls up http://code.google.com/p/pydot/. I haven't tried it but it looks promising.
|
How can I draw automatic graphs using dot in Python on a Mac?
|
I am producing graphs in a Python program, and now I need to visualize them.
I am using Tkinter as GUI to visualize all the other data, and I would like to have a small subwindow inside with the graph of the data.
At the moment I have the data being represented in a .dot file. And then I keep graphviz open, which shows the graph. But this is of course suboptimal. I need to get the graph inside the tk window.
I thought about using graphviz from the command line, but I always run into the same well known bug:
Desktop ibook$ dot -Tpng -O 1.dot
dyld: lazy symbol binding failed: Symbol not found: _pixman_image_create_bits
Referenced from: /usr/local/lib/graphviz/libgvplugin_pango.5.dylib
Expected in: flat namespace
dyld: Symbol not found: _pixman_image_create_bits
Referenced from: /usr/local/lib/graphviz/libgvplugin_pango.5.dylib
Expected in: flat namespace
Trace/BPT trap
The bug seem to be well known in the Graphviz community:
http://www.graphviz.org/bugs/b1479.html
http://www.graphviz.org/bugs/b1488.html
http://www.graphviz.org/bugs/b1498.html
So since it seems that I cannot use the command line utility I was wondering if anyone knew a direct way to draw a dot graph in Python, without using the command line, or doing something that would incur the same error?
I am programming on a Mac Leopard, python 2.5.2
|
[
"I do not have a mac to test it on, but the NetworkX package includes methods to read .dot files and draw graphs using matplotlib. You can embed a matplotlib figure in Tk (example 1, example 2).\n",
"Quick Google pulls up http://code.google.com/p/pydot/. I haven't tried it but it looks promising.\n"
] |
[
2,
1
] |
[] |
[] |
[
"dot",
"dyld",
"graphviz",
"macos",
"python"
] |
stackoverflow_0000903582_dot_dyld_graphviz_macos_python.txt
|
Q:
Is there any library to find out urls of embedded flvs in a webpage?
I'm trying to write a script which can automatically download gameplay videos. The webpages look like dota.sgamer.com/Video/Detail/402 and www.wfbrood.com/movie/spl2009/movie_38214.html, they have flv player embedded in the flash plugin.
Is there any library to help me find out the exact flv urls? or any other ideas to get it?
Many thanks for your replies
A:
It looks like Flashticle (4th result on searching the cheese shop for "flash"), might be able to get the information you want, if it is there.
As to getting the file, you want to look at a html parser. I've heard good things about Beautiful Soup. Between that and urllib2 (part of the standard library), you should be able to download the swf file to work on.
Please note that I have never tried this, and am not familiar with flash at all (other than as an end-user, of course).
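As a rough, hedged sketch of that idea (the URL is one of the pages from the question, and the regular expression simply grabs anything that ends in .flv; real pages may hide the path inside flashvars instead):
import re
import urllib2

html = urllib2.urlopen("http://www.wfbrood.com/movie/spl2009/movie_38214.html").read()
flv_urls = re.findall(r'["\'=]([^"\'&\s]+\.flv)', html)
print flv_urls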
A:
If the embedded player makes use of some variable where the flv path is set, then you can download it; if not, I doubt you will find something to do it "automatically", since every site makes its own player and identifies the file by id, not by path, which makes it hard to know where the flv file is.
|
Is there any library to find out urls of embedded flvs in a webpage?
|
I'm trying to write a script which can automatically download gameplay videos. The webpages look like dota.sgamer.com/Video/Detail/402 and www.wfbrood.com/movie/spl2009/movie_38214.html, they have flv player embedded in the flash plugin.
Is there any library to help me find out the exact flv urls? or any other ideas to get it?
Many thanks for your replies
|
[
"It looks like Flashticle (4th result on searching the cheese shop for \"flash\"), might be able to get the information you want, if it is there.\nAs to getting the file, you want to look at a html parser. I've heard good things about Beautiful Soup. Between that and urllib2 (part of the standard library), you should be able to download the swf file to work on.\nPlease note that I have never tried this, and am not familiar with flash at all (other than as an end-user, of course).\n",
"if the embed player makes use of some variable where the flv path is set then you can download it, if not.. I doubt you find something to do it \"automaticly\" since every site make it's own player and identify the file by id not by path, which makes hard to know where the flv file is.\n"
] |
[
1,
0
] |
[] |
[] |
[
"download",
"flv",
"python"
] |
stackoverflow_0000905403_download_flv_python.txt
|
Q:
Storing Python scripts on a webserver
Following on from my previous question, if I have some hosting, how can I put a python script on there that I can then run from there? Do I need to do something special to run it/install something?
EDIT-Clarification-I would like to be able to upload the script which does stuff on the internet-no data is stored on my computer. I then need to schedule it to run once a day.
A:
You have to ensure your hoster system supports Python.
You can ask them about that.
To run the script once it is there, you can act in several ways, depending on what you want to do.
You can have your server-side language invoke it (i.e. from the backend of a web page), or, if you have shell access to the machine, you can invoke it manually.
Btw, very often hosting providers give a scheduling tool (i.e. an interface for crontab or at) via the hosting plan administration panel, which you could use to start your script.
First thing, anyway, you have to ask your hoster and check Python availability.
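For the once-a-day part, a typical crontab entry would look something like this (the interpreter and script paths are placeholders for whatever your host actually uses):
# run the script every day at 06:00
0 6 * * * /usr/bin/python /home/youruser/myscript.py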
A:
Usually python is already installed, but it depends on your hoster. Ask them.
|
Storing Python scripts on a webserver
|
Following on my previous question, if I have some hosting how can I put a python script on their that I can then run from there? Do I need to do something special to run it/install something?
EDIT-Clarification-I would like to be able to upload the script which does stuff on the internet-no data is stored on my computer. I then need to schedule it to run once a day.
|
[
"You have to ensure your hoster system supports Python.\nYou can ask them about that.\nTo run the script once it is there, you can act in several ways, depending on what you want to do.\nYou can have your server side language to invoke it (i.e. from the backend of a web page), or if you have a shell access to the machine you can invoke it manually.\nBtw, very often hosting providers give a scheduling tool (i.e. an interface for crontab or at) via the hosting plan administration panel, which you could use to start your script. \nFirst thing, anyway, you have to ask your hoster and check Python availability.\n",
"Usually python is already installed, but it depends on your hoster. Ask them.\n"
] |
[
1,
0
] |
[] |
[] |
[
"hosting",
"python"
] |
stackoverflow_0000905902_hosting_python.txt
|
Q:
Why such import is not allowed?
FILE: b.py
class B:
def __init__(self):
print "B"
import a
a = A()
FILE: a.py
class A(B): ###=> B is not defined
def __init__(self):
print "A"
When I try to execute b.py, it's said that B is not defined. Am I misunderstanding "import"?
Thanks a lot if you can point out the problem.
A:
Because python initializes class A in its own file. It is not like a C or PHP include where every imported module is essentially pasted into the original file.
You should put class B in the same file as class A to fix this problem. Or you can put class B in c.py and import it with "from c import B".
A:
The closest working thing to your code would be:
==== FILE: b.py ====
class B:
def __init__(self):
print "B"
import a
if __name__ == "__main__":
a = a.A()
==== FILE: a.py ====
import b
class A(b.B): ###=> B is not defined
def __init__(self):
print "A"
Notice the differences:
Files (modules) are namespaces, if you import "a" you refer to its class A as "a.A".
You need to import b in a.py if you want to use it.
You want to avoid having two modules that need to include each other, by either putting everything in the same module, or splitting things up in more modules. Also, it's better to have all your imports at the head of the file, which makes this kind of monkeying impossible.
|
Why such import is not allowed?
|
FILE: b.py
class B:
def __init__(self):
print "B"
import a
a = A()
FILE: a.py
class A(B): ###=> B is not defined
def __init__(self):
print "A"
When I try to execute b.py, it's said that B is not defined. Am I misunderstanding "import"?
Thanks a lot if you can pointer out the problem.
|
[
"Because python initializes class A in its own file. It is not like a C or PHP include where every imported module is essentially pasted into the original file.\nYou should put class B in the same file as class A to fix this problem. Or you can put class B in c.py and import it with \"from c import B\".\n",
"The closest working thing to your code would be:\n==== FILE: b.py ====\n\nclass B:\n def __init__(self):\n print \"B\"\n\nimport a\n\nif __name__ == \"__main__\":\n a = a.A()\n\n==== FILE: a.py ====\nimport b\n\nclass A(b.B): ###=> B is not defined\n def __init__(self):\n print \"A\"\n\nNotice the differences:\n\nFiles (modules) are namespaces, if you import \"a\" you refer to its class A as \"a.A\".\nYou need to import b in a.py if you want to use it.\n\nYou want to avoid having two modules that need to include each other, by either putting everything in the same module, or splitting things up in more modules. Also, it's better to have all your imports at the head of the file, which makes this kind of monkeying impossible.\n"
] |
[
5,
4
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000905848_python.txt
|
Q:
HttpResponseRedirect django + facebook
I have a form with 2 buttons. depending on the button click user is
taken to different url.
view function is :
friend_id = request.POST.get('selected_friend_id_list')
history = request.POST.get('statushistory')
if history:
print "dfgdfgdf"
return HttpResponseRedirect('../status/')
else:
return direct_to_template(request, 'friends_list.fbml',
extra_context={'fbuser': user,
'user_lastname':user_lastname,
'activemaintab':activemaintab,
'friends':friends,
'friend_list':friend_list})
for template :
<input type="submit" value="Calendar View" name="calendarview"/>
<input type="submit" value="Status History" name="statushistory"/>
</form>
so my problem is page is not redirecting to the url .
If I make HttpResponseRedirect('../') it gives me the correct page
but url is not changing.
The current page is "friendlist/status/",
so after submitting the form my url should be "friendlist/list/".
So HttpResponseRedirect('../list/') should work, but the url is not getting changed.
Any idea?
How can I fix this
Thanks
A:
"so my problem is page is not redirecting to the url . If I make HttpResponseRedirect('../') it gives me the correct page but url is not changing."
By "URL" I'm guessing you mean "The URL shown in the browser". It helps if your question is very precise.
First, you must provide an absolute URL. http://docs.djangoproject.com/en/dev/ref/request-response/#django.http.HttpResponseRedirect
It's quite clear from the standards (RFC 2616, section 14.30) that an absolute URL is required. Some browsers may tolerate a relative URL. Some don't.
Second, you should never use relative URL's anywhere in your programs.
You should be using reverse.
from django.core.urlresolvers import reverse
def myview(request):
theURL= reverse('path.to.viewFunction')
return HttpResponseRedirect(theURL)
A:
Why do you need to use relative urls? Can you not use absolute urls?
|
HttpResponseRedirect django + facebook
|
I have a form with 2 buttons. depending on the button click user is
taken to different url.
view function is :
friend_id = request.POST.get('selected_friend_id_list')
history = request.POST.get('statushistory')
if history:
print "dfgdfgdf"
return HttpResponseRedirect('../status/')
else:
return direct_to_template(request, 'friends_list.fbml',
extra_context={'fbuser': user,
'user_lastname':user_lastname,
'activemaintab':activemaintab,
'friends':friends,
'friend_list':friend_list})
for template :
<input type="submit" value="Calendar View" name="calendarview"/>
<input type="submit" value="Status History" name="statushistory"/>
</form
so my problem is page is not redirecting to the url .
If I make HttpResponseRedirect('../') it gives me the correct page
but url is not changing.
current page = "friendlist/ status/
so after submitting form my url should be frinedlist/list/
so this should work HttpResponseRedirect('../list/') but url is not getting changed.
Any idea?
How can I fix this
Thanks
|
[
"\"so my problem is page is not redirecting to the url . If I make HttpResponseRedirect('../') it gives me the correct page but url is not changing.\"\nBy \"URL\" I'm guessing you mean \"The URL shown in the browser\". It helps if your question is very precise.\nFirst, you must provide an absolute URL. http://docs.djangoproject.com/en/dev/ref/request-response/#django.http.HttpResponseRedirect\nIt's quite clear from the standards (RFC 2616, section 14.30) that an absolute URL is required. Some browsers may tolerate a relative URL. Some don't.\nSecond, you should never use relative URL's anywhere in your programs.\nYou should be using reverse.\nfrom django.core.urlresolvers import reverse\n\ndef myview(request):\n theURL= reverse('path.to.viewFunction')\n return HttpResponseRedirect(theURL)\n\n",
"Why do you need to use relative urls? Can you not use absolute urls?\n"
] |
[
2,
0
] |
[] |
[] |
[
"django",
"facebook",
"python"
] |
stackoverflow_0000905803_django_facebook_python.txt
|
Q:
Alternative to innerhtml that includes header?
I'm trying to extract data from the following page:
http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?param1=&param2=&param3=&param4=&param5=2009-04-22&param6=37#
Which, conveniently and inefficiently enough, includes all the data embedded as a csv file in the header, set as a variable called gs_csv.
How do I extract this? Document.body.innerhtml skips the header where the data is, what is the alternative that includes the header (or better yet, the value associated with gs_csv)?
(Sorry, new to all this, I've been searching through loads of documentation, and trying a lot of them, but nothing so far has worked).
Thanks to Sinan (this is mostly his solution transcribed into Python).
import win32com.client
import time
import os
import os.path
ie = win32com.client.Dispatch("InternetExplorer.Application")
ie.Visible = False
ie.Navigate("http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?param1=&param2=&param3=&param4=&param5=2009-04-22&param6=37#")
time.sleep(20)
webpage = ie.document.body.innerHTML
s1 = ie.document.scripts(1).text
s1 = s1[s1.find("gs_csv") + 8:-11]
scriptfilepath = r"c:\FO Share\bmreports\script.txt"
scriptfile = open(scriptfilepath, 'wb')
scriptfile.write(s1.replace('\\n', '\n'))  # turn literal "\n" escapes into real newlines
scriptfile.close()
ie.Quit()
A:
Untested: Did you try looking at what Document.scripts contains?
UPDATE:
For some reason, I am having immense difficulty getting this to work using the Windows Scripting Host (but then, I don't use it very often, apologies). Anyway, here is the Perl source that works:
use strict;
use warnings;
use Win32::OLE;
$Win32::OLE::Warn = 3;
my $ie = get_ie();
$ie->{Visible} = 1;
$ie->Navigate(
'http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?'
    .'param1=&param2=&param3=&param4=&param5=2009-04-22&param6=37#'
);
sleep 1 until is_ready( $ie );
my $scripts = $ie->Document->{scripts};
for my $script (in $scripts ) {
print $script->text;
}
sub is_ready { $_[0]->{ReadyState} == 4 }
sub get_ie {
Win32::OLE->new('InternetExplorer.Application',
sub { $_[0] and $_[0]->Quit },
);
}
__END__
C:\Temp> ie > output
output now contains everything within the script tags.
A:
fetch the source of that page using ajax, and parse the response text like XML using jquery. It should be simple enough to get the text of the first tag you encounter inside the <head>.
I'm out of touch with jquery, or I would have posted code examples.
EDIT: I assume you are talking about fetching the csv on the client side.
A:
If this is just a one off script then exctracting this csv data is as simple as this:
import urllib2
response = urllib2.urlopen('http://www.bmreports.com/foo?bar?')
html = response.read()
csv = html.split('gs_csv=')[1].split('</SCRIPT>')[0]
#process csv data here
A:
Thanks to Sinan (this is mostly his solution transcribed into Python).
import win32com.client
import time
import os
import os.path

ie = win32com.client.Dispatch("InternetExplorer.Application")
ie.Visible = False
ie.Navigate("http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?param1=&param2=&param3=&param4=&param5=2009-04-22&param6=37#")
time.sleep(20)
webpage = ie.document.body.innerHTML
s1 = ie.document.scripts(1).text
s1 = s1[s1.find("gs_csv") + 8:-11]
scriptfilepath = r"c:\FO Share\bmreports\script.txt"
scriptfile = open(scriptfilepath, 'wb')
scriptfile.write(s1.replace('\\n', '\n'))  # turn literal "\n" escapes into real newlines
scriptfile.close()
ie.Quit()
|
Alternative to innerhtml that includes header?
|
I'm trying to extract data from the following page:
http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?param1=¶m2=¶m3=¶m4=¶m5=2009-04-22¶m6=37#
Which, conveniently and inefficiently enough, includes all the data embedded as a csv file in the header, set as a variable called gs_csv.
How do I extract this? Document.body.innerhtml skips the header where the data is, what is the alternative that includes the header (or better yet, the value associated with gs_csv)?
(Sorry, new to all this, I've been searching through loads of documentation, and trying a lot of them, but nothing so far has worked).
Thanks to Sinan (this is mostly his solution transcribed into Python).
import win32com.client
import time
import os
import os.path
ie = Dispatch("InternetExplorer.Application")
ie.Visible=False
ie.Navigate("http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?param1=¶m2=¶m3=¶m4=¶m5=2009-04-22¶m6=37#")
time.sleep(20)
webpage=ie.document.body.innerHTML
s1=ie.document.scripts(1).text
s1=s1[s1.find("gs_csv")+8:-11]
scriptfilepath="c:\FO Share\bmreports\script.txt"
scriptfile = open(scriptfilepath, 'wb')
scriptfile.write(s1.replace('\n','\n'))
scriptfile.close()
ie.quit
|
[
"Untested: Did you try looking at what Document.scripts contains?\nUPDATE:\nFor some reason, I am having immense difficulty getting this to work using the Windows Scripting Host (but then, I don't use it very often, apologies). Anyway, here is the Perl source that works:\nuse strict;\nuse warnings;\n\nuse Win32::OLE;\n$Win32::OLE::Warn = 3;\n\nmy $ie = get_ie();\n\n$ie->{Visible} = 1;\n\n$ie->Navigate(\n 'http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?'\n .'param1=¶m2=¶m3=¶m4=¶m5=2009-04-22¶m6=37#'\n);\n\nsleep 1 until is_ready( $ie );\n\nmy $scripts = $ie->Document->{scripts};\n\nfor my $script (in $scripts ) {\n print $script->text;\n}\n\nsub is_ready { $_[0]->{ReadyState} == 4 }\n\nsub get_ie {\n Win32::OLE->new('InternetExplorer.Application', \n sub { $_[0] and $_[0]->Quit },\n );\n}\n\n__END__\n\nC:\\Temp> ie > output\n\noutput now contains everything within the script tags.\n",
"fetch the source of that page using ajax, and parse the response text like XML using jquery. It should be simple enought to get the text of the first tag you encounter inside the \nI'm out of touch with jquery, or I would have posted code examples.\nEDIT: I assume you are talking about fetching the csv on the client side.\n",
"If this is just a one off script then exctracting this csv data is as simple as this:\nimport urllib2\n\nresponse = urllib2.urlopen('http://www.bmreports.com/foo?bar?')\nhtml = response.read()\ncsv = data.split('gs_csv=')[1].split('</SCRIPT>')[0]\n\n#process csv data here\n\n",
"Thanks to Sinan (this is mostly his solution transcribed into Python).\nimport win32com.client \nimport time import os \nimport os.path\nie = Dispatch(\"InternetExplorer.Application\") ie.Visible=False \nie.Navigate(\"http://www.bmreports.com/servlet/com.logica.neta.bwp_PanBMDataServlet?param1=¶m2=¶m3=¶m4=¶m5=2009-04-22¶m6=37#\")\ntime.sleep(20)\nwebpage=ie.document.body.innerHTML\ns1=ie.document.scripts(1).text s1=s1[s1.find(\"gs_csv\")+8:-11]\nscriptfilepath=\"c:\\FO Share\\bmreports\\script.txt\" \nscriptfile = open(scriptfilepath, 'wb') \nscriptfile.write(s1.replace('\\n','\\n')) \nscriptfile.close()\nie.quit\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"com",
"css",
"dom",
"html",
"python"
] |
stackoverflow_0000906660_com_css_dom_html_python.txt
|
Q:
Python Twisted protocol unregistering?
I've came up with problem regarding unregistering protocols from reactor in twisted while application is running.
I use hardware modems connected to PC by USB and that's why this scenario is so important for my solution.
Has anyone an idea how to do it?
Greets,
Chris
A:
When you first call reactor.listen on your protocol factory, it returns an object that implements IListeningPort, see http://twistedmatrix.com/documents/8.2.0/api/twisted.internet.interfaces.IListeningPort.html -- just save that object somewhere and when you want to stop listening on that protocol factori, call that object's stopListening method.
I assume that reactor.listen on the protocol factory is what you implicitly mean by "registering" a protocol (which logically should be what you're trying to undo by "unregistering" it), if you mean something else please clarify exactly how you "register a protocol" and we'll work out how to undo that!-)
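A minimal sketch of that flow (the port number and factory are placeholders; the point is just to keep the returned port object around):
from twisted.internet import reactor, protocol

factory = protocol.ServerFactory()
factory.protocol = protocol.Protocol

port = reactor.listenTCP(8000, factory)   # 'port' provides IListeningPort

def unregister():
    port.stopListening()                  # stop accepting connections for this protocol

reactor.callLater(10, unregister)         # e.g. when the USB modem disappears
reactor.run()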
|
Python Twisted protocol unregistering?
|
I've came up with problem regarding unregistering protocols from reactor in twisted while application is running.
I use hardware modems connected to PC by USB and that's why this scenario is so important for my solution.
Has anyone an idea how to do it?
Greets,
Chris
|
[
"When you first call reactor.listen on your protocol factory, it returns an object that implements IListeningPort, see http://twistedmatrix.com/documents/8.2.0/api/twisted.internet.interfaces.IListeningPort.html -- just save that object somewhere and when you want to stop listening on that protocol factori, call that object's stopListening method.\nI assume that reactor.listen on the protocol factory is what you implicitly mean by \"registering\" a protocol (which logically should be what you're trying to undo by \"unregistering\" it), if you mean something else please clarify exactly how you \"register a protocol\" and we'll work out how to undo that!-)\n"
] |
[
6
] |
[] |
[] |
[
"protocols",
"python",
"twisted"
] |
stackoverflow_0000906496_protocols_python_twisted.txt
|
Q:
Making a Python script executable chmod755?
My hosting provider says my python script must be made to be executable(chmod755). What does this mean & how do I do it?
Cheers!
A:
Unix-like systems have "file modes" that say who can read/write/execute a file. The mode 755 means owner can read/write/execute, and everyone else can read/execute but not write. To make your Python script have this mode, you type
chmod 0755 script.py
You also need a shebang like
#!/usr/bin/python
on the very first line of the file to tell the operating system what kind of script it is.
A:
If you have ssh access to your web space, connect and issue
chmod 755 nameofyourscript.py
If you do not have ssh access but rather connect by FTP, check your FTP application to see whether it supports setting the permissions.
As to the meaning of 755:
first digit is user settings (yourself)
second digit is group settings
third digit is the rest of the system
The digits are constructed by adding the permission values. 1 = executable, 2 = writable and 4 = readable. I.e. 755 means that you yourself can read, write and execute the file and everybody else can read and execute it.
A:
It means that somebody (the user, a group or everybody) has the right to execute (or read or write) the script (or a file in general).
The permissions are expressed in different ways:
$ chmod +x file.py # makes it executable by anyone
$ chmod +w file.py # makes it writeable by anyone
$ chmod +r file.py # makes it readable by anyone
$ chmod u+x file.py # makes it executable for the owner (user) of the file
$ chmod g+x file.py # makes it executable for the group (of the file)
$ chmod o+x file.py # makes it executable for the others (everybody)
You can take away permissions in the same way, just substitute the + by a -
$ chmod o-x file.py # makes a file non-executable for the others (everybody)
$ ...
Octal numbers express the same in a different way.
4 is reading, 2 writing, 1 execution.
simple math:
read + execute = 5
read + write + execute = 7
execute + write = 3
...
packed all in one short and sweet command:
# 1st digit: user permissions
# 2nd digit: group permissions
# 3rd digit: 'other' permissions
# add the owner all perms.,
# the group and other only write and execution
$ chmod 755 file.py
A:
on the shell (such as bash), just type
chmod 755 file.py
to make it executable. You can use
ls -l file.py
to verify the execution permission is set (the "x" in "rwxr-xr-x")
A:
In addition to the other fine answers here, you should be aware that most FTP clients have a chmod command to allow you to set permissions on files at the server. You may not need this if permissions come across properly, but there's a good chance they do not.
|
Making a Python script executable chmod755?
|
My hosting provider says my python script must be made to be executable(chmod755). What does this mean & how do I do it?
Cheers!
|
[
"Unix-like systems have \"file modes\" that say who can read/write/execute a file. The mode 755 means owner can read/write/execute, and everyone else can read/execute but not write. To make your Python script have this mode, you type\nchmod 0755 script.py\n\nYou also need a shebang like\n#!/usr/bin/python\n\non the very first line of the file to tell the operating system what kind of script it is.\n",
"If you have ssh access to your web space, connect and issue\nchmod 755 nameofyourscript.py\n\nIf you do not have ssh access but rather connect by FTP, check your FTP application to see whether it supports setting the permissions.\nAs to the meaning of 755:\n\nfirst digit is user settings (yourself)\nsecond digit is group settings\nthird digit is the rest of the system\n\nThe digits are constructed by adding the permision values. 1 = executable, 2 = writable and 4 = readable. I.e. 755 means that you yourself can read, write and execute the file and everybody else can read and execute it.\n",
"It means, that somebody (the user, a group or everybody) has the right so execute (or read or write) the script (or a file in general). \nThe permissions are expressed in different ways:\n$ chmod +x file.py # makes it executable by anyone\n$ chmod +w file.py # makes it writeabel by anyone\n$ chmod +r file.py # makes it readably by anyone\n\n$ chmod u+x file.py # makes it executable for the owner (user) of the file\n$ chmod g+x file.py # makes it executable for the group (of the file)\n$ chmod o+x file.py # makes it executable for the others (everybody)\n\nYou can take away permissions in the same way, just substitute the + by a -\n$ chmod o-x file.py # makes a file non-executable for the others (everybody)\n$ ...\n\nOctal numbers express the same in a different way.\n4 is reading, 2 writing, 1 execution. \nsimple math:\nread + execute = 5\nread + write + execute = 7\nexecute + write = 3\n...\n\npacked all in one short and sweet command:\n# 1st digit: user permissions\n# 2nd digit: group permissions\n# 3rd digit: 'other' permissions\n\n# add the owner all perms., \n# the group and other only write and execution\n\n$ chmod 755 file.py\n\n",
"on the shell (such as bash), just type\nchmod 755 file.py\n\nto make it executable. You can use\nls -l file.py\n\nto verify the execution permission is set (the \"x\" in \"rwxr-xr-x\")\n",
"In addition to the other fine answers here, you should be aware that most FTP clients have a chmod command to allow you to set permissions on files at the server. You may not need this if permissions come across properly, but there's a good chance they do not.\n"
] |
[
5,
5,
1,
0,
0
] |
[] |
[] |
[
"hosting",
"python"
] |
stackoverflow_0000907579_hosting_python.txt
|
Q:
Gather all Python modules used into one folder?
I don't think this has been asked before-I have a folder that has lots of different .py files. The script I've made only uses some-but some call others & I don't know all the ones being used. Is there a program that will get everything needed to make that script run into one folder?
Cheers!
A:
# zipmod.py - make a zip archive consisting of Python modules and their dependencies as reported by modulefinder
# To use: cd to the directory containing your Python module tree and type
# $ python zipmod.py archive.zip mod1.py mod2.py ...
# Only modules in the current working directory and its subdirectories will be included.
# Written and tested on Mac OS X, but it should work on other platforms with minimal modifications.
import modulefinder
import os
import sys
import zipfile
def main(output, *mnames):
mf = modulefinder.ModuleFinder()
for mname in mnames:
mf.run_script(mname)
cwd = os.getcwd()
zf = zipfile.ZipFile(output, 'w')
for mod in mf.modules.itervalues():
if not mod.__file__:
continue
modfile = os.path.abspath(mod.__file__)
if os.path.commonprefix([cwd, modfile]) == cwd:
zf.write(modfile, os.path.relpath(modfile))
zf.close()
if __name__ == '__main__':
main(*sys.argv[1:])
A:
Use the modulefinder module in the standard library, see e.g. http://docs.python.org/library/modulefinder.html
A:
Freeze does pretty close to what you describe. It does an extra step of generating C files to create a stand-alone executable, but you could use the log output it produces to get the list of modules your script uses. From there it's a simple matter to copy them all into a directory to be zipped up
(or whatever).
|
Gather all Python modules used into one folder?
|
I don't think this has been asked before-I have a folder that has lots of different .py files. The script I've made only uses some-but some call others & I don't know all the ones being used. Is there a program that will get everything needed to make that script run into one folder?
Cheers!
|
[
"# zipmod.py - make a zip archive consisting of Python modules and their dependencies as reported by modulefinder\n# To use: cd to the directory containing your Python module tree and type\n# $ python zipmod.py archive.zip mod1.py mod2.py ...\n# Only modules in the current working directory and its subdirectories will be included.\n# Written and tested on Mac OS X, but it should work on other platforms with minimal modifications.\n\nimport modulefinder\nimport os\nimport sys\nimport zipfile\n\ndef main(output, *mnames):\n mf = modulefinder.ModuleFinder()\n for mname in mnames:\n mf.run_script(mname)\n cwd = os.getcwd()\n zf = zipfile.ZipFile(output, 'w')\n for mod in mf.modules.itervalues():\n if not mod.__file__:\n continue\n modfile = os.path.abspath(mod.__file__)\n if os.path.commonprefix([cwd, modfile]) == cwd:\n zf.write(modfile, os.path.relpath(modfile))\n zf.close()\n\nif __name__ == '__main__':\n main(*sys.argv[1:])\n\n",
"Use the modulefinder module in the standard library, see e.g. http://docs.python.org/library/modulefinder.html\n",
"Freeze does pretty close to what you describe. It does an extra step of generating C files to create a stand-alone executable, but you could use the log output it produces to get the list of modules your script uses. From there it's a simple matter to copy them all into a directory to be zipped up \n(or whatever).\n"
] |
[
6,
6,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000907660_python.txt
|
Q:
python-mysql : How to get interpolated query string?
In diagnosing SQL query problems, it would sometimes be useful to be able to see the query string after parameters are interpolated into it, using MySQLdb's safe interpolation.
Is there a way to get that information from either a MySQL exception object or from the connection object itself?
A:
Use mysql's own ability to log the queries and watch for them.
A:
Perhaps You could use the slow_query_log?
If you cannot turn on mysql's internal ability to log all queries, you need to write down all the queries yourself before you execute them... You can store them in your own log file, or in a table (or in some other system). If that is the case, and if I were you, I'd create a wrapper for the connection with the logging ability.
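A minimal sketch of such a wrapper (everything here is hypothetical glue code, not part of MySQLdb itself): it logs the query string and its parameters just before execution, which is usually enough for diagnosing what was sent:
import logging

logging.basicConfig(filename='queries.log', level=logging.INFO)

class LoggingCursor(object):
    """Wrap a DB-API cursor and log every query before executing it."""
    def __init__(self, cursor):
        self._cursor = cursor

    def execute(self, query, args=None):
        logging.info("query=%r args=%r", query, args)
        return self._cursor.execute(query, args)

    def __getattr__(self, name):
        # delegate everything else (fetchall, description, ...) to the real cursor
        return getattr(self._cursor, name)

# usage: cur = LoggingCursor(conn.cursor())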
|
python-mysql : How to get interpolated query string?
|
In diagnosing SQL query problems, it would sometimes be useful to be able to see the query string after parameters are interpolated into it, using MySQLdb's safe interpolation.
Is there a way to get that information from either a MySQL exception object or from the connection object itself?
|
[
"Use mysql's own ability to log the queries and watch for them.\n",
"Perhaps You could use the slow_query_log?\nIf You cannot turn on the mysql's internal ability to log all queries, You need to write down all the queries before You execute them... You can store them in an own log-file, or in a table (or in some other system). If that would be the case, and if I were You, I'd create an wrapper for the connection with the logging ability. \n"
] |
[
2,
0
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0000904042_mysql_python.txt
|
Q:
exec statement with/without prior compile
These weekend I've been tearing down to pieces Michele Simionato's decorator module, that builds signature-preserving decorators. At the heart of it all there is a dynamically generated function, which works something similar to this...
src = """def function(a,b,c) :\n return _caller_(a,b,c)\n"""
evaldict = {'_caller_' : _caller_}
code = compile(src, '<string>', 'single')
exec code in evaldict
new_func = evaldict['function']
I have found, fooling around with this code, that the compile step can be completely avoided and go for a single:
exec src in evaldict
Now, I'm sure there is a good reason for that additional step, but I haven't been able to find what the difference between both approaches is. Performance?
And since I'm asking, could something similar, i.e. define a new function and get a handle to it, be achieved with eval? I tried, but couldn't get that to work...
A:
There are a few differences that I see. Firstly, compile has slightly better semantics in the face of syntax errors than exec. I suspect that the real reason is that the definition of compile is very explicit with respect to the handling of new line characters where exec is a little less precise.
I was curious as to why compile and exec were being used in lieu of inner functions. I didn't know that compile/exec lets you control what globals are available. Very interesting.
A:
compile() allows you to control the code object created, its name and its source, while exec is not so flexible. It is also worth doing so that others, when reading your code, will learn they are separate steps and keep this in mind later, when they need to exec the same code more than once (compiling once and exec-ing multiple times would be faster). Writing your code to educate the next person who reads it is always a worthy influence on design choices.
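A small sketch of the "compile once, exec many times" point, written in the same Python 2 style as the question:
src = "def greet(name):\n    return 'hello ' + name\n"
code = compile(src, '<generated>', 'exec')   # parse/compile exactly once

for i in range(3):
    ns = {}
    exec code in ns                          # reuse the code object each time
    print ns['greet']('world %d' % i)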
|
exec statement with/without prior compile
|
These weekend I've been tearing down to pieces Michele Simionato's decorator module, that builds signature-preserving decorators. At the heart of it all there is a dynamically generated function, which works something similar to this...
src = """def function(a,b,c) :\n return _caller_(a,b,c)\n"""
evaldict = {'_caller_' : _caller_}
code = compile(src, '<string>', 'single')
exec code in evaldict
new_func = evaldict[function]
I have found, fooling around with this code, that the compile step can be completely avoided and go for a single:
exec src in evaldict
Now, I'm sure there is a good reason for that additional step, but I haven't been able to find what the difference between both approaches is. Performance?
And since I'm asking, could something similar, i.e. define a new function and get a handle to it, be achieved with eval? I tried, but couldn't get that to work...
|
[
"There are a few differences that I see. Firstly, compile has slightly better semantics in the face of syntax errors than exec. I suspect that the real reason is that the definition of compile is very explicit with respect to the handling of new line characters where exec is a little less precise.\nI was curious as to why compile and exec where being used in lieu of inner functions. I didn't know that compile/exec lets you control what globals are available. Very interesting.\n",
"compile() allows you to control the code object created and its name and source, while exec is not so flexible. it is also worth doing so that others, when reading your code, will learn they are separate steps and have this in mind later, when they need to exec the same code more than once (where compile() once, exec multiple times would be faster), and writing your code to educate the next who reads it is always a worthy influence on design choices.\n"
] |
[
2,
2
] |
[] |
[] |
[
"compilation",
"eval",
"exec",
"python"
] |
stackoverflow_0000906920_compilation_eval_exec_python.txt
|
Q:
A problem with downloading a file with Python
I try to automatically download a file by clicking on a link on the webpage.
After clicking on the link, I get the 'File Download' Window dialog with 'Open', 'Save' and 'Cancel' buttons. I would like to click the Save button.
I use watsup library in the following way:
from watsup.winGuiAuto import *
optDialog = findTopWindow(wantedText="File Download")
SaveButton = findControl(optDialog,wantedClass="Button", wantedText="Save")
clickButton(SaveButton)
For some reason it does not work. The interesting thing is that exactly the same
code works perfectly to click on 'Cancel' button, however it refuses to work with
'Save' or 'Open'.
Anybody knows what I should do?
Thank you very much,
Sasha
A:
Sasha,
It is highly likely that the file dialog you refer to (the Security Warning file download dialog) will NOT respond to windows messages in this manner, for security reasons. The dialog is specifically designed to respond only to a user physically clicking on the OK button with his mouse. I think you will find that the Run button will not work this way either.
A:
Try this:
from watsup.winGuiAuto import *
optDialog = findTopWindow(wantedText="File Download")
SaveButton = findControl(optDialog, wantedClass="Button", wantedText="Submit")
clickButton(SaveButton)
A:
It's possible that the save button is not always enabled. While it may look to your eye that it is, a program might see an initial state that you're missing. Check it's state and wait until it's enabled.
[EDIT] But it's possible that Robert is right and the dialog will just ignore you for security reasons. In this case, I suggest to use BeautifulSoup to parse the HTML, extract the URL and download the file in Python using the urllib2 module.
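A hedged sketch of that suggestion (the page URL and the .zip filter are placeholders; adapt the selection logic to whatever marks the link you actually want):
import urllib2
from BeautifulSoup import BeautifulSoup

html = urllib2.urlopen("http://example.com/page-with-link").read()
soup = BeautifulSoup(html)

for a in soup.findAll('a', href=True):
    href = a['href']
    if href.endswith('.zip'):                        # pick out the file you are after
        open('download.zip', 'wb').write(urllib2.urlopen(href).read())
        break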
A:
Sasha,
The code at this link is supposed to work. It uses ctypes instead of watsup.winGuiAuto, and relies on win32 calls. Here is the code:
from ctypes import *
user32 = windll.user32
EnumWindowsProc = WINFUNCTYPE(c_int, c_int, c_int)
def GetHandles(title, parent=None):
    'Returns handles to windows with matching titles'
    hwnds = []

    def EnumCB(hwnd, lparam, match=title.lower(), hwnds=hwnds):
        title = c_buffer(' ' * 256)
        user32.GetWindowTextA(hwnd, title, 255)
        if title.value.lower() == match:
            hwnds.append(hwnd)

    if parent is not None:
        user32.EnumChildWindows(parent, EnumWindowsProc(EnumCB), 0)
    else:
        user32.EnumWindows(EnumWindowsProc(EnumCB), 0)
    return hwnds
Here's an example of calling it to click the Ok button on any window that has
the title "Downloads properties" (most likely there are 0 or 1 such windows):
for handle in GetHandles('Downloads properties'):
    for childHandle in GetHandles('ok', handle):
        user32.SendMessageA(childHandle, 0x00F5, 0, 0)  # 0x00F5 = BM_CLICK
|
A problem with downloading a file with Python
|
I try to automatically download a file by clicking on a link on the webpage.
After clicking on the link, I get the 'File Download' Window dialog with 'Open', 'Save' and 'Cancel' buttons. I would like to click the Save button.
I use watsup library in the following way:
from watsup.winGuiAuto import *
optDialog = findTopWindow(wantedText="File Download")
SaveButton = findControl(optDialog,wantedClass="Button", wantedText="Save")
clickButton(SaveButton)
For some reason it does not work. The interesting thing is that exactly the same
code works perfectly to click on 'Cancel' button, however it refuses to work with
'Save' or 'Open'.
Anybody knows what I should do?
Thank you very much,
Sasha
|
[
"Sasha,\nIt is highly likely that the file dialog you refer to (the Security Warning file download dialog) will NOT respond to windows messages in this manner, for security reasons. The dialog is specifically designed to respond only to a user physically clicking on the OK button with his mouse. I think you will find that the Run button will not work this way either.\n",
"Try this:\nfrom watsup.winGuiAuto import *\noptDialog = findTopWindow(wantedText=\"File Download\")\nSaveButton = findControl(optDialog, wantedClass=\"Button\", wantedText=\"Submit\")\nclickButton(SaveButton) \n\n",
"It's possible that the save button is not always enabled. While it may look to your eye that it is, a program might see an initial state that you're missing. Check it's state and wait until it's enabled.\n[EDIT] But it's possible that Robert is right and the dialog will just ignore you for security reasons. In this case, I suggest to use BeautifulSoup to parse the HTML, extract the URL and download the file in Python using the urllib2 module.\n",
"Sasha,\nThe code at this link is supposed to work. It uses ctypes instead of watsup.winGuiAuto, and relies on win32 calls. Here is the code:\nfrom ctypes import *\nuser32 = windll.user32\n\nEnumWindowsProc = WINFUNCTYPE(c_int, c_int, c_int)\n\ndef GetHandles(title, parent=None):\n'Returns handles to windows with matching titles'\nhwnds = []\ndef EnumCB(hwnd, lparam, match=title.lower(), hwnds=hwnds):\ntitle = c_buffer(' ' * 256)\nuser32.GetWindowTextA(hwnd, title, 255)\nif title.value.lower() == match:\nhwnds.append(hwnd)\n\nif parent is not None:\nuser32.EnumChildWindows(parent, EnumWindowsProc(EnumCB), 0)\nelse:\nuser32.EnumWindows(EnumWindowsProc(EnumCB), 0)\nreturn hwnds\n\nHere's an example of calling it to click the Ok button on any window that has\nthe title \"Downloads properties\" (most likely there are 0 or 1 such windows):\nfor handle in GetHandles('Downloads properties'):\nfor childHandle in GetHandles('ok', handle):\nuser32.SendMessageA(childHandle, 0x00F5, 0, 0) # 0x00F5 = BM_CLICK\n\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"download",
"file",
"python",
"user_interface"
] |
stackoverflow_0000904555_download_file_python_user_interface.txt
|
Q:
Gather all Python modules used into one folder?
Possible Duplicate:
Gather all Python modules used into one folder?
I don't think this has been asked before-I have a folder that has lots of different .py files. The script I've made only uses some-but some call others & I don't know all the ones being used. Is there a program that will get everything needed to make that script run into one folder?
Cheers!
A:
Since Python is not statically linked language, this task would be rather a challenging one. Especially if some of your code uses eval(...) or exec(...).
If your script is not very big, I would just move it out, make sure that your python.exe does not load modules from that directory and would run the script and add missing modules until it works.
If you have multiple scripts like this, then this manual work is not really the way to go. But in this case having lots of different .py files in a directory is also not a good deployment technique, and you should think about packaging them into installable modules and installing them into your python site-packages.
Still, you may use the snakefood package to find out the dependencies (it has already been discussed here). Again, it just cannot be 100% accurate, but it should give you an easy start.
A:
you should be able to extract the needed information from a so called call graph
See for example
http://pycallgraph.slowchop.com/ or
http://blog.prashanthellina.com/2007/11/14/generating-call-graphs-for-understanding-and-refactoring-python-code/
Also, py2exe converts a python script into an executable and in this process it gathers all used modules. I think py2exe is cross platform
A:
I'd try 2½ solutions, one elaborate and 1½ quick-and-dirty:
elaborate: a custom import hook, logging all imports (a minimal sketch follows after this list)
quick and dirty, part a: os.utime the *.py[co]? (re notation, not glob) files to having access times of yesterday, then run the program and collect all recent access times. Prerequisite: a filesystem that marks access times (by itself and by its mount options).
quick and dirty, part b: remove all *.py[co] files (same in glob and re notation), run the program, see which have been created. Prerequisite: user should have write access to the folder.
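For the first ("elaborate") option, a minimal Python 2 sketch: wrap the built-in __import__ so every module imported while the script runs gets logged ('myscript' is a placeholder for the script under investigation):
import __builtin__

_real_import = __builtin__.__import__
_seen = set()

def _logging_import(name, *args, **kwargs):
    module = _real_import(name, *args, **kwargs)
    if name not in _seen:
        _seen.add(name)
        print "imported:", name, getattr(module, '__file__', '<built-in>')
    return module

__builtin__.__import__ = _logging_import

import myscript   # placeholder: run the script whose dependencies you want
This only records what actually gets imported on that particular run, so code paths you don't exercise will be missed.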
|
Gather all Python modules used into one folder?
|
Possible Duplicate:
Gather all Python modules used into one folder?
I don't think this has been asked before-I have a folder that has lots of different .py files. The script I've made only uses some-but some call others & I don't know all the ones being used. Is there a program that will get everything needed to make that script run into one folder?
Cheers!
|
[
"Since Python is not statically linked language, this task would be rather a challenging one. Especially if some of your code uses eval(...) or exec(...).\nIf your script is not very big, I would just move it out, make sure that your python.exe does not load modules from that directory and would run the script and add missing modules until it works.\nI you have multiple scripts like this, then this manual work is not really the way to go. But in this case also having lots of different .py files in a directory is not a good deployment technique and you should think about packaging them into installable modules and install into your python site-packages.\nStill you may use snakefood package to find our the dependencies (has already been discussed here). Again, it just cannot be 100% accurate, but should give you an easy start.\n",
"you should be able to extract the needed information from a so called call graph\nSee for example\n\nhttp://pycallgraph.slowchop.com/ or\nhttp://blog.prashanthellina.com/2007/11/14/generating-call-graphs-for-understanding-and-refactoring-python-code/\n\nAlso, py2exe converts a python call into an executable and in this process it gathers all used modules. I think py2exe is cross platform\n",
"I'd try 2½ solutions, one elaborate and 1½ quick-and-dirty:\n\nelaborate: a custom import hook, logging all imports\nquick and dirty, part a: os.utime the *.py[co]? (re notation, not glob) files to having access times of yesterday, then run the program and collect all recent access times. Prerequisite: a filesystem that marks access times (by itself and by its mount options).\nquick and dirty, part b: remove all *.py[co] files (same in glob and re notation), run the program, see which have been created. Prerequisite: user should have write access to the folder.\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000907972_python.txt
|
Q:
Using python to develop web application
I have been doing some work in python, but that was all for stand alone applications. I'm curious to know whether any offshoot of python supports web development?
Would some one also suggest a good tutorial or a website from where I can pick up some of the basics of web development using python?
A:
Now that everyone has said Django, I can add my two cents: I would argue that you might learn more by looking at the different components first, before using Django. For web development with Python, you often want 3 components:
Something that takes care of the HTTP stuff (e.g. CherryPy)
A templating language to create your web pages. Mako is very pythonic and works with CherryPy.
If you get your data from a database, an ORM comes in handy. SQLAlchemy would be an example.
All the links above have good tutorials. For many real-world use-cases, Django will be a better solution than such a stack as it seamlessly integrates this functionality (and more). And if you need a CMS, Django is your best bet short of Zope. Nevertheless, to get a good grasp of what's going on, a stack of loosely coupled programs might be better. Django hides a lot of the details.
A:
Edited 3 years later: Don't use mod_python, use mod_wsgi. Flask and Werkzeug are good frameworks too. Needing to know what's going on is useful, but it isn't a requirement. That would be stupid.
Don't look up Django until you have a good grasp of what Django is doing on your behalf. Write some basic apps using mod_python and its request object. I just started learning Python for web-development using mod_python and it has been great.
mod_python also uses a dispatcher in site-packages/mod_python/publisher.py. Have a gander through this to see how requests can be handled in a simple-ish way.
You may need to add a bit of config to your Apache config file to get mod_python up and running but the mod_python site explains it well.
<Directory /path/to/python/files>
    AddHandler mod_python .py
    PythonHandler mod_python.publisher
    PythonDebug On
</Directory>
And you are away!
use (as a stupidly basic example):
def foo(req):
    req.write("Hello World")
in /path/to/python/files/bar.py assuming /path/to is your site root.
And then you can do
http://www.mysite.com/python/files/bar/foo
to see "Hello World". Also, something that tripped me up is the dispatcher uses a lame method to work out the content-type, so to force HTML use:
req.content_type = 'text/html'
Good Luck
After you have a good idea of how Python interacts with mod_python and Apache, then use a framework that does all the boring stuff for you. Up to you though, just my recommendation
A:
If you really don't want to delve into the frameworks - and you should, I heartily recommend Django or Pylons - there's still no need to go down the road of CGI. This is a totally out-of-date technology, not to mention slow and inefficient.
There is a standard way of building Python web applications, and it's called WSGI. If you want to roll your own web app from scratch, this is absolutely the way to go.
That said, if you're just starting out, really you should go with one of the frameworks.
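To make "roll your own" concrete, a minimal sketch of a bare WSGI application served with the standard library's wsgiref server (no framework involved):
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # environ holds the request data; start_response sends status and headers
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from a bare WSGI app\n']

if __name__ == '__main__':
    httpd = make_server('localhost', 8000, application)
    httpd.serve_forever()
Every major Python framework ultimately exposes or consumes an application callable of this shape.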
A:
Python Wiki: Web Frameworks for Python
If you decide to use Django, the official tutorial is an excellent place to start. The Django Book is also free.
A:
There are a couple of choices for web development. From my experience, your choice will again be dependent on your application. I used django and web.py in production and I am about to deploy an app based on pylons.
Django hides a lot of choices (comes with its ORM and templating). The documentation is extensive and well-written. There are many reusable apps available for django, but you will likely have to invest a little time in integrating them seamlessly. One thing mentioned at djangocon 08 was the fact that there is cool stuff in django which can't be easily accessed in non-django projects.
web.py impressed me by its raw simplicity. Before I knew it, I wrote a small app (78 lines quasi-wiki) in it.
pylons feels like in the middle of both. I can use sqlalchemy and jinja, all in all a pleasant experience for the start.
A:
Lookup Django.
A:
Python can be used for web development, but there isn't a special language extension or anything in the language that will handle all the HTML generation or that works like PHP.
It's pretty much run through some sort of interpreter on a web server (CGI, mod_python, etc.).
I would recommend looking into Python Web Application Frameworks or how to write Python CGI scripts.
A:
There are quite a few web frameworks for python out there, but the only one I've used is Django, and I really like it.
If you've got a few hours, do the tutorial, I promise you, you'll enjoy it :)
A:
As others have mentioned, one of the more prominent python "offshoots" as you call them would be Django. It is a rather powerful framework that allows you to quickly and securely build web applications. The first place to look would be their overview which gives some insight as to what Django does as a framework.
Going through their tutorial taught me a lot about the prominent Model-View-Controller design pattern and how it may be used in a web-development context. I found it a great way to start writing an application that worked and to learn by improving it.
|
Using python to develop web application
|
I have been doing some work in python, but that was all for stand alone applications. I'm curious to know whether any offshoot of python supports web development?
Would some one also suggest a good tutorial or a website from where I can pick up some of the basics of web development using python?
|
[
"Now that everyone has said Django, I can add my two cents: I would argue that you might learn more by looking at the different components first, before using Django. For web development with Python, you often want 3 components:\n\nSomething that takes care\nof the HTTP stuff (e.g.\nCherryPy)\nA templating language\nto create your web pages.\nMako\nis very pythonic and works with Cherrpy.\nIf you get your data from a\ndatabase, an ORM comes in handy.\nSQLAlchemy\nwould be an example.\n\nAll the links above have good tutorials. For many real-world use-cases, Django will be a better solution than such a stack as it seamlessly integrates this functionality (and more). And if you need a CMS, Django is your best bet short of Zope. Nevertheless, to get a good grasp of what's going on, a stack of loosely coupled programs might be better. Django hides a lot of the details.\n",
"Edited 3 years later: Don't use mod_python, use mod_wsgi. Flask and Werkzeug are good frameworks too. Needing to know what's going on is useful, but it isn't a requirement. That would be stupid. \nDon't lookup Django until you have a good grasp of what Django is doing on your behalf. for you. Write some basic apps using mod_python and it's request object. I just started learning Python for web-development using mod_python and it has been great.\nmod_python also uses a dispatcher in site-packages/mod_python/publisher.py. Have a ganders through this to see how requests can be handled in a simple-ish way.\nYou may need to add a bit of config to your Apache config file to get mod_python up and running but the mod_python site explains it well.\n<Directory /path/to/python/files>\n AddHandler mod_python .py\n PythonHandler mod_python.publisher\n PythonDebug On\n</Directory>\n\nAnd you are away!\nuse (as a stupidly basic example):\ndef foo(req):\n req.write(\"Hello World\")\n\nin /path/to/python/files/bar.py assuming /path/to is your site root.\nAnd then you can do \nhttp://www.mysite.com/python/files/bar/foo\n\nto see \"Hello World\". Also, something that tripped me up is the dispatcher uses a lame method to work out the content-type, so to force HTML use:\nreq.content_type = 'text/html'\n\nGood Luck\nAfter you have a good idea of how Python interacts with mod_python and Apache, then use a framework that does all the boring stuff for you. Up to you though, just my recommendation\n",
"If you really don't want to delve into the frameworks - and you should, I heartily recommend Django or Pylons - there's still need to go down the road of CGI. This is a totally out-of-date technology, not to mention slow and inefficient.\nThere is a standard way of building Python web applications, and it's called WSGI. If you want to roll your own web app from scratch, this is absolutely the way to go.\nThat said, if you're just starting out, really you should go with one of the frameworks.\n",
"Python Wiki: Web Frameworks for Python\nIf you decide to use Django, the official tutorial is an excellent place to start. The Django Book is also free.\n",
"There are a couple of choices for web development. From my experience, your choice will again be dependent on your application. I used django and web.py in production and I am about to deploy an app based on pylons.\nDjango hides a lot of choices (comes with its ORM and templating). The documentation is extensive and well-written. There are many reusable app available for django, but you will likely to invest a little time in integrating them seamlessly. One thing mentioned on djangocon 08 was the fact, that there is cool stuff in django, which can't be easily\naccessed in non-django projects.\nweb.py impressed me by its raw simplicity. Before I knew it, I wrote a small app (78 lines quasi-wiki) in it. \npylons feels like in the middle of both. I can use sqlalchemy and jinja, all in all a pleasant experience for the start.\n",
"Lookup Django.\n",
"Python can be used for web development, but there isn't a special language extension or anything in the language that will handle all the HTML generation or that works like PHP.\nIt's pretty much run through some sort of interpreter on a web server (CGI, mod_python, etc.).\nI would recommend looking into Python Web Application Frameworks or how to write Python CGI scripts.\n",
"There are quite a few web frameworks for python out there, but the only one I've used is Django, and I really like it.\nIf you've got a few hours, do the tutorial, I promise you, you'll enjoy it :)\n",
"As others have mentioned, one of the more prominent python \"offshoots\" as you call them would be Django. It is a rather powerful framework that allows you to quickly and securely build web applications. The first place to look would be their overview which gives some insight as to what Django does as a framework.\nGoing through their tutorial taught me alot about the prominent Model-View-Controler design pattern and how it may be used in a web-development context. I found it a great way to start writing an application that worked and learn by improving it.\n"
] |
[
21,
4,
3,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000895420_python.txt
|
Q:
Secure plugin system for python application
I have an application written in python. I created a plugin system for the application that uses egg files. Egg files contain compiled python files and can be easily decompiled and used to hack the application. Is there a way to secure this system? I'd like to use digital signature for this - sign these egg files and check the signature before loading such egg file. Is there a way to do this programmatically from python? Maybe using winapi?
A:
Is there a way to secure this system?
The answer is "that depends".
The two questions you should ask is "what are people supposed to be able to do" and "what are people able to do (for a given implementation)". If there exists an implementation where the latter is a subset of the former, the system can be secured.
One of my friends is working on a programming competition judge: a program which runs a user-submitted program on some test data and compares its output to a reference output. That's damn hard to secure: you want to run other peoples' code, but you don't want to let them run arbitrary code. Is your scenario somewhat similar to this? Then the answer is "it's difficult".
Do you want users to download untrustworthy code from the web and run it with some assurance that it won't hose their machine? Then look at various web languages. One solution is not offering access to system calls (JavaScript) or offering limited access to certain potentially dangerous calls (Java's SecurityManager). None of them can be done in python as far as I'm aware, but you can always hack the interpreter and disallow the loading of external modules not on some whitelist. This is probably error-prone.
Do you want users to write plugins, and not be able to tinker with what the main body of code in your application does? Consider that users can decompile .pyc files and modify them. Assume that those running your code can always modify it, and consider the gold-farming bots for WoW.
One Linux-only solution, similar to the sandboxed web-ish model, is to use AppArmor, which limits which files your app can access and which system calls it can make. This might be a feasible solution, but I don't know much about it so I can't give you advice other than "investigate".
If all you worry about is evil people modifying code while it's in transit in the intertubes, standard cryptographic solutions exist (SSL). If you want to only load signed plugins (because you want to control what the users do?), signing code sounds like the right solution (but beware of crafty users or evil people who edit the .pyc files and disable the is-it-signed check).
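A minimal sketch of the check-before-load flow. The HMAC here is only a stand-in for a real public-key signature (an attacker who can edit your .pyc files can also read an embedded key, as noted above, so a proper scheme would verify an RSA/DSA signature with a crypto library); the file and module names are hypothetical:
import hmac, hashlib, sys

SHARED_KEY = 'replace-with-a-real-secret'

def egg_is_trusted(egg_path, sig_path):
    expected = open(sig_path, 'rb').read().strip()
    actual = hmac.new(SHARED_KEY, open(egg_path, 'rb').read(),
                      hashlib.sha256).hexdigest()
    return actual == expected   # a constant-time compare would be better

if egg_is_trusted('plugin.egg', 'plugin.egg.sig'):
    sys.path.insert(0, 'plugin.egg')   # eggs are importable from sys.path
    import plugin_main                 # hypothetical module inside the egg
else:
    raise RuntimeError('plugin.egg failed the signature check')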
A:
Maybe some crypto library like this http://chandlerproject.org/Projects/MeTooCrypto helps to build an ad-hoc solution. Example usage: http://tdilshod.livejournal.com/38040.html
|
Secure plugin system for python application
|
I have an application written in python. I created a plugin system for the application that uses egg files. Egg files contain compiled python files and can be easily decompiled and used to hack the application. Is there a way to secure this system? I'd like to use digital signature for this - sign these egg files and check the signature before loading such egg file. Is there a way to do this programmatically from python? Maybe using winapi?
|
[
"\nIs there a way to secure this system?\n\nThe answer is \"that depends\".\nThe two questions you should ask is \"what are people supposed to be able to do\" and \"what are people able to do (for a given implementation)\". If there exists an implementation where the latter is a subset of the former, the system can be secured.\nOne of my friend is working on a programming competition judge: a program which runs a user-submitted program on some test data and compares its output to a reference output. That's damn hard to secure: you want to run other peoples' code, but you don't want to let them run arbitrary code. Is your scenario somewhat similar to this? Then the answer is \"it's difficult\".\nDo you want users to download untrustworthy code from the web and run it with some assurance that it won't hose their machine? Then look at various web languages. One solution is not offering access to system calls (JavaScript) or offering limited access to certain potentially dangerous calls (Java's SecurityManager). None of them can be done in python as far as I'm aware, but you can always hack the interpreter and disallow the loading of external modules not on some whitelist. This is probably error-prone.\nDo you want users to write plugins, and not be able to tinker with what the main body of code in your application does? Consider that users can decompile .pyc files and modify them. Assume that those running your code can always modify it, and consider the gold-farming bots for WoW.\nOne Linux-only solution, similar to the sandboxed web-ish model, is to use AppArmor, which limits which files your app can access and which system calls it can make. This might be a feasible solution, but I don't know much about it so I can't give you advice other than \"investigate\".\nIf all you worry about is evil people modifying code while it's in transit in the intertubes, standard cryptographic solutions exist (SSL). If you want to only load signed plugins (because you want to control what the users do?), signing code sounds like the right solution (but beware of crafty users or evil people who edit the .pyc files and disables the is-it-signed check).\n",
"Maybe some crypto library like this http://chandlerproject.org/Projects/MeTooCrypto helps to build an ad-hoc solution. Example usage: http://tdilshod.livejournal.com/38040.html\n"
] |
[
3,
1
] |
[] |
[] |
[
"plugins",
"python",
"signing"
] |
stackoverflow_0000908285_plugins_python_signing.txt
|
Q:
Regex Substitution in Python
I have a CSV file with several entries, and each entry has 2 unix timestamp formatted dates.
I have a method called convert(), which takes in the timestamp and converts it to YYYYMMDD.
Now, since I have 2 timestamps in each line, how would I replace each one with the new value?
EDIT: Just to clarify, I would like to convert each occurrence of the timestamp into the YYYYMMDD format. This is what is bugging me, as re.findall() returns a list.
A:
If you know the replacement:
p = re.compile( r',\d{8},')
p.sub( ','+someval+',', csvstring )
if it's a format change:
p = re.compile( r',(\d{4})(\d\d)(\d\d),')
p.sub( r',\3-\2-\1,', csvstring )
EDIT: sorry, just realised you said python, modified above
A:
I assume that by "unix timestamp formatted date" you mean a number of seconds since the epoch. This assumes that every number in the file is a UNIX timestamp. If that isn't the case you'll need to adjust the regex:
import re, sys
# your convert function goes here
regex = re.compile(r'(\d+)')
for line in sys.stdin:
    sys.stdout.write(regex.sub(lambda m:
        convert(int(m.group(1))), line))
This reads from stdin and calls convert on each number found.
The "trick" here is that re.sub can take a function that transforms from a match object into a string. I'm assuming your convert function expects an int and returns a string, so I've used a lambda as an adapter function to grab the first group of the match, convert it to an int, and then pass that resulting int to convert.
A:
Not able to comment your question, but did you take a look at the CSV module of python?
http://docs.python.org/library/csv.html#module-csv
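As a rough sketch of that approach, assuming the two unix timestamps sit in columns 1 and 3 (0-based) of each row and that the file names are placeholders:
import csv, time

def convert(unixtime):
    return time.strftime("%Y%m%d", time.gmtime(unixtime))

reader = csv.reader(open('input.csv', 'rb'))
writer = csv.writer(open('output.csv', 'wb'))
for row in reader:
    for col in (1, 3):
        row[col] = convert(int(row[col]))
    writer.writerow(row)
Working on columns instead of running a regex over the whole line avoids accidentally converting other numbers in the data.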
A:
I'd use something along these lines. A lot like Laurence's response but with the timestamp conversion that you requested and takes the filename as a param. This code assumes you are working with recent dates (after 9/9/2001). If you need earlier dates, lower 10 to 9 or less.
import re, sys, time
regex = re.compile(r'(\d{10,})')
def convert(unixtime):
    return time.strftime("%Y%m%d", time.gmtime(unixtime))

for line in open(sys.argv[1]):
    sys.stdout.write(regex.sub(lambda m: convert(int(m.group(0))), line))
EDIT: Cleaned up the code.
Sample Input
foo,1234567890,bar,1243310263
cat,1243310263,pants,1234567890
baz,987654321,raz,1
Output
foo,20090213,bar,20090526
cat,20090526,pants,20090213
baz,987654321,raz,1 # not converted (too short to be a recent)
|
Regex Substitution in Python
|
I have a CSV file with several entries, and each entry has 2 unix timestamp formatted dates.
I have a method called convert(), which takes in the timestamp and converts it to YYYYMMDD.
Now, since I have 2 timestamps in each line, how would I replace each one with the new value?
EDIT: Just to clarify, I would like to convert each occurrence of the timestamp into the YYYYMMDD format. This is what is bugging me, as re.findall() returns a list.
|
[
"If you know the replacement:\np = re.compile( r',\\d{8},')\np.sub( ','+someval+',', csvstring )\n\nif it's a format change:\np = re.compile( r',(\\d{4})(\\d\\d)(\\d\\d),')\np.sub( r',\\3-\\2-\\1,', csvstring )\n\nEDIT: sorry, just realised you said python, modified above\n",
"I assume that by \"unix timestamp formatted date\" you mean a number of seconds since the epoch. This assumes that every number in the file is a UNIX timestamp. If that isn't the case you'll need to adjust the regex:\nimport re, sys\n\n# your convert function goes here\n\nregex = re.compile(r'(\\d+)')\nfor line in sys.stdin:\n sys.stdout.write(regex.sub(lambda m:\n convert(int(m.group(1))), line))\n\nThis reads from stdin and calls convert on each number found.\nThe \"trick\" here is that re.sub can take a function that transforms from a match object into a string. I'm assuming your convert function expects an int and returns a string, so I've used a lambda as an adapter function to grab the first group of the match, convert it to an int, and then pass that resulting int to convert.\n",
"Not able to comment your question, but did you take a look at the CSV module of python?\nhttp://docs.python.org/library/csv.html#module-csv\n",
"I'd use something along these lines. A lot like Laurence's response but with the timestamp conversion that you requested and takes the filename as a param. This code assumes you are working with recent dates (after 9/9/2001). If you need earlier dates, lower 10 to 9 or less.\nimport re, sys, time\n\nregex = re.compile(r'(\\d{10,})')\n\ndef convert(unixtime):\n return time.strftime(\"%Y%m%d\", time.gmtime(unixtime))\n\nfor line in open(sys.argv[1]):\n sys.stdout.write(regex.sub(lambda m: convert(int(m.group(0))), line))\n\nEDIT: Cleaned up the code.\nSample Input\nfoo,1234567890,bar,1243310263\ncat,1243310263,pants,1234567890\nbaz,987654321,raz,1\n\nOutput\nfoo,20090213,bar,20090526\ncat,20090526,pants,20090213\nbaz,987654321,raz,1 # not converted (too short to be a recent)\n\n"
] |
[
3,
1,
1,
0
] |
[] |
[] |
[
"python",
"regex",
"timestamp"
] |
stackoverflow_0000908739_python_regex_timestamp.txt
|
Q:
Windows Authentication with Python and urllib2
I want to grab some data off a webpage that requires my windows username and password.
So far, I've got:
opener = build_opener()
try:
page = opener.open("http://somepagewhichneedsmywindowsusernameandpassword/")
print page
except URLError:
print "Oh noes."
Is this supported by urllib2? I've found Python NTLM, but that requires me to put my username and password in. Is there any way to just grab the authentication information somehow (e.g. like IE does, or Firefox, if I changed the network.automatic-ntlm-auth.trusted-uris settings).
Edit after msander's answer
So I've now got this:
# Send a simple "message" over a socket - send the number of bytes first,
# then the string. Ditto for receive.
def _send_msg(s, m):
    s.send(struct.pack("i", len(m)))
    s.send(m)

def _get_msg(s):
    size_data = s.recv(struct.calcsize("i"))
    if not size_data:
        return None
    cb = struct.unpack("i", size_data)[0]
    return s.recv(cb)

def sspi_client():
    c = httplib.HTTPConnection("myserver")
    c.connect()
    # Do the auth dance.
    ca = sspi.ClientAuth("NTLM", win32api.GetUserName())
    data = None
    while 1:
        err, out_buf = ca.authorize(data) # error 400 triggered by this line
        _send_msg(c.sock, out_buf[0].Buffer)
        if err==0:
            break
        data = _get_msg(c.sock)
    print "Auth dance complete - sending a few encryted messages"
    # Assume out data is sensitive - encrypt the message.
    for data in "Hello from the client".split():
        blob, key = ca.encrypt(data)
        _send_msg(c.sock, blob)
        _send_msg(c.sock, key)
    c.sock.close()
    print "Client completed."
which is pretty well ripped from socket_server.py (see here). But I get an error 400 - bad request. Does anyone have any further ideas?
Thanks,
Dom
A:
Assuming you are writing your client code on Windows and need seamless NTLM authentication then you should read Mark Hammond's Hooking in NTLM post from the python-win32 mailing list which essentially answers the same question. This points at the sspi example code included with the Python Win32 extensions (which are included with ActivePython and otherwise can be downloaded here).
|
Windows Authentication with Python and urllib2
|
I want to grab some data off a webpage that requires my windows username and password.
So far, I've got:
opener = build_opener()
try:
page = opener.open("http://somepagewhichneedsmywindowsusernameandpassword/")
print page
except URLError:
print "Oh noes."
Is this supported by urllib2? I've found Python NTLM, but that requires me to put my username and password in. Is there any way to just grab the authentication information somehow (e.g. like IE does, or Firefox, if I changed the network.automatic-ntlm-auth.trusted-uris settings).
Edit after msander's answer
So I've now got this:
# Send a simple "message" over a socket - send the number of bytes first,
# then the string. Ditto for receive.
def _send_msg(s, m):
    s.send(struct.pack("i", len(m)))
    s.send(m)

def _get_msg(s):
    size_data = s.recv(struct.calcsize("i"))
    if not size_data:
        return None
    cb = struct.unpack("i", size_data)[0]
    return s.recv(cb)

def sspi_client():
    c = httplib.HTTPConnection("myserver")
    c.connect()
    # Do the auth dance.
    ca = sspi.ClientAuth("NTLM", win32api.GetUserName())
    data = None
    while 1:
        err, out_buf = ca.authorize(data) # error 400 triggered by this line
        _send_msg(c.sock, out_buf[0].Buffer)
        if err==0:
            break
        data = _get_msg(c.sock)
    print "Auth dance complete - sending a few encryted messages"
    # Assume out data is sensitive - encrypt the message.
    for data in "Hello from the client".split():
        blob, key = ca.encrypt(data)
        _send_msg(c.sock, blob)
        _send_msg(c.sock, key)
    c.sock.close()
    print "Client completed."
which is pretty well ripped from socket_server.py (see here). But I get an error 400 - bad request. Does anyone have any further ideas?
Thanks,
Dom
|
[
"Assuming you are writing your client code on Windows and need seamless NTLM authentication then you should read Mark Hammond's Hooking in NTLM post from the python-win32 mailing list which essentially answers the same question. This points at the sspi example code included with the Python Win32 extensions (which are included with ActivePython and otherwise can be downloaded here).\n"
] |
[
16
] |
[
"There are several forms of authentication that web sites can use.\n\nHTTP Authentication. This where the browser pops up a window for you to enter your username and password. There are two mechanisms: basic and digest. There is an \"Authorization\" Header that comes along with the page that tells a browser (or a program using urllib2) what to do.\nIn this case, you must configure your urlopener to provide the answers that the authorization header needs to see. You'll need to build either an HTTPBasicAuthHandler or HTTPDigestAuthHandler. \nAuthHandlers require a PasswordManager. This password manager could have a hard-coded username and password (very common) or it could be clever and work out your Windows password from some Windows API.\nApplication Authentication. This is where the web application directs you to a page with a form you fill in with a username and password. In this case, your Python program must use urllib2 to do a POST (a request with data) where the data is the form filled in properly. The reply to the post usually contains a cookie, which is what allows you further access. You don't need to worry much about the cookie, urllib2 handles this automatically.\n\nHow do you know which you have? You dump the headers on the response. The response from urllib2.openurl includes all the headers (in page.info()) as well as the page content.\nRead HTTP Authentication in Python\nHow would one log into a phpBB3 forum through a Python script using urllib, urllib2 and ClientCookie?\nHow do you access an authenticated Google App Engine service from a (non-web) python client?\n"
] |
[
-2
] |
[
"python",
"urllib2"
] |
stackoverflow_0000909658_python_urllib2.txt
|
Q:
Changing a get request to a post in python?
I have this-
en.wikipedia.org/w/api.php?action=login&lgname=user&lgpassword=password
But it doesn't work because it is a get request. What would the post request version of this be?
Cheers!
A:
The variables for a POST request are in the HTTP headers, not in the URL.
Check urllib.
edit:
Try this (i got it from here):
import urllib
import urllib2
url = 'http://en.wikipedia.org/w/api.php'
values = {'action' : 'login',
          'lgname' : 'user',
          'lgpassword' : 'password' }
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
the_page = response.read()
A:
params = urllib.urlencode({'action' : 'login', 'lgname' : 'user', 'lgpassword' : 'password'})
response = urllib.urlopen("http://en.wikipedia.org/w/api.php", params)
info about urllib can be found here.
A:
Since your sample is in PHP, use $_REQUEST, this holds the contents of both $_GET and $_POST.
|
Changing a get request to a post in python?
|
I have this-
en.wikipedia.org/w/api.php?action=login&lgname=user&lgpassword=password
But it doesn't work because it is a get request. What would the the post request version of this?
Cheers!
|
[
"The variables for a POST request are in the HTTP headers, not in the URL.\nCheck urllib.\nedit:\nTry this (i got it from here):\nimport urllib\nimport urllib2\n\nurl = 'en.wikipedia.org/w/api.php'\nvalues = {'action' : 'login',\n 'lgname' : 'user',\n 'password' : 'password' }\n\ndata = urllib.urlencode(values)\nreq = urllib2.Request(url, data)\nresponse = urllib2.urlopen(req)\nthe_page = response.read()\n\n",
"params = urllib.urlencode({'action' : 'login', 'lgname' : 'user', 'lgpassword' : 'password'})\nresponse = urllib.urlopen(\"http://en.wikipedia.org/w/api.php\", params)\n\ninfo about urllib can be found here.\n",
"Since your sample is in PHP, use $_REQUEST, this holds the contents of both $_GET and $_POST. \n"
] |
[
3,
2,
0
] |
[] |
[] |
[
"forms",
"get",
"post",
"python"
] |
stackoverflow_0000909929_forms_get_post_python.txt
|
Q:
Reading "raw" Unicode-strings in Python
I am quite new to Python so my question might be silly, but even though reading through a lot of threads I didn't find an answer to my question.
I have a mixed source document which contains html, xml, latex and other text formats and which I try to get into a latex-only format.
Therefore, I have used python to recognise the different commands as regular expressions and replace them with the adequate latex command. Everything has worked out fine so far.
Now I am left with some "raw-type" Unicode signs, such as the greek letters. Unfortunately it is just about too much to do by hand. Therefore, I am looking for a way to do this the smart way too. Is there a way for Python to recognise / read them? And how do I tell python to recognise / read e.g. Pi written as a Greek letter?
A minimal example of the code I use is:
fh = open('SOURCE_DOCUMENT','r')
stuff = fh.read()
fh.close()
new_stuff = re.sub('READ','REPLACE',stuff)
fh = open('LATEX_DOCUMENT','w')
fh.write(new_stuff)
fh.close()
I am not sure whether it is an important information or not, but I am using Python 2.6 running on windows.
I would be really glad, if someone might be able to give me hint, at least where to find the according information or how this might work. Or whether I am completely wrong, and Python can't do this job ...
Many thanks in advance.
Cheers,
Britta
A:
You talk of ``raw'' Unicode strings. What does that mean? Unicode itself is not an encoding, but there are different encodings to store Unicode characters (read this post by Joel).
The open function in Python 3.0 takes an optional encoding argument that lets you specify the encoding, e.g. UTF-8 (a very common way to encode Unicode). In Python 2.x, have a look at the codecs module, which also provides an open function that allows specifying the encoding of the file.
Edit: alternatively, why not just let those poor characters be, and specify the encoding of your LaTeX file at the top:
\usepackage[utf8]{inputenc}
(I never tried this, but I figure it should work. You may need to replace utf8 by utf8x, though)
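As a rough sketch of the codecs route (assuming the source file is UTF-8; the small Greek-to-LaTeX mapping is only an illustration and should be extended to whatever characters actually occur):
import codecs

greek_to_latex = {
    u'\u03c0': r'$\pi$',     # GREEK SMALL LETTER PI
    u'\u03b1': r'$\alpha$',  # GREEK SMALL LETTER ALPHA
}

fh = codecs.open('SOURCE_DOCUMENT', 'r', encoding='utf-8')
stuff = fh.read()            # a unicode object, not a byte string
fh.close()

for char, latex in greek_to_latex.iteritems():
    stuff = stuff.replace(char, latex)

out = codecs.open('LATEX_DOCUMENT', 'w', encoding='utf-8')
out.write(stuff)
out.close()
Because codecs.open hands you unicode objects, the replacements work on characters rather than raw bytes.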
A:
Please, first, read this:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
Then, come back and ask questions.
A:
You need to determine the "encoding" of the input document. Unicode can encode millions of characters but files can only store 8-bit values (0-255). So the Unicode text must be encoded in some way.
If the document is XML, it should be in the first line (encoding="..."; "utf-8" is the default if there is no "encoding" field). For HTML, look for "charset".
If all else fails, open the document in an editor where you can set the encoding (jEdit, for example). Try them until the text looks right. Then use this value as the encoding parameter for codecs.open() in Python.
|
Reading "raw" Unicode-strings in Python
|
I am quite new to Python so my question might be silly, but even though reading through a lot of threads I didn't find an answer to my question.
I have a mixed source document which contains html, xml, latex and other textformats and which I try to get into a latex-only format.
Therefore, I have used python to recognise the different commands as regular expresssions and replace them with the adequate latex command. Everything has worked out fine so far.
Now I am left with some "raw-type" Unicode signs, such as the greek letters. Unfortunaltly is just about to much to do it by hand. Therefore, I am looking for a way to do this the smart way too. Is there a way for Python to recognise / read them? And how do I tell python to recognise / read e.g. Pi written as a Greek letter?
A minimal example of the code I use is:
fh = open('SOURCE_DOCUMENT','r')
stuff = fh.read()
fh.close()
new_stuff = re.sub('READ','REPLACE',stuff)
fh = open('LATEX_DOCUMENT','w')
fh.write(new_stuff)
fh.close()
I am not sure whether it is an important information or not, but I am using Python 2.6 running on windows.
I would be really glad, if someone might be able to give me hint, at least where to find the according information or how this might work. Or whether I am completely wrong, and Python can't do this job ...
Many thanks in advance.
Cheers,
Britta
|
[
"You talk of ``raw'' Unicode strings. What does that mean? Unicode itself is not an encoding, but there are different encodings to store Unicode characters (read this post by Joel).\nThe open function in Python 3.0 takes an optional encoding argument that lets you specify the encoding, e.g. UTF-8 (a very common way to encode Unicode). In Python 2.x, have a look at the codecs module, which also provides an open function that allows specifying the encoding of the file.\nEdit: alternatively, why not just let those poor characters be, and specify the encoding of your LaTeX file at the top:\n\\usepackage[utf8]{inputenc}\n\n(I never tried this, but I figure it should work. You may need to replace utf8 by utf8x, though)\n",
"Please, first, read this:\nThe Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)\nThen, come back and ask questions.\n",
"You need to determine the \"encoding\" of the input document. Unicode can encode millions of characters but files can only story 8-bit values (0-255). So the Unicode text must be encoded in some way.\nIf the document is XML, it should be in the first line (encoding=\"...\"; \"utf-8\" is the default if there is no \"encoding\" field). For HTML, look for \"charset\".\nIf all else fails, open the document in an editor where you can set the encoding (jEdit, for example). Try them until the text looks right. Then use this value as the encoding parameter for codecs.open() in Python.\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"python",
"readability",
"string",
"unicode"
] |
stackoverflow_0000909886_python_readability_string_unicode.txt
|
Q:
Python MemoryError - how can I force object deletion
I have a program that process several files, and for each file a report is generated. The report generating part is a separate function that takes a filename, then returns. During report generation, intermediate parts are cached in memory, as they may be used for several parts of the report, to avoid recalculating.
When I run this program on all files in a directory, it will run for a while, then crash with a MemoryError. If I then rerun it on the same directory, it will skip all files that it successfully created a report for, and continue on. It will process a couple of files before crashing again.
Now, why isn't all resources cleared, or marked at least for garbage collection, after the method call that generates the report? There are no instances leaving, and I am not using any global objects, and after each file processing, all open files are closed.
Are there ways for me to verify that there are no extra references to an object? Is there a way to force garbage collection in Python?
A bit more detail about the implementation and cache. Each report has several elements in it, each element can then rely on different computations, each computation can depend on other computations. If one computation is already done, I don't want to do it again (most of these are expensive).
Here is an abbreviated version of the cache:
class MathCache:
    def __init__(self): self.cache = {}

    def get(data_provider):
        if not data_provider.id in self.cache:
            self.cache[data_provider.id] = data_provider.get_value(self)
        return self.cache[data_provider.id]
An instance of it is created, and then passed to each element in the report. This instance is only kept in a local reference in the report creation method.
All data_providers inherit from a common class that serves to make a unique id for the instance based on a hash of constructor arguments and class name. I pass on the MathCache as the data_provider itself may rely on other calculations.
A:
You should check out the gc module: http://docs.python.org/library/gc.html#module-gc.
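For example, a rough sketch of using it to force a collection after each file and to watch whether the object count keeps growing ('filenames' and 'generate_report' stand in for the questioner's own loop and function):
import gc

for filename in filenames:
    generate_report(filename)
    unreachable = gc.collect()          # force a full collection right now
    print "%s: %d unreachable, %d tracked, %d uncollectable" % (
        filename, unreachable, len(gc.get_objects()), len(gc.garbage))
A steadily rising tracked-object count between files suggests something is still holding references to the per-file cache; gc.garbage lists objects the collector found but could not free.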
|
Python MemoryError - how can I force object deletion
|
I have a program that process several files, and for each file a report is generated. The report generating part is a separate function that takes a filename, then returns. During report generation, intermediate parts are cached in memory, as they may be used for several parts of the report, to avoid recalculating.
When I run this program on all files in a directory, it will run for a while, then crash with a MemoryError. If I then rerun it on the same directory, it will skip all files that it successfully created a report for, and continue on. It will process a couple of files before crashing again.
Now, why isn't all resources cleared, or marked at least for garbage collection, after the method call that generates the report? There are no instances leaving, and I am not using any global objects, and after each file processing, all open files are closed.
Are there ways for me to verify that there is no extra references to an object? Is there a way to force garbage collection in Python?
A bit more detail about the implementation and cache. Each report has several elements in it, each element can then rely on different computations, each computation can depend on other computations. If one computation is already done, I don't want to do it again (most of these are expensive).
Here is an abbreviated version off the cache:
class MathCache:
    def __init__(self): self.cache = {}

    def get(data_provider):
        if not data_provider.id in self.cache:
            self.cache[data_provider.id] = data_provider.get_value(self)
        return self.cache[data_provider.id]
An instance of it is created, and then passed to each element in the report. This instance is only kept in a local reference in the report creation method.
All data_providers inherit from a common class that serves to make a unique id for the instance based on a hash off constructor arguments and class name. I pass on the MathCache as the data_provider itself may rely on other calculations.
|
[
"You should check out the gc module: http://docs.python.org/library/gc.html#module-gc. \n"
] |
[
3
] |
[] |
[] |
[
"garbage_collection",
"memory",
"python",
"resources"
] |
stackoverflow_0000910153_garbage_collection_memory_python_resources.txt
|
Q:
Problems with python script on web hosting
I have written a script for Wikipedia & it works fine on my computer, yet when I upload it to my web host (Dreamhost) it doesn't work & says that the user I am trying to log in as is blocked - this is not true, it works on my computer & I'm not blocked.
This is the exact error message I get-
A problem occurred in a Python script. Here is the sequence of function calls leading up to the error, in the order they occurred.
/home/tris1601/thewikipediaforum.com/pywikipedia/wikitest.py
35 site = wikipedia.getSite()
36 newpage = wikipedia.Page(site, u"User:Dottydotdot/test")
37 newpage.put(text + "<br><br>'''Imported from [http://en.wikiquote.org '''Wikiquote'''] by [[User:DottyQuoteBot|'''DottyQuoteBot''']]", u"Testing")
38
39 wikipedia.stopme()
newpage = Page{[[User:Dottydotdot/test]]}, newpage.put = <bound method Page.put of Page{[[User:Dottydotdot/test]]}>, text = u'You have so many things in the background that y... could possibly work?" <p> [[Ward Cunningham]] \n'
/home/tris1601/thewikipediaforum.com/pywikipedia/wikipedia.py in put(self=Page{[[User:Dottydotdot/test]]}, newtext=u"You have so many things in the background that y...''] by [[User:DottyQuoteBot|'''DottyQuoteBot''']]", comment=u'Testing', watchArticle=None, minorEdit=True, force=False, sysop=False, botflag=True)
1380
1381 # Check blocks
1382 self.site().checkBlocks(sysop = sysop)
1383
1384 # Determine if we are allowed to edit
self = Page{[[User:Dottydotdot/test]]}, self.site = <bound method Page.site of Page{[[User:Dottydotdot/test]]}>, ).checkBlocks undefined, sysop = False
/home/tris1601/thewikipediaforum.com/pywikipedia/wikipedia.py in checkBlocks(self=wikipedia:en, sysop=False)
4457 if self._isBlocked[index]:
4458 # User blocked
4459 raise UserBlocked('User is blocked in site %s' % self)
4460
4461 def isBlocked(self, sysop = False):
global UserBlocked = <class wikipedia.UserBlocked>, self = wikipedia:en
UserBlocked: User is blocked in site wikipedia:en
args = ('User is blocked in site wikipedia:en',)
Any ideas as to why it isn't working?
Thanks, much appreciated!
A:
It could be that your host (Dreamhost) is blocked, and not your user.
A:
I'd start by adding in some debug. Can you capture the output you're sending to wikipedia and the results it returns? There's probably some more information lodged in there which you can extract to see why it's failing.
[Edit] r.e. debugging - it's hard to give advice given the small snippet you provided. The fact that you've got over 3.5k lines in a single file suggests there's either some rather inefficient coding in there or that the problem wasn't particularly well broken down... which is likely to make debugging more tricky.
Having said that, the .put() mentioned in the debug above is almost certainly sending a request to the server. You could start by printing out those requests or bits of the request, to try and piece together what request is being sent, and then try doing just those requests and recording the output using python's print command:
print "Sending '%s' to server" % (my_put_request)
...where my_put_request is the bits of data you're sending.
[Edit2] I just spotted that it's the pywikipedia bot script you're using. The wikipedia article on the bot mentions some points on permissions which would support uggedal's suggestion of it being an access problem. It's quite possible that wikipedia recognises Dreamhost's IP and that someone else has tried doing something bad in the past which has caused them to be blocked in some way.
|
Problems with python script on web hosting
|
I have written a script for Wikipedia & it works fine on my computer, yet when I upload it to my web host(Dreamhost) it doesn't work & says that the user I am trying to log in as is blocked-this is not true, it works on my computer & I#m not blocked.
This is the exact error message I get-
A problem occurred in a Python script. Here is the sequence of function calls leading up to the error, in the order they occurred.
/home/tris1601/thewikipediaforum.com/pywikipedia/wikitest.py
35 site = wikipedia.getSite()
36 newpage = wikipedia.Page(site, u"User:Dottydotdot/test")
37 newpage.put(text + "<br><br>'''Imported from [http://en.wikiquote.org '''Wikiquote'''] by [[User:DottyQuoteBot|'''DottyQuoteBot''']]", u"Testing")
38
39 wikipedia.stopme()
newpage = Page{[[User:Dottydotdot/test]]}, newpage.put = <bound method Page.put of Page{[[User:Dottydotdot/test]]}>, text = u'You have so many things in the background that y... could possibly work?" <p> [[Ward Cunningham]] \n'
/home/tris1601/thewikipediaforum.com/pywikipedia/wikipedia.py in put(self=Page{[[User:Dottydotdot/test]]}, newtext=u"You have so many things in the background that y...''] by [[User:DottyQuoteBot|'''DottyQuoteBot''']]", comment=u'Testing', watchArticle=None, minorEdit=True, force=False, sysop=False, botflag=True)
1380
1381 # Check blocks
1382 self.site().checkBlocks(sysop = sysop)
1383
1384 # Determine if we are allowed to edit
self = Page{[[User:Dottydotdot/test]]}, self.site = <bound method Page.site of Page{[[User:Dottydotdot/test]]}>, ).checkBlocks undefined, sysop = False
/home/tris1601/thewikipediaforum.com/pywikipedia/wikipedia.py in checkBlocks(self=wikipedia:en, sysop=False)
4457 if self._isBlocked[index]:
4458 # User blocked
4459 raise UserBlocked('User is blocked in site %s' % self)
4460
4461 def isBlocked(self, sysop = False):
global UserBlocked = <class wikipedia.UserBlocked>, self = wikipedia:en
UserBlocked: User is blocked in site wikipedia:en
args = ('User is blocked in site wikipedia:en',)
Any ideas as to why it isn't working?
Thanks, much appreciated!
|
[
"It could be that your host (Dreamhost) is blocked, and not your user.\n",
"I'd start by adding in some debug. Can you capture the output you're sending to wikipedia and the results it resturns? There's probably some more information lodged in there which you can extract to see why it's failing.\n[Edit] r.e. debugging - it's hard to give advice given the small snippet you provided. The fact that you've got over 3.5k lines in a single file suggests there's either some rather innefficient coding in there or that the problem wasn't particularly well broken down... which is likely to make debugging more tricky.\nHaving said that, the .put() mentioned in the debug above is almost certainly sending a request to the server. You could start by printing out those requests or bits of the request. To try and piece together what request is being sent and then try doing just those requests and recording the output using python's print command:\nprint \"Sending '%s' to server%(my_put_request)\n\n...where my_put_request is the bits of data you're sending.\n[Edit2] I just spotted that this it's the pywikipedia bot script you're using. The wikipedia article on the bot mentions some points on permissions which would support uggedals suggestion of it being an access problem. It's quite possible that wikipedia recognises dreamhosts IP and that someone else has tried doing something bad in the past which has caused them to be blocked in some way.\n"
] |
[
1,
0
] |
[] |
[] |
[
"hosting",
"python",
"pywikibot"
] |
stackoverflow_0000910219_hosting_python_pywikibot.txt
|
Q:
Possible: Program executing Qt3 and Qt4 code?
Maybe its a very dumb question but I hope you can give me some answers.
I have a commercial application which uses Qt3 for its GUI and an embedded Python interpreter (command line) for scripting. I want to write a custom plugin for this application which uses Qt4. The plugin is mainly a subclassed QMainWindow-class that is linked into a dll (so I am on Windows) together with a boost python wrapper. The python wrapper should be the interface between my plugin and my commercial application.
So my question: is this possible?? So is running Qt3 code independent from running Qt4 code in the same application.
First experiments resulted in application shutdown, I will try to investigate this further...
Thank you!
Edit:
My application crashed because I didn't create a Qt4 QApplication instance. So when I create the instance everything works well without the additional Qt namespace (which is suggested in the answers), so there is no need to recompile! ;)
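The question doesn't say which Qt4 bindings the plugin uses; assuming PyQt4 on the plugin side, a minimal sketch of the fix described in the edit above might look like this (reuse an existing QApplication if one is already running, otherwise create one):
from PyQt4 import QtGui

app = QtGui.QApplication.instance()
if app is None:
    app = QtGui.QApplication([])

window = QtGui.QMainWindow()   # the plugin's subclassed main window goes here
window.show()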
A:
See this thread on a Trolltech forum.
(Well actually that's about Qt3 plugins in a Qt4 app but I suspect the answer is much the same).
Update: link now a dud, but the wayback machine has it.
A:
This might be possible by namespacing Qt. From configure --help;
-qtnamespace <name> Wraps all Qt library code in 'namespace <name> {...}'.
Theoretically this should prevent the symbol clashes which is likely making your current approach fail.
|
Possible: Program executing Qt3 and Qt4 code?
|
Maybe its a very dumb question but I hope you can give me some answers.
I have a commercial application which uses Qt3 for its GUI and an embedded Python interpreter (command line) for scripting. I want to write a custom plugin for this application which uses Qt4. The plugin is mainly a subclassed QMainWindow-class that is linked into a dll (so I am on Windows) together with a boost python wrapper. The python wrapper should be the interface between my plugin and my commercial application.
So my question: is this possible?? So is running Qt3 code independent from running Qt4 code in the same application.
First experiments resulted in application shutdown, I will try to investigate this further...
Thank you!
Edit:
My application crashed because I didn´t created a QT4 qapplication instance. So when I create the instance everything works well without the additional Qt namespace (which is suggested in the answers, so no need to recompile)! ;)
|
[
"See this thread on a Trolltech forum.\n(Well actually that's about Qt3 plugins in a Qt4 app but I suspect the answer is much the same). \nUpdate: link now a dud, but the wayback machine has it.\n",
"This might be possible by namespacing Qt. From configure --help;\n-qtnamespace <name> Wraps all Qt library code in 'namespace <name> {...}'.\n\nTheoretically this should prevent the symbol clashes which is likely making your current approach fail.\n"
] |
[
3,
3
] |
[] |
[] |
[
"boost",
"c++",
"python",
"qt",
"windows"
] |
stackoverflow_0000910230_boost_c++_python_qt_windows.txt
|
Q:
Recommended way to run another program from within a Python script
Possible Duplicate:
How to call external command in Python
I'm writing a Python script on a windows machine. I need to launch another application "OtherApp.exe". What is the most suitable way to do so?
Till now I've been looking at os.system() or os.execl() and they don't quite look appropriate (I don't even know if the latter will work in windows at all).
A:
The recommended way is to use the subprocess module. All other ways (like os.system() or exec) are brittle, unsecure and have subtle side effects that you should not need to care about. subprocess replaces all of them.
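For illustration, a minimal sketch of launching the program with subprocess (the path and the flag are placeholders):
import subprocess

# Fire and forget:
proc = subprocess.Popen([r'C:\path\to\OtherApp.exe', '--some-flag'])

# ...or wait for it to finish and check its exit code:
ret = subprocess.call([r'C:\path\to\OtherApp.exe'])
if ret != 0:
    print "OtherApp.exe exited with code", ret
Passing the command as a list avoids shell quoting problems with spaces in Windows paths.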
|
Recommended way to run another program from within a Python script
|
Possible Duplicate:
How to call external command in Python
I'm writing a Python script on a windows machine. I need to launch another application "OtherApp.exe". What is the most suitable way to do so?
Till now I've been looking at os.system() or os.execl() and they don't quite look appropriate (I don't even know if the latter will work in windows at all).
|
[
"The recommended way is to use the subprocess module. All other ways (like os.system() or exec) are brittle, unsecure and have subtle side effects that you should not need to care about. subprocess replaces all of them.\n"
] |
[
5
] |
[
"Note that this answer is specific to python versions 2.x which I am locked to due to an embedded system. I am only leaving it for historical reasons just in case. \nOne of the things the subprocess offers is a method to catch the output of a command, for example using [popen] on a windows machine\nimport os\nos.popen('dir').read()\n\nwill yield\n' Volume in drive C has no label.\\n Volume Serial Number is 54CD-5392\\n\\n Directory of C:\\Python25\\Lib\\site-packages\\pythonwin\\n\\n[.] [..] dde.pyd license.txt\\nPythonwin.exe [pywin] scintilla.dll tmp.txt\\nwin32ui.pyd win32uiole.pyd \\n 7 File(s) 984,178 bytes\\n 3 Dir(s) 30,539,644,928 bytes free\\n'\nWhich can then be parsed or manipulated any way you want.\n"
] |
[
-1
] |
[
"python",
"windows"
] |
stackoverflow_0000910733_python_windows.txt
|
Q:
Factory for Callback methods - Python TKinter
Writing a test app to emulate PIO lines, I have a very simple Python/Tk GUI app. Using the numeric Keys 1 to 8 to simulate PIO pins 1 to 8. Press the key down = PIO High, release the Key = PIO goes low. What I need it for is not the problem. I kind of went down a rabbit hole trying to use a factory to create the key press call back functions.
Here is some stripped down code:
#!usr/bin/env python
"""
Python + Tk GUI interface to simulate a 8 Pio lines.
"""
from Tkinter import *

def cb_factory(numberic_key):
    """
    Return a call back function for a specific keyboard numeric key (0-9)
    """
    def cb( self, event, key=numberic_key ):
        bit_val = 1<<numberic_key-1
        if int(event.type) == 2 and not (bit_val & self.bitfield):
            self.bitfield |= bit_val
            self.message("Key %d Down" % key)
        elif int(event.type) == 3 and (bit_val & self.bitfield):
            self.bitfield &= (~bit_val & 0xFF)
            self.message("Key %d Up" % key)
        else:
            # Key repeat
            return
        print hex(self.bitfield)
        self.display_bitfield()
    return cb

class App( Frame ):
    """
    Main TK App class
    """
    cb1 = cb_factory(1)
    cb2 = cb_factory(2)
    cb3 = cb_factory(3)
    cb4 = cb_factory(4)
    cb5 = cb_factory(5)
    cb6 = cb_factory(6)
    cb7 = cb_factory(7)
    cb8 = cb_factory(8)

    def __init__(self, parent):
        "Init"
        self.parent = parent
        self.bitfield = 0x00
        Frame.__init__(self, parent)

        self.messages = StringVar()
        self.messages.set("Initialised")
        Label( parent, bd=1,
               relief=SUNKEN,
               anchor=W,
               textvariable=self.messages,
               text="Testing" ).pack(fill=X)

        self.bf_label = StringVar()
        self.bf_label.set("0 0 0 0 0 0 0 0")
        Label( parent, bd=1,
               relief=SUNKEN,
               anchor=W,
               textvariable=self.bf_label,
               text="Testing" ).pack(fill=X)

        # This Doesn't work! Get a traceback saying 'cb' expected 2 arguements
        # but only got 1?
        #
        # for x in xrange(1,9):
        #     cb = self.cb_factory(x)
        #     self.parent.bind("<KeyPress-%d>" % x, cb)
        #     self.parent.bind("<KeyRelease-%d>" % x, cb)

        self.parent.bind("<KeyPress-1>", self.cb1)
        self.parent.bind("<KeyRelease-1>", self.cb1)
        self.parent.bind("<KeyPress-2>", self.cb2)
        self.parent.bind("<KeyRelease-2>", self.cb2)
        self.parent.bind("<KeyPress-3>", self.cb3)
        self.parent.bind("<KeyRelease-3>", self.cb3)
        self.parent.bind("<KeyPress-4>", self.cb4)
        self.parent.bind("<KeyRelease-4>", self.cb4)
        self.parent.bind("<KeyPress-5>", self.cb5)
        self.parent.bind("<KeyRelease-5>", self.cb5)
        self.parent.bind("<KeyPress-6>", self.cb6)
        self.parent.bind("<KeyRelease-6>", self.cb6)
        self.parent.bind("<KeyPress-7>", self.cb7)
        self.parent.bind("<KeyRelease-7>", self.cb7)
        self.parent.bind("<KeyPress-8>", self.cb8)
        self.parent.bind("<KeyRelease-8>", self.cb8)

    def display_bitfield(self):
        """
        Display the PIO lines (1 for on, 0 for off)
        """
        bin_lst = []
        for x in xrange(8):
            bit = 1 << x
            if bit & self.bitfield:
                bin_lst.append("1")
            else:
                bin_lst.append("0")
        bin_lst.reverse()
        bin_str = " ".join( bin_lst )
        self.bf_label.set( bin_str )

    def message( self, msg_txt ):
        "set"
        self.messages.set( msg_txt )

    def cb_factory(self, numberic_key ):
        """
        Return a call back function for a specific keyboard numeric key (0-9)
        """
        def cb( self, event, key=numberic_key ):
            bit_val = 1<<numberic_key-1
            if int(event.type) == 2:
                self.bitfield |= bit_val
                self.message("Key %d Down" % key)
            else:
                self.bitfield &= (~bit_val & 0xFF)
                self.message("Key %d Up" % key)
            print hex(self.bitfield)
            self.display_bitfield()
        return cb

##########################################################################
if __name__ == "__main__":
    root = Tk()
    root.title("PIO Test")
    theApp = App( root )
    root.mainloop()
I finally got some sort of method factory working for the callback but I don't find it very satisfactory.
So my question is, can you have a class method factory, that will produce class methods the way I tried (see commented out code and App class method cb_factory())?
NOTES: Yes, I know that this app only lets you hold down 4 keys at a time, but that is good enough for my purposes.
A:
cb expects 'self' and 'event', but bind only passes it the event. The class-level cb1..cb8 work because they are class attributes, so self.cb1 gives you a bound method and self is supplied automatically; the functions you create inside __init__ with self.cb_factory(x) are plain closures that never get bound to the instance, so self is missing and you get the "expected 2 arguments, got 1" error.
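For illustration, a hedged sketch of one way around that without a factory at all: bind a single bound method and bake the key number in with functools.partial. Here key_event is a hypothetical method with the signature key_event(self, event, key):
import functools

# Inside App.__init__
for x in xrange(1, 9):
    handler = functools.partial(self.key_event, key=x)
    self.parent.bind("<KeyPress-%d>" % x, handler)
    self.parent.bind("<KeyRelease-%d>" % x, handler)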
A:
In answer to your followup question.
I'm not sure which part you don't understand, but I'm guessing you don't quite have a handle on how event callbacks work. If so, it's pretty easy. Tk runs in a loop looking for events (keypresses, mouse clicks, etc.). When you bind a callback/function to an event you're just telling it to call your function and pass an event object as its argument. You can then query the event object for more details of what actually occurred. You are currently building separate callback functions and binding each to 16 key events (press and release on keys 1-8). I think you can rewrite this to simply have cb as a method of your class, because the event object will almost certainly contain the keycode as well.
class App(Frame):
    def __init__(self, parent):
        self.parent = parent
        for x in xrange(1, 9):
            self.parent.bind("<KeyPress-%d>" % x, self.keyaction)
            self.parent.bind("<KeyRelease-%d>" % x, self.keyaction)

    def keyaction(self, event):
        key = int(event.keysym)  # the pressed digit; event.keycode also exists but is platform-specific
        ... do stuff ...
Since we're now using self.keyaction as the callback, it gets self as its first argument automatically, and it gets the key from the event object. Neither value needs to be 'built' into the function when it is created, so the need to create a different callback for each key disappears and the code is easier to understand.
A:
Here is the amended code, taking into account SpliFF's answer. I find this much more aesthetically pleasing but it bugs me that I don't understand how it works. So, for extra credit, can anyone explain how this works?
#!usr/bin/env python
"""
Python + Tk GUI interface to simulate a 8 Pio lines.
"""
from Tkinter import *
from pio_handler import *
class App( Frame ):
"""
Main TK App class
"""
def __init__(self, parent):
"Init"
self.parent = parent
self.bitfield = 0x00
Frame.__init__(self, parent)
self.messages = StringVar()
self.messages.set("Initialised")
Label( parent, bd=1,
relief=SUNKEN,
anchor=W,
textvariable=self.messages,
text="Testing" ).pack(fill=X)
self.bf_label = StringVar()
self.bf_label.set("0 0 0 0 0 0 0 0")
Label( parent, bd=1,
relief=SUNKEN,
anchor=W,
textvariable=self.bf_label,
text="Testing" ).pack(fill=X)
# This is the clever bit!
# Use a factory to assign a callback function for keys 1 to 8
for x in xrange(1,9):
cb = self.cb_factory(x)
self.parent.bind("<KeyPress-%d>" % x, cb)
self.parent.bind("<KeyRelease-%d>" % x, cb)
def display_bitfield(self):
"""
Display the PIO lines (1 for on, 0 for off)
"""
bin_lst = []
for x in xrange(8):
bit = 1 << x
if bit & self.bitfield:
bin_lst.append("1")
else:
bin_lst.append("0")
bin_lst.reverse()
bin_str = " ".join( bin_lst )
self.bf_label.set( bin_str )
def message( self, msg_txt ):
"set"
self.messages.set( msg_txt )
def cb_factory(self, numeric_key ):
"""
Return a call back function for a specific keyboard numeric key (0-9)
"""
def cb( event, key=numeric_key ):
bit_val = 1<<numeric_key-1
if int(event.type) == 2:
self.bitfield |= bit_val
self.message("Key %d Down" % key)
else:
self.bitfield &= (~bit_val & 0xFF)
self.message("Key %d Up" % key)
print hex(self.bitfield)
self.display_bitfield()
return cb
##########################################################################
if __name__ == "__main__":
root = Tk()
root.title("PIO Test")
theApp = App( root )
root.mainloop()
|
Factory for Callback methods - Python TKinter
|
Writing a test app to emulate PIO lines, I have a very simple Python/Tk GUI app. Using the numeric Keys 1 to 8 to simulate PIO pins 1 to 8. Press the key down = PIO High, release the Key = PIO goes low. What I need it for is not the problem. I kind of went down a rabbit hole trying to use a factory to create the key press call back functions.
Here is some stripped down code:
#!usr/bin/env python
"""
Python + Tk GUI interface to simulate a 8 Pio lines.
"""
from Tkinter import *
def cb_factory(numberic_key):
"""
Return a call back function for a specific keyboard numeric key (0-9)
"""
def cb( self, event, key=numberic_key ):
bit_val = 1<<numberic_key-1
if int(event.type) == 2 and not (bit_val & self.bitfield):
self.bitfield |= bit_val
self.message("Key %d Down" % key)
elif int(event.type) == 3 and (bit_val & self.bitfield):
self.bitfield &= (~bit_val & 0xFF)
self.message("Key %d Up" % key)
else:
# Key repeat
return
print hex(self.bitfield)
self.display_bitfield()
return cb
class App( Frame ):
"""
Main TK App class
"""
cb1 = cb_factory(1)
cb2 = cb_factory(2)
cb3 = cb_factory(3)
cb4 = cb_factory(4)
cb5 = cb_factory(5)
cb6 = cb_factory(6)
cb7 = cb_factory(7)
cb8 = cb_factory(8)
def __init__(self, parent):
"Init"
self.parent = parent
self.bitfield = 0x00
Frame.__init__(self, parent)
self.messages = StringVar()
self.messages.set("Initialised")
Label( parent, bd=1,
relief=SUNKEN,
anchor=W,
textvariable=self.messages,
text="Testing" ).pack(fill=X)
self.bf_label = StringVar()
self.bf_label.set("0 0 0 0 0 0 0 0")
Label( parent, bd=1,
relief=SUNKEN,
anchor=W,
textvariable=self.bf_label,
text="Testing" ).pack(fill=X)
# This doesn't work! Get a traceback saying 'cb' expected 2 arguments
# but only got 1?
#
# for x in xrange(1,9):
# cb = self.cb_factory(x)
# self.parent.bind("<KeyPress-%d>" % x, cb)
# self.parent.bind("<KeyRelease-%d>" % x, cb)
self.parent.bind("<KeyPress-1>", self.cb1)
self.parent.bind("<KeyRelease-1>", self.cb1)
self.parent.bind("<KeyPress-2>", self.cb2)
self.parent.bind("<KeyRelease-2>", self.cb2)
self.parent.bind("<KeyPress-3>", self.cb3)
self.parent.bind("<KeyRelease-3>", self.cb3)
self.parent.bind("<KeyPress-4>", self.cb4)
self.parent.bind("<KeyRelease-4>", self.cb4)
self.parent.bind("<KeyPress-5>", self.cb5)
self.parent.bind("<KeyRelease-5>", self.cb5)
self.parent.bind("<KeyPress-6>", self.cb6)
self.parent.bind("<KeyRelease-6>", self.cb6)
self.parent.bind("<KeyPress-7>", self.cb7)
self.parent.bind("<KeyRelease-7>", self.cb7)
self.parent.bind("<KeyPress-8>", self.cb8)
self.parent.bind("<KeyRelease-8>", self.cb8)
def display_bitfield(self):
"""
Display the PIO lines (1 for on, 0 for off)
"""
bin_lst = []
for x in xrange(8):
bit = 1 << x
if bit & self.bitfield:
bin_lst.append("1")
else:
bin_lst.append("0")
bin_lst.reverse()
bin_str = " ".join( bin_lst )
self.bf_label.set( bin_str )
def message( self, msg_txt ):
"set"
self.messages.set( msg_txt )
def cb_factory(self, numberic_key ):
"""
Return a call back function for a specific keyboard numeric key (0-9)
"""
def cb( self, event, key=numberic_key ):
bit_val = 1<<numberic_key-1
if int(event.type) == 2:
self.bitfield |= bit_val
self.message("Key %d Down" % key)
else:
self.bitfield &= (~bit_val & 0xFF)
self.message("Key %d Up" % key)
print hex(self.bitfield)
self.display_bitfield()
return cb
##########################################################################
if __name__ == "__main__":
root = Tk()
root.title("PIO Test")
theApp = App( root )
root.mainloop()
I finally got some sort of method factory working for the callback but I don't find it very satisfactory.
So my question is, can you have a class method factory, that will produce class methods the way I tried (see commented out code and App class method cb_factory())?
NOTES: Yes, I know that this app only lets you hold down 4 keys at a time, but that is good enough for my purposes.
|
[
"cb expects 'self' and 'event'. Maybe it only gets event from the bind?\n",
"In answer to your followup question.\nI'm not sure which part you don't understand but I'm guessing you don't quite have a handle on how event callbacks work? If so it's pretty easy. Tk runs in a loop looking for events (keypresses, mouseclicks, etc..). When you bind a callback/function to a event you're just telling it to call your function and pass an event object as it's argument. You can then query the event object for more details of what actually occurred. You are currently building seperate callback functions and binding each to 18 key events (down and release on keys 1-9). I think you can rewrite this to simply have cb as a method of your class because the event object will almost certainly contain the keycode as well.\nclass:\n def __init__(self):\n for x in xrange(8):\n self.parent.bind(\"<KeyPress-%d>\" % x, self.keyaction)\n self.parent.bind(\"<KeyRelease-%d>\" % x, self.keyaction)\n\n def keyaction(self, event):\n key = event.keycode # attribute may have another name, I haven't checked tk docs\n ... do stuff ...\n\nSince we're now using self.keyaction as the callback it should get self as its first argument. It gets its keycode from the event object. Now neither value needs to be 'built' into the function when the function is created so the need to actually create different callbacks for each key is removed and the code easier to understand.\n",
"Here is the amended code, taking into account SpliFF's answer. I find this much more aesthetically pleasing but it bugs me that I don't understand how it works. So, for extra credit, can anyone explain how this does work?\n#!usr/bin/env python\n\"\"\"\nPython + Tk GUI interface to simulate a 8 Pio lines.\n\"\"\"\n\nfrom Tkinter import *\nfrom pio_handler import *\n\nclass App( Frame ):\n \"\"\"\n Main TK App class \n \"\"\"\n\n def __init__(self, parent):\n \"Init\"\n self.parent = parent\n self.bitfield = 0x00\n Frame.__init__(self, parent)\n\n self.messages = StringVar()\n self.messages.set(\"Initialised\")\n\n Label( parent, bd=1, \n relief=SUNKEN, \n anchor=W, \n textvariable=self.messages,\n text=\"Testing\" ).pack(fill=X)\n\n self.bf_label = StringVar()\n self.bf_label.set(\"0 0 0 0 0 0 0 0\")\n\n Label( parent, bd=1, \n relief=SUNKEN, \n anchor=W, \n textvariable=self.bf_label,\n text=\"Testing\" ).pack(fill=X)\n\n # This is the clever bit!\n # Use a factory to assign a callback function for keys 1 to 8\n for x in xrange(1,9):\n cb = self.cb_factory(x)\n self.parent.bind(\"<KeyPress-%d>\" % x, cb) \n self.parent.bind(\"<KeyRelease-%d>\" % x, cb) \n\n def display_bitfield(self):\n \"\"\"\n Display the PIO lines (1 for on, 0 for off)\n \"\"\"\n bin_lst = []\n for x in xrange(8):\n bit = 1 << x\n if bit & self.bitfield:\n bin_lst.append(\"1\")\n else:\n bin_lst.append(\"0\")\n bin_lst.reverse()\n bin_str = \" \".join( bin_lst )\n self.bf_label.set( bin_str )\n\n def message( self, msg_txt ):\n \"set\"\n self.messages.set( msg_txt )\n\n def cb_factory(self, numeric_key ):\n \"\"\"\n Return a call back function for a specific keyboard numeric key (0-9)\n \"\"\"\n def cb( event, key=numeric_key ):\n bit_val = 1<<numeric_key-1\n if int(event.type) == 2:\n self.bitfield |= bit_val\n self.message(\"Key %d Down\" % key)\n else:\n self.bitfield &= (~bit_val & 0xFF)\n self.message(\"Key %d Up\" % key)\n print hex(self.bitfield)\n self.display_bitfield()\n return cb\n\n##########################################################################\n\nif __name__ == \"__main__\":\n\n root = Tk()\n root.title(\"PIO Test\")\n theApp = App( root )\n\n root.mainloop()\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"factory",
"methods",
"python"
] |
stackoverflow_0000909551_factory_methods_python.txt
|
Q:
How to show characters non ascii in python?
I'm using the Python Shell in this way:
>>> s = 'Ã'
>>> s
'\xc3'
How can I print the s variable so that it shows the character Ã? This is the first and easiest question. Really, I'm getting the content from a web page that has non-ASCII characters like the one above, and others with accents or tildes like á, é, í, ñ, etc. Also, I'm trying to execute a regex with these characters in the pattern expression against the content of the web page.
How can I solve this problem?
This is an example of one regex:
u'<td[^>]*>\s*Definición\s*</td><td class="value"[^>]*>\s*(?P<data>[\w ,-:\.\(\)]+)\s*</td>'
If I use the Expresso application, it works fine.
EDIT[05/26/2009 16:38]:
Sorry about my explanation. I'll try to explain better.
I have to get some text from a page. I have the URL of that page and I have the regex to get that text. The first thing I thought was that the regex was wrong, but I checked it with Expresso and it works fine; I got the text I wanted. So the second thing I tried was to print the content of the page, and that was when I saw that the content was not what I see in the source code of the web page. The differences are the non-ASCII characters like á, é, í, etc. Now I don't know whether the problem is in the encoding of the page content or in the pattern text of the regex. One of the regexes I've defined is the one above.
The question would be: is there any problem using a regex whose pattern text has non-ASCII characters?
A:
How can I print s variable to show the character Ã???
use print:
>>> s = 'Ã'
>>> s
'\xc3'
>>> print s
Ã
A:
Suppose you want to print it as utf-8. Before python 3, the best is to specifically encode it
print u'Ã'.encode('utf-8')
If you get the text externally then you have to explicitly decode('utf-8'), for example:
f = open(my_file)
a = f.next().decode('utf-8') # you have a unicode line in a
print a.encode('utf-8')
A:
I would use ord() to find out if a character is ASCII/special:
if ord(c) > 127:
# special character
This probably won't work with multibyte encodings such as UTF-8. In this case, I would convert to Unicode before testing.
If you get special characters from a web page, you should know the encoding. Then decode it, see Unicode HOWTO.
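As a hedged sketch of that advice (the URL is a placeholder and UTF-8 is only an assumption; use whatever encoding the page really declares):
# -*- coding: utf-8 -*-
import re
import urllib2

html = urllib2.urlopen("http://example.com/page").read()
page = html.decode("utf-8")   # decode the bytes into unicode first

pattern = re.compile(u'Definición\s*</td>', re.UNICODE)
m = pattern.search(page)
if m:
    print m.group(0).encode("utf-8")   # encode again before printing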
Edit: I'm definitely not sure what this question is about... It may be a good idea to clarify it.
|
How to show characters non ascii in python?
|
I'm using the Python Shell in this way:
>>> s = 'Ã'
>>> s
'\xc3'
How can I print the s variable so that it shows the character Ã? This is the first and easiest question. Really, I'm getting the content from a web page that has non-ASCII characters like the one above, and others with accents or tildes like á, é, í, ñ, etc. Also, I'm trying to execute a regex with these characters in the pattern expression against the content of the web page.
How can I solve this problem?
This is an example of one regex:
u'<td[^>]*>\s*Definición\s*</td><td class="value"[^>]*>\s*(?P<data>[\w ,-:\.\(\)]+)\s*</td>'
If I use the Expresso application, it works fine.
EDIT[05/26/2009 16:38]:
Sorry about my explanation. I'll try to explain better.
I have to get some text from a page. I have the URL of that page and I have the regex to get that text. The first thing I thought was that the regex was wrong, but I checked it with Expresso and it works fine; I got the text I wanted. So the second thing I tried was to print the content of the page, and that was when I saw that the content was not what I see in the source code of the web page. The differences are the non-ASCII characters like á, é, í, etc. Now I don't know whether the problem is in the encoding of the page content or in the pattern text of the regex. One of the regexes I've defined is the one above.
The question would be: is there any problem using a regex whose pattern text has non-ASCII characters?
|
[
"How can I print s variable to show the character Ã???\nuse print:\n>>> s = 'Ã'\n>>> s\n'\\xc3'\n>>> print s\nÃ\n\n",
"Suppose you want to print it as utf-8. Before python 3, the best is to specifically encode it\nprint u'Ã'.encode('utf-8')\n\nif you get the text externally then you have to specifically decode('utf-8) such as\nf = open(my_file)\na = f.next().decode('utf-8') # you have a unicode line in a\nprint a.encode('utf-8') \n\n",
"I would use ord() to find out if a character is ASCII/special:\nif ord(c) > 127:\n # special character\n\nThis probably won't work with multibyte encodings such as UTF-8. In this case, I would convert to Unicode before testing.\nIf you get special characters from a web page, you should know the encoding. Then decode it, see Unicode HOWTO.\nEdit: I'm definitely not sure what this question is about... It may be a good idea to clarify it.\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"python",
"urllib2"
] |
stackoverflow_0000910809_python_urllib2.txt
|
Q:
Python and Qt (PyQt) - calling method before resize event
I have a question. There is an application class in my program that inherits from QtGui.QMainWindow. In __init__ I call my own method which works with graphics, and it should be called before the resize event. How can I do that?
Thanks.
EDIT: As you can see here, the value of the resize event is 14 and the show event is 17. So I should find an event with a value less than 14.
I found my problem. In the constructor, before creating the handle of the image, I'm moving the window to some position... During that action resizeEvent gets called. Sorry for this question.
A:
You could override the resizeEvent method in your class (which QMainWindow inherits from QWidget), see http://doc.trolltech.com/4.4/qwidget.html#resizeEvent -- in your override, call your other code, then delegate the rest of the work to the parent's version of the method.
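A minimal sketch of that override; prepare_graphics is a stand-in for whatever your own graphics method is called:
from PyQt4 import QtGui

class MainWindow(QtGui.QMainWindow):
    def resizeEvent(self, event):
        self.prepare_graphics()                     # your own code runs first
        QtGui.QMainWindow.resizeEvent(self, event)  # then let Qt do its normal work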
|
Python and Qt (PyQt) - calling method before resize event
|
I have a question. There is an application class in my program that inherits from QtGui.QMainWindow. In __init__ I call my own method which works with graphics, and it should be called before the resize event. How can I do that?
Thanks.
EDIT: As you can see here, the value of the resize event is 14 and the show event is 17. So I should find an event with a value less than 14.
I found my problem. In the constructor, before creating the handle of the image, I'm moving the window to some position... During that action resizeEvent gets called. Sorry for this question.
|
[
"You could override in your class the resizeEvent method (which QMainWindows inherits from QWidget), see http://doc.trolltech.com/4.4/qwidget.html#resizeEvent -- in your override, call your other code, then delegate the rest of the work to the parent's version of the method.\n"
] |
[
1
] |
[] |
[] |
[
"constructor",
"pyqt",
"python"
] |
stackoverflow_0000911167_constructor_pyqt_python.txt
|
Q:
Python: Inheriting from Built-In Types
I have a question concerning subtypes of built-in types and their constructors. I want a class to inherit both from tuple and from a custom class.
Let me give you the concrete example. I work a lot with graphs, meaning nodes connected with edges. I am starting to do some work on my own graph framework.
There is a class Edge, which has its own attributes and methods. It should also inherit from a class GraphElement. (A GraphElement is every object that has no meaning outside the context of a specific graph.) But at the most basic level, an edge is just a tuple containing two nodes. It would be nice syntactic sugar if you could do the following:
edge = graph.create_edge("Spam","Eggs")
(u, v) = edge
So (u,v) would contain "Spam" and "Eggs". It would also support iteration like
for node in edge: ...
I hope you see why I would want to subtype tuple (or other basic types like set).
So here is my Edge class and its init:
class Edge(GraphElement, tuple):
def __init__(self, graph, (source, target)):
GraphElement.__init__(self, graph)
tuple.__init__((source, target))
When I call
Edge(aGraph, (source, target))
I get a TypeError: tuple() takes at most 1 argument (2 given). What am I doing wrong?
A:
Since tuples are immutable, you need to override the __new__ method as well. See http://www.python.org/download/releases/2.2.3/descrintro/#__new__
class GraphElement:
def __init__(self, graph):
pass
class Edge(GraphElement, tuple):
def __new__(cls, graph, (source, target)):
return tuple.__new__(cls, (source, target))
def __init__(self, graph, (source, target)):
GraphElement.__init__(self, graph)
A:
For what you need, I would avoid multiple inheritance and would implement an iterator using generator:
class GraphElement:
def __init__(self, graph):
pass
class Edge(GraphElement):
def __init__(self, graph, (source, target)):
GraphElement.__init__(self, graph)
self.source = source
self.target = target
def __iter__(self):
yield self.source
yield self.target
In this case both usages work just fine:
e = Edge(None, ("Spam","Eggs"))
(s, t) = e
print s, t
for p in e:
print p
A:
You need to override __new__ -- currently tuple.__new__ is getting called (as you don't override it) with all the arguments you're passing to Edge.
|
Python: Inheriting from Built-In Types
|
I have a question concerning subtypes of built-in types and their constructors. I want a class to inherit both from tuple and from a custom class.
Let me give you the concrete example. I work a lot with graphs, meaning nodes connected with edges. I am starting to do some work on my own graph framework.
There is a class Edge, which has its own attributes and methods. It should also inherit from a class GraphElement. (A GraphElement is every object that has no meaning outside the context of a specific graph.) But at the most basic level, an edge is just a tuple containing two nodes. It would be nice syntactic sugar if you could do the following:
edge = graph.create_edge("Spam","Eggs")
(u, v) = edge
So (u,v) would contain "Spam" and "Eggs". It would also support iteration like
for node in edge: ...
I hope you see why I would want to subtype tuple (or other basic types like set).
So here is my Edge class and its init:
class Edge(GraphElement, tuple):
def __init__(self, graph, (source, target)):
GraphElement.__init__(self, graph)
tuple.__init__((source, target))
When I call
Edge(aGraph, (source, target))
I get a TypeError: tuple() takes at most 1 argument (2 given). What am I doing wrong?
|
[
"Since tuples are immutable, you need to override the __new__ method as well. See http://www.python.org/download/releases/2.2.3/descrintro/#__new__\nclass GraphElement:\n def __init__(self, graph):\n pass\n\nclass Edge(GraphElement, tuple):\n def __new__(cls, graph, (source, target)):\n return tuple.__new__(cls, (source, target))\n def __init__(self, graph, (source, target)):\n GraphElement.__init__(self, graph)\n\n",
"For what you need, I would avoid multiple inheritance and would implement an iterator using generator:\nclass GraphElement:\n def __init__(self, graph):\n pass\n\nclass Edge(GraphElement):\n def __init__(self, graph, (source, target)):\n GraphElement.__init__(self, graph)\n self.source = source\n self.target = target\n\n def __iter__(self):\n yield self.source\n yield self.target\n\nIn this case both usages work just fine:\ne = Edge(None, (\"Spam\",\"Eggs\"))\n(s, t) = e\nprint s, t\nfor p in e:\n print p\n\n",
"You need to override __new__ -- currently tuple.__new__ is getting called (as you don't override it) with all the arguments you're passing to Edge.\n"
] |
[
10,
6,
3
] |
[] |
[] |
[
"graph",
"oop",
"python"
] |
stackoverflow_0000911375_graph_oop_python.txt
|
Q:
Python MS Word
Possible Duplicate:
Reading/Writing MS Word files in Python
I'm looking into a requirements management system (like Rational RequisitePro) and will need to read through an MS Word doc searching for specific tags, on either a Windows or Mac OS environment. Are there any known frameworks for this (I couldn't find any), or suggested approaches?
Just to add some clarification - this would not be a one-time read, I'd review the doc every time there is an update to it and perform a CRUD on the requirement specific areas.
A:
First, get it out of native Word (.doc) format.
Do a "Save As XML" and insist your users work with that file instead of the .doc file. They'll hardly notice the difference -- except that the file is bigger.
Use lxml or element tree to parse the XML and find the headings, sections, paragraphs and lists.
You can also do a "Save As HTML" before doing your analysis. This works just as well as the XML version. The HTML version isn't as easy for users, however, so do this prior to your analysis only.
Use Beautiful Soup to parse the HTML and find the headings, sections, paragraphs and lists.
Once you have a parse structure (XML or HTML) you can analyze the document looking for specific tags.
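For example, a hedged sketch with the (pre-bs4) BeautifulSoup API; the filename and the "REQ-" tag convention are invented for illustration:
from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3.x

html = open("requirements.html").read()
soup = BeautifulSoup(html)

# Walk headings and paragraphs looking for your requirement tags.
for node in soup.findAll(["h1", "h2", "h3", "p"]):
    text = "".join(node.findAll(text=True))
    if "REQ-" in text:
        print text.encode("utf-8")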
A:
You can build on the ability of openoffice.org to read Word documents.
The Python-UNO bridge allows to use the standard OpenOffice.org API from the python scripting language. Using Python-UNO and having relevant parts of openoffice on your machine, it should be straightforward to read most Word documents.
A:
Using Visual Studio Tools for Office (VSTO), it is possible to script Word from any .NET language. The How to: Search for Text in Documents example shows C# and Visual Basic code, but IronPython can also call the same .NET methods.
If you are ready to use IronPython (no Mac equivalent), this could be a Windows specific solution for searching inside Word documents.
A:
Assuming you are on Windows and have Word installed, you can control Word from within Python using COM - see Python for win32. On Linux you can do the same with OpenOffice.
Alternatively there are a bunch of string extractors for Word for both win32 or Linux, you can then use the normal python regex tools.
See this question extracting text from MS word files in python
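A hedged sketch of the COM route (requires pywin32 and a local copy of Word; the path and the "REQ-" tag convention are placeholders):
import win32com.client

word = win32com.client.Dispatch("Word.Application")
word.Visible = False
doc = word.Documents.Open(r"C:\docs\requirements.doc")
try:
    text = doc.Content.Text              # the whole document as one string
    for line in text.split("\r"):        # Word separates paragraphs with \r
        if "REQ-" in line:
            print line
finally:
    doc.Close(False)
    word.Quit()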
A:
If you have a bit of cash you can buy the Aspose.Words Java API. With it you can programmatically access and manipulate any Word document.
A:
I know this is a Python question but...
On Windows you should use VBScript (VBA Macros) and OLE to programmatically access Word.
Examples | How-tos | Automating Word using OLE
On MacOSX you use VBA for older versions and AppleScript for Office 2008.
Article
With VBA you have a choice of either modifying the document in-place or performing an automated "Save As" to get the data in a more easily handled format (though be warned its HTML export is abysmal).
I strongly recommend staying away from third-party libraries/products for this, even if you dislike vbscript. The format is far too complex, undocumented and inconsistent for accurate external handling. StarOffice/OpenOffice is proof of that. They've been trying for years and still haven't got accurate .doc parsing, let alone .docx. Yes it works in general but you run an unquantifiable risk of mangling documents once you start trying to programmatically modify them outside of Word. You should be able to call VBscript from Python using os.system. I think the interpreter is wscript.exe but don't hold me to that. This may work though:
os.system('start script.vb')
|
Python MS Word
|
Possible Duplicate:
Reading/Writing MS Word files in Python
I'm looking into a requirements management system (like Rational RequisitePro) and will need to read through an MS Word doc searching for specific tags, on either a Windows or Mac OS environment. Are there any known frameworks for this (I couldn't find any), or suggested approaches?
Just to add some clarification - this would not be a one-time read, I'd review the doc every time there is an update to it and perform a CRUD on the requirement specific areas.
|
[
"First, get it out of native Word (.doc) format.\n\nDo a \"Save As XML\" and insist your users work with that file instead of the .doc file. They'll hardly notice the difference -- except that the file is bigger.\nUse lxml or element tree to parse the XML and find the headings, sections, paragraphs and lists.\n\nYou can also do a \"Save As HTML\" before doing your analysis. This works just as well as the XML version. The HTML version isn't as easy for users, however, so do this prior to your analysis only.\nUse Beautiful Soup to parse the HTML and find the headings, sections, paragraphs and lists.\n\n\nOnce you have a parse structure (XML or HTML) you can analyze the document looking for specific tags.\n",
"You can build on the ability of openoffice.org to read Word documents.\nThe Python-UNO bridge allows to use the standard OpenOffice.org API from the python scripting language. Using Python-UNO and having relevant parts of openoffice on your machine, it should be straightforward to read most Word documents.\n",
"Using Visual Studio Tools for Office (VSTO), it is possible to script Word from any .NET language. The How to: Search for Text in Documents example shows C# and Visual Basic code, but IronPython can also call the same .NET methods.\nIf you are ready to use IronPython (no Mac equivalent), this could be a Windows specific solution for searching inside Word documents.\n",
"Assuming you are on windows and have Word installed you can control Word from within python using COM - see Python for win32 On Linux you can do the same with OpenOffice.\nAlternatively there are a bunch of string extractors for Word for both win32 or Linux, you can then use the normal python regex tools.\nSee this question extracting text from MS word files in python\n",
"If you have a bit of cash you can buy the Aspose.Words Java API. With it you can programatically access and manipulate any Word document\n",
"I know this is a Python question but...\nOn Windows you should use VBScript (VBA Macros) and OLE to programmically access Word.\nExamples | How-tos | Automating Word using OLE\nOn MacOSX you use VBA for older versions and AppleScript for Office 2008.\nArticle\nWith VBA you have a choice of either modifying the document in-place or performing an automated \"Save As\" to get the data in a more easily handled format (though be warned its HTML export is abysmal).\nI strongly recommend staying away from third-party libraries/products for this, even if you dislike vbscript. The format is far too complex, undocumented and inconsistent for accurate external handling. StarOffice/OpenOffice is proof of that. They've been trying for years and still haven't got accurate .doc parsing, let alone .docx. Yes it works in general but you run an unquantifiable risk of mangling documents once you start trying to programmically modify them outside of Word. You should be able to call VBscript from Python using os.system. I think the interpreter is wscript.exe but don't hold me to that. This may work though:\nos.system('start script.vb')\n\n"
] |
[
4,
2,
2,
2,
0,
0
] |
[] |
[] |
[
"ms_word",
"python"
] |
stackoverflow_0000910730_ms_word_python.txt
|
Q:
Django/Python UserWarning Error
I keep getting this error/warning, which is annoying, and wanted to see if I can fix it, but I'm not sure where to start (I'm a newbie):
/home/simi/workspace/hssn_svn/hssn/../hssn/log/loggers.py:28: UserWarning: ERROR: Could not configure logging
warnings.warn('ERROR: Could not configure logging', UserWarning)
I'm getting this when I do:
python manage.py runserver
python manage.py syncdb
python manage.py shell
Any guidance is greatly appreciated.
Thanks,
--simi
A:
Do you have write permissions to all the files within the application you are working with? Also make sure everything in settings.py is set up correctly: check that the specified paths exist and that you have permission to write to them.
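If the config in loggers.py writes to a file, a quick hedged check along these lines usually flushes out the problem (LOG_FILE is a placeholder for whatever path your settings actually use):
import os

LOG_FILE = "/home/simi/workspace/hssn_svn/hssn/log/hssn.log"   # placeholder

log_dir = os.path.dirname(LOG_FILE)
if not os.path.isdir(log_dir):
    print "log directory does not exist:", log_dir
elif not os.access(log_dir, os.W_OK):
    print "log directory is not writable:", log_dir
else:
    print "path looks fine; the problem is probably in the logging config itself"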
|
Django/Python UserWarning Error
|
I keep getting this error/warning, which is annoying, and wanted to see if I can fix it, but I'm not sure where to start (I'm a newbie):
/home/simi/workspace/hssn_svn/hssn/../hssn/log/loggers.py:28: UserWarning: ERROR: Could not configure logging
warnings.warn('ERROR: Could not configure logging', UserWarning)
I'm getting this when I do:
python manage.py runserver
python manage.py syncdb
python manage.py shell
Any guidance is greatly appreciated.
Thanks,
--simi
|
[
"Do you have write permissions to all the files within the application you are working with. Also make sure you have everything in settings.py setup correctly, make sure specified paths exist and you have permissions.\n"
] |
[
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000911273_django_python.txt
|
Q:
Storing information on points in a 3d space
I'm writing some code (just for fun so far) in Python that will store some data on every point in a 3d space. I'm basically after a 3d matrix object that stores arbitrary objects and that will allow me to do some advanced selections, like:
Get the point where x=1,y=2,z=3.
Getting all points where y=2.
Getting all points within 3 units of position x=1,y=2,z=3.
Getting all points where point.getType() == "Foo"
In all of the above, I'd need to end up with some sort of output that would give me the original position in the space, and the data stored at that point.
Apparently numpy can do what I want, but it seems highly optimised for scientific computing and working out how to get the data like I want above has so far eluded me.
Is there a better alternative or should I return to banging my head on the numpy wall? :)
EDIT: some more info that the first three answers made me realise I should include: I'm not worried about performance; this is purely a proof-of-concept where I'd prefer clean code to good performance. I will also have data for every point in the given 3d space, so I guess a sparse matrix is bad?
A:
Here's another common approach
class Point( object ):
def __init__( self, x, y, z, data ):
self.x, self.y, self.z = x, y, z
self.data = data
def distFrom( self, x, y, z )
return math.sqrt( (self.x-x)**2 + (self.y-y)**2 + (self.z-z)**2 )
database = [ Point(x,y,z,data), Point(x,y,z,data), ... ]
Let's look at your use cases.
Get the point where x=1,y=2,z=3.
[ p for p in database if (p.x, p.y, p.z) == ( 1, 2, 3 ) ]
Getting all points where y=2.
[ p for p in database if p.y == 2 ]
Getting all points within 3 units of position x=1,y=2,z=3.
[ p for p in database if p.distFrom( 1, 2, 3 ) <= 3.0 ]
Getting all points where point.getType() == "Foo"
[ p for p in database if type(p.data) == Foo ]
A:
Well ... If you expect to really fill that space, then you're probably best off with a densely packed matrix-like structure, basically voxels.
If you don't expect to fill it, look into something a bit more optimized. I would start by looking at octrees, which are often used for things like this.
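To make the idea concrete, here is a toy point octree; a sketch only, under the assumptions that all points fall in a cube of side size whose lowest corner is origin, and that a node splits once it holds more than capacity points:
class Octree(object):
    def __init__(self, origin, size, capacity=8):
        self.origin, self.size, self.capacity = origin, size, capacity
        self.points = []      # (x, y, z, data) tuples stored at this node
        self.children = None  # eight sub-octants once the node splits

    def _octant(self, x, y, z):
        ox, oy, oz = self.origin
        half = self.size / 2.0
        return (x >= ox + half) + 2 * (y >= oy + half) + 4 * (z >= oz + half)

    def insert(self, x, y, z, data):
        if self.children is None:
            self.points.append((x, y, z, data))
            if len(self.points) > self.capacity and self.size > 1e-6:
                self._split()
            return
        self.children[self._octant(x, y, z)].insert(x, y, z, data)

    def _split(self):
        ox, oy, oz = self.origin
        half = self.size / 2.0
        self.children = [Octree((ox + half * (i & 1),
                                 oy + half * ((i >> 1) & 1),
                                 oz + half * ((i >> 2) & 1)), half, self.capacity)
                         for i in range(8)]
        old, self.points = self.points, []
        for px, py, pz, pdata in old:
            self.insert(px, py, pz, pdata)

    def within(self, x, y, z, r):
        """All (x, y, z, data) tuples within distance r of (x, y, z)."""
        hits = []
        ox, oy, oz = self.origin
        # Skip this node entirely if the query sphere cannot touch its cube.
        for q, lo in ((x, ox), (y, oy), (z, oz)):
            if q < lo - r or q > lo + self.size + r:
                return hits
        if self.children is None:
            for px, py, pz, pdata in self.points:
                if (px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2 <= r * r:
                    hits.append((px, py, pz, pdata))
            return hits
        for child in self.children:
            hits.extend(child.within(x, y, z, r))
        return hits
The octree mainly buys you the radius query; selections like "all points where y=2" would still need a scan or a separate index.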
A:
One advantage of numpy is that it is blazingly fast,
e.g. calculating the pagerank of a 8000x8000 adjacency matrix takes milliseconds. Even though numpy.ndarray will only accept numbers, you can store number/id-object mappings in an external hash-table i.e. dictionary (which in again is a highly optimized datastructure).
The slicing would be as easy as list slicing in python:
>>> from numpy import arange
>>> the_matrix = arange(64).reshape(4, 4, 4)
>>> print the_matrix[0][1][2]
6
>>> print the_matrix[0][1]
[4 5 6 7]
>>> print the_matrix[0]
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
If you wrap some of your desired functions (distances) around some core matrix and a id-object-mapping hash, you could have your application running within a short period of time.
Good luck!
A:
You can do the first 2 queries with slicing in numpy :
a = numpy.zeros((4, 4, 4))
a[1, 2, 3] # The point at x=1,y=2,z=3
a[:, 2, :] # All points where y=2
For the third one if you mean "getting all points within a sphere of radius 3 and centered at x=1,y=2,z=3", you will have to write a custom function to do that ; if you want a cube you can proceed with slicing, e.g.:
a[1:3, 1:3, 1:3] # The 2x2x2 array sliced from the center of 'a'
For the fourth query if the only data stored in your array is the cells type, you could encode it as integers:
FOO = 1
BAR = 2
a = numpy.zeros((4, 4, 4), dtype="i")
a[1, 2, 3] = FOO
a[3, 2, 1] = BAR
def filter(a, type_code):
coords = []
for z in range(4):
for y in range(4):
for x in range(4):
if a[x, y, z] == type_code:
coords.append((x, y, z))
return coords
filter(a, FOO) # => [(1, 2, 3)]
numpy looks like the good tool for doing what you want, as the arrays will be smaller in memory, easily accessible in C (or even better, cython !) and extended slicing syntax will avoid you writing code.
A:
Here's an approach that may work.
Each point is a 4-tuple (x,y,z,data), and your database looks like this:
database = [ (x,y,z,data), (x,y,z,data), ... ]
Let's look at your use cases.
Get the point where x=1,y=2,z=3.
[ (x,y,z,data) for x,y,z,data in database if (x,y,z) == (1,2,3) ]
Getting all points where y=2.
[ (x,y,z,data) for x,y,z,data in database if y == 2 ]
Getting all points within 3 units of position x=1,y=2,z=3.
[ (x,y,z,data) for x,y,z,data in database if math.sqrt((x-1)**2+(y-2)**2+(z-3)**2)<=3.0 ]
Getting all points where point.getType() == "Foo"
[ (x,y,z,data) for x,y,z,data in database if type(data) == Foo ]
A:
When to use Binary Space Partitioning, Quadtree, Octree?
3d array imo is worthless. Especially if your world is dynamic. You should decide between BSP, Quadtree or Octtree. BSP would do just fine. Since your world is in 3d, you need planes when splitting the bsp, not lines.
Cheers !
Edit
I will also have data for every point
in the given 3d space, so I guess a
Spare Matrix is bad?
I guess this is alright if always know how large you data set is and that it never changes, i.e. if more points are added to it that in turn are out of bound. You would have to resize the 3d array in that case.
A:
Using a dictionary with x,y,z tuples as keys is another solution, if you want a relatively simple solution with the standard library.
import math
#use indexing to get point at (1,2,3): points[(1,2,3)]
def get_points(points, x=None, y=None, z=None):
    """Returns a dict of all points with the given x, y and/or z values. Call using keywords (e.g. get_points(points, x=3))"""
    filteredPoints = points.items()
    if x is not None:
        filteredPoints = [p for p in filteredPoints if p[0][0] == x]
    if y is not None:
        filteredPoints = [p for p in filteredPoints if p[0][1] == y]
    if z is not None:
        filteredPoints = [p for p in filteredPoints if p[0][2] == z]
    return dict(filteredPoints)

def get_point_with_type(points, type_):
    """Returns a dict of points with point.getType() == type_"""
    return dict((position, data) for position, data in points.iteritems() if data.getType() == type_)

def get_points_in_radius(points, x, y, z, r):
    """Returns a dict of points within radius r of point (x, y, z)"""
    def get_dist(x1, y1, z1, x2, y2, z2):
        return math.sqrt((x1-x2)**2 + (y1-y2)**2 + (z1-z2)**2)
    return dict((position, data) for position, data in points.iteritems() if get_dist(x, y, z, *position) <= r)
And due to python referencing, you can alter "points" in the returned dictionaries, and have the original points change as well (I think).
|
Storing information on points in a 3d space
|
I'm writing some code (just for fun so far) in Python that will store some data on every point in a 3d space. I'm basically after a 3d matrix object that stores arbitrary objects and that will allow me to do some advanced selections, like:
Get the point where x=1,y=2,z=3.
Getting all points where y=2.
Getting all points within 3 units of position x=1,y=2,z=3.
Getting all points where point.getType() == "Foo"
In all of the above, I'd need to end up with some sort of output that would give me the original position in the space, and the data stored at that point.
Apparently numpy can do what I want, but it seems highly optimised for scientific computing and working out how to get the data like I want above has so far eluded me.
Is there a better alternative or should I return to banging my head on the numpy wall? :)
EDIT: some more info that the first three answers made me realise I should include: I'm not worried about performance; this is purely a proof-of-concept where I'd prefer clean code to good performance. I will also have data for every point in the given 3d space, so I guess a sparse matrix is bad?
|
[
"Here's another common approach\nclass Point( object ):\n def __init__( self, x, y, z, data ):\n self.x, self.y, self.z = x, y, z\n self.data = data\n def distFrom( self, x, y, z )\n return math.sqrt( (self.x-x)**2 + (self.y-y)**2 + (self.z-z)**2 )\n\ndatabase = [ Point(x,y,z,data), Point(x,y,z,data), ... ]\n\nLet's look at your use cases.\nGet the point where x=1,y=2,z=3.\n[ p for p in database if (p.x, p.y, p.z) == ( 1, 2, 3 ) ]\n\nGetting all points where y=2.\n[ p for p in database if p.y == 2 ]\n\nGetting all points within 3 units of position x=1,y=2,z=3.\n[ p for p in database if p.distFrom( 1, 2, 3 ) <= 3.0 ]\n\nGetting all points where point.getType() == \"Foo\"\n[ p for p in database if type(p.data) == Foo ]\n\n",
"Well ... If you expect to really fill that space, then you're probably best off with a densely packed matrix-like structure, basically voxels.\nIf you don't expect to fill it, look into something a bit more optimized. I would start by looking at octrees, which are often used for things like this.\n",
"One advantage of numpy is that it is blazingly fast, \ne.g. calculating the pagerank of a 8000x8000 adjacency matrix takes milliseconds. Even though numpy.ndarray will only accept numbers, you can store number/id-object mappings in an external hash-table i.e. dictionary (which in again is a highly optimized datastructure). \nThe slicing would be as easy as list slicing in python:\n>>> from numpy import arange\n\n>>> the_matrix = arange(64).reshape(4, 4, 4)\n>>> print the_matrix[0][1][2]\n 6\n>>> print the_matrix[0][1]\n [4 5 6 7]\n>>> print the_matrix[0]\n [[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]\n [12 13 14 15]]\n\nIf you wrap some of your desired functions (distances) around some core matrix and a id-object-mapping hash, you could have your application running within a short period of time.\nGood luck!\n",
"You can do the first 2 queries with slicing in numpy :\na = numpy.zeros((4, 4, 4))\na[1, 2, 3] # The point at x=1,y=2,z=3\na[:, 2, :] # All points where y=2\n\nFor the third one if you mean \"getting all points within a sphere of radius 3 and centered at x=1,y=2,z=3\", you will have to write a custom function to do that ; if you want a cube you can proceed with slicing, e.g.:\na[1:3, 1:3, 1:3] # The 2x2x2 array sliced from the center of 'a'\n\nFor the fourth query if the only data stored in your array is the cells type, you could encode it as integers:\nFOO = 1\nBAR = 2\na = numpy.zeros((4, 4, 4), dtype=\"i\")\na[1, 2, 3] = FOO\na[3, 2, 1] = BAR\ndef filter(a, type_code):\n coords = []\n for z in range(4):\n for y in range(4):\n for x in range(4):\n if a[x, y, z] == type_code:\n coords.append((x, y, z))\n return coords\nfilter(a, FOO) # => [(1, 2, 3)]\n\nnumpy looks like the good tool for doing what you want, as the arrays will be smaller in memory, easily accessible in C (or even better, cython !) and extended slicing syntax will avoid you writing code.\n",
"Here's an approach that may work.\nEach point is a 4-tuple (x,y,z,data), and your database looks like this:\ndatabase = [ (x,y,z,data), (x,y,z,data), ... ]\n\nLet's look at your use cases.\nGet the point where x=1,y=2,z=3.\n[ (x,y,z,data) for x,y,z,data in database if (x,y,z) == (1,2,3) ]\n\nGetting all points where y=2.\n[ (x,y,z,data) for x,y,z,data in database if y == 2 ]\n\nGetting all points within 3 units of position x=1,y=2,z=3.\n[ (x,y,z,data) for x,y,z,data in database if math.sqrt((x-1)**2+(y-2)**2+(z-3)**2)<=3.0 ]\n\nGetting all points where point.getType() == \"Foo\"\n[ (x,y,z,data) for x,y,z,data in database if type(data) == Foo ]\n\n",
"When to use Binary Space Partitioning, Quadtree, Octree?\n3d array imo is worthless. Especially if your world is dynamic. You should decide between BSP, Quadtree or Octtree. BSP would do just fine. Since your world is in 3d, you need planes when splitting the bsp, not lines.\nCheers !\nEdit\n\nI will also have data for every point\n in the given 3d space, so I guess a\n Spare Matrix is bad?\n\nI guess this is alright if always know how large you data set is and that it never changes, i.e. if more points are added to it that in turn are out of bound. You would have to resize the 3d array in that case. \n",
"Using a dictionary with x,y,z tuples as keys is another solution, if you want a relatively simple solution with the standard library.\nimport math\n\n#use indexing to get point at (1,2,3): points[(1,2,3)]\nget_points(points, x=None, y=None, x=None):\n \"\"\"returns dict of all points with given x,y and/or z values. Call using keywords (eg get_points(points, x=3)\"\"\"\n filteredPoints = points.items()\n if x:\n filteredPoints = [p for p in filteredPoints if p[0][0] == x]\n if y:\n filteredPoints = [p for p in filteredPoints if p[0][1] == y]\n if z:\n filteredPoints = [p for p in filteredPoints if p[0][0] == x]\n return dict(filteredPoints)\n\nget_point_with_type(points, type_):\n \"\"\"returns dict of points with point.getType() == type_\"\"\"\n filteredPoints = points.items()\n return dict((position,data) for position,data in points.iterItems() if data.getType == type_)\n\nget_points_in_radius(points,x,y,z,r):\n \"\"\"Returns a dict of points within radius r of point (x,y,z)\"\"\"\n def get_dist(x1,y1,z1,x2,y2,z3):\n return math.sqrt((x1-x2)*(x1-x2)+(y1-y2)*(y1-y2)+(z1-z2)*(z1-z2))\n return dict((position,data) for position,data in points.iterItems() if get_dist(x,y,z, *position) <= r))\n\nAnd due to python referencing, you can alter \"points\" in the returned dictionaries, and have the original points change as well (I think).\n"
] |
[
6,
3,
1,
0,
0,
0,
0
] |
[
"It depends upon the precise configuration of your system, but from the example you give you are using integers and discrete points, so it would probably be appropriate to consider Sparse Matrix data structures. \n"
] |
[
-1
] |
[
"3d",
"data_structures",
"matrix",
"numpy",
"python"
] |
stackoverflow_0000910930_3d_data_structures_matrix_numpy_python.txt
|
Q:
Django -- how to use templatetags filter with multiple arguments
I have a few values that I would like to pass into a filter and get a URL out of it.
In my template I have:
{% if names %}
{% for name in names %}
<a href='{{name|slugify|add_args:"custid=name.id, sortid=2"}}'>{{name}}</a>
{%if not forloop.last %} | {% endif %}
{% endfor %}
{% endif %}
In my templatetags I have:
@register.filter
def add_args(value, args):
argz = value.strip() + '-' + 'ARGS'
arglist = args.split(',')
for arg in arglist:
keyval = arg.split('=')
argz.join(keyval[0] + 'ZZ' + keyval[1])
argz.join('QQ')
return argz
The output URL should look like:
http://foo.org/john-smith-ARGScustidZZ11QQsortidZZ2
Where ARGS is the start of the arguments, ZZ is '=' and QQ is an '&' equivalent.
First of all: this would work, but I get the literal custid=name.id coming into add_args(), where I want custid=11 to come in. How do I pass in the id as its value and not as text?
Also, is there a way to just send in an array of key=>value pairs like in PHP?
In PHP I would build an array, let say:
arglist = array('custid' => $nameid, 'sortid' => $sortid );
Then I would pass the arglist as an argument to add_args() and in add_args() I would do
foreach( arglist as $key => $value)
$argstr .= $key . 'ZZ' . $value . 'QQ'.
Does anyone have a better way of making this work?
Note: if I have to pass all arguments as a string and split them up in the filter I don't mind. I just don't know how to pass the name.id as its value ...
A:
This "smart" stuff logic should not be in the template.
Build your end-of-urls in your view and then pass them to template:
def the_view(request):
url_stuff = "custid=%s, sortid, ...." % (name.id, 2 ...)
return render_to_response('template.html',
{'url_stuff':url_stuff,},
context_instance = RequestContext(request))
In template.html:
....
<a href='{{url_stuff}}'>{{name}}</a>
....
If you need a url for a whole bunch of objects consider using get_absolute_url on the model.
A:
You can't pass name.id to your filter. A filter argument can only be a single variable or a single literal. Python/Django doesn't attempt any "smart" variable replacement like PHP.
I suggest you create a tag for this task:
<a href='{% add_args "custid" name.id "sortid" "2" %}{{name|slugify}}{% end_add_args %}'>{{name}}</a>
This way you can know which argument is a literal value and which should be taken from context, etc. The docs are quite clear about this; take a look at the example.
Also, if this name is in any way related to a model (say we want to get to its permalink), adding a method that returns the URL with the proper arguments might be the tidiest solution.
Overall, I would refrain from putting too much logic into templates. Django is not PHP.
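To make that concrete, a hedged sketch using a simple_tag instead of a filter (the module name and tag arguments are invented; the URL scheme just copies the one from the question):
# yourapp/templatetags/url_args.py
from django import template
from django.template.defaultfilters import slugify

register = template.Library()

@register.simple_tag
def add_args(name, custid, sortid):
    return u"%s-ARGScustidZZ%sQQsortidZZ%s" % (slugify(name), custid, sortid)
In the template, after {% load url_args %}, something like {% add_args name name.id 2 %} then receives name.id already resolved to its value.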
A:
You're calling argz.join a couple times and never assigning the results to anything: maybe you're operating under the misconception that the join method of a string has some mysterious side effect, but it doesn't -- it just returns a new string, and if you don't do anything with that new string, poof, it's gone. Is that at least part of your problem...?
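A tiny illustration of the point, reusing the names from the question's filter: either reassign the result, or collect the pieces in a list and join them once at the end.
# str.join returns a new string; it never modifies anything in place.
pieces = []
for arg in args.split(','):
    key, val = arg.split('=')
    pieces.append(key.strip() + 'ZZ' + val.strip())
argz = value.strip() + '-ARGS' + 'QQ'.join(pieces)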
|
Django -- how to use templatetags filter with multiple arguments
|
I have a few values that I would like to pass into a filter and get a URL out of it.
In my template I have:
{% if names %}
{% for name in names %}
<a href='{{name|slugify|add_args:"custid=name.id, sortid=2"}}'>{{name}}</a>
{%if not forloop.last %} | {% endif %}
{% endfor %}
{% endif %}
In my templatetags I have:
@register.filter
def add_args(value, args):
argz = value.strip() + '-' + 'ARGS'
arglist = args.split(',')
for arg in arglist:
keyval = arg.split('=')
argz.join(keyval[0] + 'ZZ' + keyval[1])
argz.join('QQ')
return argz
The output URL should look like:
http://foo.org/john-smith-ARGScustidZZ11QQsortidZZ2
Where ARGS is the start of the arguments, ZZ is '=' and QQ is an '&' equivalent.
First of all: this would work, but I get the literal custid=name.id coming into add_args(), where I want custid=11 to come in. How do I pass in the id as its value and not as text?
Also, is there a way to just send in an array of key=>value pairs like in PHP?
In PHP I would build an array, let say:
arglist = array('custid' => $nameid, 'sortid' => $sortid );
Then I would pass the arglist as an argument to add_args() and in add_args() I would do
foreach( arglist as $key => $value)
$argstr .= $key . 'ZZ' . $value . 'QQ'.
Does anyone have a better way of making this work?
Note: if I have to pass all arguments as a string and split them up in the filter I don't mind. I just don't know how to pass the name.id as its value ...
|
[
"This \"smart\" stuff logic should not be in the template.\nBuild your end-of-urls in your view and then pass them to template:\ndef the_view(request):\n url_stuff = \"custid=%s, sortid, ....\" % (name.id, 2 ...)\n\n return render_to_response('template.html',\n {'url_stuff':url_stuff,},\n context_instance = RequestContext(request))\n\nIn template.html:\n ....\n\n <a href='{{url_stuff}}'>{{name}}</a>\n\n ....\n\nIf you need a url for a whole bunch of objects consider using get_absolute_url on the model.\n",
"You can't pass name.id to your filter. Filter arguments can be asingle value or a single literal. Python/Django doesn't attempt any \"smart\" variable replacement like PHP.\nI suggest you to create a tag for this task:\n<a href='{% add_args \"custid\" name.id \"sortid\" \"2\" %}{{name|slugify}}{% end_add_args %}'>{{name}}</a>\n\nThis way you can know which argument is a literal value and which should be taken fron context etc... Docs are quite clear about this, take a look at the example.\nAlso if this name is any way related to a model, say we want to get to the permalink, adding a method that returns the URL with the proper arguments might be the tidiest solution.\nOverall, I would refrain putting too much logic into templates. Django is not PHP.\n",
"You're calling argz.join a couple times and never assigning the results to anything: maybe you're operating under the misconception that the join method of a string has some mysterious side effect, but it doesn't -- it just returns a new string, and if you don't do anything with that new string, poof, it's gone. Is that at least part of your problem...?\n"
] |
[
6,
4,
3
] |
[] |
[] |
[
"django",
"django_templates",
"filter",
"python",
"tags"
] |
stackoverflow_0000896166_django_django_templates_filter_python_tags.txt
|
Q:
Django objects change model field
This doesn't work:
>>> pa = Person.objects.all()
>>> pa[2].nickname
u'arst'
>>> pa[2].nickname = 'something else'
>>> pa[2].save()
>>> pa[2].nickname
u'arst'
But it works if you take
p = Person.objects.get(pk=2)
and change the nick.
Why so.
A:
>>> type(Person.objects.all())
<class 'django.db.models.query.QuerySet'>
>>> pa = Person.objects.all() # Not evaluated yet - lazy
>>> type(pa)
<class 'django.db.models.query.QuerySet'>
DB queried to give you a Person object
>>> pa[2]
DB queried again to give you yet another Person object.
>>> pa[2].first_name = "Blah"
Let's call this instance PersonObject1 that resides in memory. So it's equivalent to something like this:
>>> PersonObject1.first_name = "Blah"
Now let's do this:
>>> pa[2].save()
The pa[2] again queries the db and returns another instance of the Person object, say PersonObject2. That instance will be unchanged! So it's equivalent to calling something like:
PersonObject2.save()
But this has nothing to do with PersonObject1.
A:
If you assigned your pa[2] to a variable, like you do with Person.objects.get(pk=2) you'd have it right:
pa = Person.objects.all()
print pa[2].nickname
'Jonny'
pa[2].nickname = 'Billy'
print pa[2].nickname
'Jonny'
# when you assign it to some variable, your operations
# change this particular object, not something that is queried out each time
p1 = pa[2]
print p1.nickname
'Jonny'
p1.nickname = 'Billy'
print p1.nickname
'Billy'
This has nothing to do with the method you pull the objects from database.
And, btw, django numbers PrimaryKeys starting from 1, not 0, so
Person.objects.all()[2] == Person.objects.get(pk=2)
False
Person.objects.all()[2] == Person.objects.get(pk=3)
True
A:
Person.objects.all() returns a QuerySet, which is lazy (doesn't perform a DB query until data is requested from it). Slicing a QuerySet (pa[2]) performs a database query to get a single row from the database (using LIMIT and OFFSET in SQL). Slicing the same QuerySet again doesn't do the DB query again (results are cached) but it does return a new instance of the model. Each time you access pa[2] you are getting a new Person instance (albeit with all the same data in it).
|
Django objects change model field
|
This doesn't work:
>>> pa = Person.objects.all()
>>> pa[2].nickname
u'arst'
>>> pa[2].nickname = 'something else'
>>> pa[2].save()
>>> pa[2].nickname
u'arst'
But it works if you take
p = Person.objects.get(pk=2)
and change the nick.
Why so.
|
[
">>> type(Person.objects.all())\n<class 'django.db.models.query.QuerySet'>\n\n>>> pa = Person.objects.all() # Not evaluated yet - lazy\n>>> type(pa)\n<class 'django.db.models.query.QuerySet'>\n\nDB queried to give you a Person object\n>>> pa[2]\n\nDB queried again to give you yet another Person object. \n>>> pa[2].first_name = \"Blah\" \n\nLet's call this instance PersonObject1 that resides in memory. So it's equivalent to something like this:\n>>> PersonObject1.first_name = \"Blah\"\n\nNow let's do this:\n>>> pa[2].save() \n\nThe pa[2] again queries a db an returns Another instance of person object, say PersonObject2 for example. Which will be unchanged! So it's equvivalent to calling something like:\nPersonObject2.save()\n\nBut this has nothing to do with PersonObject1.\n",
"If you assigned your pa[2] to a variable, like you do with Person.objects.get(pk=2) you'd have it right:\npa = Person.objects.all()\nprint pa[2].nickname\n'Jonny'\npa[2].nickname = 'Billy'\nprint pa[2].nickname\n'Jonny'\n\n# when you assign it to some variable, your operations \n# change this particular object, not something that is queried out each time\np1 = pa[2]\nprint p1.nickname \n'Jonny'\np1.nickname = 'Billy'\nprint p1.nickname \n'Billy'\n\nThis has nothing to do with the method you pull the objects from database.\nAnd, btw, django numbers PrimaryKeys starting from 1, not 0, so\nPerson.objects.all()[2] == Person.objects.get(pk=2)\nFalse\nPerson.objects.all()[2] == Person.objects.get(pk=3)\nTrue\n\n",
"Person.objects.all() returns a QuerySet, which is lazy (doesn't perform a DB query until data is requested from it). Slicing a QuerySet (pa[2]) performs a database query to get a single row from the database (using LIMIT and OFFSET in SQL). Slicing the same QuerySet again doesn't do the DB query again (results are cached) but it does return a new instance of the model. Each time you access pa[2] you are getting a new Person instance (albeit with all the same data in it).\n"
] |
[
10,
4,
2
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0000910287_django_django_models_python.txt
|