Columns: content (string, 85–101k chars) | title (string, 0–150) | question (string, 15–48k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 35–137)
Q:
Python: manipulating sub trees
I'm a nooby. I'd like to acknowledge Allen Downey, Jeffrey Elkner and Chris Meyers
and 'How to think like a computer scientist' for what I know.
I'm building a genetics inspired program to generate equations that match some provided problem.
The node class looks like this:
import copy   # needed by Node.copy below

class Node(object):
    '''A binary-tree node; cargo holds the node's payload.'''
    def __init__(self, cargo, left=None, right=None):
        self.cargo = cargo
        self.left = left
        self.right = right
        self.parent = None
        self.branch = None
        self.seq = 0
    def __str__(self):
        return str(self.cargo)
    def copy(self):
        return copy.deepcopy(self)
I have a Tree class that contains an attribute: self.data which is a linked series of nodes forming a tree which I can traverse to produce an equation.
To perform crossover, I'd like to be able to swap subtrees chosen at random from two instances of Tree.
As self.data is being constructed, it builds a dictionary with a sequential key holding each node as a value. One such record looks like this:
3: <__main__.Node object at 0x0167B6B0>}
I thought I'd be clever and simply choose a node each from two tree instances and exchange their respective parents' node.left or node.right values. Each node records whether it is a left or a right child in its node.branch attribute.
I don't know how to reference self.data(subnode) to change it.
And both tree instances would have to have access to each other's nodes by the address saved in the dictionary.
I fear I shall have to copy and replace each subtree.
Any comments would be appreciated.
Thanks,
Peter Stewart
Nanaimo, Canada
A:
Unfortunately you don't provide us with the Tree class, but let's assume it's something like:
class Tree(object):
    def __init__(self):
        self.data = None
        self.nextkey = 0
        self.thedict = {}
with the various attributes being updated accurately when new nodes are inserted. Now, while you talk about "the address saved in the dictionary", it's clear that the dict's value is NOT "an address" -- rather, it's a Node object (if you define a special method __repr__ in your node you may be able to see that more clearly; what you're seeing is the default representation, used for all Python objects whose type doesn't define or inherit __repr__).
So, swapping random subtrees between two different trees only requires care in updating all of the many redundant pieces of information that you're keeping (and that must ALL stay in sync). By the way, it would be simpler if such updates were methods of Tree and/or Node and so usable for any of various kinds of "edit" (insertion, removal, etc), rather than buried deep in a function that performs the updates as part of a random swap -- that's good OO practice. But that's somewhat of a side issue.
You also don't tell us exactly how the branch attribute works; I'll assume it's a string, 'left' or 'right' as appropriate (or None if there's no parent, i.e., a root node).
To remove a subtree, you need to update: the parent node, setting to None its appropriate attribute; the root of the subtree, setting to None its parent and branch attributes; AND the Tree, removing that entry from the Tree's thedict attribute. You will also need to remember what the parent and branch were in order to be able to insert some other subtree at that spot. Therefore...:
def removeSubtreeFromTree(tree, keyindict):
    subtreenode = tree.thedict.pop(keyindict)
    parent, branch = subtreenode.parent, subtreenode.branch
    # a sanity check can't hurt...;-)
    assert getattr(parent, branch) is subtreenode
    subtreenode.parent, subtreenode.branch = None, None
    setattr(parent, branch, None)
    return subtreenode, parent, branch
Now to ADD a new subtree to a given parent and branch in a Tree is simpler:
def addNewSubtree(tree, subtreenode, parent, branch):
    # sanity checks R us
    assert getattr(parent, branch) is None
    assert subtreenode.parent is None
    assert subtreenode.branch is None
    setattr(parent, branch, subtreenode)
    subtreenode.parent = parent
    subtreenode.branch = branch
    tree.thedict[tree.nextkey] = subtreenode
    tree.nextkey += 1
Note you can't just reuse the previous keys: there might be a "conflict" (assuming keys are unique only within a single given Tree... if you made them globally unique instead, then you could indeed reuse them).
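(As an aside, a minimal sketch of the globally-unique-key variant just mentioned: one module-level counter shared by all Tree instances, so a popped key can never collide across trees. The newkey helper name is my invention, not part of the original code:)
import itertools

_key_seq = itertools.count(1)   # shared by ALL Tree instances

class Tree(object):
    def __init__(self):
        self.data = None
        self.thedict = {}
    def newkey(self):
        # every key handed out is unique across every Tree,
        # so a key can safely travel with its subtree during a swap
        return _key_seq.next()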
Finally, putting these two operations and a little more together can be done. If you never need to "swap" a tree's very root, it's simpler (no special case needed to deal with a parentless subtree...) so I'm temporarily going to assume that (if you want more generality you WILL have to code the finicky special cases -- ideally after refactoring things to be methods as I previously suggested;-)...:
import random   # needed for random.choice below

def randomNonrootSubtree(tree):
    # we're in trouble if the tree ONLY has a root w/no really SUB trees;-)
    assert len(tree.thedict) > 1
    while True:
        thekey = random.choice(tree.thedict.keys())
        subtree = tree.thedict[thekey]
        if subtree.parent: return thekey
and at last...:
def theSwapper(t1, t2):
    k1 = randomNonrootSubtree(t1)
    k2 = randomNonrootSubtree(t2)
    st1, p1, b1 = removeSubtreeFromTree(t1, k1)
    st2, p2, b2 = removeSubtreeFromTree(t2, k2)
    addNewSubtree(t1, st2, p1, b1)
    addNewSubtree(t2, st1, p2, b2)
A:
If I understand correctly, you are looking for something like this...
(I have not tested this.)
def swap_nodes(dict_1, key_1, dict_2, key_2):
    node_1 = dict_1[key_1]
    node_2 = dict_2[key_2]

    # Update dicts and seq fields for the two nodes...
    dict_1[key_1] = node_2
    node_2.seq = key_1
    dict_2[key_2] = node_1
    node_1.seq = key_2

    # Update the parents...
    if node_1.branch == "left":
        node_1.parent.left = node_2
    else:
        node_1.parent.right = node_2

    if node_2.branch == "left":
        node_2.parent.left = node_1
    else:
        node_2.parent.right = node_1

    # Now update the branch and parent fields of the nodes...
    node_1.branch, node_2.branch = node_2.branch, node_1.branch
    node_1.parent, node_2.parent = node_2.parent, node_1.parent
Answer scores: 2, 0 | Tags: data_structures, python, tree | Source: stackoverflow_0001386493_data_structures_python_tree.txt
Q:
python Invalid literal for float
I am running some code to select chunks from a big file. I am getting a strange error:
"Invalid literal for float(): E-135"
Does anybody know how to fix this? Thanks in advance.
Actually, this is the statement that is giving me the error:
float(line_temp[line(line_temp)-1])
line_temp is a string; 'line' is any line read from an open file, also a string.
A:
You need a number in front of the E to make it a valid string representation of a float number
>>> float('1E-135')
1e-135
>>> float('E-135')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for float(): E-135
In fact, which number is E-135 supposed to represent? 1x10^-135?
Valid literal forms for floats are here.
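(For quick reference, a few interpreter-session examples of forms float() does accept -- a leading sign, a decimal point, an exponent with a mantissa, and surrounding whitespace:)
>>> float('1e-135')
1e-135
>>> float('-4.75')
-4.75
>>> float('  2.5E3  ')   # surrounding whitespace is fine
2500.0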
A:
Looks like you are trying to convert a string to a float. If the string is E-135, then it is indeed an invalid value to be converted to a float. Perhaps you are chopping off a digit in the beginning of the string and it really ought to be something like 1E-135? That would be a valid float.
A:
May I suggest you replace
float(x-y)
with
float(x) - float(y)
A:
Ronald, kindly check the answers again. They are right.
What you are doing is: float(EXPRESSION), where the result of EXPRESSION is E-135. E-135 is not valid input into the float() function. I have no idea what the "line_temp[line(line_temp)-1]" does, but it returns incorrect data for the float() function.
Answer scores: 6, 3, 1, 1 | Tags: python | Source: stackoverflow_0001386420_python.txt
Q:
Where should one place the code to autoincrement a sharded counter on Google App Engine/Django when one creates a new model?
I've a model MyModel (extending Google's db.Model), and I want to keep track of the number of Models that have been created.
I think the code from Google's I/O talk on Sharding Counters is quite good, so I'm using that. But I'm not sure where I ought to be calling the increment when creating a new model. (I'm using Django, and I've kept the familiar models.py, views.py, etc. layout to the project's applications.)
There are a couple possibilities that seem to come to mind for where to put the incrementing code:
Overload the Model.put() so that it increments the counter when the model is saved for the first time, and similarly overload Model.delete() to decrement the counter
Attach some sort of listener to saves/deletes, and check that the save is of a new model (does GAE have such listeners?)
Put the counter incrementing code in the function in view.py that creates/deletes models
I'd be much obliged for suggestions and thoughts as to how to do this best (and pros/cons of each option).
Thank you for reading.
Best,
Brian
A:
I suggest the approach (curiously close to "aspect oriented programming") suggested by "App Engine Fan" here (essentially "setting the scene") and especially here (showing the right solution: not "monkey patching" but rather the use of the well-architected built-in "hooks" facility of App Engine).
The two "Hacks" he gives as examples are close enough to your use case that you should have no trouble implementing your code -- indeed it's not all that far from the "listener" solution you considered sub point (2), just somewhat more general because "hooks" can actually "interfere" with the operation (not that you need that here) as well as being able to run either before or after the operation itself (in your case I suspect "after" may be better, just in case the put fails somehow, in which case I imagine you don't want to count it).
Answer scores: 2 | Tags: auto_increment, django, google_app_engine, python, sharding | Source: stackoverflow_0001384932_auto_increment_django_google_app_engine_python_sharding.txt
Q:
python/genshi newline to html paragraphs
I'm trying to output the content of a comment with genshi, but I can't figure out how to transform the newlines into HTML paragraphs.
Here's a test case of what it should look like:
input: 'foo\n\n\n\n\nbar\nbaz'
output: <p>foo</p><p>bar</p><p>baz</p>
I've looked everywhere for this function. I couldn't find it in genshi or in python's std lib. I'm using TG 1.0.
A:
def tohtml(manylinesstr):
    return ''.join("<p>%s</p>" % line
                   for line in manylinesstr.splitlines()
                   if line)
So for example,
print repr(tohtml('foo\n\n\n\n\nbar\nbaz'))
emits:
'<p>foo</p><p>bar</p><p>baz</p>'
as required.
A:
There may be a built-in function in Genshi, but if not, this will do it for you:
output = ''.join([("<p>%s</p>" % l) for l in input.split('\n')])
A:
I know you said TG1; my solution is TG2, but it can be backported or can simply depend on webhelpers. IMO all other implementations are flawed.
Take a look at the converters module both nl2br and format_paragraphs.
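(A hedged sketch of what that looks like -- this assumes webhelpers' converters module exposes format_paragraphs as I recall it; check the webhelpers docs before relying on the exact signature or output:)
from webhelpers.html.converters import format_paragraphs

print format_paragraphs('foo\n\n\n\n\nbar\nbaz')
# runs of blank lines become paragraph breaks; single newlines inside a
# paragraph are handled differently (as <br /> in some versions), so the
# output differs slightly from the bare <p>...</p> joins above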
Answer scores: 3, 2, 1 | Tags: genshi, python, turbogears | Source: stackoverflow_0001257746_genshi_python_turbogears.txt
Q:
Globbing the processing of an object's attributes in Python?
Here is a Django model I'm using.
class Person(models.Model):
    surname = models.CharField(max_length=255, null=True, blank=True)
    first_name = models.CharField(max_length=255, null=True, blank=True)
    middle_names = models.CharField(max_length=255, null=True, blank=True)
    birth_year = WideYear(null=True, blank=True)
    birth_year_uncertain = models.BooleanField()
    death_year = WideYear(null=True, blank=True)
    death_year_uncertain = models.BooleanField()
    flourit_year = WideYear(null=True, blank=True)
    flourit_year_uncertain = models.BooleanField()
    FLOURIT_CHOICES = (
        (u'D', u'Birth and death dates'),
        (u'F', u'Flourit date'),
    )
    use_flourit = models.CharField('Date(s) to use', max_length=2, choices=FLOURIT_CHOICES)
    def __unicode__(self):
        if str(self.birth_year) == 'None':
            self.birth_year = ''
        if str(self.death_year) == 'None':
            self.death_year = ''
        if str(self.flourit_year) == 'None':
            self.flourit_year = ''
        if self.use_flourit == u'D':
            return '%s, %s %s (%s - %s)' % (self.surname, self.first_name, self.middle_names, self.birth_year, self.death_year)
        else:
            return '%s, %s %s (fl. %s)' % (self.surname, self.first_name, self.middle_names, self.flourit_year)
This bit of code from the model's __unicode__ method seems rather verbose:
if str(self.birth_year) == 'None':
    self.birth_year = ''
if str(self.death_year) == 'None':
    self.death_year = ''
if str(self.flourit_year) == 'None':
    self.flourit_year = ''
Its aim is to stop the __unicode__ method from returning something like
Murdoch, Rupert (1931 - None)
and to ensure the method instead returns something like
Murdoch, Rupert (1931 - )
Is there a way to "glob" that bit of code somehow, e.g. using a wildcard so that all attributes of the self object are processed?
Something like this:
if str(self.(*)) == 'None':
    self.$1 = ''
Here, I've just used regular expression syntax to illustrate what I mean; obviously, it's not working python code. Essentially the idea is to loop through each of the attributes, checking if their str() representations are equal to 'None' and if so, setting them to ''. But it would be nice if this could be done more concisely than by setting up a for loop.
A:
for n in dir(self):
    if getattr(self, n) is None:
        setattr(self, n, '')
I'm using the normal is None idiom, assuming there's no hypersubtle motivation for that weird alternative you're using, but that's a separate issue;-)
Edit: if you're using a framework laden with VERY deep black-magic, like Django, perfectly normal Python approaches suddenly may become fraught -- as the OP seems to have indicated with a similarly-murky edit. Well, if the dark deep metaclasses of Django don't let this work (though I can't reproduce the issue as the OP reports it), there are always alternatives. In particular, since this is taking place inside a special method that SHOULDN'T alter the object (__unicode__, specifically), I recommend a simple auxiliary function (a plain good old stand-alone module-level function!):
def b(atr): return atr or u''
to be used as follows:
def __unicode__(self):
    if self.use_flourit == u'D':
        return '%s, %s %s (%s - %s)' % (
            b(self.surname), b(self.first_name), b(self.middle_names),
            b(self.birth_year), b(self.death_year)
        )
    else:
        return '%s, %s %s (fl. %s)' % (
            b(self.surname), b(self.first_name), b(self.middle_names),
            b(self.flourit_year)
        )
Note that my original answer is perfectly fine when (A) you WANT to alter self as needed (in converters such as __unicode__, __str__, __repr__, ..., you shouldn't!), AND (B) you're in a class that doesn't use REALLY deep, dark, soot-black magic (apparently Django's models superclass is breaking something absolutely fundamental such as dir, setattr, and/or getattr -- though even with that hypothesis I just can't reproduce the specific symptoms somewhat-reported by the OP).
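(One subtlety worth a quick interpreter check: b uses or, so every falsy value -- not just None -- collapses to u'':)
>>> def b(atr): return atr or u''
...
>>> b(None)
u''
>>> b('Murdoch')
'Murdoch'
>>> b(0)    # a "year zero" would vanish too -- fine here, but worth knowing
u''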
Answer scores: 5 | Tags: django_models, oop, python | Source: stackoverflow_0001387315_django_models_oop_python.txt
Q:
Multipart form post to google app engine not working
I am trying to post a multipart form using httplib. The URL is hosted on Google App Engine; on POST it says "Method Not Allowed", though the post using urllib2 works. A full working example is attached.
My question is: what is the difference between the two, and why does one work but not the other?
Is there a problem in my multipart form post code?
Or is the problem with Google App Engine?
Or something else?
import httplib
import urllib2, urllib
# multipart form post using httplib fails, saying
# 405, 'Method Not Allowed'
url = "http://mockpublish.appspot.com/publish/api/revision_screen_create"
_, host, selector, _, _ = urllib2.urlparse.urlsplit(url)
print host, selector
h = httplib.HTTP(host)
h.putrequest('POST', selector)
BOUNDARY = '----------THE_FORM_BOUNDARY'
content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
h.putheader('content-type', content_type)
h.putheader('User-Agent', 'Python-urllib/2.5,gzip(gfe)')
content = ""
L = []
L.append('--' + BOUNDARY)
L.append('Content-Disposition: form-data; name="test"')
L.append('')
L.append("xxx")
L.append('--' + BOUNDARY + '--')
L.append('')
content = '\r\n'.join(L)
h.putheader('content-length', str(len(content)))
h.endheaders()
h.send(content)
print h.getreply()
# post using urllib2 works
data = urllib.urlencode({'test':'xxx'})
request = urllib2.Request(url)
f = urllib2.urlopen(request, data)
output = f.read()
print output
Edit: After changing putrequest to request (on Nick Johnson's suggestion), it works
url = "http://mockpublish.appspot.com/publish/api/revision_screen_create"
_, host, selector, _, _ = urllib2.urlparse.urlsplit(url)
h = httplib.HTTPConnection(host)
BOUNDARY = '----------THE_FORM_BOUNDARY'
content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
content = ""
L = []
L.append('--' + BOUNDARY)
L.append('Content-Disposition: form-data; name="test"')
L.append('')
L.append("xxx")
L.append('--' + BOUNDARY + '--')
L.append('')
content = '\r\n'.join(L)
h.request('POST', selector, content,{'content-type':content_type})
res = h.getresponse()
print res.status, res.reason, res.read()
So now the question remains: what is the difference between the two approaches, and can the first be made to work?
A:
Nick Johnson's answer:
Have you tried sending the request with httplib using .request() instead of .putrequest() etc., supplying the headers as a dict?
It works!
Answer scores: 0 | Tags: forms, google_app_engine, html_post, python | Source: stackoverflow_0001254270_forms_google_app_engine_html_post_python.txt
Q:
Applescript - pygame, application bundle
I'm trying to learn pygame, and I found the best way to distribute the finished game (assuming python 2.6 and pygame installed) is to have an AppleScript that runs it, saved as an app bundle (with python files etc. inside the bundle). Here is what I have:
do shell script "cd " & the quoted form of the POSIX path of (path to me) & "Contents/Resources/files\n/usr/local/bin/pythonw creeps.py"
I need the cd command because the python code uses relative paths to get at its images folder. The files directory is where my python files are, along with subdirectories such as 'images'. I think having an app file like this is a lot better than a lone .py file, which could be opened by anything by default. Do you think this is a good way to bundle a python script? Also, would I be able to just bundle pygame along with it instead of requiring that to be installed too? Thanks.
Also, now, the script runs and python is also running, each with their own dock icons. Could I make it so that the script just executes and quits, leaving python running? Thanks.
I changed the script to:
do shell script ". ~/.bash_profile\npythonw2.6 " & the quoted form of the POSIX path of (path to me) & "Contents/Resources/files/creeps.py"
That way, it searches the path instead of only looking in /usr/local/bin. ~/.bash_profile needs to be called to set and export the $PATH for python (which it automatically adds in .bash_profile when you install python).
A problem is that the script app bundle 'isn't responding' while it's running, but the Python app is fine. How can I make the script app bundle get python going and then quit, leaving python there by itself? And couldn't I just put the pygame module inside the bundle? It's only ~9MB.
A:
pyinstaller should let you bundle pygame (use the SVN version: the released one is WAY out of date). Also, I suggest you have your code find relative directories more nicely:
import os
resourcesdir = os.path.join(os.path.dirname(__file__), 'Resources')
or the like, to avoid that clunky cd;-).
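(For instance -- the file names here are made up -- the game code could then load assets without ever changing directory:)
import os
import pygame

resourcesdir = os.path.join(os.path.dirname(__file__), 'Resources')
# build absolute paths from the module's own location instead of cd'ing first
background = pygame.image.load(os.path.join(resourcesdir, 'images', 'background.png'))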
Answer scores: 1 | Tags: applescript, macos, pygame, python | Source: stackoverflow_0001387775_applescript_macos_pygame_python.txt
Q:
IF statement causing internal server error with webpy
I have this class:
class View(object):
    def main_page(self, extra_placeholders = None):
        file = '/media/Shared/sites/www/subdomains/pypular/static/layout.tmpl'
        placeholders = { 'site_name' : 'pypular' }
        # If we passed placeholders vars, append them
        if extra_placeholders != None:
            for k, v in extra_placeholders.iteritems():
                placeholders[k] = v
My problem in the code above is the if statement.
As you can see, the function takes an argument (extra_placeholders) which is a dict.
If I don't pass a parameter to main_page(),
if extra_placeholders == None:
    return 'i executed'
runs fine. However,
if extra_placeholders != None:
    return 'i cause error'
does not work. It causes a 500 internal server error. Why?
A:
Should you be using instead:
if not (extra_placeholders is None):

Edit: To reflect the comment:
It appears (thanks) that you can also use:
if extra_placeholders is not None:

Update: The original link is now dead so this SO answer is a good reference: https://stackoverflow.com/a/3289606/30225
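(A quick interpreter session showing the identity test in action:)
>>> x = None
>>> x is None
True
>>> x is not None
False
>>> {} is None    # an empty dict is falsy, but it is NOT None
False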
Answer scores: 1 | Tags: mod_wsgi, python, web.py | Source: stackoverflow_0001387902_mod_wsgi_python_web.py.txt
Q:
How can I tokenize this with a regex?
Suppose I have strings like the following :
OneTwo
ThreeFour
AnotherString
DVDPlayer
CDPlayer
I know how to tokenize the camel-case ones, except the "DVDPlayer" and "CDPlayer". I know I could tokenize them manually, but maybe you can show me a regex that can handle all the cases?
EDIT:
the expected tokens are :
OneTwo -> One Two
...
CDPlayer -> CD Player
DVDPlayer -> DVD Player
A:
Look at my answer on the question, .NET - How can you split a “caps” delimited string into an array?.
The regex looks like this:
/([A-Z]+(?=$|[A-Z][a-z])|[A-Z]?[a-z]+)/g
It can be modified slightly to allow searching for camel-cased tokens, by replacing the $ with \b:
/([A-Z]+(?=\b|[A-Z][a-z])|[A-Z]?[a-z]+)/g
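(Since the question is tagged python as well, a quick check of that modified pattern with re.findall:)
>>> import re
>>> pat = re.compile(r'[A-Z]+(?=\b|[A-Z][a-z])|[A-Z]?[a-z]+')
>>> pat.findall('DVDPlayer')
['DVD', 'Player']
>>> pat.findall('OneTwo')
['One', 'Two']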
A:
Try this regular expression:
[A-Z](?:[a-z]+|[A-Z]*?(?=[A-Z][a-z]|\b))
A:
The regex
([A-Z]+[a-z]*)([A-Z][a-z]*)
would do what you want assuming that all your strings are 2 words long and the second word is not like DVD.
I.e. it would work for your examples but maybe not for what you are actually trying to do.
A:
Here's my attempt:
([A-Z][a-z]+)|([A-Z]+(?=[A-Z][a-z]+))
A:
Try a non-greedy lookahead. A token would be one or more uppercase characters followed by zero or more lowercase characters. The token would terminate when the next two characters are an uppercase and a lowercase - matching that boundary is where the non-greedy matching comes in. This approach has limitations, but it should work for the examples you provided.
Answer scores: 4, 4, 1, 1, 0 | Tags: lexical_analysis, python, regex, ruby, tokenize | Source: stackoverflow_0001389062_lexical_analysis_python_regex_ruby_tokenize.txt
Q:
how to add json library
i am new to python, on my Mac, when i issue command
User:ihasfriendz user$ python main.py
Traceback (most recent call last):
File "main.py", line 2, in <module>
import json
ImportError: No module named json
I get an error on json. How do I add this library? I'm using 2.5 (the default that came with Leopard).
A:
You can also install simplejson.
If you have pip (see https://pypi.python.org/pypi/pip) as your Python package manager you can install simplejson with:
pip install simplejson
This is similar to the comment of installing with easy_install, but I prefer pip to easy_install as you can easily uninstall in pip with "pip uninstall package".
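(A common idiom that keeps the same code working on 2.5 with simplejson and on 2.6+ with the stdlib module:)
try:
    import json                  # stdlib, Python 2.6+
except ImportError:
    import simplejson as json    # Python 2.5 fallback

print json.dumps({'hello': 'world'})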
A:
AFAIK the json module was added in version 2.6, see here. I'm guessing you can update your python installation to the latest stable 2.6 from this page.
A:
You can also install json-py from here http://sourceforge.net/projects/json-py/
Answer scores: 31, 5, 2 | Tags: json, macos, python, python_2.5 | Source: stackoverflow_0001389141_json_macos_python_python_2.5.txt
Q:
What kind of setup (IDE et al) do I need to run IronPython unit tests of C#.NET developed assemblies?
I'm attempting to learn IronPython, to broaden my .NET horizons. I want to be able to use Python to write the unit-tests for my next personal project. So being able to access C#.NET assemblies from my Python code is necessary. I also wanted an IDE with auto-complete and smart indenting. PyScripter seemed like a good option, but can I run IronPython from it, and can I link to .NET assemblies from it?
What kind of setup (IDE et al) do I need to run IronPython unit tests of C#.NET developed assemblies?
A:
Here's a good link to an article about different IDEs and how they work with IronPython:
http://www.voidspace.org.uk/ironpython/tools-and-ides.shtml
A:
See Michael Foord's website for IDE and unittest coverage (including discover), as well as many IronPython articles and the book IronPython in Action;
his tweets save you having to hunt for IronPython references.
A:
FWIW, Frood himself uses Wing IDE.
But if it's on Windows, why not VS?
A:
You can also consider using Eclipse with PyDev which has support for IronPython
A:
You can use Visual Studio. I use IronPython Studio integrated with VS2008. But I feel that it has very poor intellisense for Python.
Answer scores: 2, 1, 0, 0, 0 | Tags: .net, ironpython, python | Source: stackoverflow_0001377548_.net_ironpython_python.txt
Q:
What's wrong with my simple HTTP socket based proxy script?
I wrote a simple Python script for a proxy functionality. It works fine, however, if the requested webpage has many other HTTP requests, e.g. Google maps, the page is rendered quite slow.
Any hints as to what might be the bottleneck in my code, and how I can improve?
#!/usr/bin/python
import socket,select,re
from threading import Thread

class ProxyServer():
    def __init__(self, host, port):
        self.host=host
        self.port=port
        self.sk1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def startServer(self):
        self.sk1.bind((self.host,self.port))
        self.sk1.listen(256)
        print "proxy is ready for connections..."
        while(1):
            conn,clientAddr = self.sk1.accept()
            # print "new request coming in from " + str(clientAddr)
            handler = RequestHandler(conn)
            handler.start()

class RequestHandler(Thread):
    def __init__(self, sk1):
        Thread.__init__(self)
        self.clientSK = sk1
        self.buffer = ''
        self.header = {}

    def run(self):
        sk1 = self.clientSK
        sk2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        while 1:
            self.buffer += sk1.recv(8192)
            if self.buffer.find('\n') != -1:
                break;
        self.header = self.processHeader(self.buffer)
        if len(self.header)>0: #header got processed
            hostString = self.header['Host']
            host=port=''
            if hostString.__contains__(':'): # with port number
                host,port = hostString.split(':')
            else:
                host,port = hostString,"80"
            sk2.connect((host,int(port)))
        else:
            sk1.send('bad request')
            sk1.close();
            return
        inputs=[sk1,sk2]
        sk2.send(self.buffer)
        #counter
        count = 0
        while 1:
            count+=1
            rl, wl, xl = select.select(inputs, [], [], 3)
            if xl:
                break
            if rl:
                for x in rl:
                    data = x.recv(8192)
                    if x is sk1:
                        output = sk2
                    else:
                        output = sk1
                    if data:
                        output.send(data)
                        count = 0
            if count == 20:
                break
        sk1.close()
        sk2.close()

    def processHeader(self,header):
        header = header.replace("\r\n","\n")
        lines = header.split('\n')
        result = {}
        uLine = lines[0] # url line
        if len(uLine) == 0: return result # if url line empty return empty dict
        vl = uLine.split(' ')
        result['method'] = vl[0]
        result['url'] = vl[1]
        result['protocol'] = vl[2]
        for line in lines[1: - 1]:
            if len(line)>3: # if line is not empty
                exp = re.compile(': ')
                nvp = exp.split(line, 1)
                if(len(nvp)>1):
                    result[nvp[0]] = nvp[1]
        return result

if __name__ == "__main__":
    HOST, PORT = "0.0.0.0", 8088
    proxy = ProxyServer(HOST,PORT)
    proxy.startServer()
A:
I'm not sure what your speed problems are, but here are some other nits I found to pick:
result['protocal'] = vl[2]
should be
result['protocol'] = vl[2]
This code is indented one level too deep:
sk2.connect((host,int(port)))
You can use this decorator to profile your individual methods by line.
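(Not a profiled fix, but one untested tweak worth trying in the relay loop: an empty string from recv() means that peer has closed its end, so the tunnel can be torn down immediately instead of cycling through further select() rounds. A sketch of the inner for-loop with that change, reusing the script's own variables:)
for x in rl:
    data = x.recv(8192)
    if not data:            # peer closed its end of the connection
        sk1.close()
        sk2.close()
        return
    output = sk2 if x is sk1 else sk1
    output.send(data)
    count = 0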
Answer scores: 0 | Tags: http_proxy, python, sockets | Source: stackoverflow_0001389278_http_proxy_python_sockets.txt
Q:
Should properties do nontrivial initialization?
I have an object that is basically a Python implementation of an Oracle sequence. For a variety of reasons, we have to get the nextval of an Oracle sequence, count up manually when determining primary keys, then update the sequence once the records have been inserted.
So here's the steps my object does:
Construct an object, with a key_generator attribute initially set to None.
Get the first value from the database, passing it to an itertools.count.
Return keys from that generator using a property next_key.
I'm a little bit unsure about where to do step 2. I can think of three possibilities:
Skip step 1 and do step 2 in the constructor. I find this evil because I tend to dislike doing this kind of initialization in a constructor.
Make next_key get the starting key from the database the first time it is called. I find this evil because properties are typically assumed to be trivial.
Make next_key into a get_next_key method. I dislike this because properties just seem more natural here.
Which is the lesser of 3 evils? I'm leaning towards #2, because only the first call to this property will result in a database query.
A:
I think your doubts come from PEP-8:
Note 3: Avoid using properties for computationally expensive
operations; the attribute notation makes the caller believe
that access is (relatively) cheap.
Adherence to a standard behavior is usually quite a good idea; and this would be a reason to scrap away solution #2.
However, if you feel the interface is better with property than a method, then I would simply document that the first call is more expensive, and go with that (solution #2).
In the end, recommendations are meant to be interpreted.
A:
I agree that attribute access and everything that looks like it (i.e. properties in the Python context) should be fairly trivial. If a property is going to perform a potentially costly operation, use a method to make this explicit. I recommend a name like "fetch_XYZ" or "retrieve_XYZ", since "get_XYZ" is used in some languages (e.g. Java) as a convention for simple attribute access, is quite generic, and does not sound "costly" either.
A good guideline is: If your property could throw an exception that is not due to a programming error, it should be a method. For example, throwing a (hypothetical) DatabaseConnectionError from a property is bad, while throwing an ObjectStateError would be okay.
Also, if I understood you correctly, you want to return the next key whenever the next_key property is accessed. I recommend strongly against having side-effects (apart from caching, cheap lazy initialization, etc.) in your properties. Properties (and attributes for that matter) should be idempotent.
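(To make the "cheap lazy initialization" carve-out concrete, a minimal sketch -- the fetch helper is hypothetical:)
class Sequence(object):
    def __init__(self):
        self._start = None     # not fetched yet

    @property
    def start_key(self):
        # idempotent: the database is hit at most once, and repeated
        # reads always return the same value
        if self._start is None:
            self._start = fetch_nextval_from_oracle()   # hypothetical helper
        return self._start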
A:
I've decided that the key smell in the solution I'm proposing is that the property I was creating contained the word "next" in it. Thus, instead of making a next_key property, I've decided to turn my DatabaseIntrospector class into a KeyCounter class and implemented the iterator protocol (ie making a plain old next method that returns the next key).
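(A rough sketch of that shape -- the fetch helper name is invented -- which also keeps the costly first fetch out of the constructor:)
import itertools

class KeyCounter(object):
    def __init__(self, connection):
        self._connection = connection
        self._keys = None              # itertools.count, created lazily

    def __iter__(self):
        return self

    def next(self):                    # Python 2 iterator protocol
        if self._keys is None:
            # the first call pays for the database round-trip
            self._keys = itertools.count(fetch_starting_key(self._connection))
        return self._keys.next()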
|
Should properties do nontrivial initialization?
|
I have an object that is basically a Python implementation of an Oracle sequence. For a variety of reasons, we have to get the nextval of an Oracle sequence, count up manually when determining primary keys, then update the sequence once the records have been inserted.
So here are the steps my object takes:
Construct an object, with a key_generator attribute initially set to None.
Get the first value from the database, passing it to an itertools.count.
Return keys from that generator using a property next_key.
I'm a little bit unsure about where to do step 2. I can think of three possibilities:
Skip step 1 and do step 2 in the constructor. I find this evil because I tend to dislike doing this kind of initialization in a constructor.
Make next_key get the starting key from the database the first time it is called. I find this evil because properties are typically assumed to be trivial.
Make next_key into a get_next_key method. I dislike this because properties just seem more natural here.
Which is the lesser of 3 evils? I'm leaning towards #2, because only the first call to this property will result in a database query.
|
[
"I think your doubts come from PEP-8:\n\n Note 3: Avoid using properties for computationally expensive\n operations; the attribute notation makes the caller believe\n that access is (relatively) cheap.\n\n\nAdherence to a standard behavior is usually quite a good idea; and this would be a reason to scrap away solution #2.\nHowever, if you feel the interface is better with property than a method, then I would simply document that the first call is more expensive, and go with that (solution #2).\nIn the end, recommendations are meant to be interpreted.\n",
"I agree that attribute access and everything that looks like it (i.e. properties in the Python context) should be fairly trivial. If a property is going to perform a potentially costly operation, use a method to make this explicit. I recommend a name like \"fetch_XYZ\" or \"retrieve_XYZ\", since \"get_XYZ\" is used in some languages (e.g. Java) as a convention for simple attribute access, is quite generic, and does not sound \"costly\" either.\nA good guideline is: If your property could throw an exception that is not due to a programming error, it should be a method. For example, throwing a (hypothetical) DatabaseConnectionError from a property is bad, while throwing an ObjectStateError would be okay.\nAlso, when I understood you correctly, you want to return the next key, whenever the next_key property is accessed. I recommend strongly against having side-effects (apart from caching, cheap lazy initialization, etc.) in your properties. Properties (and attributes for that matter) should be idempotent.\n",
"I've decided that the key smell in the solution I'm proposing is that the property I was creating contained the word \"next\" in it. Thus, instead of making a next_key property, I've decided to turn my DatabaseIntrospector class into a KeyCounter class and implemented the iterator protocol (ie making a plain old next method that returns the next key).\n"
] |
[
4,
2,
0
] |
[] |
[] |
[
"initialization",
"properties",
"python"
] |
stackoverflow_0001386210_initialization_properties_python.txt
|
Q:
An error about "User.add_to_class" extending my User? I do not know why
In my models.py, I use this code to add two extra fields:
User.add_to_class('bio', models.TextField(blank=True))
User.add_to_class('about', models.TextField(blank=True))
But when I create a User:
user = User.objects.create_user(username=self.cleaned_data['username'], \
email=self.cleaned_data['email'],password=self.cleaned_data['password1'])
There is an error like this :
ProgrammingError at /account/register/
(1110, "Column 'about' specified twice")
Request Method: POST
Request URL: http://127.0.0.1:8000/account/register/
Exception Type: ProgrammingError
Exception Value: (1110, "Column 'about' specified twice")
And when I check the SQL that Django creates, I find it is very weird:
'INSERT INTO `auth_user` (`username`, `first_name`, `last_name`, `email`, `password`, `is_staff`, `is_active`, `is_superuser`, `last_login`, `date_joined`, `about`,'bio','about','bio') VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'
There are two 'about' and two 'bio' columns. But in the MySQL table there is only one 'about' and one 'bio'.
On the other hand, I have no idea in which case models.py would be run twice.
I do not know why. Please help me!
A:
This is not a good way to store additional user information, for a number of reasons, as James Bennett points out in the linked thread. It's no surprise that you're getting weird SQL output and struggling to debug it. Keep things easy for yourself by using a related profile model instead.
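A minimal sketch of that approach, reusing the field names from the question:
from django.contrib.auth.models import User
from django.db import models

class UserProfile(models.Model):
    user = models.OneToOneField(User)
    bio = models.TextField(blank=True)
    about = models.TextField(blank=True)

With the AUTH_PROFILE_MODULE setting pointing at this model, user.get_profile() gives you the extra fields without touching the auth_user table at all.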
|
An error about "User.add_to_class" extending my User? I do not know why
|
In my models.py, I use this code to add two extra fields:
User.add_to_class('bio', models.TextField(blank=True))
User.add_to_class('about', models.TextField(blank=True))
But when I create a User:
user = User.objects.create_user(username=self.cleaned_data['username'], \
email=self.cleaned_data['email'],password=self.cleaned_data['password1'])
There is an error like this :
ProgrammingError at /account/register/
(1110, "Column 'about' specified twice")
Request Method: POST
Request URL: http://127.0.0.1:8000/account/register/
Exception Type: ProgrammingError
Exception Value: (1110, "Column 'about' specified twice")
And when I check the SQL that Django creates, I find it is very weird:
'INSERT INTO `auth_user` (`username`, `first_name`, `last_name`, `email`, `password`, `is_staff`, `is_active`, `is_superuser`, `last_login`, `date_joined`, `about`,'bio','about','bio') VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'
There are two 'about' and two 'bio' columns. But in the MySQL table there is only one 'about' and one 'bio'.
On the other hand, I have no idea in which case models.py would be run twice.
I do not know why. Please help me!
|
[
"This is not a good way to store additional user information, for a number of reasons, as James Bennett points out in the linked thread. It's no surprise that you're getting weird SQL output and struggling to debug it. Keep things easy for yourself by using a related profile model instead.\n"
] |
[
3
] |
[] |
[] |
[
"django",
"mysql",
"python"
] |
stackoverflow_0001389627_django_mysql_python.txt
|
Q:
how to convert base64 /radix64 public key to a pem format in python
Is there any Python method for converting a base64-encoded key to PEM format?
How do I convert an ASCII-armored PGP public key to a MIME-encoded form?
Thanks
A:
ASCII-armored and PEM are very similar. You just need to change the BEGIN/END markers, strip the PGP headers and checksums. I've done this before in PHP. I just ported it to Python for you,
import re
import StringIO
def pgp_pubkey_to_pem(pgp_key):
# Normalise newlines
pgp_key = re.compile('(\n|\r\n|\r)').sub('\n', pgp_key)
# Extract block
buffer = StringIO.StringIO()
# Write PEM header
buffer.write('-----BEGIN RSA PUBLIC KEY-----\n')
in_block = 0
in_body = 0
for line in pgp_key.split('\n'):
if line.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----'):
in_block = 1
elif in_block and line.strip() == '':
in_body = 1
elif in_block and line.startswith('-----END PGP PUBLIC KEY BLOCK-----'):
# No checksum, ignored for now
break
elif in_body and line.startswith('='):
# Checksum, end of the body
break
elif in_body:
buffer.write(line+'\n')
# Write PEM footer
buffer.write('-----END RSA PUBLIC KEY-----\n')
return buffer.getvalue()
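An untested usage sketch, assuming the armored key sits in a file called key.asc:
if __name__ == '__main__':
    print pgp_pubkey_to_pem(open('key.asc').read())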
|
how to convert base64 /radix64 public key to a pem format in python
|
Is there any Python method for converting a base64-encoded key to PEM format?
How do I convert an ASCII-armored PGP public key to a MIME-encoded form?
Thanks
|
[
"ASCII-armored and PEM are very similar. You just need to change the BEGIN/END markers, strip the PGP headers and checksums. I've done this before in PHP. I just ported it to Python for you,\nimport re\nimport StringIO\n\ndef pgp_pubkey_to_pem(pgp_key):\n # Normalise newlines\n pgp_key = re.compile('(\\n|\\r\\n|\\r)').sub('\\n', pgp_key)\n\n # Extract block\n buffer = StringIO.StringIO()\n # Write PEM header\n buffer.write('-----BEGIN RSA PUBLIC KEY-----\\n')\n\n in_block = 0\n in_body = 0\n for line in pgp_key.split('\\n'):\n if line.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----'):\n in_block = 1\n elif in_block and line.strip() == '':\n in_body = 1\n elif in_block and line.startswith('-----END PGP PUBLIC KEY BLOCK-----'):\n # No checksum, ignored for now\n break\n elif in_body and line.startswith('='):\n # Checksum, end of the body\n break\n elif in_body:\n buffer.write(line+'\\n')\n\n # Write PEM footer\n buffer.write('-----END RSA PUBLIC KEY-----\\n')\n\n return buffer.getvalue()\n\n"
] |
[
4
] |
[] |
[] |
[
"encoding",
"python"
] |
stackoverflow_0001387867_encoding_python.txt
|
Q:
Verify CSV against given format
I am expecting users to upload a CSV file of max size 1MB to a web form that should fit a given format similar to:
"<String>","<String>",<Int>,<Float>
That will be processed later. I would like to verify the file fits a specified format so that the program that shall later use the file doesn't receive unexpected input and so that there are no security concerns (say, some injection attack against the parsing script that does some calculations and a DB insert).
(1) What would be the best way to go about doing this that would be fast and thorough? From what I've researched I could go down the path of regex or something more like this. I've looked at the Python csv module, but that doesn't appear to have any built-in verification.
(2) Assuming I go for a regex, can anyone direct me towards the best way to do this? Do I match for illegal characters and reject on that? (e.g. no '/' '\' '<' '>' '{' '}' etc.) or match on all legal, e.g. [a-zA-Z0-9]{1,10} for the string component? I'm not too familiar with regular expressions so pointers or examples would be appreciated.
EDIT:
Strings should contain no commas or quotes; each would just contain a name (i.e. first name, last name). And yes, I forgot to add that they would be double-quoted.
EDIT #2:
Thanks for all the answers. Cutplace is quite interesting but is a standalone tool. Decided to go with pyparsing in the end because it gives more flexibility should I add more formats.
A:
Pyparsing will process this data, and will be tolerant of unexpected things like spaces before and after commas, commas within quotes, etc. (csv module is too, but regex solutions force you to add "\s*" bits all over the place).
from pyparsing import *
integer = Regex(r"-?\d+").setName("integer")
integer.setParseAction(lambda tokens: int(tokens[0]))
floatnum = Regex(r"-?\d+\.\d*").setName("float")
floatnum.setParseAction(lambda tokens: float(tokens[0]))
dblQuotedString.setParseAction(removeQuotes)
COMMA = Suppress(',')
validLine = dblQuotedString + COMMA + dblQuotedString + COMMA + \
integer + COMMA + floatnum + LineEnd()
tests = """\
"good data","good2",100,3.14
"good data" , "good2", 100, 3.14
bad, "good","good2",100,3.14
"bad","good2",100,3
"bad","good2",100.5,3
""".splitlines()
for t in tests:
print t
try:
print validLine.parseString(t).asList()
except ParseException, pe:
print pe.markInputline('?')
print pe.msg
print
Prints
"good data","good2",100,3.14
['good data', 'good2', 100, 3.1400000000000001]
"good data" , "good2", 100, 3.14
['good data', 'good2', 100, 3.1400000000000001]
bad, "good","good2",100,3.14
?bad, "good","good2",100,3.14
Expected string enclosed in double quotes
"bad","good2",100,3
"bad","good2",100,?3
Expected float
"bad","good2",100.5,3
"bad","good2",100?.5,3
Expected ","
You will probably be stripping those quotation marks off at some future time; pyparsing can do that at parse time by adding:
dblQuotedString.setParseAction(removeQuotes)
If you want to add comment support to your input file, say a '#' followed by the rest of the line, you can do this:
comment = '#' + restOfLine
validLine.ignore(comment)
You can also add names to these fields, so that you can access them by name instead of index position (which I find gives more robust code in light of changes down the road):
validLine = dblQuotedString("key") + COMMA + dblQuotedString("title") + COMMA + \
integer("qty") + COMMA + floatnum("price") + LineEnd()
And your post-processing code can then do this:
data = validLine.parseString(t)
print "%(key)s: %(title)s, %(qty)d in stock at $%(price).2f" % data
print data.qty*data.price
A:
I'd vote for parsing the file, checking you've got 4 components per record, that the first two components are strings, the third is an int (checking for NaN conditions), and the fourth is a float (also checking for NaN conditions).
Python would be an excellent tool for the job.
I'm not aware of any libraries in Python to deal with validation of CSV files against a spec, but it really shouldn't be too hard to write.
import csv
import math

# wrapped in a function so the early returns are legal
def check_file(filename):
    for row in csv.reader(open(filename)):
        if len(row) != 4:
            print 'Invalid row length.'
            return

        # int()/float() raise ValueError on non-numeric input
        my_int = int(row[2])
        my_float = float(row[3])

        if math.isnan(my_int):
            print 'Bad int found'
            return

        if math.isnan(my_float):
            print 'Bad float found'
            return

    print 'All good!'

check_file('data.csv')
A:
Here's a small snippet I made:
import csv
f = csv.reader(open("test.csv"))
for value in f:
value[0] = str(value[0])
value[1] = str(value[1])
value[2] = int(value[2])
value[3] = float(value[3])
If you run that with a file that doesn't have the format you specified, you'll get an exception:
$ python valid.py
Traceback (most recent call last):
File "valid.py", line 8, in <module>
    value[2] = int(value[2])
ValueError: invalid literal for int() with base 10: 'a3'
You can then make a try-except ValueError to catch it and let the users know what they did wrong.
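A sketch of that wrapping, which also catches IndexError for rows with too few columns (an addition beyond the original snippet):
import csv

for rownum, value in enumerate(csv.reader(open("test.csv"))):
    try:
        int(value[2])
        float(value[3])
    except (ValueError, IndexError), e:
        print 'Bad row %d: %s' % (rownum, e)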
A:
There can be a lot of corner-cases for parsing CSV, so you probably don't want to try doing it "by hand". At least start with a package/library built-in to the language that you're using, even if it doesn't do all the "verification" you can think of.
Once you get there, then examine the fields for your list of "illegal" chars, or examine the values in each field to determine they're valid (if you can do so). You also don't even need a regex for this task necessarily, but it may be more concise to do it that way.
You might also disallow embedded \r or \n, \0 or \t. Just loop through the fields and check them after you've loaded the data with your csv lib.
A:
Try Cutplace. It verifies that tabular data conforms to an interface control document.
A:
Ideally, you want your filtering to be as restrictive as possible - the fewer things you allow, the fewer potential avenues of attack. For instance, a float or int field has a very small number of characters (and very few configurations of those characters) which should actually be allowed. String filtering should ideally be restricted to only what characters people would have a reason to input - without knowing the larger context it's hard to tell you exactly which you should allow, but at a bare minimum the string match regex should require quoting of strings and disallow anything that would terminate the string early.
Keep in mind, however, that some names may contain things like single quotes ("O'Neil", for instance) or dashes, so you couldn't necessarily rule those out.
Something like...
/"[a-zA-Z' -]+"/
...would probably be ideal for double-quoted strings which are supposed to contain names. You could replace the + with a {x,y} length min/max if you wanted to enforce certain lengths as well.
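Putting that together, a full-row pattern might look like this sketch (the exact character classes are assumptions about your data):
import re

row_re = re.compile(r'^"[a-zA-Z\' -]+","[a-zA-Z\' -]+",-?\d+,-?\d+\.\d+$')

def row_ok(line):
    return row_re.match(line.strip()) is not None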
|
Verify CSV against given format
|
I am expecting users to upload a CSV file of max size 1MB to a web form that should fit a given format similar to:
"<String>","<String>",<Int>,<Float>
That will be processed later. I would like to verify the file fits a specified format so that the program that shall later use the file doesn't receive unexpected input and so that there are no security concerns (say, some injection attack against the parsing script that does some calculations and a DB insert).
(1) What would be the best way to go about doing this that would be fast and thorough? From what I've researched I could go down the path of regex or something more like this. I've looked at the Python csv module, but that doesn't appear to have any built-in verification.
(2) Assuming I go for a regex, can anyone direct me towards the best way to do this? Do I match for illegal characters and reject on that? (e.g. no '/' '\' '<' '>' '{' '}' etc.) or match on all legal, e.g. [a-zA-Z0-9]{1,10} for the string component? I'm not too familiar with regular expressions so pointers or examples would be appreciated.
EDIT:
Strings should contain no commas or quotes; each would just contain a name (i.e. first name, last name). And yes, I forgot to add that they would be double-quoted.
EDIT #2:
Thanks for all the answers. Cutplace is quite interesting but is a standalone tool. Decided to go with pyparsing in the end because it gives more flexibility should I add more formats.
|
[
"Pyparsing will process this data, and will be tolerant of unexpected things like spaces before and after commas, commas within quotes, etc. (csv module is too, but regex solutions force you to add \"\\s*\" bits all over the place).\nfrom pyparsing import *\n\ninteger = Regex(r\"-?\\d+\").setName(\"integer\")\ninteger.setParseAction(lambda tokens: int(tokens[0]))\nfloatnum = Regex(r\"-?\\d+\\.\\d*\").setName(\"float\")\nfloatnum.setParseAction(lambda tokens: float(tokens[0]))\ndblQuotedString.setParseAction(removeQuotes)\nCOMMA = Suppress(',')\nvalidLine = dblQuotedString + COMMA + dblQuotedString + COMMA + \\\n integer + COMMA + floatnum + LineEnd()\n\ntests = \"\"\"\\\n\"good data\",\"good2\",100,3.14\n\"good data\" , \"good2\", 100, 3.14\nbad, \"good\",\"good2\",100,3.14\n\"bad\",\"good2\",100,3\n\"bad\",\"good2\",100.5,3\n\"\"\".splitlines()\n\nfor t in tests:\n print t\n try:\n print validLine.parseString(t).asList()\n except ParseException, pe:\n print pe.markInputline('?')\n print pe.msg\n print\n\nPrints\n\"good data\",\"good2\",100,3.14\n['good data', 'good2', 100, 3.1400000000000001]\n\n\"good data\" , \"good2\", 100, 3.14\n['good data', 'good2', 100, 3.1400000000000001]\n\nbad, \"good\",\"good2\",100,3.14\n?bad, \"good\",\"good2\",100,3.14\nExpected string enclosed in double quotes\n\n\"bad\",\"good2\",100,3\n\"bad\",\"good2\",100,?3\nExpected float\n\n\"bad\",\"good2\",100.5,3\n\"bad\",\"good2\",100?.5,3\nExpected \",\"\n\nYou will probably be stripping those quotation marks off at some future time, pyparsing can do that at parse time by adding:\ndblQuotedString.setParseAction(removeQuotes)\n\nIf you want to add comment support to your input file, say a '#' followed by the rest of the line, you can do this:\ncomment = '#' + restOfline\nvalidLine.ignore(comment)\n\nYou can also add names to these fields, so that you can access them by name instead of index position (which I find gives more robust code in light of changes down the road):\nvalidLine = dblQuotedString(\"key\") + COMMA + dblQuotedString(\"title\") + COMMA + \\\n integer(\"qty\") + COMMA + floatnum(\"price\") + LineEnd()\n\nAnd your post-processing code can then do this:\ndata = validLine.parseString(t)\nprint \"%(key)s: %(title)s, %(qty)d in stock at $%(price).2f\" % data\nprint data.qty*data.price\n\n",
"I'd vote for parsing the file, checking you've got 4 components per record, that the first two components are strings, the third is an int (checking for NaN conditions), and the fourth is a float (also checking for NaN conditions).\nPython would be an excellent tool for the job.\nI'm not aware of any libraries in Python to deal with validation of CSV files against a spec, but it really shouldn't be too hard to write.\nimport csv\nimport math\n\ndataChecker = csv.reader(open('data.csv'))\nfor row in dataChecker:\n if len(row) != 4:\n print 'Invalid row length.'\n return\n\n my_int = int(row[2])\n my_float = float(row[3])\n\n if math.isnan(my_int):\n print 'Bad int found'\n return\n\n if math.isnan(my_float):\n print 'Bad float found'\n return\n\nprint 'All good!'\n\n",
"Here's a small snippet I made:\nimport csv \n\nf = csv.reader(open(\"test.csv\"))\n\nfor value in f:\n value[0] = str(value[0])\n value[1] = str(value[1])\n value[2] = int(value[2])\n value[3] = float(value[3])\n\nIf you run that with a file that doesn't have the format your specified, you'll get an exception:\n$ python valid.py \nTraceback (most recent call last):\n File \"valid.py\", line 8, in <module>\n i[2] = int(i[2])\nValueError: invalid literal for int() with base 10: 'a3'\n\nYou can then make a try-except ValueError to catch it and let the users know what they did wrong.\n",
"There can be a lot of corner-cases for parsing CSV, so you probably don't want to try doing it \"by hand\". At least start with a package/library built-in to the language that you're using, even if it doesn't do all the \"verification\" you can think of.\nOnce you get there, then examine the fields for your list of \"illegal\" chars, or examine the values in each field to determine they're valid (if you can do so). You also don't even need a regex for this task necessarily, but it may be more concise to do it that way.\nYou might also disallow embedded \\r or \\n, \\0 or \\t. Just loop through the fields and check them after you've loaded the data with your csv lib.\n",
"Try Cutplace. It verifies that tabluar data conforms to an interface control document.\n",
"Ideally, you want your filtering to be as restrictive as possible - the fewer things you allow, the fewer potential avenues of attack. For instance, a float or int field has a very small number of characters (and very few configurations of those characters) which should actually be allowed. String filtering should ideally be restricted to only what characters people would have a reason to input - without knowing the larger context it's hard to tell you exactly which you should allow, but at a bare minimum the string match regex should require quoting of strings and disallow anything that would terminate the string early.\nKeep in mind, however, that some names may contain things like single quotes (\"O'Neil\", for instance) or dashes, so you couldn't necessarily rule those out.\nSomething like...\n/\"[a-zA-Z' -]+\"/\n\n...would probably be ideal for double-quoted strings which are supposed to contain names. You could replace the + with a {x,y} length min/max if you wanted to enforce certain lengths as well.\n"
] |
[
4,
2,
1,
1,
1,
0
] |
[] |
[] |
[
"csv",
"python",
"regex"
] |
stackoverflow_0001387644_csv_python_regex.txt
|
Q:
PyGTK: IM Client Window
I'm trying to write something very similar to an IM client (for learning purposes only). I don't know how to write the chat window. I want to display users' pictures, names and messages like any other IM client. The problem is that I don't know which gtk widget is best suited for it. Currently I use TextView and TextBuffers, but I can't display pictures and links in the TextView. How do Pidgin or Empathy handle this?
I'm using Glade with GtkBuilder
A:
You can display images in a gtk.TextBuffer, here is how: http://pygtk.org/pygtk2tutorial/sec-TextBuffers.html#id2855808
|
PyGTK: IM Client Window
|
I'm trying to write something very similar to an IM client (for learning purposes only). I don't know how to write the chat window. I want to display users' pictures, names and messages like any other IM client. The problem is that I don't know which gtk widget is best suited for it. Currently I use TextView and TextBuffers, but I can't display pictures and links in the TextView. How do Pidgin or Empathy handle this?
I'm using Glade with GtkBuilder
|
[
"You can display images in a gtk.TextBuffer, here is how: http://pygtk.org/pygtk2tutorial/sec-TextBuffers.html#id2855808\n"
] |
[
2
] |
[] |
[] |
[
"gtk",
"pygtk",
"python"
] |
stackoverflow_0001387729_gtk_pygtk_python.txt
|
Q:
Writing Interpreters in Python. Is isinstance considered harmful?
I'm porting over the interpreter for a domain specific language I created from Scala to Python. In the process I tried to find a Pythonic way to emulate the case class feature of Scala that I used extensively. In the end I resorted to using isinstance, but was left feeling that I was perhaps missing something.
Articles such as this one attacking the use of isinstance made me wonder whether there was a better way to solve my problem that doesn't involve some fundamental rewrite.
I've built up a number of Python classes that each represent a different type of abstract syntax tree node, such as For, While, Break, Return, Statement etc
Scala allows for the handling of operator evaluation like this:
case EOp("==",EInt(l),EInt(r)) => EBool(l==r)
case EOp("==",EBool(l),EBool(r)) => EBool(l==r)
So far for the port to Python I've made extensive use of elif blocks and isinstance calls to achieve the same effect, which is much more verbose and un-Pythonic. Is there a better way?
A:
Yes.
Instead of isinstance, just use polymorphism. It's simpler.
class Node( object ):
def eval( self, context ):
raise NotImplementedError
class Add( Node ):
def eval( self, context ):
return self.arg1.eval( context ) + self.arg2.eval( context )
This kind of thing is very simple, and never requires isinstance.
What about something like this where there is coercion required?
Add( Double(this), Integer(that) )
This is still a polymorphism issue.
class MyType( object ):
rank= None
def coerce( self, another ):
return NotImplemented
class Double( MyType ):
rank = 2
def coerce( self, another ):
return another.toDouble()
def toDouble( self ):
return self
def toInteger( self ):
return int(self)
class Integer( MyType ):
rank = 1
def coerce( self, another ):
return another.toInteger()
def toDouble( self ):
return float(self)
def toInteger( self ):
return self
class Operation( Node ):
def conform( self, another ):
if self.rank > another.rank:
this, that = self, self.coerce( another )
else:
this, that = another.coerce( self ), another
return this, that
def add( self, another ):
        this, that = self.conform( another )
return this + that
A:
There's a rule of thumb in Python: if you find yourself writing a large block of if/elif statements with similar conditions (a bunch of isinstance(...) calls, for example), then you're probably solving the problem the wrong way.
Better ways involve using classes and polymorphism, visitor pattern, dict lookup, etc. In your case making an Operators class with overloads for different types could work (as noted above), so could a dict with (type, operator) items.
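A tiny sketch of the (type, operator) dict idea, assuming Python node classes EInt and EBool with a .value attribute (mirroring the question's Scala version):
def eq_values(l, r):
    return EBool(l.value == r.value)

DISPATCH = {
    ('==', EInt, EInt): eq_values,
    ('==', EBool, EBool): eq_values,
}

def eval_op(op, left, right):
    # a KeyError here means "no rule for this combination"
    return DISPATCH[(op, type(left), type(right))](left, right)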
A:
Summary: This is a common way to write compilers, and it's just fine here.
A very common way to handle this in other languages is by "pattern matching", which is exactly what you've described. I expect that's the name for that case statement in Scala. It's a very common idiom for writing programming language implementations and tools: compilers, interpreters etc. Why is it so good? Because the implementation is completely separated from the data (which is often bad, but generally desirable in compilers).
The problem then is that this common idiom for programming language implementation is an anti-pattern in Python. Uh oh. As you can probably tell, this is more a political issue than a language issue. If other Pythonistas saw the code they would scream; if other language implementers saw it, they would understand it immediately.
The reason this is an anti-pattern in Python is that Python encourages duck-typed interfaces: you shouldn't have behaviour based on type; rather, it should be defined by the methods that an object has available at run-time. S. Lott's answer works fine if you want it to be idiomatic Python, but it adds little.
I suspect that your design isn't really duck-typed - it's a compiler after all, and classes defined using a name, with a static structure, are pretty common. If you prefer, you could think of your objects as having a "type" field, and isinstance is used to pattern-match based on that type.
Addendum:
Pattern-matching is probably the number one reason that people love writing compilers etc in functional languages.
A:
The article does not attack isinstance. It attacks the idea of making your code test for specific classes.
And yes, there is a better way. Or several. You can, for example, make the handling of a type into a function, and then find the correct function by looking it up by type. Like this:
def int_function(value):
# Do what you mean to do here
def str_function(value):
# Do what you mean to do here
type_function = {int: int_function, str: str_function}  # and so on for other types
def handle_value(value):
function = type_function[type(value)]
result = function(value)
print "Oh, lovely", result
If you don't want to do this registry yourself, you can look at the Zope Component Architecture, which handles this through interfaces and adapters, and is really cool. But that's probably overkill.
Even better is if you can somehow avoid doing any type of type checking, but that may be tricky.
A:
In a DSL I wrote using Python 3, I used the Composite design pattern so the nodes were all polymorphic in their use, as S. Lott is recommending.
But, when I was reading in the input to create those nodes in the first place, I did use isinstance checks a lot (against abstract base classes like collections.Iterable, etc., which Python 3 provides, and which are in 2.6 as well I believe), as well as checks for hasattr '__call__' since callables were allowed in my input. This was the cleanest way I found to do it (particularly with recursion involved), rather than just trying operations against input and catching exceptions, which is the alternative that comes to mind. I was raising custom exceptions myself when the input was invalid to give as much precise failure information as possible.
Using isinstance for such tests is more general than using type(), since isinstance will catch subclasses - and if you can test against the abstract base classes, that is all the better. See http://www.python.org/dev/peps/pep-3119/ for info on the abstract base classes.
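For example, a sketch against the 2.6 abstract base classes:
import collections

def classify(obj):
    if hasattr(obj, '__call__'):
        return 'callable'
    if isinstance(obj, collections.Iterable):
        return 'iterable'
    raise TypeError('unsupported input: %r' % (obj,))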
A:
In this particular case, what you seem to be implementing is an operator overloading system that uses the types of the objects as the selection mechanism for the operator you intend to call. Your node types happen to fairly directly correspond to your language's types, but in reality you're writing an interpreter. The type of the node is just a piece of data.
I don't know if people can add their own types to your domain specific language. But I would recommend a table driven design regardless.
Make a table of data containing (binary_operator, type1, type2, result_type, evalfunc). Search through that table for matches using isinstance and have some criteria for preferring some matches over others. It may be possible to use a somewhat more sophisticated data structure than a table to make searching faster, but right now you're basically using long lists of if/else statements to do a linear search anyway, so I'm betting a plain old table will be slightly faster than what you're doing now.
I do not consider isinstance to be the wrong choice here largely because the type is just a piece of data your interpreter is working with to make a decision. Double dispatch and other techniques of that ilk are just going to obscure the real meat of what your program is doing.
One of the neat things in Python is that since operator functions and types are all first class objects, you can just stuff them in the table (or whatever data structure you choose) directly.
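A sketch of such a table, with builtin types standing in for the interpreter's node types:
import operator

# (binary_operator, type1, type2, result_type, evalfunc)
OP_TABLE = [
    ('+',  int,   int,   int,   operator.add),
    ('+',  float, float, float, operator.add),
    ('==', int,   int,   bool,  operator.eq),
]

def apply_op(op, left, right):
    for sym, t1, t2, result_type, func in OP_TABLE:
        if sym == op and isinstance(left, t1) and isinstance(right, t2):
            return result_type(func(left, right))
    raise TypeError('no rule for %s on %r, %r' % (op, left, right))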
|
Writing Interpreters in Python. Is isinstance considered harmful?
|
I'm porting over the interpreter for a domain specific language I created from Scala to Python. In the process I tried to find a Pythonic way to emulate the case class feature of Scala that I used extensively. In the end I resorted to using isinstance, but was left feeling that I was perhaps missing something.
Articles such as this one attacking the use of isinstance made me wonder whether there was a better way to solve my problem that doesn't involve some fundamental rewrite.
I've built up a number of Python classes that each represent a different type of abstract syntax tree node, such as For, While, Break, Return, Statement etc
Scala allows for the handling of operator evaluation like this:
case EOp("==",EInt(l),EInt(r)) => EBool(l==r)
case EOp("==",EBool(l),EBool(r)) => EBool(l==r)
So far for the port to Python I've made extensive use of elif blocks and isinstance calls to achieve the same effect, which is much more verbose and un-Pythonic. Is there a better way?
|
[
"Yes.\nInstead of instance, just use Polymorphism. It's simpler.\nclass Node( object ):\n def eval( self, context ):\n raise NotImplementedError\n\nclass Add( object ):\n def eval( self, context ):\n return self.arg1.eval( context ) + self.arg2.eval( context )\n\nThis kind of this is very simple, and never requires isinstance.\n\nWhat about something like this where there is coercion required?\nAdd( Double(this), Integer(that) )\n\nThis is still a polymorphism issue.\nclass MyType( object ):\n rank= None\n def coerce( self, another ):\n return NotImplemented\n\nclass Double( object ):\n rank = 2\n def coerce( self, another ):\n return another.toDouble()\n def toDouble( self ):\n return self\n def toInteger( self ):\n return int(self)\n\nclass Integer( object ):\n rank = 1\n def coerce( self, another ):\n return another.toInteger() \n def toDouble( self ):\n return float(self)\n def toInteger( self ): \n return self\n\n class Operation( Node ):\n def conform( self, another ):\n if self.rank > another.rank:\n this, that = self, self.coerce( another )\n else:\n this, that = another.coerce( self ), another\n return this, that\n def add( self, another ):\n this, that = self.coerce( another )\n return this + that\n\n",
"There's a rule of thumb in python, if you find yourself writing a large block of if/elif statements, with similar conditions (a bunch of isinstance(...) for example) then you're probably solving the problem the wrong way.\nBetter ways involve using classes and polymorphism, visitor pattern, dict lookup, etc. In your case making an Operators class with overloads for different types could work (as noted above), so could a dict with (type, operator) items.\n",
"Summary: This is a common way to write compilers, and its just fine here.\nA very common way to handle this in other languages is by \"pattern matching\", which is exactly what you've described. I expect that's the name for that case statement in Scala. Its a very common idiom for writing programming language implementations and tools: compilers, interpreters etc. Why is it so good? Because the implementation is completely separated from the data (which is often bad, but generally desirable in compilers).\nThe problem then is that this common idiom for programming language implementation is an anti-pattern in Python. Uh oh. As you can probably tell, this is more a political issue than a language issue. If other Pythonistas saw the code they would scream; if other language implementers saw it, they would understand it immediately.\nThe reason this is an anti-pattern in Python is because Python encourages duck-typed interfaces: you shouldn't have behaviour based on type, but rather they should be defined by the methods that an object has available at run-time. S. Lott's answer works fine if you want it to be idiomatic Python, but it adds little.\nI suspect that your design isn't really duck-typed - its a compiler after all, and classes defined using a name, with a static structure, are pretty common. If you prefer, you could think of your objects as having a \"type\" field, and isinstance is used to pattern-match based on that type. \nAddenum:\nPattern-matching is probably the number one reason that people love writing compilers etc in functional languages.\n",
"The article does not attack isinstance. It attacks the idea of making your code test for specific classes.\nAnd yes, there is a better way. Or several. You can for example make the handling of a type into a function, and then find the correct function by looking up per type. Like this:\ndef int_function(value):\n # Do what you mean to do here\n\ndef str_function(value):\n # Do what you mean to do here\n\ntype_function = {int: int_function, str: str_function, etc, etc}\n\ndef handle_value(value):\n function = type_function[type(value)]\n result = function(value)\n print \"Oh, lovely\", result\n\nIf you don't want to do this registry yourself, you can look at the Zope Component Architecture, which handles this through interfaces and adapters, and it really cool. But that's probably overkill.\nEven better is if you can somehow avoid doing any type of type checking, but that may be tricky.\n",
"In a DSL I wrote using Python 3, I used the Composite design pattern so the nodes were all polymorphic in their use, as S. Lott is recommending.\nBut, when I was reading in the input to create those nodes in the first place, I did use isinstance checks a lot (against abstract base classes like collections.Iterable, etc., which Python 3 provides, and which are in 2.6 as well I believe), as well as checks for hasattr '__call__' since callables were allowed in my input. This was the cleanest way I found to do it (particulary with recursion involved), rather than just trying operations against input and catching exceptions, which is the alternative that comes to mind. I was raising custom exceptions myself when the input was invalid to give as much precise failure information as possible.\nUsing isinstance for such tests is more general than using type(), since isinstance will catch subclasses - and if you can test against the abstract base classes, that is all the better. See http://www.python.org/dev/peps/pep-3119/ for info on the abstract base classes.\n",
"In this particular case, what you seem to be implementing is an operator overloading system that uses the types of the objects as the selection mechanism for the operator you intend to call. Your node types happen to fairly directly correspond to your language's types, but in reality you're writing an interpreter. The type of the node is just a piece of data.\nI don't know if people can add their own types to your domain specific language. But I would recommend a table driven design regardless.\nMake a table of data containing (binary_operator, type1, type2, result_type, evalfunc). Search through that table for matches using isinstance and have some criteria for preferring some matches over others. It may be possible to use a somewhat more sophisticated data structure than a table to make searching faster, but right now you're basically using long lists of ifelse statements to do a linear search anyway, so I'm betting a plain old table will be slightly faster than what you're doing now.\nI do not consider isinstance to be the wrong choice here largely because the type is just a piece of data your interpreter is working with to make a decision. Double dispatch and other techniques of that ilk are just going to obscure the real meat of what your program is doing.\nOne of the neat things in Python is that since operator functions and types are all first class objects, you can just stuff them in the table (or whatever data structure you choose) directly.\n"
] |
[
2,
2,
2,
1,
0,
0
] |
[
"If you need Polymorphism on arguments (in addition to the receiver), for example to handle type conversions with binary operators as suggested by your example, you can use the following trick:\nclass EValue(object):\n\n def __init__(self, v):\n self.value = v\n\n def __str__(self):\n return str(self.value)\n\n def opequal(self, r):\n r.opequal_value(self)\n\n def opequal_int(self, l):\n print \"(int)\", l, \"==\", \"(value)\", self\n\n def opequal_bool(self, l):\n print \"(bool)\", l, \"==\", \"(value)\", self\n\n def opequal_value(self, l):\n print \"(value)\", l, \"==\", \"(value)\", self\n\n\nclass EInt(EValue):\n\n def opequal(self, r):\n r.opequal_int(self)\n\n def opequal_int(self, l):\n print \"(int)\", l, \"==\", \"(int)\", self\n\n def opequal_bool(self, l):\n print \"(bool)\", l, \"==\", \"(int)\", self\n\n def opequal_value(self, l):\n print \"(value)\", l, \"==\", \"(int)\", self\n\nclass EBool(EValue):\n\n def opequal(self, r):\n r.opequal_bool(self)\n\n def opequal_int(self, l):\n print \"(int)\", l, \"==\", \"(bool)\", self\n\n def opequal_bool(self, l):\n print \"(bool)\", l, \"==\", \"(bool)\", self\n\n def opequal_value(self, l):\n print \"(value)\", l, \"==\", \"(bool)\", self\n\n\nif __name__ == \"__main__\":\n\n v1 = EBool(\"true\")\n v2 = EInt(5)\n v1.opequal(v2)\n\n"
] |
[
-1
] |
[
"interpreter",
"language_design",
"python",
"scala"
] |
stackoverflow_0001381845_interpreter_language_design_python_scala.txt
|
Q:
Match database output (balanced parentheses, table & rows structure) and output as a list?
How would I parse the following input (either going line by line or via regex... or combination of both):
Table[
Row[
C_ID[Data:12345.0][Sec:12345.0][Type:Double]
F_ID[Data:17660][Sec:17660][Type:Long]
NAME[Data:Mike Jones][Sec:Mike Jones][Type:String]
]
Row[
C_ID[Data:2560.0][Sec:2560.0][Type:Double]
...
]
]
there is indentation in there, of course, so it can be split by \n\t (and then cleaned up for the extra tabs \t in C_ID, F_ID lines and such...)
The desired output is something more usable in python:
{'C_ID': 12345, 'F_ID': 17660, 'NAME': 'Mike Jones',....} {'C_ID': 2560, ....}
I've tried going line by line, and then using multiple splits() to throw away what I don't need and keep what I do need, but I'm sure there is a much more elegant and faster way of doing it...
A:
Parsing recursive structures with regex is a pain because you have to keep state.
Instead, use pyparsing or some other real parser.
Some folks like PLY because it follows the traditional Lex/Yacc architecture.
A:
There really isn't a lot of unpredictable nesting going on here, so you could do this with regexes. But pyparsing is my tool of choice, so here is my solution:
from pyparsing import *
LBRACK,RBRACK,COLON = map(Suppress,"[]:")
ident = Word(alphas, alphanums+"_")
datatype = oneOf("Double Long String Boolean")
# define expressions for pieces of attribute definitions
data = LBRACK + "Data" + COLON + SkipTo(RBRACK)("contents") + RBRACK
sec = LBRACK + "Sec" + COLON + SkipTo(RBRACK)("contents") + RBRACK
type = LBRACK + "Type" + COLON + datatype("datatype") + RBRACK
# define entire attribute definition, giving each piece its own results name
attrDef = Group(ident("key") + data("data") + sec("sec") + type("type"))
# now a row is just a "Row[" and one or more attrDef's and "]"
rowDef = Group("Row" + LBRACK + Group(OneOrMore(attrDef))("attrs") + RBRACK)
# this method will process each row, and convert the key and data fields
# to addressable results names
def assignAttrs(tokens):
ret = ParseResults(tokens.asList())
for attr in tokens[0].attrs:
# use datatype mapped to function to convert data at parse time
value = {
'Double' : float,
'Long' : int,
'String' : str,
'Boolean' : bool,
}[attr.type.datatype](attr.data.contents)
ret[attr.key] = value
# replace parse results created by pyparsing with our own named results
tokens[0] = ret
rowDef.setParseAction(assignAttrs)
# a TABLE is just "Table[", one or more rows and "]"
tableDef = "Table" + LBRACK + OneOrMore(rowDef)("rows") + RBRACK
test = """
Table[
Row[
C_ID[Data:12345.0][Sec:12345.0][Type:Double]
F_ID[Data:17660][Sec:17660][Type:Long]
NAME[Data:Mike Jones][Sec:Mike Jones][Type:String]
]
Row[
C_ID[Data:2560.0][Sec:2560.0][Type:Double]
NAME[Data:Casey Jones][Sec:Mike Jones][Type:String]
]
]"""
# now parse table, and access each row and its defined attributes
results = tableDef.parseString(test)
for row in results.rows:
print row.dump()
print row.NAME, row.C_ID
print
prints:
[[[['C_ID', 'Data', '12345.0', 'Sec', '12345.0', 'Type', 'Double'],...
- C_ID: 12345.0
- F_ID: 17660
- NAME: Mike Jones
Mike Jones 12345.0
[[[['C_ID', 'Data', '2560.0', 'Sec', '2560.0', 'Type', 'Double'], ...
- C_ID: 2560.0
- NAME: Casey Jones
Casey Jones 2560.0
The results names assigned in assignAttrs give you access to each of your attributes by name. To see if a name has been omitted, just test "if not row.F_ID:".
A:
This excellent page lists many parsers available to Python programmers. Regexes are unsuitable for "balanced parentheses" matching, but any of the third party packages reviewed on that page will serve you well.
|
Match database output (balanced parentheses, table & rows structure) and output as a list?
|
How would I parse the following input (either going line by line or via regex... or combination of both):
Table[
Row[
C_ID[Data:12345.0][Sec:12345.0][Type:Double]
F_ID[Data:17660][Sec:17660][Type:Long]
NAME[Data:Mike Jones][Sec:Mike Jones][Type:String]
]
Row[
C_ID[Data:2560.0][Sec:2560.0][Type:Double]
...
]
]
there is indentation in there, of course, so it can be split by \n\t (and then cleaned up for the extra tabs \t in C_ID, F_ID lines and such...)
The desired output is something more usable in python:
{'C_ID': 12345, 'F_ID': 17660, 'NAME': 'Mike Jones',....} {'C_ID': 2560, ....}
I've tried going line by line, and then using multiple splits() to throw away what I don't need and keep what I do need, but I'm sure there is a much more elegant and faster way of doing it...
|
[
"Parsing recursive structures with regex is a pain because you have to keep state.\nInstead, use pyparsing or some other real parser.\nSome folks like PLY because it follows the traditional Lex/Yacc architecture.\n",
"There really isn't a lot of unpredictable nesting going on here, so you could do this with regex's. But pyparsing is my tool of choice, so here is my solution:\nfrom pyparsing import *\n\nLBRACK,RBRACK,COLON = map(Suppress,\"[]:\")\nident = Word(alphas, alphanums+\"_\")\ndatatype = oneOf(\"Double Long String Boolean\")\n\n# define expressions for pieces of attribute definitions\ndata = LBRACK + \"Data\" + COLON + SkipTo(RBRACK)(\"contents\") + RBRACK\nsec = LBRACK + \"Sec\" + COLON + SkipTo(RBRACK)(\"contents\") + RBRACK\ntype = LBRACK + \"Type\" + COLON + datatype(\"datatype\") + RBRACK\n\n# define entire attribute definition, giving each piece its own results name\nattrDef = Group(ident(\"key\") + data(\"data\") + sec(\"sec\") + type(\"type\"))\n\n# now a row is just a \"Row[\" and one or more attrDef's and \"]\"\nrowDef = Group(\"Row\" + LBRACK + Group(OneOrMore(attrDef))(\"attrs\") + RBRACK)\n\n# this method will process each row, and convert the key and data fields\n# to addressable results names\ndef assignAttrs(tokens):\n ret = ParseResults(tokens.asList())\n for attr in tokens[0].attrs:\n # use datatype mapped to function to convert data at parse time\n value = {\n 'Double' : float,\n 'Long' : int,\n 'String' : str,\n 'Boolean' : bool,\n }[attr.type.datatype](attr.data.contents)\n ret[attr.key] = value\n # replace parse results created by pyparsing with our own named results\n tokens[0] = ret\nrowDef.setParseAction(assignAttrs)\n\n# a TABLE is just \"Table[\", one or more rows and \"]\"\ntableDef = \"Table\" + LBRACK + OneOrMore(rowDef)(\"rows\") + RBRACK\n\ntest = \"\"\"\nTable[ \n Row[\n C_ID[Data:12345.0][Sec:12345.0][Type:Double]\n F_ID[Data:17660][Sec:17660][Type:Long]\n NAME[Data:Mike Jones][Sec:Mike Jones][Type:String]\n ] \n Row[\n C_ID[Data:2560.0][Sec:2560.0][Type:Double] \n NAME[Data:Casey Jones][Sec:Mike Jones][Type:String]\n ]\n]\"\"\"\n\n# now parse table, and access each row and its defined attributes\nresults = tableDef.parseString(test)\nfor row in results.rows:\n print row.dump()\n print row.NAME, row.C_ID\n print\n\nprints:\n[[[['C_ID', 'Data', '12345.0', 'Sec', '12345.0', 'Type', 'Double'],...\n- C_ID: 12345.0\n- F_ID: 17660\n- NAME: Mike Jones\nMike Jones 12345.0\n\n[[[['C_ID', 'Data', '2560.0', 'Sec', '2560.0', 'Type', 'Double'], ...\n- C_ID: 2560.0\n- NAME: Casey Jones\nCasey Jones 2560.0\n\nThe results names assigned in assignAttrs give you access to each of your attributes by name. To see if a name has been omitted, just test \"if not row.F_ID:\".\n",
"This excellent page lists many parsers available to Python programmers. Regexes are unsuitable for \"balanced parentheses\" matching, but any of the third party packages reviewed on that page will serve you well.\n"
] |
[
3,
1,
0
] |
[
"This regex:\nRow\\[[\\s]*C_ID\\[[\\W]*Data:([0-9.]*)[\\S\\W]*F_ID\\[[\\S\\W]*Data:([0-9.]*)[\\S\\W]*NAME\\[[\\S\\W]*Data:([\\w ]*)[\\S ]*\n\nfor the first row will match:\n$1=12345.0\n$2=17660\n$3=Mike Jones\nThen you can use something like this:\n{'C_ID': $1, 'F_ID': $2, 'NAME': '$3'}\n\nto produce:\n{'C_ID': 12345.0, 'F_ID': 17660, 'NAME': 'Mike Jones'}\n\nSo you need to iterate through your input until it stops matching your rows...\nDoes it make sense?\n"
] |
[
-1
] |
[
"parsing",
"python",
"regex"
] |
stackoverflow_0001324949_parsing_python_regex.txt
|
Q:
OpenGL in Python with Snow Leopard?
I'm interested in playing around with OpenGL in Python. I've used OpenGL in C++ and Objective-C, but I don't have much experience in Python. I'm wondering if there's a good tutorial that works in Snow Leopard. I'd prefer to stay in 64-bit mode if possible, since I've heard 32-bit programs require loading a lot of extra 32-bit libraries.
I've already tried a PyOpenGL/wxPython tutorial. When I ran the code, it crashed with this message:
ImportError: /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/wx-2.8-mac-unicode/wx/_core_.so: no appropriate 64-bit architecture (see "man python" for running in 32-bit mode)
It looks like there's a bug in wxPython that prevents it from working on a 64-bit system.
I also looked at Pyglet, but they have a similar issue. They provide a work-around (setting Python to 32-bit mode), but it doesn't look like they're going to fix it.
Finally, I looked at PyGLUT, but I think it's only for Windows.
Are there any other libraries that would let me access OpenGL and draw on the screen? Again, I'd prefer to stay in 64-bit mode, but if nothing works, I'll switch to 32-bit and try wxPython or Pyglet again.
Edit: I've also tried PyGame. It depends on SDL which is broken in SL. I thought about trying to use Cocoa through PyObjc, but the Xcode Python application templates have been removed.
A:
I've used PyOpenGL 3.0.0 quite successfully on Snow Leopard. It uses ctypes, so it should be making 64-bit calls if those libraries are available (and Snow Leopard's Python includes a 64-bit version). I haven't used the wxPython stuff with PyOpenGL so that's where you might be running into problems, but PyOpenGL also includes GLUT, which both run fine.
A:
There's probably no good reason to avoid 32-bit mode. Unless your Python programs need the larger address space, of course.
A:
You could try pygame. Pygame is a Python wrapper around SDL. According to their website they have Mac OS X binaries. Here is a simple example of using pygame with OpenGL. Once you are able to create the window and handle events, most OpenGL programming is just like it would be in C or C++, but with some added Python goodness. For OpenGL a great tutorial is NeHe.
A:
Also, when programming with OpenGL in Python, remember that Python data structures can be rather slow when it comes to the demands of 3D graphics. PyGL developers, for example, recommend using ctypes for operations that concern graphics, since that way you can get enough performance for some complicated geometry with a bearable FPS.
|
OpenGL in Python with Snow Leopard?
|
I'm interested in playing around with OpenGL in Python. I've used OpenGL in C++ and Objective-C, but I don't have much experience in Python. I'm wondering if there's a good tutorial that works in Snow Leopard. I'd prefer to stay in 64-bit mode if possible, since I've heard 32-bit programs require loading a lot of extra 32-bit libraries.
I've already tried a PyOpenGL/wxPython tutorial. When I ran the code, it crashed with this message:
ImportError: /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/wx-2.8-mac-unicode/wx/_core_.so: no appropriate 64-bit architecture (see "man python" for running in 32-bit mode)
It looks like there's a bug in wxPython that prevents it from working on a 64-bit system.
I also looked at Pyglet, but they have a similar issue. They provide a work-around (setting Python to 32-bit mode), but it doesn't look like they're going to fix it.
Finally, I looked at PyGLUT, but I think it's only for Windows.
Are there any other libraries that would let me access OpenGL and draw on the screen? Again, I'd prefer to stay in 64-bit mode, but if nothing works, I'll switch to 32-bit and try wxPython or Pyglet again.
Edit: I've also tried PyGame. It depends on SDL which is broken in SL. I thought about trying to use Cocoa through PyObjc, but the Xcode Python application templates have been removed.
|
[
"I've used PyOpenGL 3.0.0 quite successfully on Snow Leopard. It uses ctypes, so it should be making 64-bit calls if those libraries are available (and Snow Leopard's Python includes a 64-bit version). I haven't used the wxPython stuff with PyOpenGL so that's where you might be running into problems, but PyOpenGL also includes GLUT, which both run fine.\n",
"There's probably no good reason to avoid 32-bit mode. Unless your Python programs need to larger address space, of course.\n",
"You could try pygame. Pygame is a python wrapper around SDL. According to their website they have Max OS X binaries. Here is a simple example of using pygame with OpenGL. Once you are able to create the window and handle events most OpenGL programming is just like it would be in C or C++, but with some added python goodness. For OpenGL a great tutorial is NeHe.\n",
"Also when programing with OpenGL in python remember that Python datastructures can be rather slow when it comes to requirements for 3D graphics. PyGL developers for example recommend using ctypes for operations that concern graphics, since that way you can get enough performance for some complicated geometry with bareable FPS.\n"
] |
[
4,
2,
0,
0
] |
[] |
[] |
[
"opengl",
"osx_snow_leopard",
"python"
] |
stackoverflow_0001389928_opengl_osx_snow_leopard_python.txt
|
Q:
What are the different options for processing uploaded PDF files in a Django application?
Our Django application needs to do a few things with uploaded PDF files:
Verify that the file is a PDF and isn't corrupted
Check that the file isn't encrypted
Count the number of pages
We run into problems with one unfortunately popular application whose idea of an unencrypted PDF export is actually an encrypted PDF file, just with a blank password. We've been working with PyPDF to date, which is unable to read those files because the encryption is non-standard. The application exporting these files is quite popular among our users, which is a pain.
Another application exported files with a bad MIME type (something other than application/pdf), so whatever we end up using needs to be able to cope with silly choking points like that.
Is there an actively maintained, robust PDF library anywhere that we could utilize? Even PDFtk, a CLI utility that a couple of people have been recommending, was last updated in 2006.
Any help is appreciated.
Update: To clarify, it can be free or paid-for. Suggest whatever you think is the best option.
A:
PDFlib is excellent, but costs money. You didn't say it had to be free, though implicitly somehow I assume you want it to be! :)
|
What are the different options for processing uploaded PDF files in a Django application?
|
Our Django application needs to do a few things with uploaded PDF files:
Verify that the file is a PDF and isn't corrupted
Check that the file isn't encrypted
Count the number of pages
We run into problems with one unfortunately popular application whose idea of an unencrypted PDF export is actually an encrypted PDF file, just with a blank password. We've been working with PyPDF to date, which is unable to read those files because the encryption is non-standard. The application exporting these files is quite popular among our users, which is a pain.
Another application exported files with a bad MIME type (something other than application/pdf), so whatever we end up using needs to be able to cope with silly choking points like that.
Is there an actively maintained, robust PDF library anywhere that we could utilize? Even PDFtk, a CLI utility that a couple of people have been recommending, was last updated in 2006.
Any help is appreciated.
Update: To clarify, it can be free or paid-for. Suggest whatever you think is the best option.
|
[
"PDFlib is excellent, but costs money. You didn't say it had to be free, though implicitly somehow I assume you want it to be! :)\n"
] |
[
1
] |
[] |
[] |
[
"django",
"pdf",
"python"
] |
stackoverflow_0001390371_django_pdf_python.txt
|
Q:
Removing redundant symbols from string
Let's say I have a string like this: '12,423,343.93'. How do I convert it to a float in a simple, effective and yet elegant way?
It seems I need to remove redundant commas from the string and then call float(), but I have no good solution for that.
Thanks
A:
s = "12,423,343.93"
f = float(s.replace(",", ""))
A:
Note that the separator symbols used vary from country to country. In some cultures, "." is used to separate groups, and "," indicates a decimal point, for instance. If you're parsing user-entered strings like this, it may be better to use the locale module instead. For example:
>>> import locale
>>> locale.atof('12,423,343.93') # No locale set yet, so this will refuse to parse
ValueError: invalid literal for float(): 12,423,343.93
>>> locale.setlocale(locale.LC_NUMERIC, "en_GB") # Use a UK locale.
>>> locale.atof('12,423,343.93')
12423343.93
|
Removing redundant symbols from string
|
Let's say I have a string like this: '12,423,343.93'. How do I convert it to a float in a simple, effective and yet elegant way?
It seems I need to remove redundant commas from the string and then call float(), but I have no good solution for that.
Thanks
|
[
"s = \"12,423,343.93\"\nf = float(s.replace(\",\", \"\"))\n\n",
"Note that the seperator symbols used vary from country to country. In some cultures, \".\" is used to seperate groups, and \",\" indicates a decimal point for instance. If you're parsing user-entered strings like this, it may be better to use the locale module instead. For example:\n>>> import locale\n>>> locale.atof('12,423,343.93') # No locale set yet, so this will refuse to parse\nValueError: invalid literal for float(): 12,423,343.93 \n\n>>> locale.setlocale(locale.LC_NUMERIC, \"en_GB\") # Use a UK locale.\n>>> locale.atof('12,423,343.93')\n12423343.93\n\n"
] |
[
9,
6
] |
[] |
[] |
[
"floating_point",
"python"
] |
stackoverflow_0001390657_floating_point_python.txt
|
Q:
Application Structure for GUI & Functions
I'm starting a basic application using Python and PyQt and could use some experienced insight. Here's the structure I was thinking of. This is understandably subjective, but is there a better way?
myApp/GUI/__init__.py
mainWindow.py
subWindow1.py
subWindow2.py
myApp/Logic/__init__.py
setOfMethods1.py
setOfMethods2.py
mainWindow imports subWindows
mainWindow imports Logic module
A:
MVC
It looks like you have been reading about model-view-controller.
Separating the UI from the back end is a good idea. It will make running tests and debugging just the logic side easier, and the internal structure will be more modular.
I'm not certain it makes as much sense to split the UI into the currently anticipated windows, though. I might just let the UI part grow and factor for common code.
|
Application Structure for GUI & Functions
|
I'm starting a basic application using Python and PyQt and could use some experienced insight. Here's the structure I was thinking of. This is understandably subjective, but is there a better way?
myApp/GUI/__init__.py
mainWindow.py
subWindow1.py
subWindow2.py
myApp/Logic/__init__.py
setOfMethods1.py
setOfMethods2.py
mainWindow imports subWindows
mainWindow imports Logic module
|
[
"MVC\nIt looks like you have been reading about model-view-controller.\nSeparating the UI from the back end is a good idea. It will make runnings tests and debugging just the logic side easier, and the internal structure will be more modular.\nI'm not certain it makes as much sense to split the UI into the currently anticipated windows, though. I might just let the UI part grow and factor for common code.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"structure",
"user_interface"
] |
stackoverflow_0001391190_python_structure_user_interface.txt
|
Q:
Upgrade python in linux
I have a linux VPS that uses an older version of python (2.4.3). This version doesn't include the UUID module, but I need it for a project. My options are to upgrade to python2.6 or find a way to make uuid work with the older version. I am a complete linux newbie. I don't know how to upgrade python safely or how I could get the UUID modules working with the already installed version. What is a better option and how would I go about doing it?
A:
The safest way to upgrade Python is to install it in a different location (away from the default system path).
To do this, download the source of python and do a
./configure --prefix=/opt
(assuming you want to install it to /opt, which is where most people install non-system-dependent stuff)
The reason why I say this is because some other system libraries may depend on the current version of python.
Another reason is that as you are doing your own custom development, it is much better to have control over what version of the libraries (or interpreters) you are using, rather than have an operating system patch break something that was working before. A controlled upgrade is better than having the application break on you all of a sudden.
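(For completeness, a minimal sketch of the full build from the source tarball; treat the exact paths as an example:)
./configure --prefix=/opt
make
sudo make install
/opt/bin/python2.6 yourscript.py   # invoke the new interpreter explicitly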
A:
The UUID module exists as a separate package for Python 2.3 and up:
http://pypi.python.org/pypi/uuid/1.30
So you can either install that in your Python2.4, or install Python2.6. If your distro doesn't have it, then Python is quite simple to compile from source. Look through the requirements to make sure all the libraries you need/want are installed before compiling Python. That's it.
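Once the backport is installed, usage is the same as the standard library module (a minimal sketch):
import uuid  # the standalone uuid.py backport works on 2.3+
print uuid.uuid4()  # prints a freshly generated random UUID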
A:
The best solution will be installing python2.6 in a chosen directory - it will give you access to many great features and better memory handling (the infamous python 2.4 memory leak problem).
I have several pythons installed on my two computers; I found that the best solution is two directories:
$HOME/usr-32
$HOME/usr-64
corresponding to the operating system in use (I share $HOME between 32- and 64-bit versions of Linux).
In each I have one directory for every application/program, for example:
ls ~/usr-64/python-2.6.2/
bin include lib share
This completely avoids conflicts between versions and gives great portability (you can use USB pendrives, etc.).
Python 2.6.2 in the previous example was installed with the option:
./configure --prefix=$HOME/usr-64/python-2.6.2
|
Upgrade python in linux
|
I have a linux VPS that uses an older version of python (2.4.3). This version doesn't include the UUID module, but I need it for a project. My options are to upgrade to python2.6 or find a way to make uuid work with the older version. I am a complete linux newbie. I don't know how to upgrade python safely or how I could get the UUID modules working with the already installed version. What is a better option and how would I go about doing it?
|
[
"The safest way to upgrading Python is to install it to a different location (away from the default system path).\nTo do this, download the source of python and do a \n./configure --prefix=/opt\n(Assuming you want to install it to /opt which is where most install non system dependant stuff to)\nThe reason why I say this is because some other system libraries may depend on the current version of python.\nAnother reason is that as you are doing your own custom development, it is much better to have control over what version of the libraries (or interpreters) you are using rather than have a operating system patch break something that was working before. A controlled upgrade is better than having the application break on you all of a sudden.\n",
"The UUID module exists as a separate package for Python 2.3 and up:\nhttp://pypi.python.org/pypi/uuid/1.30\nSo you can either install that in your Python2.4, or install Python2.6. If your distro doesn't have it, then Python is quite simple to compile from source. Look through the requirements to make sure all the libraries you need/want are installed before compiling Python. That's it.\n",
"The best solution will be installing python2.6 in the choosen directory - It will you give you access to many great features and better memory handling (infamous python=2.4 memory leak problem).\nI have got several pythons installed onto my two computers, I found that the best solution for are two directories:\n$HOME/usr-32\n$HOME/usr-64\nrespectively to using operating system (I share $HOME between 32 and 64 bit versions of Linux).\nIn each I have one directory for every application/program, for example:\n\nls ~/usr-64/python-2.6.2/\nbin include lib share\n\nIt leads completetely to avoiding conflicts between version and gives great portability (you can use usb pendrives etc).\nPython 2.6.2 in previously example has been installed with option:\n\n./configure --prefix=$HOME/usr-64/python-2.6.2\n\n"
] |
[
6,
2,
0
] |
[] |
[] |
[
"linux",
"python"
] |
stackoverflow_0001388464_linux_python.txt
|
Q:
function decorators in c#
Is there a C# analog for Python's function decorators? It feels like it's doable with attributes and the reflection framework, but I don't see a way to replace functions at runtime.
Python decorators generally work this way:
class decorator(object):
def __init__(self, f):
self.f = f
def __call__(self, *args, **kwargs):
print "Before"
self.f()
print "After"
@decorator
def func1():
print "Function 1"
@decorator
def func2():
print "Function 2"
Calling func1 and func2 would then result in
Before
Function 1
After
Before
Function 2
After
The idea is that decorators will let me easily add common tasks at the entry and exit points of multiple functions.
A:
The way I achieve this is through AOP frameworks like Castle Dynamic Proxy, Spring.NET or even the Policy Injection Application Block.
A:
You can do that using Post Sharp. Check out the demo video for instructions.
A:
You can sort of achieve that with "ContextBoundObject" in the .NET framework.
But it's a bit convoluted, because it basically uses the .NET remoting infrastructure to intercept method calls (the runtime overhead is minimal if your calls are within the same app domain but just across different "contexts").
So the better approaches are the ones mentioned above by the other posters.
|
function decorators in c#
|
Is there a C# analog for Python's function decorators? It feels like it's doable with attributes and the reflection framework, but I don't see a way to replace functions at runtime.
Python decorators generally work this way:
class decorator(object):
def __init__(self, f):
self.f = f
def __call__(self, *args, **kwargs):
print "Before"
self.f()
print "After"
@decorator
def func1():
print "Function 1"
@decorator
def func2():
print "Function 2"
Calling func1 and func2 would then result in
Before
Function 1
After
Before
Function 2
After
The idea is that decorators will let me easily add common tasks at the entry and exit points of multiple functions.
|
[
"The way I achieve this is through AOP frameworks like Castle Dynamic Proxy, Spring.NET or even the Policy Injection Application Block.\n",
"You can do that using Post Sharp. Check out the demo video for instructions.\n",
"you can sort of achieve that by \"ContextBoundObject\" in the .NET framework.\nbut it's a bit convoluted because it basically uses .net remoting infrastructure to intercept method calls. (the runtime overhead is minimum if your calls are within the same app domain but just across different \"context\").\nso the better approaches would be the ones mentioned above by the other posters. \n"
] |
[
12,
9,
4
] |
[] |
[] |
[
"c#",
"decorator",
"python",
"reflection"
] |
stackoverflow_0001391157_c#_decorator_python_reflection.txt
|
Q:
Python: how do I implement 'pop' in this class?
I'd like this class to act like a list. Its data resides in the attribute self.data.
If I have an instance, pp = population, does defining __getitem__ mean I can refer to
pp instead of pp.data? Or is it the defining of __repr__ that does that?
Would deriving this class from list instead of object provide me with 'pop'.
Right now I need to implement the 'pop' method. Thanks
class population (object):
def __init__ (self):
self.data = []
def append(self, item):
self.data.append(item)
def extend(self, item):
self.data.extend(item)
def sort(self):
self.data.sort(cmp=fitnesscompare)
def __getitem__(self, index): return self.data[index]
def __setitem__(self, index, item): self.data[index] = item
def __len__(self): return len(self.data)
def __repr__(self): return repr(self.data)
def copy(self):
return copy.deepcopy(self)
A:
Why not just extend the list class? Then you have all of that functionality built in.
class population(list):
# custom methods here
Just remember, instead of referencing self.data for the list, just reference self.
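A minimal sketch of your class rebuilt this way (fitnesscompare is assumed to be defined elsewhere, as in your original code; pop, append, extend and friends all come along for free):
import copy

class population(list):
    def sort(self):
        # delegate to list.sort with the domain-specific comparison (Python 2 style)
        list.sort(self, cmp=fitnesscompare)
    def copy(self):
        return copy.deepcopy(self)

pp = population()
pp.append('individual')
pp.pop()  # inherited from list, no extra code needed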
A:
def pop(self, index=-1):
return self.data.pop(index)
This will implement the expected behavior:
return the last item if no index is passed;
return the item at "index" if passed;
will use the type check of the underlying pop();
will raise the same exceptions as the underlying pop().
I would have subclassed list as Evan Fosmark suggested before, but I can see good reasons for not doing so. Composition encourages low coupling, and you have full control over it. But you have to write bridges for all the methods you want to delegate, which can be a pain...
A:
If I have an instance, pp = population, does defining __getitem__ mean I can refer to pp instead of pp.data?
Essentially, yes. Adding a __getitem__ method is equivalent to overloading the [] operator in some other languages. Thus the following two would be equivalent:
pp.data[0]
pp[0]
Or is it the defining of __repr__ that does that?
Defining __repr__ will give you a string representation of the object. Thus these two calls would be equivalent:
repr(pp.data)
repr(pp)
Would deriving this class from list instead of object provide me with 'pop'.
From what I see right now, inheriting from list is probably the way to go. Unless I'm missing something, the only thing that's different is the sort method and copy. Just about everything else is the same.
A:
Is this really a list of things, or are you using a list to store a bunch of things? Before you inherit from list, why not just use a list?
I try to save inheritance for things that are true is-a relationships. A Form is-a Window, a Dialog is-a Form, etc. If you are just modeling is-implemented-using-a, then just use the base class, or use containment and delegation.
What is it about a population that is more than just a list?
|
Python: how do I implement 'pop' in this class?
|
I'd like this class to act like a list. Its data resides in the attribute self.data.
If I have an instance, pp = population, does defining __getitem__ mean I can refer to
pp instead of pp.data? Or is it the defining of __repr__ that does that?
Would deriving this class from list instead of object provide me with 'pop'.
Right now I need to implement the 'pop' method. Thanks
class population (object):
def __init__ (self):
self.data = []
def append(self, item):
self.data.append(item)
def extend(self, item):
self.data.extend(item)
def sort(self):
self.data.sort(cmp=fitnesscompare)
def __getitem__(self, index): return self.data[index]
def __setitem__(self, index, item): self.data[index] = item
def __len__(self): return len(self.data)
def __repr__(self): return repr(self.data)
def copy(self):
return copy.deepcopy(self)
|
[
"Why not just extend the list class? Then you have all of that functionality built in.\nclass population(list):\n # custom methods here\n\nJust remember, instead of referencing self.data for the list, just reference self.\n",
"def pop(self, index=-1) :\n return self.data.pop(index)\n\nThis will implement the expected behavior :\n\nreturn the last item if no index is passed;\nreturn the item at \"index\" if passed;\nwill use the type check of the underlying pop();\nwill raise the same exceptions as the underlying pop().\n\nI would have sub classed List as Evan Fosmark suggested before, but I can see good reasons for not doing so. Composition can ease low coupling, and you have full control over it. But you have to write bridges for all the methods you want to delegate, it can be a pain...\n",
"\nIf I have an instance, pp = population, does defining __getitem__ mean I can refer to pp instead of pp.data?\n\nEssentially, yes. Adding a __getitem__ method is equivalent to overloading the [] operator in some other languages. Thus the following two would be equivalent:\npp.data[0]\npp[0]\n\n\nOr is it the defining of __repr__ that does that?\n\nDefining __repr__ will give you a string representation of the object. Thus these two calls would be equivalent:\nrepr(pp.data)\nrepr(pp)\n\n\nWould deriving this class from list instead of object provide me with 'pop'.\n\nFrom what I see right now, inheriting from list is probably the way to go. Unless I'm missing something, the only thing that's different is the sort method and copy. Just about everything else is the same.\n",
"Is this really a list of things, or are you using a list to store a bunch of things? Before you inherit from list, why not just use a list?\nI try to save inheritance for things that are true is-a relationships. A Form is-a Window, a Dialog is-a Form, etc. If you are just modeling is-implemented-using-a, then just use the base class, or use containment and delegation.\nWhat is it about a population that is more than just a list?\n"
] |
[
8,
3,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001390966_python.txt
|
Q:
Should I use Mako for Templating?
I've been considering a templating solution, although my choices are between Mako and Genshi. I find templating in Genshi a bit ugly, so I'm shifting more towards Mako.
I've come to wonder: what is so good about the fact that Mako allows embedded Python code? How is it convenient for the average joe?
Wouldn't templating JUST suffice without having embedded Python code?
A:
As the mako homepage points out, Mako's advantages are pretty clear: insanely fast, instantly familiar to anyone who's handy with Python in terms of both syntax and features.
Genshi chooses "interpretation" instead of ahead-of-time Python code generation (according to their FAQ, that's for clarity of error messages) and an "arm's length" approach to Python (e.g. by using xpath for selectors, xinclude instead of inheritance, etc) so it might be more natural to people who know no Python but are very competent with XML.
So what's your "audience"? If Python programmers, I suggest Mako (for speed and familiarity); if XML experts that are uncomfortable with Python, Genshi might be a better fit (for "arm's length from Python" approach and closer match to XML culture).
You mention "the average Joe", but Joe doesn't know Python AND xpath is a deep dark mystery to him; if THAT was really your audience, other templating systems such as Django's might actually be a better fit (help him to avoid getting in trouble;-).
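To make "embedded Python code" concrete, here is a minimal Mako sketch (standard ${}, % and <% %> syntax; the template text itself is made up):
from mako.template import Template

tmpl = Template("""\
% for movie in movies:
Title: ${movie.upper()}
% endfor
<% count = len(movies) %>
${count} movies in total
""")
print tmpl.render(movies=["alien", "brazil"])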
A:
Wouldn't templating JUST suffice without having embedded Python code?
Only if your templating language has enough logical functionality that it is essentially a scripting language in itself. At which point, you might just as well have used Python.
More involved sites often need complex presentation logic and non-trivial templated structures like sections repeated in different places/pages and recursive trees. This is no fun if your templating language ties your hands behind your back because it takes the religious position that "code in templates is BAD".
Then you just end up writing presentation helper functions in your Python business logic, which is a worse blending of presentation and application logic than you had to start with. Languages that take power away from you because they don't trust you to use it tastefully are lame.
A:
This seems to be a bit of a religious issue. Django templates take a hard line: no code in templates. They do this because of their history as a system used in shops where there's a clear separation between those who write code and those who create pages. Others (perhaps you) don't make such a clear distinction, and would feel more comfortable having a more flexible line between layout and logic.
It really comes down to a matter of taste.
A:
Genshi is conceived (read: biased, optimized) for generation of xml docs (even if it does offer support for generating any kind of text document). Mako and Django templates are conceived as generic text templating systems. So is Evoque, but with one fundamental difference: it makes the design choice to allow only python expressions in templates, i.e. no python statements.
One important net result of this is that Evoque is able to execute template evaluation in a sandbox -- i.e. you could safely give untrusted users write-access to a template's source code -- a feature that is virtually impossible for template engines that also allow embedding of python statements. Oh, and while not losing out in a direct feature comparison, Evoque is in some cases actually faster than Mako, and it also runs on Python 3.
A:
You could discipline yourself to not inject any Python code within the template unless it's really the last resort to get the job done. I have faced a similar issue with Django's template where I have to do some serious CSS gymnastics to display my content. If I could have used some Python code in the template, it would have been better.
|
Should I use Mako for Templating?
|
I've been considering a templating solution, although my choices are between Mako and Genshi. I find templating in Genshi a bit ugly, so I'm shifting more towards Mako.
I've come to wonder: what is so good about the fact that Mako allows embedded Python code? How is it convenient for the average joe?
Wouldn't templating JUST suffice without having embedded Python code?
|
[
"As the mako homepage points out, Mako's advantages are pretty clear: insanely fast, instantly familiar to anyone who's handy with Python in terms of both syntax and features.\nGenshi chooses \"interpretation\" instead of ahead-of-time Python code generation (according to their FAQ, that's for clarity of error messages) and an \"arm's length\" approach to Python (e.g. by using xpath for selectors, xinclude instead of inheritance, etc) so it might be more natural to people who know no Python but are very competent with XML.\nSo what's your \"audience\"? If Python programmers, I suggest Mako (for speed and familiarity); if XML experts that are uncomfortable with Python, Genshi might be a better fit (for \"arm's length from Python\" approach and closer match to XML culture).\nYou mention \"the average Joe\", but Joe doesn't know Python AND xpath is a deep dark mystery to him; if THAT was really your audience, other templating systems such as Django's might actually be a better fit (help him to avoid getting in trouble;-).\n",
"\nWouldn't templating JUST suffice without having embedded Python code?\n\nOnly if your templating language has enough logical functionality that it is essentially a scripting language in itself. At which point, you might just as well have used Python.\nMore involved sites often need complex presentation logic and non-trivial templated structures like sections repeated in different places/pages and recursive trees. This is no fun if your templating language ties your hands behind your back because it takes the religious position that \"code in template are BAD\".\nThen you just end up writing presentation helper functions in your Python business logic, which is a worse blending of presentation and application logic than you had to start with. Languages that take power away from you because they don't trust you to use it tastefully are lame.\n",
"This seems to be a bit of a religious issue. Django templates take a hard line: no code in templates. They do this because of their history as a system used in shops where there's a clear separation between those who write code and those who create pages. Others (perhaps you) don't make such a clear distinction, and would feel more comfortable having a more flexible line between layout and logic.\nIt really comes down to a matter of taste.\n",
"Genshi is conceived (read: biased, optimized) for generation of xml docs (even if it does offer support for generating any kind of text document). Mako and Django templates are conceived as generic text templating system. Evoque also, but with one fundamental difference that it makes the design choice to only allow python expressions in templates i.e. no python statements. \nOne important net result of this is that Evoque is able to execute template evaluation in a sandbox -- i.e. you could safely give untrusted users write-access to a template's source code -- a feature that is virtually impossible for template engines that also allow embedding of python statements. Oh, and while not losing out any in a direct feature comparison, Evoque is in some cases actually faster than Mako, and it also runs on Python 3. \n",
"You could discipline yourself to not inject any Python code within the template unless it's really the last resort to get the job done. I have faced a similar issue with Django's template where I have to do some serious CSS gymnastics to display my content. If I could have used some Python code in the template, it would have been better.\n"
] |
[
19,
16,
2,
2,
0
] |
[] |
[] |
[
"genshi",
"mako",
"python",
"template_engine"
] |
stackoverflow_0001384634_genshi_mako_python_template_engine.txt
|
Q:
easy way of installing python apps without using PYTHONPATH or multiple symlinks in site-packages
I didn't want to install python modules using easy install, symlinks in site-packages or PYTHONPATH.
So, I am trying something where the system-wide setup is done only once, and any application installation after that is done locally. Note, the root password is required only once here.
First create a symlink of .../pythonX.Y/site-packages/mymodules -> /home/me/lib/python_related
So, I create a directory called
/home/me/lib/python_related/
In there:
/home/me/lib/python_related
/home/me/lib/python_related/__init__.py
/home/me/lib/python_related/django_related/
/home/me/lib/python_related/django_related/core
/home/me/lib/python_related/django_related/core/Django1.0
/home/me/lib/python_related/django_related/core/Django1.1
/home/me/lib/python_related/django_related/core/mycurrent_django -> Django1.1/django
/home/me/lib/python_related/django_related/apps
/home/me/lib/python_related/django_related/apps/tagging
/home/me/lib/python_related/django_related/apps/tagging/django-tagging-0.2
/home/me/lib/python_related/django_related/apps/tagging/django-tagging-0.3
/home/me/lib/python_related/django_related/apps/tagging/mycurrent_tagging -> django-tagging-0.3
Now, here is the content of:
/home/me/lib/python_related/__init__.py
==========================================
import sys, os
# tell us where you keep all your modules; os.path.dirname(__file__) didn't work,
# as it gave the location of the site-packages
#PYTHON_MODULE_PATH = os.path.dirname(__file__)
PYTHON_MODULE_PATH = "/home/me/lib/python_related"
def run_cmd(cmd):
"""
    Given a command name, this function runs the command and returns the output
    in a list.
"""
output = []
phdl = os.popen(cmd)
while 1:
line = phdl.readline()
if line == "":
break
output.append(line.replace("\n", ""))
return output
def install():
"""
A cheesy way of installing and managing your python apps locally without
a need to install them in the site-package. All you'd need is to install
the directory containing this file in the site-package and that's it.
Anytime you have a python package you want to install, just put it in a
proper sub-directory and make a symlink to that directory called mycurrent_xyz
and you are done. (e.g. mycurrent_django, mycurrent_tagging .. etc)
"""
cmd = "find %s -name mycurrent_*" % PYTHON_MODULE_PATH
modules_to_be_installed = run_cmd(cmd)
sys.path += modules_to_be_installed
install()
=======================================================
Now, in any new python project, just import mymodules and that pulls in any apps that you have in the above directory with the proper symbolic link. This way you can have multiple copies of apps and just point the mycurrent_xyz symlink at the one you want to use.
Now here is the question: is this a good way of doing it?
A:
Have a look at virtualenv.
It may do what you are after.
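A minimal sketch of the workflow (assuming virtualenv is already installed; the paths are made up):
virtualenv /home/me/envs/myproject          # create an isolated environment
source /home/me/envs/myproject/bin/activate # use it for this shell session
easy_install Django                         # installs into the environment, not the system site-packages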
|
easy way of installing python apps without using PYTHONPATH or multiple symlinks in site-packages
|
I didn't want to install python modules using easy install, symlinks in site-packages or PYTHONPATH.
So, I am trying something where the system-wide setup is done only once, and any application installation after that is done locally. Note, the root password is required only once here.
First create a symlink of .../pythonX.Y/site-packages/mymodules -> /home/me/lib/python_related
So, I create a directory called
/home/me/lib/python_related/
In there:
/home/me/lib/python_related
/home/me/lib/python_related/__init__.py
/home/me/lib/python_related/django_related/
/home/me/lib/python_related/django_related/core
/home/me/lib/python_related/django_related/core/Django1.0
/home/me/lib/python_related/django_related/core/Django1.1
/home/me/lib/python_related/django_related/core/mycurrent_django -> Django1.1/django
/home/me/lib/python_related/django_related/apps
/home/me/lib/python_related/django_related/apps/tagging
/home/me/lib/python_related/django_related/apps/tagging/django-tagging-0.2
/home/me/lib/python_related/django_related/apps/tagging/django-tagging-0.3
/home/me/lib/python_related/django_related/apps/tagging/mycurrent_tagging -> django-tagging-0.3
Now, here is the content of:
/home/me/lib/python_related/__init__.py
==========================================
import sys, os
# tell us where you keep all your modules; os.path.dirname(__file__) didn't work,
# as it gave the location of the site-packages
#PYTHON_MODULE_PATH = os.path.dirname(__file__)
PYTHON_MODULE_PATH = "/home/me/lib/python_related"
def run_cmd(cmd):
"""
    Given a command name, this function runs the command and returns the output
    in a list.
"""
output = []
phdl = os.popen(cmd)
while 1:
line = phdl.readline()
if line == "":
break
output.append(line.replace("\n", ""))
return output
def install():
"""
A cheesy way of installing and managing your python apps locally without
a need to install them in the site-package. All you'd need is to install
the directory containing this file in the site-package and that's it.
Anytime you have a python package you want to install, just put it in a
proper sub-directory and make a symlink to that directory called mycurrent_xyz
and you are done. (e.g. mycurrent_django, mycurrent_tagging .. etc)
"""
cmd = "find %s -name mycurrent_*" % PYTHON_MODULE_PATH
modules_to_be_installed = run_cmd(cmd)
sys.path += modules_to_be_installed
install()
=======================================================
Now, in any new python project, just import mymodules and that pulls in any apps that you have in the above directory with the proper symbolic link. This way you can have multiple copies of apps and just point the mycurrent_xyz symlink at the one you want to use.
Now here is the question: is this a good way of doing it?
|
[
"Have a look at virtualenv.\nIt may do what you are after.\n"
] |
[
4
] |
[] |
[] |
[
"django",
"module",
"python",
"pythonpath"
] |
stackoverflow_0001391584_django_module_python_pythonpath.txt
|
Q:
What python data structure and parser should I use with Apple's system_profiler?
My problem is one like a simulated problem from
http://my.safaribooksonline.com/0596007973/pythoncook2-CHP-10-SECT-17
which eventually made its way into Python Cookbook, 2nd Edition using an outdated xpath method from 2005 that I haven't been able to get to work with 10.6's built-in python (nor by installing older packages).
I want to ... "retrieve detailed information about a Mac OS X system" using system_profiler, to summarize it in a script each time a computer starts up (the script will launch on login).
The information I'm gathering varies from SW versions to HW config.
An example line is,
system_profiler SPSoftwareDataType | grep 'Boot Volume'
which returns the startup volume name. I make 15 to 20 other calls for information.
I've tried outputting the full 'system_profiler > data' and parsing that using cat data | grep, but that's obviously inefficient, to the point where it's been faster to just run each line like my example above.
18 seconds if outputting to a file and cat | grep.
13 seconds if making individual calls
*I'm trying to make it as fast as possible.
I deduce that I probably need to create a dictionary and use keys to look up the data, but I'm wondering what's the most efficient way for me to parse and retrieve the data? I've seen a suggestion elsewhere to use system_profiler to output to XML and use an XML parser, but I think there's probably some cache-and-parse method that does it more efficiently than outputting to a file first.
A:
Use the -xml option to system_profiler to format the output in a standard OS X plist format, then use Python's built-in plistlib to parse into an appropriate data structure you can introspect. A simple example:
>>> from subprocess import Popen, PIPE
>>> from plistlib import readPlistFromString
>>> from pprint import pprint
>>> sp = Popen(["system_profiler", "-xml"], stdout=PIPE).communicate()[0]
>>> pprint(readPlistFromString(sp))
[{'_dataType': 'SPHardwareDataType',
'_detailLevel': '-2',
'_items': [{'SMC_version_system': '1.21f4',
'_name': 'hardware_overview',
'boot_rom_version': 'IM71.007A.B03',
'bus_speed': '800 MHz',
'cpu_type': 'Intel Core 2 Duo',
...
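Continuing the session above, you can then pull single values (like your Boot Volume example) out of the parsed structure with one system_profiler run. A hedged sketch - the 'boot_volume' key name is an assumption based on SPSoftwareDataType output, so inspect the keys to confirm:
>>> sp = Popen(["system_profiler", "SPSoftwareDataType", "-xml"], stdout=PIPE).communicate()[0]
>>> software = readPlistFromString(sp)[0]['_items'][0]
>>> software.get('boot_volume')  # print software.keys() to see what's available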
|
What python data structure and parser should I use with Apple's system_profiler?
|
My problem is one like a simulated problem from
http://my.safaribooksonline.com/0596007973/pythoncook2-CHP-10-SECT-17
which eventually made its way into Python Cookbook, 2nd Edition using an outdated xpath method from 2005 that I haven't been able to get to work with 10.6's built-in python (nor by installing older packages).
I want to ... "retrieve detailed information about a Mac OS X system" using system_profiler, to summarize it in a script each time a computer starts up (the script will launch on login).
The information I'm gathering varies from SW versions to HW config.
An example line is,
system_profiler SPSoftwareDataType | grep 'Boot Volume'
which returns the startup volume name. I make 15 to 20 other calls for information.
I've tried outputting the full 'system_profiler > data' and parsing that using cat data | grep, but that's obviously inefficient, to the point where it's been faster to just run each line like my example above.
18 seconds if outputting to a file and cat | grep.
13 seconds if making individual calls
*I'm trying to make it as fast as possible.
I deduce that I probably need to create a dictionary and use keys to look up the data, but I'm wondering what's the most efficient way for me to parse and retrieve the data? I've seen a suggestion elsewhere to use system_profiler to output to XML and use an XML parser, but I think there's probably some cache-and-parse method that does it more efficiently than outputting to a file first.
|
[
"Use the -xml option to system_profiler to format the output in a standard OS X plist format, then use Python's built-in plistlib to parse into an appropriate data structure you can introspect. A simple example:\n>>> from subprocess import Popen, PIPE\n>>> from plistlib import readPlistFromString\n>>> from pprint import pprint\n>>> sp = Popen([\"system_profiler\", \"-xml\"], stdout=PIPE).communicate()[0]\n>>> pprint(readPlistFromString(sp))\n[{'_dataType': 'SPHardwareDataType',\n '_detailLevel': '-2',\n '_items': [{'SMC_version_system': '1.21f4',\n '_name': 'hardware_overview',\n 'boot_rom_version': 'IM71.007A.B03',\n 'bus_speed': '800 MHz',\n 'cpu_type': 'Intel Core 2 Duo',\n ...\n\n"
] |
[
7
] |
[] |
[] |
[
"macos",
"parsing",
"profiler",
"python",
"system_profiler"
] |
stackoverflow_0001392604_macos_parsing_profiler_python_system_profiler.txt
|
Q:
Django template, how to make a dropdown box with the predefined value selected?
I am trying to create a drop-down list box with the selected value equal to a value passed in from the template values, but with no success. Can anyone take a look and show me what I am doing wrong?
<select name="movie">
{% for movie in movies %}
{% ifequal movie.id selected_movie.id %}
<option value="{{movie.key}}" selected="true">Movie {{movie.id}}: {{movie.name}}</option>
{% endifequal %}
{% ifnotequal movie.id selected_movie.id %}
<option value="{{movie.key}}">Movie {{movie.id}}: {{movie.name}}</option>
{% endifnotequal %}
{% endfor %}
</select>
In this example, movies and selected_movie are passed from the template values.
Please advise!
A:
Your code works for me with django 1.0.2 and firefox 3.5.
You can use {% else %} instead of {% ifnotequal %} and set selected="selected". Hope it helps.
<select name="movie">
{% for movie in movies %}
{% ifequal movie.id selected_movie.id %}
<option value="{{movie.key}}" selected="selected">Movie {{movie.id}}: {{movie.name}}</option>
{% else %}
<option value="{{movie.key}}">Movie {{movie.id}}: {{movie.name}}</option>
{% endifequal %}
{% endfor %}
</select>
|
Django template, how to make a dropdown box with the predefined value selected?
|
I am trying to create a drop-down list box with the selected value equal to a value passed in from the template values, but with no success. Can anyone take a look and show me what I am doing wrong?
<select name="movie">
{% for movie in movies %}
{% ifequal movie.id selected_movie.id %}
<option value="{{movie.key}}" selected="true">Movie {{movie.id}}: {{movie.name}}</option>
{% endifequal %}
{% ifnotequal movie.id selected_movie.id %}
<option value="{{movie.key}}">Movie {{movie.id}}: {{movie.name}}</option>
{% endifnotequal %}
{% endfor %}
</select>
In this example, movies and selected_movie are passed from the template values.
Please advise!
|
[
"Your code works for me with django 1.0.2 and firefox 3.5.\nYou can use {% else %} instead of {% ifnotequal %} and set selected=\"selected\". Hope it helps.\n<select name=\"movie\">\n {% for movie in movies %}\n {% ifequal movie.id selected_movie.id %}\n <option value=\"{{movie.key}}\" selected=\"selected\">Movie {{movie.id}}: {{movie.name}}</option>\n {% else %}\n <option value=\"{{movie.key}}\">Movie {{movie.id}}: {{movie.name}}</option>\n {% endifequal %}\n {% endfor %}\n</select>\n\n"
] |
[
14
] |
[] |
[] |
[
"django",
"django_templates",
"python"
] |
stackoverflow_0001392706_django_django_templates_python.txt
|
Q:
Is it possible to deploy one GAE application from another GAE application?
In order to redeploy a GAE application, I currently have to install the GAE deployment tools on the system that I am using for deployment. While this process is relatively straightforward, the deployment process is a manual one that does not work from behind a firewall, and the deployment tools must be installed on every machine that will be used for updating GAE apps. A more ideal solution would be if I could update a GAE application from another GAE application that I have deployed previously. This would remove the need to have multiple systems configured to deploy apps.
Since the GAE deployment tools are written in Python and the GAE App Engine supports Python, is it possible to modify appcfg.py to work from within GAE? The use case would be to pull a project from GitHub or some other online repository and update one GAE application from another GAE app. If this is not possible, what is the limiting constraint?
A:
Is it possible? Yes. The protocol appcfg uses to update apps is entirely HTTP-based, so there's absolutely no reason you couldn't write an app that's capable of deploying other apps (or redeploying itself - self-modifying code)! You may even be able to reuse large parts of appcfg.py to do it.
Is it easy? Probably not. It's quite likely you'll need to understand a decent chunk of appcfg's internals, and the RPCs it uses to upload new apps - not a trivial undertaking. You'll also need to store your credentials in the app, in all likelihood - though you can use a role account that is an admin only for the apps it's deploying, to minimize risk there.
A:
One limiting constraint could be the protocol that the python sdk uses to communicate with the GAE servers. If it only uses HTTP, you might be OK. but if it's anything else, you might be out of luck because you can't open a socket directly from within GAE.
A:
What problem did you have when trying to update from behind a firewall?
I ran into some, but I finally managed to work around them.
About your question: the constraint is that you cannot write files into a GAE app, so even though you could possibly pull from the VCS, you can't write the pulled files.
So you would have to update from outside GAE in the first place.
Anyway, every machine that needs to update the GAE app should have the SDK just to see if the changes work.
So, if you really want to do this you have two alternatives:
Host your own "updater" site and install the SDK there; then, when you want to update, log into your site (or run a script) and do the remote update.
Although I don't know Amazon EC2 well, I think you can do pretty much the same thing as option 1 from there.
Finally, I think the password has to be typed for every update (you could take the App Engine SDK and modify it, because it is open source).
|
Is it possible to deploy one GAE application from another GAE application?
|
In order to redeploy a GAE application, I currently have to install the GAE deployment tools on the system that I am using for deployment. While this process is relatively straightforward, the deployment process is a manual one that does not work from behind a firewall, and the deployment tools must be installed on every machine that will be used for updating GAE apps. A more ideal solution would be if I could update a GAE application from another GAE application that I have deployed previously. This would remove the need to have multiple systems configured to deploy apps.
Since the GAE deployment tools are written in Python and the GAE App Engine supports Python, is it possible to modify appcfg.py to work from within GAE? The use case would be to pull a project from GitHub or some other online repository and update one GAE application from another GAE app. If this is not possible, what is the limiting constraint?
|
[
"Is it possible? Yes. The protocol appcfg uses to update apps is entirely HTTP-based, so there's absolutely no reason you couldn't write an app that's capable of deploying other apps (or redeploying itself - self-modifying code)! You may even be able to reuse large parts of appcfg.py to do it.\nIs it easy? Probably not. It's quite likely you'll need to understand a decent chunk of appcfg's internals, and the RPCs it uses to upload new apps - not a trivial undertaking. You'll also need to store your credentials in the app, in all likelihood - though you can use a role account that is and admin only for the apps it's deploying to minimize risk there.\n",
"One limiting constraint could be the protocol that the python sdk uses to communicate with the GAE servers. If it only uses HTTP, you might be OK. but if it's anything else, you might be out of luck because you can't open a socket directly from within GAE.\n",
"What problem did you have by trying to update behind a firewall?\nI've got some, but finally I manage to work around them. \nAbout your question, the constraint is that you cannot write files into a GAE app, so even though you could possibly pull from the VCS you can't write those pulled files. \nSo you would have to update from outside the GAE in first place.\nAnyway every machine that needs to update the GAE should have the SDK anyway just to see if they changes work.\nSo, If you really want to do this you have two alternatives:\n\nHost your own \"updater\" site and istall the SDK there, then when you want to update log into your side ( or run a script ) and do the remote update.\nAlthough I don't know Amazon EC2 well, I think you can do pretty much the same thing as op 1 from there. \n\nFinally I think the password to update has to be typed always. ( you could have the SDK of the App engine and modify that, because it is open source ) \n"
] |
[
5,
2,
0
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001391608_google_app_engine_python.txt
|
Q:
Python: Why does ("hello" is "hello") evaluate as True?
Why does "hello" is "hello" produce True in Python?
I read the following here:
If two string literals are equal, they have been put to same
memory location. A string is an immutable entity. No harm can
be done.
So there is one and only one place in memory for every Python string? Sounds pretty strange. What's going on here?
A:
Python (like Java, C, C++, .NET) uses string pooling / interning. The interpreter realises that "hello" is the same as "hello", so it optimizes and uses the same location in memory.
Another goodie: "hell" + "o" is "hello" ==> True
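You can see the pooling directly by comparing identities (CPython behaviour, not a language guarantee):
>>> a = "hello"
>>> b = "hello"
>>> a is b, id(a) == id(b)
(True, True)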
A:
So there is one and only one place in memory for every Python string?
No, only the ones the interpreter has decided to optimise, which is a decision based on a policy that isn't part of the language specification and which may change in different CPython versions.
eg. on my install (2.6.2 Linux):
>>> 'X'*10 is 'X'*10
True
>>> 'X'*30 is 'X'*30
False
similarly for ints:
>>> 2**8 is 2**8
True
>>> 2**9 is 2**9
False
So don't rely on 'string' is 'string': even just looking at the C implementation it isn't safe.
A:
Literal strings are probably grouped based on their hash or something similar. Two of the same literal strings will be stored in the same memory, and any references both refer to that.
Memory Code
-------
| myLine = "hello"
| /
|hello <
| \
| myLine = "hello"
-------
A:
The is operator returns true if both arguments are the same object. Your result is a consequence of this, and the quoted bit.
In the case of string literals, these are interned, meaning they are compared to known strings. If an identical string is already known, the literal takes that value, instead of an alternative one. Thus, they become the same object, and the expression is true.
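A small sketch of the difference between a literal and a runtime-built string, using the intern() builtin (CPython 2.x behaviour, not a language guarantee):
>>> a = "".join(["hel", "lo"])  # built at runtime, so not interned
>>> a == "hello", a is "hello"
(True, False)
>>> intern(a) is "hello"  # intern() returns the canonical interned copy
True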
A:
The Python interpreter/compiler parses the string literals, i.e. the quoted list of characters. When it does this, it can detect "I've seen this string before", and use the same representation as last time. It can do this since it knows that strings defined in this way cannot be changed.
A:
Why is it strange? If the string is immutable, it makes a lot of sense to store it only once. .NET has the same behavior.
A:
I think if any two variables (not just strings) contain the same value, the value will be stored only once not twice and both the variables will point to the same location. This saves memory.
|
Python: Why does ("hello" is "hello") evaluate as True?
|
Why does "hello" is "hello" produce True in Python?
I read the following here:
If two string literals are equal, they have been put to same
memory location. A string is an immutable entity. No harm can
be done.
So there is one and only one place in memory for every Python string? Sounds pretty strange. What's going on here?
|
[
"Python (like Java, C, C++, .NET) uses string pooling / interning. The interpreter realises that \"hello\" is the same as \"hello\", so it optimizes and uses the same location in memory.\nAnother goodie: \"hell\" + \"o\" is \"hello\" ==> True\n",
"\nSo there is one and only one place in memory for every Python string?\n\nNo, only ones the interpreter has decided to optimise, which is a decision based on a policy that isn't part of the language specification and which may change in different CPython versions.\neg. on my install (2.6.2 Linux):\n>>> 'X'*10 is 'X'*10\nTrue\n>>> 'X'*30 is 'X'*30\nFalse\n\nsimilarly for ints:\n>>> 2**8 is 2**8\nTrue\n>>> 2**9 is 2**9\nFalse\n\nSo don't rely on 'string' is 'string': even just looking at the C implementation it isn't safe.\n",
"Literal strings are probably grouped based on their hash or something similar. Two of the same literal strings will be stored in the same memory, and any references both refer to that.\n Memory Code\n-------\n| myLine = \"hello\"\n| /\n|hello <\n| \\\n| myLine = \"hello\"\n-------\n\n",
"The is operator returns true if both arguments are the same object. Your result is a consequence of this, and the quoted bit. \nIn the case of string literals, these are interned, meaning they are compared to known strings. If an identical string is already known, the literal takes that value, instead of an alternative one. Thus, they become the same object, and the expression is true.\n",
"The Python interpreter/compiler parses the string literals, i.e. the quoted list of characters. When it does this, it can detect \"I've seen this string before\", and use the same representation as last time. It can do this since it knows that strings defined in this way cannot be changed.\n",
"Why is it strange. If the string is immutable it makes a lot of sense to only store it once. .NET has the same behavior. \n",
"I think if any two variables (not just strings) contain the same value, the value will be stored only once not twice and both the variables will point to the same location. This saves memory.\n"
] |
[
93,
65,
13,
6,
2,
1,
0
] |
[] |
[] |
[
"identity",
"object_comparison",
"python",
"string_comparison"
] |
stackoverflow_0001392433_identity_object_comparison_python_string_comparison.txt
|
Q:
'Snippit' based django semi-CMS
I remember reading somewhere on the internets about a half-assed tiny django CMS app, which was basically built on 'snippets' of text.
The idea was, that in the admin, you make a snippet (say a description of a product), give it a name (such as 'google_desc') and call it in a template with something like {% snippet google_desc %} and bam!
I think it was this guy that made it, but im not quite sure.
Would anyone know where I could find this piece of awesomeness?
Edit: I was after an app or something to plug into my project. Not, an existing website/service.
Edit 2: insin got it. I was after django-chunks
A:
Sounds like django-chunks to me.
A:
Are you talking about Django Simplepages? Official site here.
Another project that sounds similar to what you're after is django-page-cms.
A:
If you need some more features, just check out django-blocks (http://code.google.com/p/django-blocks/). It has multi-language menus, flatpages and even a simple shopping cart!
|
'Snippit' based django semi-CMS
|
I remember reading somewhere on the internets about a half-assed tiny django CMS app, which was basically built on 'snippets' of text.
The idea was, that in the admin, you make a snippet (say a description of a product), give it a name (such as 'google_desc') and call it in a template with something like {% snippet google_desc %} and bam!
I think it was this guy that made it, but im not quite sure.
Would anyone know where I could find this piece of awesomeness?
Edit: I was after an app or something to plug into my project. Not, an existing website/service.
Edit 2: insin got it. I was after django-chunks
|
[
"Sounds like django-chunks to me.\n",
"Are you talking about Django Simplepages? Official site here.\nAnother project that sounds similar to what you're after is django-page-cms.\n",
"If you need some more features just checkout django-blocks (http://code.google.com/p/django-blocks/). Has multi-language Menu, Flatpages and even has a simple Shopping Cart!!\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"content_management_system",
"django",
"python"
] |
stackoverflow_0000257655_content_management_system_django_python.txt
|
Q:
Problem with Python interpreter in Eclipse
When trying to set the interpreter for Python in Eclipse by choosing the executable, clicking OK displays "An error has occurred." Does the interpreter name matter?
A:
I had a similar problem with this on Mac OS X. My problem was that I had a space in Eclipse's application path, e.g. "/Applications/eclipse 3.3/Eclipse".
I changed the folder name to "/Applications/eclipse3.3" and it fixed it.
A:
Testing/running your apps on the command line is the safest bet, especially when writing threaded applications (you can kill your threadlocked program without killing eclipse)
|
Problem with Python interpreter in Eclipse
|
When trying to set the interpreter for Python in Eclipse by choosing the executable, clicking OK displays "An error has occurred." Does the interpreter name matter?
|
[
"I had a similar problems with this on Mac OS X. My problem was that I had a space in Eclipse's application path, e.g. \"/Applications/eclipse 3.3/Eclipse\".\nI changed the folder name to \"/Applications/eclipse3.3\" and it fixed it.\n",
"Testing/running your apps on the command line is the safest bet, especially when writing threaded applications (you can kill your threadlocked program without killing eclipse)\n"
] |
[
1,
0
] |
[] |
[] |
[
"eclipse",
"interpreter",
"python",
"ubuntu"
] |
stackoverflow_0000976506_eclipse_interpreter_python_ubuntu.txt
|
Q:
Call a function from a running process
My program starts a subprocess, which has to send some kind of signal to the parent after initialization. It would be perfect if I could set up a handler in the parent which is called when this signal is sent. Is there any way to do it?
Alendit
A:
If you are using Python 2.6, you can use the multiprocessing module from the standard library, in particular pipes and queues. Simple example from the docs:
from multiprocessing import Process, Pipe
def f(conn): #This code will be spawned as a new child process
conn.send([42, None, 'hello']) #The child process sends a msg to the pipe
conn.close()
if __name__ == '__main__':
parent_conn, child_conn = Pipe()
p = Process(target=f, args=(child_conn,)) # prepare to spawn the child
p.start() # spawn it
print parent_conn.recv() # prints "[42, None, 'hello']"
p.join() #wait for child to exit
If you are using Python 2.4 or 2.5, don't despair - a backport is available here.
A:
Parent code:
import signal
def my_callback(signal, frame):
print "Signal %d received" % (signal,)
signal.signal(signal.SIGUSR1, my_callback)
# start child
Child code:
import os
import signal
signal.kill(os.getppid(), signal.SIGUSR1)
Be careful with this form of IPC because it has its issues, e.g.:
In the original Unix systems, when a
handler that was established using
signal() was invoked by the
delivery of a signal, the disposition
of the signal would be reset to
SIG_DFL, and the system did not
block delivery of further instances
of the signal. System V also provides
these semantics for signal(). This
was bad because the signal might be
delivered again before the handler
had a chance to reestablish itself.
Furthermore, rapid deliveries of the
same signal could result in recursive
invocations of the handler.
I recommend reading the whole signal(2) man page.
A:
You can use the signal module from the Python standard library to register a signal handler. The subprocess would then use normal signal sending mechanisms.
|
Call a function from a running process
|
My program starts a subprocess, which has to send some kind of signal to the parent after initialization. It would be perfect if I could set up a handler in the parent which is called when this signal is sent. Is there any way to do it?
Alendit
|
[
"If you are using Python 2.6, you can use the multiprocessing module from the standard library, in particular pipes and queues. Simple example from the docs:\nfrom multiprocessing import Process, Pipe\n\ndef f(conn): #This code will be spawned as a new child process\n conn.send([42, None, 'hello']) #The child process sends a msg to the pipe\n conn.close()\n\nif __name__ == '__main__':\n parent_conn, child_conn = Pipe()\n p = Process(target=f, args=(child_conn,)) # prepare to spawn the child\n p.start() # spawn it\n print parent_conn.recv() # prints \"[42, None, 'hello']\"\n p.join() #wait for child to exit\n\nIf you are using Python 2.4 or 2.5, don't despair - a backport is available here.\n",
"Parent code:\n\nimport signal\n\ndef my_callback(signal, frame):\n print \"Signal %d received\" % (signal,)\n\nsignal.signal(signal.SIGUSR1, my_callback)\n# start child\n\nChild code:\n\nimport os\nimport signal\n\nsignal.kill(os.getppid(), signal.SIGUSR1)\n\nBe careful with this form of IPC because it has its issues, e.g.:\n\nIn the original Unix systems, when a\n handler that was established using\n signal() was invoked by the\n delivery of a signal, the disposition\n of the signal would be reset to\n SIG_DFL, and the system did not \n block delivery of further instances\n of the signal. System V also provides\n these semantics for signal(). This\n was bad because the signal might be\n delivered again before the handler\n had a chance to reestablish itself.\n Furthermore, rapid deliveries of the\n same signal could result in recursive\n invocations of the handler.\n\nI recommend reading the whole signal(2) man page.\n",
"You can use the signal module from the Python standard library to register a signal handler. The subprocess would then use normal signal sending mechanisms.\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"ipc",
"python",
"subprocess"
] |
stackoverflow_0001393242_ipc_python_subprocess.txt
|
Q:
using RSPython in MacOSX
I am trying to install the
R/SPlus - Python Interface (RSPython) on my Mac OS X 10.4.11 with R version 2.7.2 (2008-08-25) and python 2.6.2 from fink.
The routine:
sudo R CMD INSTALL -c RSPython_0.7-1.tar.gz
produced this error message:
* Installing to library '/Library/Frameworks/R.framework/Resources/library'
* Installing *source* package 'RSPython' ...
checking for python... /sw/bin/python
Python version 2.6
Using threads
checking for gcc... gcc
checking for C compiler default output file name... configure: error: C compiler cannot create executables
See `config.log' for more details.
ERROR: configuration failed for package 'RSPython'
** Removing '/Library/Frameworks/R.framework/Versions/2.7/Resources/library/RSPython'
The config.log was not created on my system.
The contact e-mail address to the author does not work anymore, so I just hope somebody here tried the same already or can give me an alternative for running R routines in python.
Best regards,
Simon
A:
Try running R CMD CHECK RSPython_0.7-1.tar.gz
That should at least produce a bunch of logs in an RSPython.Rcheck folder
You might get some clues in there.
Update ---
If you can get one of the other packages to work I'd recommend it. On my system (R 2.9.1 using system python (2.6) in /usr/bin/python), install works but then RSPython fails to run due to problems inside its .First.lib function. I expect you would need to hack the sources considerably to get it to work.
A:
To debug this, simply unpack the tar file yourself (tar xzvf RSPython_0.7-1.tar.gz) and run ./configure in the directory created. You should get a config.log file that you can examine.
A:
I found the rpy and rpy2 packages and am going to try those as an alternative to the older RSPython.
rpy2 is included in fink's unstable distribution ... well, and it just works fine.
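A minimal sketch of calling R routines from Python with rpy2 (the R expressions are arbitrary examples):
import rpy2.robjects as robjects

pi = robjects.r['pi']             # look up an R object by name
print pi[0]                       # 3.14159...

result = robjects.r('sum(1:10)')  # evaluate arbitrary R code
print result[0]                   # 55.0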
|
using RSPython in MacOSX
|
I am trying to install the
R/SPlus - Python Interface (RSPython) on my Mac OS X 10.4.11 with R version 2.7.2 (2008-08-25) and python 2.6.2 from fink.
The routine:
sudo R CMD INSTALL -c RSPython_0.7-1.tar.gz
produced this error message:
* Installing to library '/Library/Frameworks/R.framework/Resources/library'
* Installing *source* package 'RSPython' ...
checking for python... /sw/bin/python
Python version 2.6
Using threads
checking for gcc... gcc
checking for C compiler default output file name... configure: error: C compiler cannot create executables
See `config.log' for more details.
ERROR: configuration failed for package 'RSPython'
** Removing '/Library/Frameworks/R.framework/Versions/2.7/Resources/library/RSPython'
The config.log was not created on my system.
The contact e-mail address to the author does not work anymore, so I just hope somebody here tried the same already or can give me an alternative for running R routines in python.
Best regards,
Simon
|
[
"Try running R CMD CHECK RSPython_0.7-1.tar.gz\nThat should produce at least produce bunch of logs in a RSPython.Rcheck folder\nYou might get some clues in there. \nUpdate --- \nIf you can get one of the other packages to work I'd recommend it. On my system (R 2.9.1 using system python (2.6) in /usr/bin/python), install works but then RSPython fails to run due to problems inside its .First.lib function. I expect you would need to hack the sources considerably to get it to work.\n",
"To debug this, simply unpack the tar file yourself (tar xzvf RSPython_0.7-1.tar.gz) and run ./configure in the directory created. You should get a config.log file that you can examine.\n",
"I found the rpy and rpy2 packages and going to try those as an alternative to the older RSPython.\nrpy2 is includes in fink's unstable distribution ... well, and it just works fine.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"installation",
"python",
"r"
] |
stackoverflow_0001392868_installation_python_r.txt
|
Q:
using soaplib to connect to remote SOAP server lacking definition
I am looking at the soaplib python module (it comes with standard ubuntu 9.04). I have used xmlrpclib extensively in the last years, but now I am curious about SOAP. Writing servers with soaplib is acceptably easy, so I assume writing clients should be even easier.
In my impatience I can't find a way to make use of introspection. Do I really need to describe each and every method in the server in order to define the client ( http://trac.optio.webfactional.com/wiki/Client )?
I find this difficult to believe, but I can't find any significant web page holding my three search terms: python soap introspect...
So the question is: can I use Python soaplib to access just any remote web service of which I only know the URL? And how do I do that?
Am I missing something, or is the library?
A:
If I understand your question correctly, you would like to generate client code for a given webservice without defining what methods etc. are available on that service in your own code directly? I.e., you would like to introspect the service and generate a client automatically.
If this is the case then the answer is that you need to use the soaplib trunk. Specifically you will be interested in a recently contributed script that allows the generation of Python classes to act as a client to a given service as described in a WSDL file. There are scripts in soaplib to allow the generation of classes both in a static manner (where a .py module is generated and written to disk) and in a dynamic manner where the classes exist only at runtime in your program.
Hope that helps.
|
using soaplib to connect to remote SOAP server lacking definition
|
I am looking at the soaplib python module (it comes with standard ubuntu 9.04). I have used xmlrpclib extensively in the last years, but now I am curious about SOAP. Writing servers with soaplib is acceptably easy, so I assume writing clients should be even easier.
In my impatience I can't find a way to make use of introspection. Do I really need to describe each and every method in the server in order to define the client ( http://trac.optio.webfactional.com/wiki/Client )?
I find this difficult to believe, but I can't find any significant web page holding my three search terms: python soap introspect...
So the question is: can I use Python soaplib to access just any remote web service of which I only know the URL? And how do I do that?
Am I missing something, or is the library?
|
[
"If I understand your question correctly, you would like to generate client code for a given webservice without defining what methods etc are availible on that service in your own code directly? IE: you would like to introspect the service and generate client automatically.\nIf this is the case then the answer is that you need to use the soaplib trunk. Specifically you will be interested in a recently contributed script that allows the generation of Python classes to act as a client to a given service as described in a WSDL file. There are scripts in soaplib to allow the generation of classes both in a static manner (where a .py module is generated and written to disk) and in a dynamic manner where the classes exist only at runtime in your program.\nHope that helps.\n"
] |
[
1
] |
[] |
[] |
[
"introspection",
"python",
"soap"
] |
stackoverflow_0001373738_introspection_python_soap.txt
|
Q:
Change Vim command to work in MS-Windows? Use make to check python syntax
:make provides a list of errors which can be navigated through in order to fix.
The problem is that this script only works in Unix based OSes.
autocmd BufRead *.py set makeprg=python\ -c\ \"import\ py_compile,sys;\ sys.stderr=sys.stdout;\ py_compile.compile(r'%')\"
autocmd BufRead *.py set efm=%C\ %.%#,%A\ \ File\ \"%f\"\\,\ line\ %l%.%#,%Z%[%^\ ]%\\@=%m
Could anyone help me make it work or provide the equivalent for Windows XP?
A:
For the first part to work, you must first add python to your PATH variable.
http://vlaurie.com/computers2/Articles/environment.htm
python.exe should be placed in:
c:\PythonXX\bin
But I'm not that sure, check it out before adding that one.
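For reference, the check that the makeprg line invokes boils down to this standalone snippet (the filename is a placeholder for the % register Vim fills in):
import py_compile, sys

sys.stderr = sys.stdout             # route compile errors to stdout for :make
py_compile.compile(r'myscript.py')  # prints a traceback on syntax errors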
|
Change Vim command to work in MS-Windows? Use make to check python syntax
|
:make provides a list of errors which can be navigated through in order to fix.
The problem is that this script only works in Unix based OSes.
autocmd BufRead *.py set makeprg=python\ -c\ \"import\ py_compile,sys;\ sys.stderr=sys.stdout;\ py_compile.compile(r'%')\"
autocmd BufRead *.py set efm=%C\ %.%#,%A\ \ File\ \"%f\"\\,\ line\ %l%.%#,%Z%[%^\ ]%\\@=%m
Could anyone help me make it work or provide the equivalent for Windows XP?
|
[
"For the first part to work, you must first add python to your PATH variable.\nhttp://vlaurie.com/computers2/Articles/environment.htm\npython.exe should be placed in:\nc:\\PythonXX\\bin\n\nBut I'm not that sure, check it out before adding that one.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"vim",
"windows",
"windows_xp"
] |
stackoverflow_0001394058_python_vim_windows_windows_xp.txt
|
Q:
In Python, what is the best way to execute a local Linux command stored in a string?
In Python, what is the simplest way to execute a local Linux command stored in a string while catching any potential exceptions that are thrown and logging the output of the Linux command and any caught errors to a common log file?
String logfile = "/dev/log"
String cmd = "ls"
#try
#execute cmd sending output to >> logfile
#catch sending caught error to >> logfile
A:
Using the subprocess module is the correct way to do it:
import subprocess
logfile = open("/dev/log", "w")
output, error = subprocess.Popen(
["ls"], stdout=subprocess.PIPE,
stderr=subprocess.PIPE).communicate()
logfile.write(output)
logfile.close()
EDIT
subprocess expects the command as a list, so to run "ls -l" you need to do this:
output, error = subprocess.Popen(
["ls", "-l"], stdout=subprocess.PIPE,
stderr=subprocess.PIPE).communicate()
To generalize it a little bit.
command = "ls -la"
output, error = subprocess.Popen(
command.split(' '), stdout=subprocess.PIPE,
stderr=subprocess.PIPE).communicate()
Alternatively, you can do this; the output will go directly to the logfile, so the output variable will be empty in this case:
import subprocess
logfile = open("/dev/log", "w")
output, error = subprocess.Popen(
["ls"], stdout=logfile,
stderr=subprocess.PIPE).communicate()
A:
subprocess is the best module for this.
You have different ways to run your scripts: in separate threads, or in the same thread, waiting for each command to finish. Check the whole docs, which are more than useful:
http://docs.python.org/library/subprocess.html
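To tie both answers back to the exception-catching part of the question, here is a sketch that logs both the command's stderr and any failure to start it (the log path is a placeholder; the question's /dev/log is a socket, not a regular file):
import subprocess

logfile = open("command.log", "w")
try:
    process = subprocess.Popen(
        ["ls", "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, error = process.communicate()
    logfile.write(output)
    logfile.write(error)
except OSError, e:
    # Raised when the command itself cannot be started.
    logfile.write(str(e))
logfile.close()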
|
In Python, what is the best way to execute a local Linux command stored in a string?
|
In Python, what is the simplest way to execute a local Linux command stored in a string while catching any potential exceptions that are thrown and logging the output of the Linux command and any caught errors to a common log file?
String logfile = "/dev/log"
String cmd = "ls"
#try
#execute cmd sending output to >> logfile
#catch sending caught error to >> logfile
|
[
"Using the subprocess module is the correct way to do it:\nimport subprocess\nlogfile = open(\"/dev/log\", \"w\")\noutput, error = subprocess.Popen(\n [\"ls\"], stdout=subprocess.PIPE,\n stderr=subprocess.PIPE).communicate()\nlogfile.write(output)\nlogfile.close()\n\nEDIT\nsubprocess expects the commands as a list so to run \"ls -l\" you need to do this:\noutput, error = subprocess.Popen(\n [\"ls\", \"-l\"], stdout=subprocess.PIPE,\n stderr=subprocess.PIPE).communicate()\n\nTo generalize it a little bit.\ncommand = \"ls -la\"\noutput, error = subprocess.Popen(\n command.split(' '), stdout=subprocess.PIPE,\n stderr=subprocess.PIPE).communicate()\n\nAlternately you can do this, the output will go directly to the logfile so the output variable will be empty in this case:\nimport subprocess\nlogfile = open(\"/dev/log\", \"w\")\noutput, error = subprocess.Popen(\n [\"ls\"], stdout=logfile,\n stderr=subprocess.PIPE).communicate()\n\n",
"subprocess is the best module for this.\nYou have different ways to run you scripts, in separate threads, or in the same waiting for each command to finish. Check the whole docs that are more than useful:\nhttp://docs.python.org/library/subprocess.html\n"
] |
[
16,
0
] |
[
"Check out commands module.\n import commands\n f = open('logfile.log', 'w')\n try:\n exe = 'ls'\n content = commands.getoutput(exe)\n f.write(content)\n except Exception, text:\n f.write(text)\n f.close()\n\nSpecifying Exception as an exception class after except will tell Python to catch all possible exceptions.\n"
] |
[
-3
] |
[
"python"
] |
stackoverflow_0001394198_python.txt
|
Q:
Converting vb.net code to python for educational purposes. Output of a numeric value not occurring
My day job is mainly coding in vb.net, so I am very familiar with it. While doing my first few dozen project euler problems, I used vb.net just to get the hang of the problem styles. Now I'd like to use project euler to help me learn a new language and have been running a couple in python. However. I've hit a snag.
The following code will print the largest prime factor of a given number:
Protected Function isPrime(ByVal n As Long) As Boolean
If n = 2 Then
Return True
End If
If n Mod 2 = 0 Then
Return False
End If
Dim maxFactor As Long = Math.Sqrt(n)
Dim i As Integer = 3
While i <= maxFactor
If n Mod i = 0 Then
Return False
End If
i = i + 2
End While
Return True
End Function
Protected Sub LargestPrimeFactor(ByVal n As Long)
Dim factor As Long = Math.Sqrt(n) ''#largest factor of n will be sqrt(n)
While factor > 2
If (n Mod factor) = 0 Then
If isPrime(factor) Then
Me.lblAnswer.Text = factor
factor = 0
End If
End If
factor = factor - 1
End While
End Sub
This vb.net code runs perfectly. The equivalent in Python, I believe, is:
from math import sqrt
def IsPrime(n):
if n==2: return true
if not n % 2: return false
maxFactor = sqrt(n)
i = 3
while i <= maxFactor:
if not n % i: return false
i += 2
return true
n = 600851475143
factor = sqrt(n)
while factor > 2:
if not n % factor:
if IsPrime(factor):
print factor
factor = 0
factor -= 1
However, the factor never ends up printing. Am I missing a nuance of python? Where might I have gone wrong? Thanks :)
A:
Previous solutions generate the wrong answer.
The VB.net code operates on integers, while your Python code operates on floats, and this apparently fails somewhere.
As mentioned before, keyword capitalization (True/False).
You can use foo % bar == 0 with no problem.
You missed one level of indentation in the "factor = 0" line.
This code produces the correct answer, 6857:
from math import sqrt
def IsPrime(n):
if n==2: return True
if n % 2 == 0: return False
maxFactor = long(sqrt(n))
i = 3
while i <= maxFactor:
if n % i == 0: return False
i += 2
return True
n = 600851475143
factor = long(sqrt(n))
while factor > 2:
if n % factor == 0:
if IsPrime(factor):
print factor
factor = 0
factor -= 1
|
Converting vb.net code to python for educational purposes. Output of a numeric value not occurring
|
My day job is mainly coding in vb.net, so I am very familiar with it. While doing my first few dozen project euler problems, I used vb.net just to get the hang of the problem styles. Now I'd like to use project euler to help me learn a new language and have been running a couple in python. However. I've hit a snag.
The following code will print the largest prime factor of a given number:
Protected Function isPrime(ByVal n As Long) As Boolean
If n = 2 Then
Return True
End If
If n Mod 2 = 0 Then
Return False
End If
Dim maxFactor As Long = Math.Sqrt(n)
Dim i As Integer = 3
While i <= maxFactor
If n Mod i = 0 Then
Return False
End If
i = i + 2
End While
Return True
End Function
Protected Sub LargestPrimeFactor(ByVal n As Long)
Dim factor As Long = Math.Sqrt(n) ''#largest factor of n will be sqrt(n)
While factor > 2
If (n Mod factor) = 0 Then
If isPrime(factor) Then
Me.lblAnswer.Text = factor
factor = 0
End If
End If
factor = factor - 1
End While
End Sub
This vb.net code runs perfectly. The equivalent in Python, I believe, is:
from math import sqrt
def IsPrime(n):
if n==2: return true
if not n % 2: return false
maxFactor = sqrt(n)
i = 3
while i <= maxFactor:
if not n % i: return false
i += 2
return true
n = 600851475143
factor = sqrt(n)
while factor > 2:
if not n % factor:
if IsPrime(factor):
print factor
factor = 0
factor -= 1
However, the factor never ends up printing. Am I missing a nuance of python? Where might I have gone wrong? Thanks :)
|
[
"Previous solutions generate wrong answer.\n\nVB.net code operates on integers, and your Python code operates on floats, and this apparently fails somewhere.\nAs mentioned before, keyword capitalization (True/False).\nYou can use foo % bar == 0 with no problem.\nYou missed one level of indentation in \"factor = 0\" line.\n\nThis code produces the correct answer, 6857:\nfrom math import sqrt\n\ndef IsPrime(n):\n if n==2: return True\n if n % 2 == 0: return False\n\n maxFactor = long(sqrt(n))\n i = 3\n while i <= maxFactor:\n if n % i == 0: return False\n i += 2\n return True\n\nn = 600851475143\nfactor = long(sqrt(n))\nwhile factor > 2:\n if n % factor == 0:\n if IsPrime(factor):\n print factor\n factor = 0\n factor -= 1\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"vb.net"
] |
stackoverflow_0001394737_python_vb.net.txt
|
Q:
configuration filename convention
Is there a general naming convention for configuration files for a simple python program?
Thanks,
Udi
A:
A convention? Mine would be, if my program was called "Bob", simply "bob.cfg".
I have to admit, I didn't really suffer any angst in coming up with that convention. Maybe I've been here too long :-)
Of course, if your configuration information is of a specific format (e.g., XML), you could consider "bob.xml". But, really, I think ".cfg" just about sums up the intent as much as any convention could.
And, just to state the bleeding obvious, don't call your file "bob.cfg" if your program is actually called "George".
Please don't take offense, I'm not really taking the mickey, just answering an interesting question in the tone of my strange sense of humor. You wouldn't be the first to misunderstand my brand of humor (my wife, for instance, despairs of it most days).
A:
I am not sure if you mean the file basename, or if your question also includes where to put the configuration file. Anyway, on location:
If it is a linux application, you should follow the XDG Base Directory specification (XDG -> Cross-desktop).
It says you should put your configuration files inside a folder named after your program, in $XDG_CONFIG_HOME/programname/ . XDG_CONFIG_HOME is normally ~/.config
A:
.conf is a nice extension too. But since configuration files are mostly application-specific, it does not really matter what extension they have as long as they are consistent (i.e., do not use .conf with .cfg in the same application).
A:
Django uses settings.py. I like that a lot. It's very clear.
A:
I don't think there's any convention for this. You may just use a file with an extension like .ini, .cfg, .conf, .config, .pref or anything you like.
|
configuration filename convention
|
Is there a general naming convention for configuration files for a simple python program?
Thanks,
Udi
|
[
"A convention? Mine would be, if my program was called \"Bob\", simply \"bob.cfg\".\nI have to admit, I didn't really suffer any angst in coming up with that convention. Maybe I've been here too long :-)\nOf course, if your configuration information is of a specific format (e.g., XML), you could consider \"bob.xml\". But, really, I think \".cfg\" just about sums up the intent as much as any convention could.\nAnd, just to state the bleeding obvious, don't call your file \"bob.cfg\" if your program is actually called \"George\".\nPlease don't take offense, I'm not really taking the mickey, just answering an interesting question in the tone of my strange sense of humor. You wouldn't be the first to misunderstand my brand of humor (my wife, for instance, despairs of it most days).\n",
"I am not sure if you mean the file basename, of if your question also includes where to put the configuration file. Anyway, on location:\nIf it is a linux application, you should follow the XDG Base Directory specification (XDG -> Cross-desktop).\nIt says you should put your configuration files inside a folder named after your program, in $XDG_CONFIG_HOME/programname/ . XDG_CONFIG_HOME is normally ~/.config\n",
".conf is a nice extension too. But since mostly configuration files are application specific, it does not really matter what extension they have as long as they are consistent (i.e do not use .conf with .cfg in the same application).\n",
"Django uses settings.py I like that a lot. It's very clear.\n",
"I don't think there's any convention for this. You may just use file with extension like: .ini, .cfg, .conf, .config, .pref or anything you like.\n"
] |
[
4,
4,
4,
1,
0
] |
[] |
[] |
[
"configuration",
"naming_conventions",
"python"
] |
stackoverflow_0001393731_configuration_naming_conventions_python.txt
|
Q:
Django: How to create a leaderboard
Let's say I have around 1,000,000 users. I want to find out what position any given user is in, and which users are around him. A user can get a new achievement at any time, and if he could see his standing update, that would be wonderful.
Honestly, every way I think of doing this would be horrendously expensive in time and/or memory. Ideas? My closest idea so far is to order the users offline and build percentile buckets, but that can't show a user his exact position.
Some code if that helps you django people :
class Alias(models.Model) :
awards = models.ManyToManyField('Award', through='Achiever')
@property
def points(self) :
p = cache.get('alias_points_' + str(self.id))
if p is not None : return p
points = 0
for a in self.achiever_set.all() :
points += a.award.points * a.count
cache.set('alias_points_' + str(self.id), points, 60 * 60) # 1 hour
return points
class Award(MyBaseModel):
owner_points = models.IntegerField(help_text="A non-normalized point value. Very subjective but try to be consistent. Should be proporional. 2x points = 2x effort (or skill)")
true_points = models.FloatField(help_text="The true value of this award. Recalculated with a cron job. Based on number of people who won it", editable=False, null=True)
@property
def points(self) :
if self.true_points :
# blend true_points into real points over 30 days
age = datetime.now() - self.created
blend_days = 30
if age > timedelta(days=blend_days) :
age = timedelta(days=blend_days)
num_days = 1.0 * age.days / blend_days
r = self.true_points * num_days + self.owner_points * (1 - num_days)
return int(r * 10) / 10.0
else :
return self.owner_points
class Achiever(MyBaseModel):
award = models.ForeignKey(Award)
alias = models.ForeignKey(Alias)
count = models.IntegerField(default=1)
A:
I think Counterstrike solves this by requiring users to meet a minimum threshold to become ranked--you only need to accurately sort the top 10% or whatever.
If you want to sort everyone, consider that you don't need to sort them perfectly: sort them to 2 significant figures. With 1M users you could update the leaderboard for the top 100 users in real time, the next 1000 users to the nearest 10, then the masses to the nearest 1% or 10%. You won't jump from place 500,000 to place 99 in one round.
Its meaningless to get the 10 user context above and below place 500,000--the ordering of the masses will be incredibly jittery from round to round due to the exponential distribution.
Edit: Take a look at the SO leaderboard. Now go to page 500 out of 2500 (roughly 20th percentile). Is there any point to telling the people with rep '157' that the 10 people on either side of them also have rep '157'? You'll jump 20 places either way if your rep goes up or down a point. More extreme, is that right now the bottom 1056 pages (out of 2538), or the bottom 42% of users, are tied with rep 1. you get one more point, and you jumped up 1055 pages. Which is roughly a 37,000 increase in rank. It might be cool to tell them "you can beat 37k people if you get one more point!" but does it matter how many significant figures the 37k number has?
There's no value in knowing your peers on a ladder until you're already at the top, because anywhere but the top, there's an overwhelming number of them.
A:
One million is not so much; I would try it the easy way first. If the points property is the thing you are sorting on, it needs to be a database column. Then you can just do a count of points greater than the person in question to get the rank. To get other people near a person in question, you do a query of people with higher points, sort ascending, and limit it to the number of people you want.
The tricky thing will be calculating the points on save. You need to use the current time as a bonus multiplier. One point now needs to turn into a number that is less than 1 point 5 days from now. If your users frequently gain points you will need to create a queue to handle the load.
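A minimal sketch of the counting approach, assuming points has been denormalized from the property above into a real IntegerField on Alias (the function names are made up):
def rank(alias):
    # Rank = number of aliases with strictly more points, plus one.
    return Alias.objects.filter(points__gt=alias.points).count() + 1

def neighbours(alias, n=5):
    # The n users just ahead of and just behind the given alias.
    above = Alias.objects.filter(points__gt=alias.points).order_by('points')[:n]
    below = Alias.objects.filter(points__lt=alias.points).order_by('-points')[:n]
    return list(above), list(below)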
|
Django: How to create a leaderboard
|
Let's say I have around 1,000,000 users. I want to find out what position any given user is in, and which users are around him. A user can get a new achievement at any time, and if he could see his standing update, that would be wonderful.
Honestly, every way I think of doing this would be horrendously expensive in time and/or memory. Ideas? My closest idea so far is to order the users offline and build percentile buckets, but that can't show a user his exact position.
Some code if that helps you django people :
class Alias(models.Model) :
awards = models.ManyToManyField('Award', through='Achiever')
@property
def points(self) :
p = cache.get('alias_points_' + str(self.id))
if p is not None : return p
points = 0
for a in self.achiever_set.all() :
points += a.award.points * a.count
cache.set('alias_points_' + str(self.id), points, 60 * 60) # 1 hour
return points
class Award(MyBaseModel):
owner_points = models.IntegerField(help_text="A non-normalized point value. Very subjective but try to be consistent. Should be proporional. 2x points = 2x effort (or skill)")
true_points = models.FloatField(help_text="The true value of this award. Recalculated with a cron job. Based on number of people who won it", editable=False, null=True)
@property
def points(self) :
if self.true_points :
# blend true_points into real points over 30 days
age = datetime.now() - self.created
blend_days = 30
if age > timedelta(days=blend_days) :
age = timedelta(days=blend_days)
num_days = 1.0 * age.days / blend_days
r = self.true_points * num_days + self.owner_points * (1 - num_days)
return int(r * 10) / 10.0
else :
return self.owner_points
class Achiever(MyBaseModel):
award = models.ForeignKey(Award)
alias = models.ForeignKey(Alias)
count = models.IntegerField(default=1)
|
[
"I think Counterstrike solves this by requiring users to meet a minimum threshold to become ranked--you only need to accurately sort the top 10% or whatever.\nIf you want to sort everyone, consider that you don't need to sort them perfectly: sort them to 2 significant figures. With 1M users you could update the leaderboard for the top 100 users in real time, the next 1000 users to the nearest 10, then the masses to the nearest 1% or 10%. You won't jump from place 500,000 to place 99 in one round.\nIts meaningless to get the 10 user context above and below place 500,000--the ordering of the masses will be incredibly jittery from round to round due to the exponential distribution.\nEdit: Take a look at the SO leaderboard. Now go to page 500 out of 2500 (roughly 20th percentile). Is there any point to telling the people with rep '157' that the 10 people on either side of them also have rep '157'? You'll jump 20 places either way if your rep goes up or down a point. More extreme, is that right now the bottom 1056 pages (out of 2538), or the bottom 42% of users, are tied with rep 1. you get one more point, and you jumped up 1055 pages. Which is roughly a 37,000 increase in rank. It might be cool to tell them \"you can beat 37k people if you get one more point!\" but does it matter how many significant figures the 37k number has?\nThere's no value in knowing your peers on a ladder until you're already at the top, because anywhere but the top, there's an overwhelming number of them.\n",
"One million is not so much, I would try it the easy way first. If the points property is the thing you are sorting on that needs to be a database column. Then you can just do a count of points greater than the person in question to get the rank. To get other people near a person in question you do a query of people with higher points and sort ascending limit it to the number of people you want.\nThe tricky thing will be calculating the points on save. You need to use the current time as a bonus multiplier. One point now needs to turn into a number that is less than 1 point 5 days from now. If your users frequently gain points you will need to create a queue to handle the load.\n"
] |
[
4,
0
] |
[] |
[] |
[
"django",
"leaderboard",
"python",
"sql"
] |
stackoverflow_0001391601_django_leaderboard_python_sql.txt
|
Q:
Trouble with simple Python Code
I'm learning Python, and I'm having trouble with this simple piece of code:
a = raw_input('Enter a number: ')
if a > 0:
print 'Positive'
elif a == 0:
print 'Null'
elif a < 0:
print 'Negative'
It works great, apart from the fact that it always prints 'Positive', no matter if I enter a positive or negative number or zero. I'm guessing there's a simple solution, but I can't find it ;-)
Thanks in advance
A:
That's because a is a string as inputted. Use int() to convert it to an integer before doing numeric comparisons.
a = int(raw_input('Enter a number: '))
if a > 0:
print 'Positive'
elif a == 0:
print 'Null'
elif a < 0:
print 'Negative'
Alternatively, input() will do type conversion for you.
a = input('Enter a number: ')
A:
Because you are using raw_input you are getting the value as a String, which is always considered greater than 0 (even if the String is '-10')
Instead, try using input('Enter a number: ') and python will do the type conversion for you.
The final code would look like this:
a = input('Enter a number: ')
if a > 0:
print 'Positive'
elif a == 0:
print 'Null'
elif a < 0:
print 'Negative'
However, as a number of folks have pointed out, using input() may lead to an error because it actually interprets the python objects passed in.
A safer way to handle this can be to cast raw_input with the desired type, as in:
a = int( raw_input('Enter a number: '))
But beware, you will still need to do some error handling here to avoid trouble!
A:
Expanding on my comment on the accepted answer, here's how I would do it.
value = None
getting_input = True
while getting_input:
try:
value = int(raw_input('Gimme a number: '))
getting_input = False
except ValueError:
print "That's not a number... try again."
if value > 0:
print 'Positive'
elif value < 0:
print 'Negative'
else:
print 'Null'
A:
raw_input returns a string, so you need to convert a, which is a string, to an integer first: a = int(a)
A:
raw_input is stored as a string, not an integer.
Try using a = int(a) before performing comparisons.
A:
raw_input will return a string, not an integer. To convert it, try adding this line immediately after your raw_input statement:
a = int(a)
This will convert the string to an integer. You can crash it by giving it non-numeric data, though, so be careful.
|
Trouble with simple Python Code
|
I'm learning Python, and I'm having trouble with this simple piece of code:
a = raw_input('Enter a number: ')
if a > 0:
print 'Positive'
elif a == 0:
print 'Null'
elif a < 0:
print 'Negative'
It works great, apart from the fact that it always prints 'Positive', no matter if I enter a positive or negative number or zero. I'm guessing there's a simple solution, but I can't find it ;-)
Thanks in advance
|
[
"That's because a is a string as inputted. Use int() to convert it to an integer before doing numeric comparisons.\na = int(raw_input('Enter a number: '))\nif a > 0:\n print 'Positive'\nelif a == 0:\n print 'Null'\nelif a < 0:\n print 'Negative'\n\nAlternatively, input() will do type conversion for you.\na = input('Enter a number: ')\n\n",
"Because you are using raw_input you are getting the value as a String, which is always considered greater than 0 (even if the String is '-10')\nInstead, try using input('Enter a number: ') and python will do the type conversion for you.\nThe final code would look like this:\na = input('Enter a number: ')\nif a > 0:\n print 'Positive'\nelif a == 0:\n print 'Null'\nelif a < 0:\n print 'Negative'\n\nHowever, as a number of folks have pointed out, using input() may lead to an error because it actually interprets the python objects passed in.\nA safer way to handle this can be to cast raw_input with the desired type, as in:\na = int( raw_input('Enter a number: '))\n\nBut beware, you will still need to do some error handling here to avoid trouble!\n",
"Expanding on my comment on the accepted answer, here's how I would do it.\nvalue = None\ngetting_input = True\n\nwhile getting_input:\n try:\n value = int(raw_input('Gimme a number: '))\n getting_input = False\n except ValueError:\n print \"That's not a number... try again.\"\n\nif value > 0:\n print 'Positive'\nelif value < 0:\n print 'Negative'\nelse:\n print 'Null'\n\n",
"raw_input \n\nreturns a string so you need to convert a which is a string to an integer first: a = int(a)\n",
"raw_input is stored as a string, not an integer.\nTry using a = int(a) before performing comparisons. \n",
"raw input will return a string, not an integer. To convert it, try adding this line immediately after your raw_input statement:\na = int(a)\nThis will convert the string to an integer. You can crash it by giving it non-numeric data, though, so be careful.\n"
] |
[
7,
7,
7,
5,
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001395603_python.txt
|
Q:
OpenGL Picking with Pyglet
I'm trying to implement picking using Pyglet's OpenGL wrapper, but I'm having trouble converting a C tutorial to Python. Specifically the part below.
#define BUFSIZE 512
GLuint selectBuf[BUFSIZE]
void startPicking(int cursorX, int cursorY) {
GLint viewport[4];
glSelectBuffer(BUFSIZE,selectBuf);
glRenderMode(GL_SELECT);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glGetIntegerv(GL_VIEWPORT,viewport);
gluPickMatrix(cursorX,viewport[3]-cursorY,
5,5,viewport);
gluPerspective(45,ratio,0.1,1000);
glMatrixMode(GL_MODELVIEW);
glInitNames();
}
I'm not sure how to declare arrays of GLuint or GLint such that glSelectBuffer and gluPickMatrix work. Does anyone know how to do this in Python with Pyglet? Thanks.
A:
I haven't tried your particular example, but the normal way to declare arrays is in the ctypes documentation. Essentially you would create an array type like this:
FourGLints = GLint * 4
viewport = FourGLints(0, 1, 2, 3)
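Applied to the buffers in the C snippet, that looks roughly like this (a sketch; a current GL context, e.g. an open pyglet window, must exist before the gl* calls):
from pyglet.gl import (GLuint, GLint, glSelectBuffer,
                       glGetIntegerv, GL_VIEWPORT)

BUFSIZE = 512
select_buf = (GLuint * BUFSIZE)()  # GLuint selectBuf[BUFSIZE];
viewport = (GLint * 4)()           # GLint viewport[4];

glSelectBuffer(BUFSIZE, select_buf)
glGetIntegerv(GL_VIEWPORT, viewport)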
A:
I've had good luck with PyOpenGL.
http://pyopengl.sourceforge.net/
It's pretty straightforward, and using a C tutorial would be easier with it, I believe.
A:
Exactly what sort of trouble are you having? Pyglet's OpenGL implementation is a thin wrapper over the DLL and pretty much maps the C calls one-for-one. It's hard to imagine there would be any other library that could be better in terms of following a C tutorial.
For example, this introduction is pretty much identical to the C equivalent when it comes to the OpenGL calls:
from pyglet.gl import *
# Direct OpenGL commands to this window.
window = pyglet.window.Window()
@window.event
def on_draw():
glClear(GL_COLOR_BUFFER_BIT)
glLoadIdentity()
glBegin(GL_TRIANGLES)
glVertex2f(0, 0)
glVertex2f(window.width, 0)
glVertex2f(window.width, window.height)
glEnd()
pyglet.app.run()
|
OpenGL Picking with Pyglet
|
I'm trying to implement picking using Pyglet's OpenGL wrapper, but I'm having trouble converting a C tutorial to Python. Specifically the part below.
#define BUFSIZE 512
GLuint selectBuf[BUFSIZE]
void startPicking(int cursorX, int cursorY) {
GLint viewport[4];
glSelectBuffer(BUFSIZE,selectBuf);
glRenderMode(GL_SELECT);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glGetIntegerv(GL_VIEWPORT,viewport);
gluPickMatrix(cursorX,viewport[3]-cursorY,
5,5,viewport);
gluPerspective(45,ratio,0.1,1000);
glMatrixMode(GL_MODELVIEW);
glInitNames();
}
I'm not sure how to declare arrays of GLuint or GLint such that glSelectBuffer and gluPickMatrix work. Does anyone know how to do this in Python with Pyglet? Thanks.
|
[
"I haven't tried your particular example, but the normal way to declare arrays is in the ctypes documentation. Essentially you would create an array type like this:\nFourGLints = GLint * 4\nviewport = FourGLints(0, 1, 2, 3)\n\n",
"I've had good luck with PyOpenGL. \nhttp://pyopengl.sourceforge.net/\nIt's pretty straightforward, and using a C tutorial would be easier with it, I believe. \n",
"Exactly what sort of trouble are you having? Pyglet's OpenGL implementation is a thin wrapper over the DLL and pretty much maps the C calls one-for-one. It's hard to imagine there would be any other library that could be better in terms of following a C tutorial. \nFor example, this introduction is pretty much identical to the C equivalent when it comes to the OpenGL calls:\nfrom pyglet.gl import *\n\n# Direct OpenGL commands to this window.\nwindow = pyglet.window.Window()\n\[email protected]\ndef on_draw():\n glClear(GL_COLOR_BUFFER_BIT)\n glLoadIdentity()\n glBegin(GL_TRIANGLES)\n glVertex2f(0, 0)\n glVertex2f(window.width, 0)\n glVertex2f(window.width, window.height)\n glEnd()\n\npyglet.app.run()\n\n"
] |
[
5,
1,
1
] |
[] |
[] |
[
"opengl",
"picking",
"pyglet",
"python"
] |
stackoverflow_0001290270_opengl_picking_pyglet_python.txt
|
Q:
Efficient Python Data Storage (Abstract Data Types?)
Pardon the ambiguity in the title- I wasn't quite sure how to phrase my question.
Given a string:
blah = "There are three cats in the hat"
and the (I'm not quite sure which data structure to use for this) "userInfo":
cats -> ("tim", "1 infinite loop")
three -> ("sally", "123 fake st")
three -> ("tim", "1 infinite loop")
three cats -> ("john", "123 fake st")
four cats -> ("albert", "345 real road")
dogs -> ("tim", "1 infinite loop")
cats hat -> ("janet", NULL)
The proper output should be:
tim (since 'cats' exists)
sally (since 'three' exists)
tim (since 'three' exists)
john (since both 'three' and 'cats' exist)
janet (since both 'cats' and 'hat' exist somewhere in the string blah)
I want an efficient way of storing this data. There is a possibility for multiple 'three' strings that can be matched (i.e., 150 people will have that string.) Should I just have a list with all this data and duplicate the "keys"?
A:
Something like this?
class Content( object ):
def __init__( self, content, maps_to ):
self.content= content.split()
self.maps_to = maps_to
def matches( self, words ):
return all( c in words for c in self.content )
def __str__( self ):
return "%s -> %r" % ( " ".join(self.content), self.maps_to )
rules = [
Content('cats',("tim", "1 infinite loop")),
Content('three',("sally", "123 fake st")),
Content('three',("tim", "1 infinite loop")),
Content('three cats',("john", "123 fake st")),
Content('four cats',("albert", "345 real road")),
Content('dogs',("tim", "1 infinite loop")),
Content('cats hat', ("janet", None)),
]
blah = "There are three cats in the hat"
for r in rules:
if r.matches(blah.split()):
print r
Output
cats -> ('tim', '1 infinite loop')
three -> ('sally', '123 fake st')
three -> ('tim', '1 infinite loop')
three cats -> ('john', '123 fake st')
cats hat -> ('janet', None)
A:
I haven't got the slightest clue of what you actually are trying to do, but if you have a lot of data, and you need to store it, and you need to search in it, some sort of database with indexing capabilities seems to be the way to go.
ZODB, CouchDB or SQL is a matter of taste. I seriously doubt you need to care about efficiency in disk space as much as in speed for searching and lookups anyway.
A:
I'm not sure what exactly you're trying to do, but maybe you're looking for something like this:
userinfo = {
"tim": "1 infinite loop",
"sally": "123 fake st",
"john": "123 fake st",
"albert": "345 real road",
"janet": None
}
conditions = {
"cats": ["tim"],
"three": ["sally", "tim"],
"three cats": ["john"],
"four cats": ["albert"],
"dogs": ["tim"],
"cats hat": ["janet"]
}
for c in conditions:
if all_words_are_in_the_sentence(c):
for p in conditions[c]:
print p, "because of", c
print "additional info:", userinfo[p]
|
Efficient Python Data Storage (Abstract Data Types?)
|
Pardon the ambiguity in the title- I wasn't quite sure how to phrase my question.
Given a string:
blah = "There are three cats in the hat"
and the (I'm not quite sure which data structure to use for this) "userInfo":
cats -> ("tim", "1 infinite loop")
three -> ("sally", "123 fake st")
three -> ("tim", "1 infinite loop")
three cats -> ("john", "123 fake st")
four cats -> ("albert", "345 real road")
dogs -> ("tim", "1 infinite loop")
cats hat -> ("janet", NULL)
The proper output should be:
tim (since 'cats' exists)
sally (since 'three' exists)
tim (since 'three' exists)
john (since both 'three' and 'cats' exist)
janet (since both 'cats' and 'hat' exist somewhere in the string blah)
I want an efficient way of storing this data. There is a possibility for multiple 'three' strings that can be matched (i.e., 150 people will have that string.) Should I just have a list with all this data and duplicate the "keys"?
|
[
"Something like this?\nclass Content( object ):\n def __init__( self, content, maps_to ):\n self.content= content.split()\n self.maps_to = maps_to\n def matches( self, words ):\n return all( c in words for c in self.content )\n def __str__( self ):\n return \"%s -> %r\" % ( \" \".join(self.content), self.maps_to )\n\nrules = [\n Content('cats',(\"tim\", \"1 infinite loop\")),\n Content('three',(\"sally\", \"123 fake st\")),\n Content('three',(\"tim\", \"1 infinite loop\")),\n Content('three cats',(\"john\", \"123 fake st\")),\n Content('four cats',(\"albert\", \"345 real road\")),\n Content('dogs',(\"tim\", \"1 infinite loop\")),\n Content('cats hat', (\"janet\", None)),\n]\n\nblah = \"There are three cats in the hat\"\n\nfor r in rules:\n if r.matches(blah.split()):\n print r\n\nOutput\ncats -> ('tim', '1 infinite loop')\nthree -> ('sally', '123 fake st')\nthree -> ('tim', '1 infinite loop')\nthree cats -> ('john', '123 fake st')\ncats hat -> ('janet', None)\n\n",
"I haven't got the slightest clue of what you actually are trying to do, but if you have a lot of data, and you need to store it, and you need to search in it, some sort of database with indexing capabilities seems to be the way to go.\nZODB, CouchBD or SQL is a matter of taste. I seriously doubt you need to care about efficiency in disk space as much as in speed for searching and lookups anyway.\n",
"I'm not sure what exactly you're trying to do, but maybe you're looking for something like this:\nuserinfo = {\n \"tim\": \"1 infinite loop\",\n \"sally\": \"123 fake st\",\n \"john\": \"123 fake st\",\n \"albert\": \"345 real road\",\n \"janet\": None\n}\n\nconditions = {\n \"cats\": [\"tim\"],\n \"three\": [\"sally\", \"tim\"],\n \"three cats\": [\"john\"],\n \"four cats\": [\"albert\"],\n \"dogs\": [\"tim\"],\n \"cats hat\": [\"janet\"]\n}\n\nfor c in conditions:\n if all_words_are_in_the_sentence(c):\n for p in conditions[c]:\n print p, \"because of\", c\n print \"additional info:\", userinfo[p]\n\n"
] |
[
6,
1,
0
] |
[] |
[] |
[
"data_structures",
"python"
] |
stackoverflow_0001396241_data_structures_python.txt
|
Q:
Using CSV as a mutable database?
Yes, this is as stupid a situation as it sounds. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk?
A:
Don't walk, run to get a new host immediately. If your host won't even get you the most basic of free databases, it's time for a change. There are many fish in the sea.
At the very least I'd recommend an xml data store rather than a csv. My blog uses an xml data provider and I haven't had any issues with performance at all.
A:
Take a look at this: http://www.netpromi.com/kirbybase_python.html
A:
Keep calling on the help desk.
While you can use a CSV as a database, it's generally a bad idea. You would have to implement your own locking, searching, updating, and be very careful with how you write it out to make sure that it isn't erased in case of a power outage or other abnormal shutdown. There will be no transactions, no query language unless you write your own, etc.
A:
I couldn't imagine this ever being a good idea. The current mess I've inherited writes vital billing information to CSV and updates it after projects are complete. It runs horribly and thousands of dollars are missed a month. For the current restrictions that you have, I'd consider finding better hosting.
A:
You can probably use sqlite3 for a more real database. It's hard to imagine hosting that won't allow you to install it as a python module.
Don't even think of using CSV; your data will be corrupted and lost faster than you can say "s#&t"
A:
"Anyways, now, the question: is it possible to update values SQL-style in a CSV database?"
Technically, it's possible. However, it can be hard.
If both PHP and Python are writing the file, you'll need to use OS-level locking to assure that they don't overwrite each other. Each part of your system will have to lock the file, rewrite it from scratch with all the updates, and unlock the file.
This means that PHP and Python must load the entire file into memory before rewriting it.
There are a couple of ways to handle the OS locking.
Use the same file and actually use some OS lock module. Both processes have the file open at all times.
Write to a temp file and do a rename. This means each program must open and read the file for each transaction. Very safe and reliable. A little slow.
Or.
You can rearchitect it so that only Python writes the file. The front-end reads the file when it changes, and drops off little transaction files to create a work queue for Python. In this case, you don't have multiple writers -- you have one reader and one writer -- and life is much, much simpler.
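A sketch of the temp-file-and-rename option described above (rename is atomic on POSIX when source and target are on the same filesystem, so readers never see a half-written file):
import csv
import os
import tempfile

def rewrite_csv(path, rows):
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    f = os.fdopen(fd, 'w')
    csv.writer(f).writerows(rows)
    f.close()
    os.rename(tmp, path)  # atomically replaces the old file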
A:
I'd keep calling the help desk. You don't want to use CSV for data if it's relational at all. It's going to be a nightmare.
A:
I agree. Tell them that 5 random strangers agree that you being forced into a corner to use CSV is absurd and unacceptable.
A:
If I understand you correctly: you need to access the same database from both python and php, and you're screwed because you can only use mysql from php, and only sqlite from python?
Could you further explain this? Maybe you could use xml-rpc or plain http requests with xml/json/... to get the php program to communicate with the python program (or the other way around?), so that only one of them directly accesses the db.
If this is not the case, I'm not really sure what the problem is.
A:
It's technically possible. For example, Perl has DBD::CSV that provides a driver that runs SQL queries on the CSV file.
That being said, why not run off a SQLite database on your server?
A:
What about postgresql? I've found that quite nice to work with, and python supports it well.
But I really would look for another provider unless it's really not an option.
A:
Disagreeing with the noble colleagues, I often use DBD::CSV from Perl. There are good reasons to do it. Foremost is that data updates are made simple using a spreadsheet. As a bonus, since I am using SQL queries, the application can be easily upgraded to a real database engine. Bear in mind these were extremely small databases in a single-user application.
So rephrasing the question: Is there a python module equivalent to Perl's DBD::CSV?
|
Using CSV as a mutable database?
|
Yes, this is as stupid a situation as it sounds. Due to some extremely annoying hosting restrictions and unresponsive tech support, I have to use a CSV file as a database.
While I can use MySQL with PHP, I can't use it with the Python backend of my program because of install issues with the host. I can't use SQLite with PHP because of more install issues, but can use it as it's a Python builtin.
Anyways, now, the question: is it possible to update values SQL-style in a CSV database? Or should I keep on calling the help desk?
|
[
"Don't walk, run to get a new host immediately. If your host won't even get you the most basic of free databases, it's time for a change. There are many fish in the sea.\nAt the very least I'd recommend an xml data store rather than a csv. My blog uses an xml data provider and I haven't had any issues with performance at all.\n",
"Take a look at this: http://www.netpromi.com/kirbybase_python.html\n",
"Keep calling on the help desk.\nWhile you can use a CSV as a database, it's generally a bad idea. You would have to implement you own locking, searching, updating, and be very careful with how you write it out to make sure that it isn't erased in case of a power outage or other abnormal shutdown. There will be no transactions, no query language unless you write your own, etc.\n",
"I couldn't imagine this ever being a good idea. The current mess I've inherited writes vital billing information to CSV and updates it after projects are complete. It runs horribly and thousands of dollars are missed a month. For the current restrictions that you have, I'd consider finding better hosting.\n",
"You can probably used sqlite3 for more real database. It's hard to imagine hosting that won't allow you to install it as a python module.\nDon't even think of using CSV, your data will be corrupted and lost faster than you say \"s#&t\"\n",
"\"Anyways, now, the question: is it possible to update values SQL-style in a CSV database?\"\nTechnically, it's possible. However, it can be hard.\nIf both PHP and Python are writing the file, you'll need to use OS-level locking to assure that they don't overwrite each other. Each part of your system will have to lock the file, rewrite it from scratch with all the updates, and unlock the file.\nThis means that PHP and Python must load the entire file into memory before rewriting it.\nThere are a couple of ways to handle the OS locking.\n\nUse the same file and actually use some OS lock module. Both processes have the file open at all times.\nWrite to a temp file and do a rename. This means each program must open and read the file for each transaction. Very safe and reliable. A little slow.\n\nOr.\nYou can rearchitect it so that only Python writes the file. The front-end reads the file when it changes, and drops off little transaction files to create a work queue for Python. In this case, you don't have multiple writers -- you have one reader and one writer -- and life is much, much simpler.\n",
"I'd keep calling help desk. You don't want to use CSV for data if it's relational at all. It's going to be nightmare. \n",
"I agree. Tell them that 5 random strangers agree that you being forced into a corner to use CSV is absurd and unacceptable.\n",
"If I understand you correctly: you need to access the same database from both python and php, and you're screwed because you can only use mysql from php, and only sqlite from python?\nCould you further explain this? Maybe you could use xml-rpc or plain http requests with xml/json/... to get the php program to communicate with the python program (or the other way around?), so that only one of them directly accesses the db.\nIf this is not the case, I'm not really sure what the problem.\n",
"It's technically possible. For example, Perl has DBD::CSV that provides a driver that runs SQL queries on the CSV file.\nThat being said, why not run off a SQLite database on your server?\n",
"What about postgresql? I've found that quite nice to work with, and python supports it well.\nBut I really would look for another provider unless it's really not an option.\n",
"Disagreeing with the noble colleagues, I often use DBD::CSV from Perl. There are good reasons to do it. Foremost is data update made simple using a spreadsheet. As a bonus, since I am using SQL queries, the application can be easily upgraded to a real database engine. Bear in mind these were extremely small database in a single user application. \nSo rephrasing the question: Is there a python module equivalent to Perl's DBD:CSV\n"
] |
[
15,
3,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"csv",
"python"
] |
stackoverflow_0000712510_csv_python.txt
|
Q:
ctypes pointer question
I was reading the ctypes tutorial, and I came across this:
s = "Hello, World"
c_s = c_char_p(s)
print c_s
c_s.value = "Hi, there"
But I had been using pointers like this:
s = "Hello, World!"
c_s = c_char_p()
c_s = s
print c_s
c_s.value
Traceback (most recent call last):
File "<pyshell#17>", line 1, in <module>
c_s.value
AttributeError: 'str' object has no attribute 'value'
Why is it that when I do it one way, I can access c_s.value, and when I do it the other way, there is no value object?
Thanks all!
A:
In your second example, you've got the statements:
c_s = c_char_p()
c_s = s
The ctypes module can't break the rules of Python assignments, and in the above case the second assignment rebinds the c_s name from the just-created c_char_p object to the s object. In effect, this throws away the newly created c_char_p object, and your code produces the error in your question because a regular Python string doesn't have a .value property.
Try instead:
c_s = c_char_p()
c_s.value = s
and see whether that aligns with your expectations.
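A quick way to see the rebinding happen (the string is arbitrary):
from ctypes import c_char_p

c_s = c_char_p()
print type(c_s)  # <class 'ctypes.c_char_p'>
c_s = "Hello"    # plain assignment rebinds the name to the str object
print type(c_s)  # <type 'str'> -- the c_char_p instance is discarded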
|
ctypes pointer question
|
I was reading the ctypes tutorial, and I came across this:
s = "Hello, World"
c_s = c_char_p(s)
print c_s
c_s.value = "Hi, there"
But I had been using pointers like this:
s = "Hello, World!"
c_s = c_char_p()
c_s = s
print c_s
c_s.value
Traceback (most recent call last):
File "<pyshell#17>", line 1, in <module>
c_s.value
AttributeError: 'str' object has no attribute 'value'
Why is it that when I do it one way, I can access c_s.value, and when I do it the other way, there is no value object?
Thanks all!
|
[
"In your second example, you've got the statements:\nc_s = c_char_p()\nc_s = s\n\nThe ctypes module can't break the rules of Python assignments, and in the above case the second assignment rebinds the c_s name from the just-created c_char_p object to the s object. In effect, this throws away the newly created c_char_p object, and your code produces the error in your question because a regular Python string doesn't have a .value property.\nTry instead:\nc_s = c_char_p()\nc_s.value = s\n\nand see whether that aligns with your expectations.\n"
] |
[
3
] |
[] |
[] |
[
"ctypes",
"declaration",
"object",
"pointers",
"python"
] |
stackoverflow_0001396533_ctypes_declaration_object_pointers_python.txt
|
Q:
Unexpected results feeding Django File upload object to Python CSV module
I have no problems getting the file to upload and if I save it to disk, all formatting is intact.
I wrote a function to read in the the file within Django using:
data = csv.reader(f.read())
where f is the Django file object that I get from 'form.cleaned_data['file']' and yes the file is already bound to the form.
When I try to read the file using
for row in data:
logging.debug(row)
I get an unexpected result in that it appears to be producing small packs of the data, almost as if it's reading some buffer. For example, for my float fields I get this when I log each row:
['0'] ['.'] ['0'] ['5']['', ''] ['0'] ['.'] ['2'] etc etc... where each item between the square brackets is actually from one row (i.e. a newline)
csv.reader requires the object it takes to support the iterator protocol which I believe the Django File object does
So what am I doing wrong?
A:
You're actually passing the wrong iterable to csv.reader(). Try changing that line to:
data = csv.reader(f)
What you're doing is passing the whole contents of the file to the csv.reader() function, which will cause it to iterate over every individual character, treating each of them as a separate line. If you pass the actual file object to the function, it will iterate over the lines in that file, as you expect.
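You can reproduce the symptom from the question with an in-memory file:
import csv
from StringIO import StringIO

f = StringIO("0.05,0.2\n1.5,2.5\n")
print list(csv.reader(f.read()))  # per character: [['0'], ['.'], ['0'], ['5'], ...]
f.seek(0)
print list(csv.reader(f))         # per line: [['0.05', '0.2'], ['1.5', '2.5']]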
A:
What happens if you try reading in the whole file, logging its first few bytes, and then passing it into the parser? Perhaps the file isn't being uploaded as you expect? An http form-encoding issue, perhaps?
|
Unexpected results feeding Django File upload object to Python CSV module
|
I have no problems getting the file to upload and if I save it to disk, all formatting is intact.
I wrote a function to read in the file within Django using:
data = csv.reader(f.read())
where f is the Django file object that I get from 'form.cleaned_data['file']' and yes the file is already bound to the form.
When I try to read the file using
for row in data:
logging.debug(row)
I get an unexpected result in that it appears to be producing small packs of the data, almost as if it's reading some buffer. For example, for my float fields I get this when I log each row:
['0'] ['.'] ['0'] ['5']['', ''] ['0'] ['.'] ['2'] etc etc... where each item between the square brackets is actually from one row (i.e., a newline)
csv.reader requires the object it takes to support the iterator protocol, which I believe the Django File object does.
So what am I doing wrong?
|
[
"You're actually passing the wrong iterable to csv.reader(). Try changing that line to:\ndata = csv.reader(f)\n\nWhat you're doing is passing the whole contents of the file to the csv.reader() function, which will cause it to iterate over every individual character, treating each of them as a separate line. If you pass the actual file object to the function, it will iterate over the lines in that file, as you expect.\n",
"What happens if you try reading in the whole file, logging its first few bytes, and then passing it into the parser? Perhaps the file isn't being uploaded as you expect? An http form-encoding issue, perhaps?\n"
] |
[
1,
0
] |
[] |
[] |
[
"csv",
"django",
"iterator",
"python"
] |
stackoverflow_0001396126_csv_django_iterator_python.txt
|
Q:
Working with foreign symbols in python
I'm parsing a JSON feed in Python and it contains this character, causing it not to validate.
Is there a way to handle these symbols? Can they be converted, or is there a tidy way to remove them?
I don't even know what this symbol is called or what causes them, otherwise I would research it myself.
EDIT: Stack Overflow is stripping the character, so here:
http://files.getdropbox.com/u/194177/symbol.jpg
It's that [?] symbol in "Classic 80s"
A:
That probably means the text you have is in some sort of encoding, and you need to figure out what encoding, and convert it to Unicode with a thetext.decode('encoding') call.
I'm not sure, but it could possibly be the [?] character, meaning that the display you have there also doesn't know how to display it. That would probably mean that the data you have is incorrect, and that there is a character in there that doesn't exist in the encoding that you are supposed to use. To handle that, you call the decode like this: thetext.decode('encoding', 'ignore'). There are other options than ignore, like "replace", "xmlcharrefreplace" and more.
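To make the decode variants concrete (the 'utf-8' encoding and file name here are assumptions; substitute the feed's real encoding):
raw = open('feed.json').read()           # hypothetical file name
text = raw.decode('utf-8')               # strict: raises UnicodeDecodeError on bad bytes
text = raw.decode('utf-8', 'ignore')     # silently drops undecodable bytes
text = raw.decode('utf-8', 'replace')    # substitutes U+FFFD for undecodable bytes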
A:
JSON must be encoded in one of UTF-8, UTF-16, or UTF-32. If a JSON file contains bytes which are illegal in its current encoding, it is garbage.
If you don't know which encoding it's using, you can try parsing using my jsonlib library, which includes an encoding-detector. JSON parsed using jsonlib will be provided to the programmer as Unicode strings, so you don't have to worry about encoding at all.
|
Working with foreign symbols in python
|
I'm parsing a JSON feed in Python and it contains this character, causing it not to validate.
Is there a way to handle these symbols? Can they be converted, or is there a tidy way to remove them?
I don't even know what this symbol is called or what causes them, otherwise I would research it myself.
EDIT: Stack Overflow is stripping the character, so here:
http://files.getdropbox.com/u/194177/symbol.jpg
It's that [?] symbol in "Classic 80s"
|
[
"That probably means the text you have is in some sort of encoding, and you need to figure out what encoding, and convert it to Unicode with a thetext.decode('encoding') call.\nI not sure, but it could possibly be the [?] character, meaning that the display you have there also doesn't know how to display it. That would probably mean that the data you have is incorrect, and that there is a character in there that doesn't exist in the encoding that you are supposed to use. To handle that you call the decode like this: thetext.decode('encoding', 'ignore'). There are other options than ignore, like \"replace\", \"xmlcharrefreplace\" and more.\n",
"JSON must be encoded in one of UTF-8, UTF-16, or UTF-32. If a JSON file contains bytes which are illegal in its current encoding, it is garbage.\nIf you don't know which encoding it's using, you can try parsing using my jsonlib library, which includes an encoding-detector. JSON parsed using jsonlib will be provided to the programmer as Unicode strings, so you don't have to worry about encoding at all.\n"
] |
[
1,
0
] |
[] |
[] |
[
"ascii",
"parsing",
"python",
"symbols",
"utf_8"
] |
stackoverflow_0001075866_ascii_parsing_python_symbols_utf_8.txt
|
Q:
python and mechanize.open()
I have some code that is using mechanize and a password protected site. I can login just fine and get the results I expect. However, once I log in I don't want to "click" links; I want to iterate through a list of URLs. Unfortunately each .open() call simply gets a re-direct to the login page, which is the behaviour I would expect if I had logged out or tried to login with a different browser. This leads me to believe it is cookie handling of some sort, but I'm at a loss.
def main():
browser = mechanize.Browser()
browser.set_handle_robots(False)
# The below code works perfectly
page_stats = login_to_BOE(browser)
print page_stats
# This code ALWAYS gets the login page again NOT the desired
# behaviour of getting the new URL. This is the behaviour I would
# expect if I had logged out of our site.
for page in PAGES:
print '%s%s' % (SITE, page)
page = browser.open('%s%s' % (SITE, page))
page_stats = get_page_statistics(page.get_data())
print page_stats
A:
Instead of using for each link:
browser.open('www.google.com')
Try using the following after doing the initial login:
browser.follow_link(text = 'a href text')
My guess is that calling open is what is resetting your cookies.
A:
Will,
Your suggestion pointed me in exactly the right direction.
Every web browser I have ever used responded correctly to something like the following:
http://www.foo.com//bar/baz/trool.html
Since I hate getting things concatenated incorrectly, my SITE variable was "http://www.foo.com/"
In addition, all the other URLs were "/bar/baz/trool.html"
My calls to open ended up being .open('http://www.foo.com//bar/baz/trool.html'), and the mechanize browser obviously doesn't massage that like a "real" browser would. Apache didn't like the URLs.
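One hedged way to avoid that double slash is to let the standard library join the pieces (SITE and the page path mirror the question's names):
import urlparse

SITE = 'http://www.foo.com/'
page = '/bar/baz/trool.html'
url = urlparse.urljoin(SITE, page)   # 'http://www.foo.com/bar/baz/trool.html'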
A:
This isn't an answer, but it might lead you in the right direction. Try turning on Mechanize's extensive debugging facilities, using some combination of the statements below:
browser.set_debug_redirects(True)
browser.set_debug_responses(True)
browser.set_debug_http(True)
This will provide a flood of HTTP information, which I found very useful when I developed my one and only Mechanize-based application.
I should note that I'm not doing much (if anything) different in my application than what you showed in your question. I create a browser object the same way, then pass it to this login function:
def login(browser):
browser.open(config.login_url)
browser.select_form(nr=0)
browser[config.username_field] = config.username
browser[config.password_field] = config.password
browser.submit()
return browser
I can then open authentication-required pages with browser.open(url) and all of the cookie handling is handled transparently and automatically for me.
|
python and mechanize.open()
|
I have some code that is using mechanize and a password protected site. I can login just fine and get the results I expect. However, once I log in I don't want to "click" links; I want to iterate through a list of URLs. Unfortunately each .open() call simply gets a re-direct to the login page, which is the behaviour I would expect if I had logged out or tried to login with a different browser. This leads me to believe it is cookie handling of some sort, but I'm at a loss.
def main():
browser = mechanize.Browser()
browser.set_handle_robots(False)
# The below code works perfectly
page_stats = login_to_BOE(browser)
print page_stats
# This code ALWAYS gets the login page again NOT the desired
# behaviour of getting the new URL. This is the behaviour I would
# expect if I had logged out of our site.
for page in PAGES:
print '%s%s' % (SITE, page)
page = browser.open('%s%s' % (SITE, page))
page_stats = get_page_statistics(page.get_data())
print page_stats
|
[
"Instead of using for each link:\nbrowser.open('www.google.com')\n\nTry using the following after doing the initial login:\nbrowser.follow_link(text = 'a href text')\n\nMy guess is that calling open is what is resetting your cookies.\n",
"Will,\nYour suggestion pointed me in exactly the right direction.\nEvery web browser I have ever used responded to something like the following correct:\nhttp://www.foo.com//bar/baz/trool.html\n\nSince I hate getting things concatenated incorrectly my SITE variable was \"http://www.foo.com/\"\nIn addition all the other URLS were \"/bar/baz/trool.html\"\nMy calls to open ended up being .open('http://www.foo.com//bar/baz/trool.html') and the mechanize browser obviously doesn't massage that like a \"real\" browser would. Apache didn't like the urls.\n",
"This isn't an answer, but it might lead you in the right direction. Try turning on Mechanize's extensive debugging facilities, using some combination of the statements below:\nbrowser.set_debug_redirects(True)\nbrowser.set_debug_responses(True)\nbrowser.set_debug_http(True)\n\nThis will provide a flood of HTTP information, which I found very useful when I developed my one and only Mechanize-based application.\nI should note that I'm not doing much (if anything) different in my application than what you showed in your question. I create a browser object the same way, then pass it to this login function:\ndef login(browser):\n browser.open(config.login_url)\n browser.select_form(nr=0)\n browser[config.username_field] = config.username\n browser[config.password_field] = config.password\n browser.submit()\n return browser\n\nI can then open authentication-required pages with browser.open(url) and all of the cookie handling is handled transparently and automatically for me.\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"mechanize",
"python"
] |
stackoverflow_0001396646_mechanize_python.txt
|
Q:
How Do I Select all Objects via a Relationship Model
Given the Model:
class Profile(models.Model):
user = models.ForeignKey(User, unique=True)
class Thingie(models.Model):
children = models.ManyToManyField('self', blank=True, symmetrical=False)
class Relation(models.Model):
profile = models.ForeignKey(Profile)
thingie = models.ForeignKey(Thingie)
How would one return a QuerySet containing all Profile instances related to a given Thingie? That is, every Profile that has a foreign key pointing to it from a Relation and to the given thingie.
I know all about select_related(), and how I could use it to do this by iterating, but I find iterating irritating (badoop bah!). Also, values_list() has been looked at, but it doesn't quite do the right thing.
Please help! Thanks!
A:
Do you definitely need it to be a queryset? If you only need it to be an iterable, a simple expression for your purposes is:
profiles = [r.profile for r in thingie.relation_set.all()]
I'm not sure if a list comprehension counts as irritating iterating, but to me this is a perfectly intuitive, pythonic approach. Of course if you need it to be a queryset, you're going to do something messier with two queries, eg:
relation_values = thingie.relation_set.all().values_list('pk', flat=True)
profiles = Profile.objects.filter(relation__in=relation_values)
See the "in" documentation for more. I prefer the first approach, if you don't need a queryset. Oh and if you only want distinct profiles you can just take set(profiles) in the first approach or use the distinct() queryset method in the second approach.
A:
Did you read the Django doc on Making queries? It has a simple example of achieving what you want, more specifically Lookups that span relationships. Make sure you refer the code snippets in the latter link to the model code at the beginning of the page.
A:
Without defining a direct relationship between Profile and Thingie, you can't.
The best way would be to add a ManyToMany to Thingie that points to Profile and use the through argument (django docs) on the ManyToMany to specify your Relation model/table.
That way you can do a direct filter operation on Thingie to get your profiles and still store your intermediate Relation data.
|
How Do I Select all Objects via a Relationship Model
|
Given the Model:
class Profile(models.Model):
user = models.ForeignKey(User, unique=True)
class Thingie(models.Model):
children = models.ManyToManyField('self', blank=True, symmetrical=False)
class Relation(models.Model):
profile = models.ForeignKey(Profile)
thingie = models.ForeignKey(Thingie)
How would one return a QuerySet containing all Profile instances related to a given Thingie? That is, every Profile that has a foreign key pointing to it from a Relation and to the given thingie.
I know all about select_related(), and how I could use it to do this by iterating, but I find iterating irritating (badoop bah!). Also, values_list() has been looked at, but it doesn't quite do the right thing.
Please help! Thanks!
|
[
"Do you definitely need it to be a queryset? If you only need it to be an iterable, a simple expression for your purposes is:\nprofiles = [r.profile for r in thingie.relation_set.all()]\n\nI'm not sure if a list comprehension counts as irritating iterating, but to me this is a perfectly intuitive, pythonic approach. Of course if you need it to be a queryset, you're going to do something messier with two queries, eg:\nrelation_values = thingie.relation_set.all().values_list('pk', flat=True)\nprofiles = Profile.objects.filter(relation__in=relation_values)\n\nSee the \"in\" documentation for more. I prefer the first approach, if you don't need a queryset. Oh and if you only want distinct profiles you can just take set(profiles) in the first approach or use the distinct() queryset method in the second approach.\n",
"Did you read the Django doc on Making queries? It has a simple example of achieving what you want, more specifically Lookups that span relationships. Make sure you refer the code snippets in the latter link to the model code at the beginning of the page.\n",
"Without defining a direct relationship between Profile and Thingie, you can't.\nThe best way would be to add a ManyToMany to Thingie that points to Profile and use the through argument (django docs) on the ManyToMany to specify your Relation model/table.\nThat way you can do a direct filter operation on Thingie to get your profiles and still store your intermediate Relation data.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"django",
"django_queryset",
"python"
] |
stackoverflow_0001396985_django_django_queryset_python.txt
|
Q:
Customizing how Python's `copy` module treats my objects
From the copy documentation:
Classes can use the same interfaces to control copying that they use to control pickling.
[...]
In order for a class to define its own copy implementation, it can define special methods __copy__() and __deepcopy__()
So which one is it? __setstate__() and __getstate__() that are used when pickling, or __copy__() and __deepcopy__()?
A:
It works as follows: if a class defines __copy__, that takes precedence for copy.copy purposes (and similarly __deepcopy__ takes precedence for copy.deepcopy purposes). If these very specific special methods are not defined, then the same mechanisms as for pickling and unpickling are tested (this includes, but is not limited to, __getstate__ and __setstate__). I've written more about this in my book "Python in a Nutshell" (which @ilfaraone quotes only partially).
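For concreteness, a minimal sketch of a class defining both hooks (the Point class and its fields are invented for illustration):
import copy

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __copy__(self):                  # used by copy.copy
        return Point(self.x, self.y)
    def __deepcopy__(self, memo):        # used by copy.deepcopy
        return Point(copy.deepcopy(self.x, memo), copy.deepcopy(self.y, memo))

p2 = copy.copy(Point(1, 2))
p3 = copy.deepcopy(Point(1, 2))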
A:
__setstate__() and __getstate__().
Notice that the copy documentation says that they can use the same interface, but they don't necessarily have to do so.
See this excerpt from Python in a Nutshell, or this explanation on the Python Mailing List.
|
Customizing how Python's `copy` module treats my objects
|
From the copy documentation:
Classes can use the same interfaces to control copying that they use to control pickling.
[...]
In order for a class to define its own copy implementation, it can define special methods __copy__() and __deepcopy__()
So which one is it? __setstate__() and __getstate__() that are used when pickling, or __copy__() and __deepcopy__()?
|
[
"It works as follows: if a class defines __copy__, that takes precedence for copy.copy purposes (and similarly __deepcopy__ takes precedence for copy.deepcopy purposes). If these very specific special methods are not defined, then the same mechanisms as for pickling and unpickling are tested (this includes, but is not limited to, __getstate__ and __setstate__; I've written more about this in my book \"Python in a Nutshell\" (which @ilfaraone quotes only partially).\n",
"__setstate__() and __getstate__().\nNotice that the copy documentation says that they can use the same interface, but they don't necessarily have do so. \nSee this excerpt from Python in a Nutshell, or this explanation on the Python Mailing List.\n"
] |
[
7,
1
] |
[] |
[] |
[
"copy",
"pickle",
"python"
] |
stackoverflow_0001396547_copy_pickle_python.txt
|
Q:
How to recover the binary stream (original form) from radix-64 encoding
How do I get the actual public key, i.e., its binary form, without the radix-64 conversion? I need to extract the public key from the radix-64 encoding. The PGP server gives me the key in radix-64 format, and now I have to extract the public key from it.
A:
import base64
decoded_bytes = base64.b64decode(ascii_chars)
A:
base64_encoded_data.decode('base64')
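Both answers perform the same base64 decode; a tiny end-to-end illustration (the armored string below is a stand-in for real key material):
import base64

armored = 'SGVsbG8='                    # placeholder radix-64 data
key_bytes = base64.b64decode(armored)   # -> the original binary bytes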
|
How to recover the binary stream (original form) from radix-64 encoding
|
How do I get the actual public key, i.e., its binary form, without the radix-64 conversion? I need to extract the public key from the radix-64 encoding. The PGP server gives me the key in radix-64 format, and now I have to extract the public key from it.
|
[
"import base64\n\ndecoded_bytes = base64.b64decode(ascii_chars)\n\n",
"base64_encoded_data.decode('base64')\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"encoding",
"python"
] |
stackoverflow_0001397799_encoding_python.txt
|
Q:
Django: show list of many to many items in the admin interface
This might be a simple question, but I can't seem to grasp it.
I have two simple models in models.py: Service and Host. Host.services has a m2m relationship with Service.
In other words, a host has several services and one service can reside on multiple hosts; a basic m2m.
models.py
class Service(models.Model):
servicename = models.CharField(max_length=50)
def __unicode__(self):
return self.servicename
class Admin:
pass
class Host(models.Model):
#...
hostname = models.CharField(max_length=200)
services = models.ManyToManyField(Service)
#...
def get_services(self):
return self.services.all()
def __unicode__(self):
return self.hostname
class Admin:
pass
admin.py
from cmdb.hosts.models import Host
from django.contrib import admin
class HostAdmin(admin.ModelAdmin):
list_display = ('get_services',)
admin.site.register(Host, HostAdmin)
Now when I open the page where all the host's columns are listed, the 'service' column displays the output like:
Get services
[<Service: the_service-1>, <Service: the_service-2>]
Instead of:
Services
the_service-1
the_service-2
etc.
What am I doing wrong?
Thank you for reading my question.
A:
You should change get_services to something like:
def get_services(self):
return "\n".join([s.servicename for s in self.services.all()])
Update: Try using \n as the separator rather than <br/>, as the output of get_services is being escaped.
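If you also want the column header to read "Services" instead of "Get services", Django lets you attach a short_description to the method (a standard admin hook for list_display callables):
def get_services(self):
    return "\n".join([s.servicename for s in self.services.all()])
get_services.short_description = 'Services'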
|
Django: show list of many to many items in the admin interface
|
This might be a simple question, but I can't seem to grasp it.
I have two simple models in models.py: Service and Host. Host.services has a m2m relationship with Service.
In other words, a host has several services and one service can reside on multiple hosts; a basic m2m.
models.py
class Service(models.Model):
servicename = models.CharField(max_length=50)
def __unicode__(self):
return self.servicename
class Admin:
pass
class Host(models.Model):
#...
hostname = models.CharField(max_length=200)
services = models.ManyToManyField(Service)
#...
def get_services(self):
return self.services.all()
def __unicode__(self):
return self.hostname
class Admin:
pass
admin.py
from cmdb.hosts.models import Host
from django.contrib import admin
class HostAdmin(admin.ModelAdmin):
list_display = ('get_services',)
admin.site.register(Host, HostAdmin)
Now when I open the page where all the host's columns are listed, the 'service' column displays the output like:
Get services
[<Service: the_service-1>, <Service: the_service-2>]
Instead of:
Services
the_service-1
the_service-2
etc.
What am I doing wrong?
Thank you for reading my question.
|
[
"You should change get_services to something like:\ndef get_services(self):\n return \"\\n\".join([s.servicename for s in self.services.all()])\n\nUpdate: Try using \\n as the separator rather than <br/>, as the output of get_services is being escaped.\n"
] |
[
19
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001398606_django_python.txt
|
Q:
Handling big numbers in code
I'm working on a programming problem where I need to handle a number involving 100000 digits. Can python handle numbers like this?
A:
As other answers indicated, Python does support integer numbers bounded only by the amount of memory available. If you want even faster support for them, try gmpy (as gmpy's author and current co-maintainer I'm of course a little biased here;-):
$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'x+1'
10000 loops, best of 3: 114 usec per loop
$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'y+1'
10000 loops, best of 3: 65.4 usec per loop
Typically, arithmetic is not the bottleneck for working with such numbers (although gmpy's direct support for some combinatorial and number-theoretical functions can help if that's what you're doing with such numbers). Turning the numbers into decimal strings is probably the common operation that will feel slowest...:
$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'str(x)'
10 loops, best of 3: 3.11 sec per loop
$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'str(y)'
10 loops, best of 3: 27.3 msec per loop
As you see, even in gmpy stringification of huge numbers can be hundreds of times slower than a simple addition (alas, it's an intrinsically complicated operation!); but in native Python code the ratio of times can go to stringification being tens of thousands times slower than a simple addition, so you really want to watch out for that, especially if you decide not to download and install gmpy (for example because you can't: e.g., gmpy is not currently supported on Google App Engine).
Finally, an intermediate case:
$ python2.6 -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'x*x'
10 loops, best of 3: 90 msec per loop
$ python2.6 -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'y*y'
100 loops, best of 3: 5.63 msec per loop
$ python2.6 -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'y*x'
100 loops, best of 3: 8.4 msec per loop
As you see, multiplying two huge numbers in native Python code can be almost 1000 times slower than the simple addition, while with gmpy the slowdown is less than 100 times (and it's not too bad even if only one of the numbers is already in gmpy's own format, so that there's the overhead of converting the other).
A:
Yes; Python 2.x has two types of integers, int of limited size and long of unlimited size. However all calculations will automatically convert to long if needed. Handling big numbers works fine, but one of the slower things will be if you try to print the 100000 digits to output, or even try to create a string from it.
If you need arbitrary decimal fixed-point precision as well, there is the decimal module.
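A tiny illustration of the automatic promotion to long mentioned above:
>>> x = 10 ** 100000     # int silently promoted to long in Python 2.x
>>> len(str(x))          # stringification works, but is the expensive part
100001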
A:
Sure it can:
>>> s = 10 ** 100000
A:
It seems to work fine:
>>> x = 10**100000
>>> x
10000000000000000000000000000000000000000000000000000000000000000000000000000000
[snip]
00000000L
According to http://docs.python.org/library/stdtypes.html, "Long integers have unlimited precision", which probably means that their size is not limited.
A:
As already pointed out, Python can handle numbers as big as your memory will allow. I would just like to add that as the numbers grow bigger, the cost of all operations on them increases. This is not just for printing/converting to string (although that's the slowest); adding two large numbers (larger than what your hardware can natively handle) is no longer O(1).
I'm just mentioning this to point out that although Python neatly hides the details of working with big numbers, you still have to keep in mind that these big numbers operations are not always like those on ordinary ints.
|
Handling big numbers in code
|
I'm working on a programming problem where I need to handle a number involving 100000 digits. Can python handle numbers like this?
|
[
"As other answers indicated, Python does support integer numbers bounded only by the amount of memory available. If you want even faster support for them, try gmpy (as gmpy's author and current co-maintainer I'm of course a little biased here;-):\n$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'x+1'\n10000 loops, best of 3: 114 usec per loop\n$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'y+1'\n10000 loops, best of 3: 65.4 usec per loop\n\nTypically, arithmetic is not the bottleneck for working with such numbers (although gmpy's direct support for some combinatorial and number-theoretical functions can help if that's what you're doing with such numbers). Turning the numbers into decimal strings is probably the common operation that will feel slowest...:\n$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'str(x)'\n10 loops, best of 3: 3.11 sec per loop\n$ python -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'str(y)'\n10 loops, best of 3: 27.3 msec per loop\n\nAs you see, even in gmpy stringification of huge numbers can be hundreds of times slower than a simple addition (alas, it's an intrinsically complicated operation!); but in native Python code the ratio of times can go to stringification being tens of thousands times slower than a simple addition, so you really want to watch out for that, especially if you decide not to download and install gmpy (for example because you can't: e.g., gmpy is not currently supported on Google App Engine).\nFinally, an intermediate case:\n$ python2.6 -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'x*x'\n10 loops, best of 3: 90 msec per loop\n$ python2.6 -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'y*y'\n100 loops, best of 3: 5.63 msec per loop\n$ python2.6 -mtimeit -s'import gmpy; x=10**100000; y=gmpy.mpz(x)' 'y*x'\n100 loops, best of 3: 8.4 msec per loop\n\nAs you see, multiplying two huge numbers in native Python code can be almost 1000 times slower than the simple addition, while with gmpy the slowdown is less than 100 times (and it's not too bad even if only one if the numbers is already in gmpy's own format so that there's the overhead of converting the other).\n",
"Yes; Python 2.x has two types of integers, int of limited size and long of unlimited size. However all calculations will automatically convert to long if needed. Handling big numbers works fine, but one of the slower things will be if you try to print the 100000 digits to output, or even try to create a string from it.\nIf you need arbitrary decimal fixed-point precision as well, there is the decimal module.\n",
"Sure it can:\n>>> s = 10 ** 100000\n\n",
"It seems to work fine:\n>>> x = 10**100000\n>>> x\n10000000000000000000000000000000000000000000000000000000000000000000000000000000\n[snip]\n00000000L\n\nAccording to http://docs.python.org/library/stdtypes.html, \"Long integers have unlimited precision\", which probably means that their size is not limited.\n",
"As already pointed out, Python can handle numbers as big as your memory will allow. I would just like to add that as the numbers grow bigger, the cost of all operations on them increases. This is not just for printing/converting to string (although that's the slowest), adding two large numbers (larger that what your hardware can natively handle) is no longer O(1). \nI'm just mentioning this to point out that although Python neatly hides the details of working with big numbers, you still have to keep in mind that these big numbers operations are not always like those on ordinary ints.\n"
] |
[
24,
7,
4,
3,
3
] |
[] |
[] |
[
"algorithm",
"python"
] |
stackoverflow_0001386604_algorithm_python.txt
|
Q:
What is the simplest way to implement a remote FIFO queue as a Python GAE application?
What is the simplest way to implement a remote FIFO queue as a Python GAE application and then push/pull name-value pair dictionaries to and from it?
For example, when an http get is made to the GAE application, the GAE app would return the oldest collection of name-value pairs that were posted to the app which have not been previously pulled from the queue. These name-value pairs would then be re-instantiated as a dictionary on the client side. urllib.urlencode provides a simple mechanism to encode dictionaries as parameters, but what is the similarly simple approach to decode parameters into dictionaries when you http "get" them? When there are no items in the queue, the GAE app should return a null or some other more appropriate identifier the client could respond to.
#A local python script
import urllib
targetURL="http://myapp.appspot.com/queue"
#Push dictionaries to the GAE queue
params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
f = urllib.urlopen(targetURL, params)
print f.read()
params = urllib.urlencode({'foo': 1, 'bar': 2})
f = urllib.urlopen(targetURL, params)
print f.read()
#Pull oldest set of name-value pairs from the GAE queue and create a local dictionary from them.
#f = urllib.urlopen(targetURL, ……)
#returnedDictionary = ????
What would the simplest way to implement this short GAE application?
#queue.py a url handler in a GAE application.
# For posts, create an object from the posted name-value pairs and insert it into the queue as the newest item in the queue
# For gets, return the name-value pairs for the oldest object in the queue and remove the object from the queue.
# If there are no items in the queue, return null
A:
Something along these lines:
from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp import run_wsgi_app
class QueueItem(db.Model):
created = db.DateTimeProperty(required=True, auto_now_add=True)
data = db.BlobProperty(required=True)
@staticmethod
def push(data):
"""Add a new queue item."""
return QueueItem(data=data).put()
@staticmethod
def pop():
"""Pop the oldest item off the queue."""
def _tx_pop(candidate_key):
# Try and grab the candidate key for ourselves. This will fail if
# another task beat us to it.
task = QueueItem.get(candidate_key)
if task:
task.delete()
return task
# Grab some tasks and try getting them until we find one that hasn't been
# taken by someone else ahead of us
while True:
candidate_keys = QueueItem.all(keys_only=True).order('created').fetch(10)
if not candidate_keys:
# No tasks in queue
return None
for candidate_key in candidate_keys:
task = db.run_in_transaction(_tx_pop, candidate_key)
if task:
return task
class QueueHandler(webapp.RequestHandler):
def get(self):
"""Pop a request off the queue and return it."""
self.response.headers['Content-Type'] = 'application/x-www-form-urlencoded'
task = QueueItem.pop()
if not task:
self.error(404)
else:
self.response.out.write(task.data)
def post(self):
"""Add a request to the queue."""
QueueItem.push(self.request.body)
One caveat: Because queue ordering relies on the timestamp, it's possible for tasks that arrive very close together on separate machines to be enqueued out-of-order, since there's no global clock (only NTP-synced servers). Probably not a real problem, depending on your use-case, though.
A:
The below assumes you're using the webapp framework.
The simple answer is that you just use self.request.GET, which is a MultiDict (which you can treat as a dict in many cases) containing the form data sent to the request.
Note that HTTP allows form data to contain the same key multiple times; if what you want isn't really a dict but a list of key-value pairs that have been sent to your application, you can get such a list with self.request.GET.items() (see http://pythonpaste.org/webob/reference.html#query-post-variables ) and then add these pairs to your queue.
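A minimal sketch of that inside a webapp handler (the handler name is invented; only self.request.GET comes from the answer above):
from google.appengine.ext import webapp

class PairsHandler(webapp.RequestHandler):
    def get(self):
        pairs = self.request.GET.items()   # [(name, value), ...], duplicate keys preserved
        self.response.out.write(repr(pairs))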
|
What is the simplest way to implement a remote FIFO queue as a Python GAE application?
|
What is the simplest way to implement a remote FIFO queue as a Python GAE application and then push/pull name-value pair dictionaries to and from it?
For example, when an http get is made to the GAE application, the GAE app would return the oldest collection of name-value pairs that were posted to the app which have not been previously pulled from the queue. These name-value pairs would then be re-instantiated as a dictionary on the client side. urllib.urlencode provides a simple mechanism to encode dictionaries as parameters, but what is the similarly simple approach to decode parameters into dictionaries when you http "get" them? When there are no items in the queue, the GAE app should return a null or some other more appropriate identifier the client could respond to.
#A local python script
import urllib
targetURL="http://myapp.appspot.com/queue"
#Push dictionaries to the GAE queue
params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
f = urllib.urlopen(targetURL, params)
print f.read()
params = urllib.urlencode({'foo': 1, 'bar': 2})
f = urllib.urlopen(targetURL, params)
print f.read()
#Pull oldest set of name-value pairs from the GAE queue and create a local dictionary from them.
#f = urllib.urlopen(targetURL, ……)
#returnedDictionary = ????
What would the simplest way to implement this short GAE application?
#queue.py a url handler in a GAE application.
# For posts, create an object from the posted name-value pairs and insert it into the queue as the newest item in the queue
# For gets, return the name-value pairs for the oldest object in the queue and remove the object from the queue.
# If there are no items in the queue, return null
|
[
"Something along these lines:\nfrom google.appengine.ext import db\nfrom google.appengine.ext import webapp\nfrom google.appengine.ext.webapp import run_wsgi_app\n\nclass QueueItem(db.Model):\n created = db.DateTimeProperty(required=True, auto_now_add=True)\n data = db.BlobProperty(required=True)\n\n @staticmethod\n def push(data):\n \"\"\"Add a new queue item.\"\"\"\n return QueueItem(data=data).put()\n\n @staticmethod\n def pop():\n \"\"\"Pop the oldest item off the queue.\"\"\"\n def _tx_pop(candidate_key):\n # Try and grab the candidate key for ourselves. This will fail if\n # another task beat us to it.\n task = QueueItem.get(candidate_key)\n if task:\n task.delete()\n return task\n # Grab some tasks and try getting them until we find one that hasn't been\n # taken by someone else ahead of us\n while True:\n candidate_keys = QueueItem.all(keys_only=True).order('created').fetch(10)\n if not candidate_keys:\n # No tasks in queue\n return None\n for candidate_key in candidate_keys:\n task = db.run_in_transaction(_tx_pop, candidate_key)\n if task:\n return task\n\nclass QueueHandler(webapp.RequestHandler):\n def get(self):\n \"\"\"Pop a request off the queue and return it.\"\"\"\n self.response.headers['Content-Type'] = 'application/x-www-form-urlencoded'\n task = QueueItem.pop()\n if not task:\n self.error(404)\n else:\n self.response.out.write(task.data)\n\n def post(self):\n \"\"\"Add a request to the queue.\"\"\"\n QueueItem.push(self.request.body)\n\nOne caveat: Because queue ordering relies on the timestamp, it's possible for tasks that arrive very close together on separate machines to be enqueued out-of-order, since there's no global clock (only NFS synced servers). Probably not a real problem, depending on your use-case, though.\n",
"The below assumes you're using the webapp framework. \nThe simple answer is that you just use self.request.GET, which is a MultiDict (which you can treat as a dict in many cases) containing the form data sent to the request.\nNote that HTTP allows form data to contain the same key multiple times; if what you want isn't really a dict but a list of key-value pairs that have been sent to your application, you can get such a list with self.request.GET.items() (see http://pythonpaste.org/webob/reference.html#query-post-variables ) and then add these pairs to your queue.\n"
] |
[
3,
0
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001397864_google_app_engine_python.txt
|
Q:
Django : import problem with python-twitter module
When I try to import the python-twitter module in my app, django tries to import django.templatetags.twitter instead of the python-twitter module (in /usr/lib/python2.5/site-packages/twitter.py), but I don't know why. :s
For example:
myproject/
myapp/
templatetags/
file.py
In file.py:
import twitter # this imports django.templatetags.twitter
Any idea to fix it ?
Thank you very much :)
Edit: I've found the problem. My templatetags file was named "twitter.py". I've renamed it to "twitter_tags.py" and now this works. :)
A:
The submodules often need to refer to each other. For example, the surround module might use the echo module. In fact, such references are so common that the import statement first looks in the containing package before looking in the standard module search path. source
Therefore, you will need to use an absolute import.
from some.other.pkg import twitter
|
Django : import problem with python-twitter module
|
When I try to import the python-twitter module in my app, django tries to import django.templatetags.twitter instead of the python-twitter module (in /usr/lib/python2.5/site-packages/twitter.py), but I don't know why. :s
For example:
myproject/
myapp/
templatetags/
file.py
In file.py:
import twitter # this imports django.templatetags.twitter
Any idea to fix it ?
Thank you very much :)
Edit: I've found the problem. My templatetags file was named "twitter.py". I've renamed it to "twitter_tags.py" and now this works. :)
|
[
"\nThe submodules often need to refer to each other. For example, the surround module might use the echo module. In fact, such references are so common that the import statement first looks in the containing package before looking in the standard module search path. source\n\nTherefore, you will need to use an absolute import.\nfrom some.other.pkg import twitter\n\n"
] |
[
1
] |
[] |
[] |
[
"django",
"import",
"python",
"twitter"
] |
stackoverflow_0001399478_django_import_python_twitter.txt
|
Q:
Eclipse (pydev): Is it possible to assign a shortcut to send selection to the python console?
Question so easy that it fit in the title :)
Eclipse (pydev): Is it possible to assign a shortcut to send selection to the python console?
A:
You can only assign shortcuts to "actions". Actions are bound to buttons (for example, in the toolbar) and menus. If you have a menu for this, you can bind a key to it. If not, then you will have to open an enhancement request in the pydev project.
A:
If you mean to the interactive console, use ctrl+alt+enter in the pydev editor (version 1.5).
That action does a number of things:
opens an interactive console view if it's not opened already
sends the selected text to the console
makes an execfile of the current editor (if no text is selected)
|
Eclipse (pydev): Is it possible to assign a shortcut to send selection to the python console?
|
Question so easy that it fit in the title :)
Eclipse (pydev): Is it possible to assign a shortcut to send selection to the python console?
|
[
"You can only assign shortcuts to \"actions\". Actions bound to buttons (for example, the toolbar) and menus. If you have a menu for this, you can bind a key to it. If not, then you will have to open an enhancement request in the pydev project.\n",
"If you mean to the interactive console, use ctrl+alt+enter in the pydev editor (version 1.5).\nThat action does a number of things:\n\nopens an interactive console view if it's not opened already\nsends the selected text to the console\nmakes an execfile of the current editor (if no text is selected)\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"ide",
"python"
] |
stackoverflow_0000323581_ide_python.txt
|
Q:
Django and monkey patching issue
I have recently started experimenting with Django for some web applications in my spare time. While designing the data model for one, I came across the dilemma of using inheritance to define a user of the website or using a technique known as monkey patching with the User class already supplied by the framework.
I tried to add a field by means of (after having defined all my models etc. without errors, according to python manage.py validate):
User.add_to_class('location', models.CharField(max_length=250,blank=True))
and executed the syncdb command. However, I keep getting this error
OperationalError: no such column:
auth_user.location
whether I am in the admin view of the site or the manage.py shell. There must be an extra step I'm missing, but there seems to be limited documentation on the whole monkey patching technique. So I'm asking you for assistance before I resort to inheritance. Any code, tips, or pointers to additional documentation are of course welcome.
Thanks in advance.
PS. I'm aware this technique is ugly, and probably ill-advised. ;)
A:
There's an alternative to both approaches, which is to simply use a related profile model. This also happens to be a well-documented, highly recommended approach. Perhaps the reason that the add_to_class approach is not well-documented, as you noted, is because it's explicitly discouraged (for good reason).
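A minimal sketch of that related-profile approach (myapp and the field names are placeholders):
from django.contrib.auth.models import User
from django.db import models

class UserProfile(models.Model):
    user = models.ForeignKey(User, unique=True)
    location = models.CharField(max_length=250, blank=True)

# settings.py: AUTH_PROFILE_MODULE = 'myapp.UserProfile'
# then: request.user.get_profile().location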
A:
When you add a field to any model, even if you do it the 'official' way, you need to migrate the database - Django doesn't do it for you. Drop the table and run ./manage.py syncdb again.
You might want to investigate one of the migrations frameworks, such as south, which will manage this sort of thing for you.
A:
Here's a (slightly older) way of extending the User model.
Here's what the docs have to say.
And here's a recent conversation on django-users about the topic.
A:
Django's framework uses metaclasses to initialize the tables. That means you can't monkey-patch in new columns, unless you also re-initialize the class, which I'm not sure is even possible. (It may be).
See Difference between returning modified class and using type() for some more info.
A:
I guess you might run into problems regarding where your monkeypatch is defined. I guess django syncdb creates database tables only from the "pure" auth application, so your model will then be without "location", and then your site with the patch will look for the field.
A probably less painful way of adding additional info to user profiles is described in the Django docs.
|
Django and monkey patching issue
|
I have recently started experimenting with Django for some web applications in my spare time. While designing the data model for one, I came across the dilemma of using inheritance to define a user of the website or using a technique known as monkey patching with the User class already supplied by the framework.
I tried to add a field by means of (after having defined all my models etc. without errors, according to python manage.py validate):
User.add_to_class('location', models.CharField(max_length=250,blank=True))
and executed the syncdb command. However, I keep getting this error
OperationalError: no such column:
auth_user.location
whether I am in the admin view of the site or the manage.py shell. There must be an extra step I'm missing, but there seems to be limited documentation on the whole monkey patching technique. So I'm asking you for assistance before I resort to inheritance. Any code, tips, or pointers to additional documentation are of course welcome.
Thanks in advance.
PS. I'm aware this technique is ugly, and probably ill-advised. ;)
|
[
"There's an alternative to both approaches, which is to simply use a related profile model. This also happens to be a well-documented, highly recommended approach. Perhaps the reason that the add_to_class approach is not well-documented, as you noted, is because it's explicitly discouraged (for good reason).\n",
"When you add a field to any model, even if you do it the 'official' way, you need to migrate the database - Django doesn't do it for you. Drop the table and run ./manage.py syncdb again.\nYou might want to investigate one of the migrations frameworks, such as south, which will manage this sort of thing for you.\n",
"Here's a (slightly older) way of extending the User model.\nHere's what the docs have to say.\nAnd here's a recent conversation on django-users about the topic.\n",
"Djangos framework uses metaclasses to initialize the tables. That means you can't monkey-patch in new columns, unless you also re-initialize the class, which I'm not sure is even possible. (It may be).\nSee Difference between returning modified class and using type() for some more info.\n",
"I guess you might run into problems regarding where is your monkeypatch defined. I guess django syncdb creates databse tables only from the \"pure\" auth application, so your model will then be without \"location\", and then your site with the patch will look for the field.\nProbably less painful way of adding additional info to user profiles is described in Django docs.\n"
] |
[
13,
7,
2,
0,
0
] |
[] |
[] |
[
"django",
"monkeypatching",
"python"
] |
stackoverflow_0001399746_django_monkeypatching_python.txt
|
Q:
Python: convert free text to date
Assuming the text is typed at the same time in the same (Israeli) timezone, the following free text lines are equivalent:
Wed Sep 9 16:26:57 IDT 2009
2009-09-09 16:26:57
16:26:57
September 9th, 16:26:57
Is there a python module that would convert all these text-dates to an (identical) datetime.datetime instance?
I would like to use it in a command-line tool that would get a freetext date and time as an argument, and return the equivalent date and time in different time zones, e.g.:
~$wdate 16:00 Israel
Israel: 16:00
San Francisco: 06:00
UTC: 13:00
or:
~$wdate 18:00 SanFran
San Francisco 18:00:22
Israel: 01:00:22 (Day after)
UTC: 22:00:22
Any Ideas?
Thanks,
Udi
A:
The python-dateutil package sounds like it would be helpful. Your examples only use simple HH:MM timestamps with a (magically shortened) city identifier, but it seems able to handle more complicated formats like those earlier in the question, too.
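A quick sketch of python-dateutil on the formats above (parsing free text like a bare city name is not included):
from dateutil import parser

dt1 = parser.parse("2009-09-09 16:26:57")
dt2 = parser.parse("September 9th, 16:26:57")   # year defaults to the current one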
A:
parsedatetime seems to be a very capable module for this specific task.
A:
You could use time.strptime.
Like this:
time.strptime("2009-09-09 16:26:57", "%Y-%m-%d %H:%M:%S")
It will return a struct_time value (more info on the python doc page).
|
Python: convert free text to date
|
Assuming the text is typed at the same time in the same (Israeli) timezone, the following free text lines are equivalent:
Wed Sep 9 16:26:57 IDT 2009
2009-09-09 16:26:57
16:26:57
September 9th, 16:26:57
Is there a python module that would convert all these text-dates to an (identical) datetime.datetime instance?
I would like to use it in a command-line tool that would get a freetext date and time as an argument, and return the equivalent date and time in different time zones, e.g.:
~$wdate 16:00 Israel
Israel: 16:00
San Francisco: 06:00
UTC: 13:00
or:
~$wdate 18:00 SanFran
San Francisco 18:00:22
Israel: 01:00:22 (Day after)
UTC: 22:00:22
Any Ideas?
Thanks,
Udi
|
[
"The python-dateutil package sounds like it would be helpful. Your examples only use simple HH:MM timestamps with a (magically shortened) city identifier, but it seems able to handle more complicated formats like those earlier in the question, too.\n",
"parsedatetime seems to be a very capable module for this specific task.\n",
"You could you time.strptime\nLike this : \ntime.strptime(\"2009-09-09 16:26:57\", \"%Y-%m-%d %H:%M:%S\")\n\nIt will return a struct_time value (more info on the python doc page).\n"
] |
[
6,
3,
0
] |
[] |
[] |
[
"freetext",
"parsing",
"python",
"time"
] |
stackoverflow_0001399727_freetext_parsing_python_time.txt
|
Q:
Multi-line Pattern and tag search
I'm trying to make a pattern for tags, but the sub method just replaces the first char and 3 at the end of the line. I'm trying to replace all tags on the line, with multiline matching.
p=re.compile('<img=([^}]*)>([^}]*)</img>', re.S)
p.sub(r'[img=\1]\2[/img]','<img="test">dsad</img> <img="test2">dsad2</img>')
output:
'**[**img="test">dsad</img> <img="test2"]dsad2**[/img]**'
A:
Towards the start of your re's pattern, you're using:
<img=([^}]*)>
this will gobble up (as group 1) all characters after the leading <img=, including other tags!!!, up to the last > it can possibly gobble; * is GREEDY -- it gobbles up as much as it possibly can. Not sure why you're specifically excluding closed-braces }? Maybe you meant to exclude closed angular brackets instead (>).
For NON-greedy matching, instead of *, you need *?; with that, you'll be gobbling up as little as you can, instead of as much as you can. So, I think you mean:
p = re.compile(r'<img=([^>]*?)>(.*?)</img>', re.S)
this matches one img tag (and all tags inside it), and appears to be performing exactly the substitutions you mean.
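Re-running the substitution with the non-greedy pattern, using the same input as the question:
import re

p = re.compile(r'<img=([^>]*?)>(.*?)</img>', re.S)
print p.sub(r'[img=\1]\2[/img]', '<img="test">dsad</img> <img="test2">dsad2</img>')
# [img="test"]dsad[/img] [img="test2"]dsad2[/img]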
|
Multi-line Pattern and tag search
|
I'm trying to make a pattern for tags, but the sub method just replaces the first char and 3 at the end of the line. I'm trying to replace all tags on the line, with multiline matching.
p=re.compile('<img=([^}]*)>([^}]*)</img>', re.S)
p.sub(r'[img=\1]\2[/img]','<img="test">dsad</img> <img="test2">dsad2</img>')
output:
'**[**img="test">dsad</img> <img="test2"]dsad2**[/img]**'
|
[
"You're using towards the start of your re's pattern:\n<img=([^}]*)>\n\nthis will gobble up (as group 1) all characters after the leading <img=, including other tags!!!, up to the last > it can possibly gobble; * is GREEDY -- it gobbles up as much as it possibly can. Not sure why you're specifically excluding closed-braces }? Maybe you meant to exclude closed angular brackets instead (>).\nFor NON-greedy matching, instead of *, you need *?; with that, you'll be gobbling up as little as you can, instead of as much as you can. So, I think you mean:\np = re.compile(r'<img=([^>]*?)>(.*?)</img>', re.S)\n\nthis matches one img tag (and all tags inside it), and appears to be performing exactly the substitutions you mean.\n"
] |
[
1
] |
[] |
[] |
[
"multiline",
"python",
"regex"
] |
stackoverflow_0001400136_multiline_python_regex.txt
|
Q:
Python datatype suitable for my cache
I'm searching for a datatype for a cache. Basically I need the functionality of a dict, i.e. random access based on a key, with a limited number of entries so that when the limit is reached the oldest item gets automatically removed.
Furthermore I need to be able to store it via shelve or pickle and rely on Python 2.4.
Should I subclass dict and add a list to it? Any suggestions?
Edit:
I have not mentioned the scale, I need to keep track of already read items which consist of a signature by which they are identified and I only want to keep track of about a few hundred of them.
collections.deque seems nice, but that's a list and I need random access. So dict would seem suitable; however, I somehow need to expire items when the limit is hit, which means I need to keep track of the order in which they have been added.
A:
I think you answered the question yourself. You need to subclass a dict. And you also, of course, need to have a list of the keys, so that when the list gets too long you can purge the oldest one.
I would however possibly look into memcached or similar.
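A bare-bones sketch of that idea (FIFO eviction rather than LRU; the class name is invented, and it should run on Python 2.4):
class BoundedCache(dict):
    def __init__(self, maxsize=500):
        dict.__init__(self)
        self.maxsize = maxsize
        self._order = []                     # keys, oldest first
    def __setitem__(self, key, value):
        if key not in self:
            self._order.append(key)
            if len(self._order) > self.maxsize:
                oldest = self._order.pop(0)  # evict the oldest entry
                dict.__delitem__(self, oldest)
        dict.__setitem__(self, key, value)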
A:
You probably want an LRU cache (one where "oldest" is measured by "least recently accessed" as opposed to "least recently obtained") -- for most access patterns it performs MUCH better than a naive "oldest goes first" cache (an extremely popular item may easily be the oldest one obtained, but half the recent hits go to it -- how silly to evict it just because it was obtained a long time ago, when it's SO popular!-). Read up on caching in general at wikipedia.
LRU is tricky to program in solid and well-performing ways; I recommend you download, install and reuse lrucache instead. If it doesn't match all of your needs exactly, it's easier to tweak existing code than to start from scratch on a tricky subject.
|
Python datatype suitable for my cache
|
I'm searching for a datatype for a cache. Basically I need the functionality of a dict, i.e. random access based on a key, with a limited number of entries so that when the limit is reached the oldest item gets automatically removed.
Furthermore I need to be able to store it via shelve or pickle and rely on Python 2.4.
Should I subclass dict and add a list to it? Any suggestions?
Edit:
I have not mentioned the scale, I need to keep track of already read items which consist of a signature by which they are identified and I only want to keep track of about a few hundred of them.
collections.deque seems nice, but that's a list and I need random access. So dict would seem suitable; however, I somehow need to expire items when the limit is hit, which means I need to keep track of the order in which they have been added.
|
[
"I think you answered the question yourself. You need to subclass a dict. And you also of course needs to have a list of the keys, so when the list gets too long you can purge the oldest one.\nI would however possibly look into memcached or similar.\n",
"You probably want an LRU cache (one where \"oldest\" is measured by \"least recently accessed\" as opposed to \"least recently obtained\") -- for most access patterns it performs MUCH better than a naive \"oldest goes first\" cache (an extremely popular item may easily be the oldest one obtained, but half the recent hits go to it -- how silly to evict it just because it was obtained a long time ago, when it's SO popular!-). Read up on caching in general at wikipedia.\nLRU is tricky to program in solid and well-performing ways; I recommend you download, install and reuse lrucache instead. If it doesn't match all of your needs exactly, it's easier to tweak existing code than to start from scratch on a tricky subject. \n"
] |
[
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001399717_python.txt
|
Q:
how can I get the uuid module for python 2.4.3
I have an older version of Python on the server I'm using and cannot upgrade it. Is there a way to get the uuid module?
A:
Get it from pypi -- just download and install, it will work with Python 2.3 or better.
Edit: to install, first unpack the .tar.gz you just downloaded, i.e., from a terminal, cd to the directory you downloaded it to, then tar xzvf uuid-1.30.tar.gz, then cd uuid-1.30, and sudo python setup.py install (the sudo may or may not be needed depending on how your Linux system is set up; if it is needed, it will probably ask you for your password unless you've done another sudo very recently).
A:
To continue where Alex left off..
Download the uuid-1.30.tar.gz from Alex's pypi link.
unzip and untar.
place the uuid.py to your application's python path (e.g., same dir with your own .py files)
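Once it's on the path, usage matches the uuid module that later Python versions bundle:
import uuid

print uuid.uuid4()   # a freshly generated random UUID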
|
how can I get the uuid module for python 2.4.3
|
I have an older version of Python on the server I'm using and cannot upgrade it. Is there a way to get the uuid module?
|
[
"Get it from pypi -- just download and install, it will work with Python 2.3 or better.\nEdit: to install, first unpack the .tar.gz you just downloaded, i.e., from a terminal, cd to the directory you downloaded it to, then tar xzvf uuid-1.30.tar.gz, then cd uuid-1.30, and sudo python setup.py install (the sudo may or may not be needed depending on how your Linux system is set up; if it is needed, it will probably ask you for your password unless you've done another sudo very recently).\n",
"To continue where Alex left off..\n\nDownload the uuid-1.30.tar.gz from Alex's pypi link.\nunzip and untar.\nplace the uuid.py to your application's python path (e.g., same dir with your own .py files)\n\n"
] |
[
5,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001213328_python.txt
|
Q:
Are Python extensions produced by Cython/Pyrex threadsafe?
If not, is there a way I can guarantee thread safety by programming a certain way?
To clarify, when talking about "threadsafe,' I mean Python threads, not OS-level threads.
A:
It all depends on the interaction between your Cython code and Python's GIL, as documented in detail here. If you don't do anything special, Cython-generated code will respect the GIL (as will a C-coded extension that doesn't use the GIL-releasing macros); that makes such code "as threadsafe as Python code" -- which isn't much, but is easier to handle than completely free-threading code (you still need to architect multi-threaded cooperation and synchronization, ideally with Queue instances but possibly with locking &c).
Code that has relinquished the GIL and not yet acquired it back MUST NOT in any way interact with the Python runtime and the objects that the Python runtime uses -- this goes for Cython just as well as for C-coded extensions. The upside of it is of course that such code can run on a separate core (until it needs to sync up or in any way communicate with the Python runtime again, of course).
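For illustration, here is a minimal Cython sketch of the GIL-releasing pattern described above (the function names are invented; assumes a Cython version with nogil support):
cdef int heavy_sum(int n) nogil:
    # pure C code: no Python objects are touched, so no GIL is needed
    cdef int i
    cdef int total = 0
    for i in range(n):
        total += i
    return total

def compute(int n):
    cdef int result
    with nogil:        # GIL released: other Python threads may run in parallel
        result = heavy_sum(n)
    return result      # the GIL is re-acquired automatically on block exit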
A:
Python's global interpreter lock means that only one thread can be active in the interpreter at any one time. However, once control is passed out to a C extension another thread can be active within the interpreter. Multiple threads can be created, and nothing prevents a thread from being interrupted in the middle of a critical section. Non-thread-safe code can be implemented within the interpreter, so nothing about code running within the interpreter is inherently thread safe. Code in C or Pyrex modules can still modify data structures that are visible to Python code. Native code can, of course, also have threading issues with native data structures.
You can't guarantee thread safety beyond using appropriate design and synchronisation - the GIL on the python interpreter doesn't materially change this.
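As a concrete sketch of the Queue-based coordination both answers point to (pure Python 2; names invented):
import threading, Queue

q = Queue.Queue()

def worker():
    while True:
        item = q.get()
        if item is None:   # sentinel: time to shut down
            break
        # ... do the work (possibly a call into the extension) ...
        q.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(10):
    q.put(i)
q.join()        # block until every queued item has been processed
q.put(None)     # tell the worker to exit
t.join()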
|
Are Python extensions produced by Cython/Pyrex threadsafe?
|
If not, is there a way I can guarantee thread safety by programming a certain way?
To clarify, when talking about "threadsafe," I mean Python threads, not OS-level threads.
|
[
"It all depends on the interaction between your Cython code and Python's GIL, as documented in detail here. If you don't do anything special, Cython-generated code will respect the GIL (as will a C-coded extension that doesn't use the GIL-releasing macros); that makes such code \"as threadsafe as Python code\" -- which isn't much, but is easier to handle than completely free-threading code (you still need to architect multi-threaded cooperation and synchronization, ideally with Queue instances but possibly with locking &c).\nCode that has relinquished the GIL and not yet acquired it back MUST NOT in any way interact with the Python runtime and the objects that the Python runtime uses -- this goes for Cython just as well as for C-coded extensions. The upside of it is of course that such code can run on a separate core (until it needs to sync up or in any way communicate with the Python runtime again, of course).\n",
"Python's global interpreter lock means that only one thread can be active in the interpreter at any one time. However, once control is passed out to a C extension another thread can be active within the interpreter. Multiple threads can be created, and nothing prevents a thread from being interrupted within the middle of a critical section. N\non thread-safe code can be implemented within the interpreter, so nothing about code running within the interpreter is inherently thread safe. Code in C or Pyrex modules can still modify data structures that are visible to python code. Native code can, of course, also have threading issues with native data structures.\nYou can't guarantee thread safety beyond using appropriate design and synchronisation - the GIL on the python interpreter doesn't materially change this.\n"
] |
[
5,
2
] |
[] |
[] |
[
"cython",
"pyrex",
"python",
"thread_safety"
] |
stackoverflow_0001397977_cython_pyrex_python_thread_safety.txt
|
Q:
Difference between returning modified class and using type()
I guess it's more of a python question than a django one, but I couldn't replicate this behavior anywhere else, so I'll use exact code that doesn't work as expected.
I was working on some dynamic forms in django, when I found this factory function snippet:
def get_employee_form(employee):
    """Return the form for a specific Board."""
    employee_fields = EmployeeFieldModel.objects.filter(employee = employee).order_by ('order')
    class EmployeeForm(forms.Form):
        def __init__(self, *args, **kwargs):
            forms.Form.__init__(self, *args, **kwargs)
            self.employee = employee
        def save(self):
            "Do the save"
    for field in employee_fields:
        setattr(EmployeeForm, field.name, copy(type_mapping[field.type]))
    return type('EmployeeForm', (forms.Form, ), dict(EmployeeForm.__dict__))
[from :http://uswaretech.com/blog/2008/10/dynamic-forms-with-django/]
And there's one thing that I don't understand: why doesn't returning the modified EmployeeForm do the trick?
I mean something like this:
def get_employee_form(employee):
    #[...]same function body as before
    for field in employee_fields:
        setattr(EmployeeForm, field.name, copy(type_mapping[field.type]))
    return EmployeeForm
When I tried returning the modified class, Django ignored my additional fields, but returning type()'s result works perfectly.
A:
Lennart's hypothesis is correct: a metaclass is indeed the culprit. No need to guess, just look at the sources: the metaclass is DeclarativeFieldsMetaclass currently at line 53 of that file, and adds attributes base_fields and possibly media based on what attributes the class has at creation time. At line 329 ff you see:
class Form(BaseForm):
    "A collection of Fields, plus their associated data."
    # This is a separate class from BaseForm in order to abstract the way
    # self.fields is specified. This class (Form) is the one that does the
    # fancy metaclass stuff purely for the semantic sugar -- it allows one
    # to define a form using declarative syntax.
    # BaseForm itself has no way of designating self.fields.
    __metaclass__ = DeclarativeFieldsMetaclass
This implies there's some fragility in creating a new class with base type -- the supplied black magic might or might not carry through! A more solid approach is to use the type of EmployeeForm which will pick up any metaclass that may be involved -- i.e.:
return type(EmployeeForm)('EmployeeForm', (forms.Form, ), EmployeeForm.__dict__)
(no need to copy that __dict__, btw). The difference is subtle but important: rather than using directly type's 3-args form, we use the 1-arg form to pick up the type (i.e., the metaclass) of the form class, then call THAT metaclass in the 3-args form.
Blackly magicallish indeed, but then that's the downside of frameworks which do such use of "fancy metaclass stuff purely for the semantic sugar" &c: you're in clover as long as you want to do exactly what the framework supports, but to get out of that support even a little bit may require countervailing wizardry (which goes some way towards explaining why often I'd rather use a lightweight, transparent setup, such as werkzeug, rather than a framework that ladles magic upon me like Rails or Django do: my mastery of deep black magic does NOT mean I'm happy to have to USE it in plain production code... but, that's another discussion;-).
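A stripped-down illustration of the difference, with invented names (this is not Django's metaclass, just the same mechanism):
class Meta(type):
    def __new__(mcs, name, bases, attrs):
        cls = type.__new__(mcs, name, bases, attrs)
        # record what the class body contained at creation time
        cls.collected = [k for k in attrs if not k.startswith('_')]
        return cls

class Base(object):
    __metaclass__ = Meta

Base.extra = 1                        # plain setattr: Meta never sees it
assert 'extra' not in Base.collected

attrs = dict((k, v) for (k, v) in Base.__dict__.items()
             if k not in ('__dict__', '__weakref__'))
Rebuilt = type(Base)('Rebuilt', (Base,), attrs)
assert 'extra' in Rebuilt.collected   # re-creating the class re-runs Meta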
A:
I just tried this with straight non-django classes and it worked. So it's not a Python issue, but a Django issue.
And in this case (although I'm not 100% sure), it's a question of what the Form class does during class creation. I think it has a meta class, and that this meta class will finalize the form initialization during class creation. That means that any fields you add after class creation will be ignored.
Therefore you need to create a new class, as is done with the type() statement, so that the class creation code of the meta class is involved, now with the new fields.
A:
It's worth noting that this code snippet is a very poor means to the desired end, and involves a common misunderstanding about Django Form objects - that a Form object should map one-to-one with an HTML form. The correct way to do something like this (which doesn't require messing with any metaclass magic) is to use multiple Form objects and an inline formset.
Or, if for some odd reason you really want to keep things in a single Form object, just manipulate self.fields in the Form's __init__ method.
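A minimal sketch of that second suggestion (the field-lookup helper here is hypothetical):
class EmployeeForm(forms.Form):
    def __init__(self, employee, *args, **kwargs):
        super(EmployeeForm, self).__init__(*args, **kwargs)
        # per-instance fields: no metaclass tricks required
        for field in fields_for_employee(employee):   # hypothetical helper
            self.fields[field.name] = copy(type_mapping[field.type])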
|
Difference between returning modified class and using type()
|
I guess it's more of a python question than a django one, but I couldn't replicate this behavior anywhere else, so I'll use exact code that doesn't work as expected.
I was working on some dynamic forms in django, when I found this factory function snippet:
def get_employee_form(employee):
"""Return the form for a specific Board."""
employee_fields = EmployeeFieldModel.objects.filter(employee = employee).order_by ('order')
class EmployeeForm(forms.Form):
def __init__(self, *args, **kwargs):
forms.Form.__init__(self, *args, **kwargs)
self.employee = employee
def save(self):
"Do the save"
for field in employee_fields:
setattr(EmployeeForm, field.name, copy(type_mapping[field.type]))
return type('EmployeeForm', (forms.Form, ), dict(EmployeeForm.__dict__))
[from :http://uswaretech.com/blog/2008/10/dynamic-forms-with-django/]
And there's one thing that I don't understand: why doesn't returning the modified EmployeeForm do the trick?
I mean something like this:
def get_employee_form(employee):
#[...]same function body as before
for field in employee_fields:
setattr(EmployeeForm, field.name, copy(type_mapping[field.type]))
return EmployeeForm
When I tried returning the modified class, Django ignored my additional fields, but returning type()'s result works perfectly.
|
[
"Lennart's hypothesis is correct: a metaclass is indeed the culprit. No need to guess, just look at the sources: the metaclass is DeclarativeFieldsMetaclass currently at line 53 of that file, and adds attributes base_fields and possibly media based on what attributes the class has at creation time. At line 329 ff you see:\nclass Form(BaseForm):\n \"A collection of Fields, plus their associated data.\"\n # This is a separate class from BaseForm in order to abstract the way\n # self.fields is specified. This class (Form) is the one that does the\n # fancy metaclass stuff purely for the semantic sugar -- it allows one\n # to define a form using declarative syntax.\n # BaseForm itself has no way of designating self.fields.\n __metaclass__ = DeclarativeFieldsMetaclass\n\nThis implies there's some fragility in creating a new class with base type -- the supplied black magic might or might not carry through! A more solid approach is to use the type of EmployeeForm which will pick up any metaclass that may be involved -- i.e.:\nreturn type(EmployeeForm)('EmployeeForm', (forms.Form, ), EmployeeForm.__dict__)\n\n(no need to copy that __dict__, btw). The difference is subtle but important: rather than using directly type's 3-args form, we use the 1-arg form to pick up the type (i.e., the metaclass) of the form class, then call THAT metaclass in the 3-args form.\nBlackly magicallish indeed, but then that's the downside of frameworks which do such use of \"fancy metaclass stuff purely for the semantic sugar\" &c: you're in clover as long as you want to do exactly what the framework supports, but to get out of that support even a little bit may require countervailing wizardry (which goes some way towards explaining why often I'd rather use a lightweight, transparent setup, such as werkzeug, rather than a framework that ladles magic upon me like Rails or Django do: my mastery of deep black magic does NOT mean I'm happy to have to USE it in plain production code... but, that's another discussion;-).\n",
"I just tried this with straight non-django classes and it worked. So it's not a Python issue, but a Django issue.\nAnd in this case (although I'm not 100% sure), it's a question of what the Form class does during class creation. I think it has a meta class, and that this meta class will finalize the form initialization during class creation. That means that any fields you add after class creation will be ignored.\nTherefore you need to create a new class, as is done with the type() statement, so that the class creation code of the meta class is involved, now with the new fields.\n",
"It's worth noting that this code snippet is a very poor means to the desired end, and involves a common misunderstanding about Django Form objects - that a Form object should map one-to-one with an HTML form. The correct way to do something like this (which doesn't require messing with any metaclass magic) is to use multiple Form objects and an inline formset.\nOr, if for some odd reason you really want to keep things in a single Form object, just manipulate self.fields in the Form's __init__ method.\n"
] |
[
5,
3,
1
] |
[] |
[] |
[
"class",
"django",
"python",
"types"
] |
stackoverflow_0001251294_class_django_python_types.txt
|
Q:
Dictionary of tags in declarative SQLAlchemy?
I am working on a quite large code base that has been implemented using sqlalchemy.ext.declarative, and I need to add a dict-like property to one of the classes. What I need is the same as in this question, but in a declarative fashion. Can anyone with more knowledge in SQLAlchemy give me an example?
Thanks in advance...
A:
Declarative is just another way of defining things. You end up with virtually the same environment as if you used separate mappings.
Since I answered the other question, I'll try this one as well. Hope it gives more upvotes ;)
Well, first we define the classes
from sqlalchemy import Column, Integer, String, Table, create_engine
from sqlalchemy import orm, MetaData, Column, ForeignKey
from sqlalchemy.orm import relation, mapper, sessionmaker
from sqlalchemy.orm.collections import column_mapped_collection
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///:memory:', echo=True)
Base = declarative_base(bind=engine)

class Note(Base):
    __tablename__ = 'notes'

    id_item = Column(Integer, ForeignKey('items.id'), primary_key=True)
    name = Column(String(20), primary_key=True)
    value = Column(String(100))

    def __init__(self, name, value):
        self.name = name
        self.value = value

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    name = Column(String(20))
    description = Column(String(100))
    _notesdict = relation(Note,
                          collection_class=column_mapped_collection(Note.name))
    notes = association_proxy('_notesdict', 'value', creator=Note)

    def __init__(self, name, description=''):
        self.name = name
        self.description = description

Base.metadata.create_all()
Now let's make a test:
Session = sessionmaker(bind=engine)
s = Session()
i = Item('ball', 'A round full ball')
i.notes['color'] = 'orange'
i.notes['size'] = 'big'
i.notes['data'] = 'none'
s.add(i)
s.commit()
print i.notes
I get:
{u'color': u'orange', u'data': u'none', u'size': u'big'}
Now let's check the notes table...
for note in s.query(Note):
    print note.id_item, note.name, note.value
I get:
1 color orange
1 data none
1 size big
It works!! :D
|
Dictionary of tags in declarative SQLAlchemy?
|
I am working on a quite large code base that has been implemented using sqlalchemy.ext.declarative, and I need to add a dict-like property to one of the classes. What I need is the same as in this question, but in a declarative fashion. Can anyone with more knowledge in SQLAlchemy give me an example?
Thanks in advance...
|
[
"Declarative is just another way of defining things. Virtually you end up with the exact same environment than if you used separated mapping.\nSince I answered the other question, I'll try this one as well. Hope it gives more upvotes ;)\nWell, first we define the classes\nfrom sqlalchemy import Column, Integer, String, Table, create_engine\nfrom sqlalchemy import orm, MetaData, Column, ForeignKey\nfrom sqlalchemy.orm import relation, mapper, sessionmaker\nfrom sqlalchemy.orm.collections import column_mapped_collection\nfrom sqlalchemy.ext.associationproxy import association_proxy\nfrom sqlalchemy.ext.declarative import declarative_base\n\nengine = create_engine('sqlite:///:memory:', echo=True)\nBase = declarative_base(bind=engine)\n\nclass Note(Base):\n __tablename__ = 'notes'\n\n id_item = Column(Integer, ForeignKey('items.id'), primary_key=True)\n name = Column(String(20), primary_key=True)\n value = Column(String(100))\n\n def __init__(self, name, value):\n self.name = name\n self.value = value \n\nclass Item(Base):\n __tablename__ = 'items'\n id = Column(Integer, primary_key=True)\n name = Column(String(20))\n description = Column(String(100))\n _notesdict = relation(Note, \n collection_class=column_mapped_collection(Note.name))\n notes = association_proxy('_notesdict', 'value', creator=Note)\n\n def __init__(self, name, description=''):\n self.name = name\n self.description = description\n\nBase.metadata.create_all()\n\nNow let's make a test:\nSession = sessionmaker(bind=engine)\ns = Session()\n\ni = Item('ball', 'A round full ball')\ni.notes['color'] = 'orange'\ni.notes['size'] = 'big'\ni.notes['data'] = 'none'\n\ns.add(i)\ns.commit()\nprint i.notes\n\nI get:\n{u'color': u'orange', u'data': u'none', u'size': u'big'}\n\nNow let's check the notes table...\nfor note in s.query(Note):\n print note.id_item, note.name, note.value\n\nI get:\n1 color orange\n1 data none\n1 size big\n\nIt works!! :D\n"
] |
[
14
] |
[] |
[] |
[
"declarative",
"python",
"sqlalchemy"
] |
stackoverflow_0001400537_declarative_python_sqlalchemy.txt
|
Q:
CRC32 to make short URL for web
I am trying to understand crc32 in order to generate a unique URL for a web page.
If we use crc32, what is the maximum number of URLs that can be used so that we avoid duplicates?
What would be the approximate string length to keep the checksum within 2^32?
When I tried a UUID for a URL and converted the UUID bytes to base64, I could reduce it to 22 chars. I wonder if I can reduce it still further.
Mostly I want to convert the URL (maximum 1024 chars) to a short id.
A:
There is no such number as the "maximum number of URLs that can be used so that we avoid duplicates" for CRC32.
The problem is that CRC32 can produce duplicates, and it's not a function of how many values you throw at it, it's a function of what those values look like.
So you might have a collision on the second url, if you're unlucky.
You should not base your algorithm on producing a unique hash, instead produce a unique value for each url manually.
A:
If you're already storing the full URL in a database table, an integer ID is pretty short, and can be made shorter by converting it to base 16, 64, or 85. If you can use a UUID, you can use an integer, and you may as well, since it's shorter and I don't see what benefit a UUID would provide in your lookup table.
A:
The right way to make a short URL is to store the full one in the database and publish something that maps to the row index. A compact way is to use the Base64 of the row ID, for example. Or you could use a UID for the primary key and show that.
Do not use a checksum, because it's too small and very likely to conflict. A cryptographic hash is larger and less likely, but it's still not the right way to go.
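A minimal sketch of the row-id approach (Python 2; the helper name is invented):
import base64, struct

def short_id(row_id):
    # pack the integer id big-endian and drop the leading zero bytes
    packed = struct.pack('>Q', row_id).lstrip('\x00') or '\x00'
    return base64.urlsafe_b64encode(packed).rstrip('=')

print short_id(12345)   # 'MDk' -- three characters instead of a full URL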
A:
CRC32 means cyclic redundancy check with 32 bits, where an arbitrary number of bits is summed up into a 32-bit checksum. Checksum functions are not injective: multiple input values can map to the same output value, so you cannot invert the function.
A:
No. Even if you use md5 or any other checksum, the URL CAN BE a duplicate; it all depends on your luck.
So don't base a unique URL on those checksums.
|
CRC32 to make short URL for web
|
I am trying to understand crc32 in order to generate a unique URL for a web page.
If we use crc32, what is the maximum number of URLs that can be used so that we avoid duplicates?
What would be the approximate string length to keep the checksum within 2^32?
When I tried a UUID for a URL and converted the UUID bytes to base64, I could reduce it to 22 chars. I wonder if I can reduce it still further.
Mostly I want to convert the URL (maximum 1024 chars) to a short id.
|
[
"There is no such number as the \"maximum number of urls can be used so that we can avoid duplicates\" for CRC32.\nThe problem is that CRC32 can produce duplicates, and it's not a function of how many values you throw at it, it's a function of what those values look like.\nSo you might have a collision on the second url, if you're unlucky.\nYou should not base your algorithm on producing a unique hash, instead produce a unique value for each url manually.\n",
"If you're already storing the full URL in a database table, an integer ID is pretty short, and can be made shorter by converting it to base 16, 64, or 85. If you can use a UUID, you can use an integer, and you may as well, since it's shorter and I don't see what benefit a UUID would provide in your lookup table.\n",
"The right way to make a short URL is to store the full one in the database and publish something that maps to the row index. A compact way is to use the Base64 of the row ID, for example. Or you could use a UID for the primary key and show that.\nDo not use a checksum, because it's too small and very likely to conflict. A cryptographic hash is larger and less likely, but it's still not the right way to go.\n",
"CRC32 means cyclic redundancy check with 32 bits where any arbitrary amount of bits is summed up to a 32 bit check sum. And check sum functions are surjective, that means multiple input values have the same output value. So you cannot inverse the function.\n",
"No, even you use md5, or any other check sum, the URL CAN BE duplicate, it all depends on your luck.\nSo don't make an unique url base on those check sum\n"
] |
[
7,
4,
2,
1,
0
] |
[
"The quickest (and perhaps best!) way to solve things may be to simply use a hash of the local path and query of a given URI, as follows:\nusing System;\n\nnamespace HashSample\n{\n class Program\n {\n static void Main(string[] args)\n {\n Uri uri = new Uri(\n \"http://host.com/folder/file.jpg?code=ABC123\");\n\n string hash = GetPathAndQueryHash(uri);\n\n Console.WriteLine(hash);\n }\n\n public static string GetPathAndQueryHash(Uri uri)\n {\n return uri.PathAndQuery.GetHashCode().ToString();\n }\n }\n}\n\nThe above presumes that the URI scheme and host remain the same. If not GetHashCode will work with any string.\nFor a great discussion on CRC32 Hash Collision visit: http://episteme.arstechnica.com/eve/forums/a/tpc/f/6330927813/m/821008399831\n"
] |
[
-1
] |
[
"c#",
"crc32",
"python",
"short_url",
"url"
] |
stackoverflow_0001401218_c#_crc32_python_short_url_url.txt
|
Q:
Return the number of affected rows from a MERGE with cx_oracle
How can you get the number of affected rows from executing a "MERGE INTO..." sql command within CX_Oracle?
Whenever I execute the MERGE SQL on cx_oracle, I get a cursor.rowcount of -1. Is there a way to get the number of rows affected by the merge?
A:
Since cx_oracle follows the python DBAPI specification (I presume), this is expected 'behaviour'. The exact same problem was discussed here on stackoverflow before.
Some more links with possible solutions:
http://www.oracle-developer.net/display.php?id=220
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:122741200346595110
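The workaround those links describe, sketched for cx_Oracle (table and column names are placeholders; assumes an existing connection's cursor): wrap the MERGE in a PL/SQL block and return SQL%ROWCOUNT through an OUT bind variable:
merged = cursor.var(cx_Oracle.NUMBER)
cursor.execute("""
    BEGIN
        MERGE INTO target_tab t
        USING source_tab s ON (t.id = s.id)
        WHEN MATCHED THEN UPDATE SET t.val = s.val
        WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
        :merged := SQL%ROWCOUNT;
    END;""", merged=merged)
print merged.getvalue()   # rows touched by the MERGE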
|
Return the number of affected rows from a MERGE with cx_oracle
|
How can you get the number of affected rows from executing a "MERGE INTO..." sql command within CX_Oracle?
Whenever I execute the MERGE SQL on cx_oracle, I get a cursor.rowcount of -1. Is there a way to get the number of rows affected by the merge?
|
[
"Since cx_oracle follows the python DBAPI specification (I presume), this is expected 'behaviour'. The exact same problem was discussed here on stackoverflow before.\nSome more links with possible solutions:\n\nhttp://www.oracle-developer.net/display.php?id=220\nhttp://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:122741200346595110\n\n"
] |
[
1
] |
[] |
[] |
[
"cx_oracle",
"oracle",
"python"
] |
stackoverflow_0001401328_cx_oracle_oracle_python.txt
|
Q:
How do I use the PlainTextEdit or TextEdit widget for text output and input?
How do I use the PlainTextEdit or TextEdit widget for text output and input? I'm interested in PyQt4.
A:
PlainTextEdit
TextEdit
A:
You need to be more specific, but anyway, the following code will create a dialog with a QPlainTextEdit that shows the contents of the input file:
from PyQt4 import QtCore, QtGui
def read_file(file):
    """
    Return all contents of file.
    """
    result = ""
    with open(file) as f:
        for line in f:
            result += line
    return result

class ExampleDialog(QtGui.QDialog):
    def __init__(self, parent, file):
        QtGui.QDialog.__init__(self, parent)
        # create main layout of the dialog
        layout = QtGui.QVBoxLayout()
        layout.addWidget(QtGui.QLabel(self.tr("Contents of file:")))
        edit = QtGui.QPlainTextEdit(self)
        # read the file and append the content
        edit.appendPlainText(read_file(file))
        layout.addWidget(edit)
        self.setLayout(layout)

app = QtGui.QApplication([])  # a QApplication must exist before any widget is created
file = "hello.txt"
dialog = ExampleDialog(None, file)
dialog.exec_()
The above is just an example with a QDialog, but should be more than enough for you to get started.
Hope it helps!
|
How do I use the PlainTextEdit or TextEdit widget for text output and input?
|
How do I use the PlainTextEdit or TextEdit widget for text output and input? I'm interested in PyQt4.
|
[
"PlainTextEdit\nTextEdit\n",
"You need to be more specific, but anyways, the following code will create a dialog with a TextEdit that will show the input file:\n\nfrom PyQt4 import QtCore, QtGui\ndef read_file(file):\n \"\"\"\n Returns all contents of file\n \"\"\"\n result = \"\"\n with open(file) as f:\n for line in f:\n result+= line\n return result\n\nclass ExampleDialog(QtGui.QDialog):\n\n def __init__(Self, parent, file):\n QtGui.QDialog.__init__(self, parent)\n # create main layout of the dialog\n layout = QtGui.QVBoxLayout()\n layout.addWidget(QLabel(self.tr(\"Contents of file:\"))\n edit = QtGui.QPlainTextEdit(self)\n # read the file and get the content\n edit.appendPlainText(read_file(file))\n layout.addWidget(edit)\n self.setLayout(layout)\n\nfile = \"hello.txt\"\ndialog = ExampleDialog(None, file)\ndialog.exec_()\n\n\nThe above is just an example with a QDialog, but should be more than enough for you to get started. \nHope it helps!\n"
] |
[
3,
0
] |
[] |
[] |
[
"python",
"qt",
"user_interface"
] |
stackoverflow_0001396339_python_qt_user_interface.txt
|
Q:
Performance lost when open a db multiple times in BerkeleyDB
I'm using BerkeleyDB to develop a small app. And I have a question about opening a database multiple times in BDB.
I have a large set of text ( corpus ), and I want to load a part of it to do the calculation. I have two pseudo-code (mix with python) here
@1
def getCorpus(token):
    DB.open()
    DB.get(token)
    DB.close()
@2
#open and wait
def openCorpus():
    DB.open()
#close database
def closeCorpus():
    DB.close()
def getCorpus(token):
    DB.get(token)
In the second example, I'll open the db before the calculation, load a token on each loop iteration, and then close the db.
In the first example, each time the loop asks for a token, I'll open, get and then close the db.
Is there any performance loss?
I also note that I'm using a DBEnv to manage the database
A:
If you aren't caching the opened file you will always lose performance because:
you call open() and close() multiple times which are quite expensive,
you lose all potential buffers (both system buffers and bdb internal buffers).
But I wouldn't care too much about the performance before the code is written.
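A minimal sketch of caching the handle (assumes the bsddb3 bindings and an already-configured DBEnv):
from bsddb3 import db as bdb

class Corpus(object):
    def __init__(self, env, filename):
        self.handle = bdb.DB(env)
        # open once; positional args: filename, dbname, dbtype, flags
        self.handle.open(filename, None, bdb.DB_HASH, bdb.DB_RDONLY)

    def get(self, token):
        return self.handle.get(token)   # reuses the open handle and its buffers

    def close(self):
        self.handle.close()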
|
Performance lost when open a db multiple times in BerkeleyDB
|
I'm using BerkeleyDB to develop a small app. And I have a question about opening a database multiple times in BDB.
I have a large set of text ( corpus ), and I want to load a part of it to do the calculation. I have two pseudo-code (mix with python) here
@1
def getCorpus(token):
DB.open()
DB.get(token)
DB.close()
@2
#open and wait
def openCorpus():
DB.open()
#close database
def closeCorpus():
DB.close()
def getCorpus(token):
DB.get(token)
In the second example, I'll open the db before the calculation, load a token on each loop iteration, and then close the db.
In the first example, each time the loop asks for a token, I'll open, get and then close the db.
Is there any performance loss?
I also note that I'm using a DBEnv to manage the database
|
[
"If you aren't caching the opened file you will always get performance lost because:\n\nyou call open() and close() multiple times which are quite expensive,\nyou lose all potential buffers (both system buffers and bdb internal buffers).\n\nBut I wouldn't care too much about the performance before the code is written.\n"
] |
[
3
] |
[] |
[] |
[
"berkeley_db",
"database",
"performance",
"python"
] |
stackoverflow_0001401497_berkeley_db_database_performance_python.txt
|
Q:
Python vs Groovy vs Ruby? (based on criteria listed in question)
Considering the criteria listed below, which of Python, Groovy or Ruby would you use?
Criteria (Importance out of 10, 10 being most important)
Richness of API/libraries available (eg. maths, plotting, networking) (9)
Ability to embed in desktop (java/c++) applications (8)
Ease of deployment (8)
Ability to interface with DLLs/Shared Libraries (7)
Ability to generate GUIs (7)
Community/User support (6)
Portability (6)
Database manipulation (3)
Language/Semantics (2)
A:
I think it's going to be difficult to get an objective comparison. I personally prefer Python. To address one of your criteria, Python was designed from the start to be an embeddable language. It has a very rich C API, and the interpreter is modularized to make it easy to call from C. If Java is your host environment, you should look at Jython, an implementation of Python inside the Java environment (VM and libs).
A:
Having worked with all 3 of them, this is what I can say:
Python
has very mature libraries
libraries are documented
documentation can be accessed from your debugger/shell at runtime through the docstrings
you can develop code without an IDE
Ruby
has some great libraries ( even though some are badly documented )
Ruby's instrospection mechanisms are great. They make writing code pretty easy ( even if documentation is not available )
you can develop code without an IDE
Groovy
you can benefit from everything Java has to offer
syntax is somewhat inspired from Ruby
it's hard to write code without an IDE. You have no way to debug stuff from your console (this is something you can easily do in Python/Ruby) and the available Groovy plugins have a lot of catching up to do. I wrote some apps using Groovy and as they got bigger I regretted not going with Ruby/Python (debugging would have been WAY easier). If you'll only develop from an IDE, Groovy's a cool language.
A:
Just to muddy the waters...
Groovy gives you access to Java. Java has an extremely rich set of APIs/libraries, applications, etc.
Groovy is embeddable, although easiest in Java.
DLLs/Libraries (if you're talking about non-Groovy/Java) may be somewhat problematic, although there are ways and some APIs to help.
I've done some Python programming, but being more familiar with Java, Groovy comes a lot easier to me.
A:
Groovy? I'm just picking it up; try this (inside the groovyconsole):
File.metaClass.invokeMethod = { String name, args ->
    System.out.print ("Call to $name intercepted...");
    File.metaClass.getMetaMethod(name, args).invoke(delegate, args);
}

new File("c:/temp").eachFile{
    if (it.isFile()) println it.canonicalPath
}
The first block is AOP: all calls to any method of a File object will be intercepted, no additional tools required. This is executed against an existing Java class dynamically.
In the second block, you remove the 'f' closure parameter. Being just one parameter, it defaults to the built in "it" variable available to the closure context.
Here is what you get:
"Call to isFile intercepted...C:\temp\img.jpg"
etc.
A:
Try Groovy... it has all the features you need. You can use existing Java libs without any modification to their classes.
Basically, Groovy is Java++; it is more dynamic and fun to learn (just like Ruby).
I don't like Ruby or Python syntax, so I'll put them behind. Groovy has C/C++-like syntax, so I like it. lol :)
A:
Python has all nine criteria. It scores a 56.
I'm sure Ruby has everything Python has. It seems to have fewer libraries. So it scores a 51.
I don't know if Groovy has every feature.
Since Python is 56 and Ruby is a 51, Python just barely edges out Ruby.
However, I think this kind of decision can still boil down to some subjective issues outside these nine criteria.
A:
From your critera, I'd pick JRuby:
Richness of API/libraries available (eg. maths, plotting, networking) (9)
Everything the JVM has access to, which is a lot
Ability to embed in desktop (java/c++) applications (8)
Excellent Monkeybars framework, which lets you design a swing GUI in your GUI designer, and then wire it up using clean ruby code
Ease of deployment (8)
Rawr can package your app as an executable jar
Ability to interface with DLLs/Shared Libraries (7)
Java shared libraries easily, C ones via jna + libffi
Ability to generate GUIs (7)
Swing just works. Not sure how easy it is to use QtJambi, but it's definitely possible.
Community/User support (6)
Lots. Ruby has an excellent community.
Portability (6)
Everywhere the JVM works
Database manipulation (3)
All the ruby database libraries and all the java ones
Language/Semantics (2)
Here's where ruby takes the definite lead over groovy and python. The language has had some really beautiful design decisions taken early on, which shows up in the consistency and power of the standard library. Blocks, in particular, make it a joy to use.
A:
This sort of adding-up-scores-by-features is not a good way to choose a programming language. You'd be better off choosing whichever you know the best. If you don't know any of them, try them out for a little while. If you have a really specific project in mind, then maybe some programming languages would be better, but if you just have general preferences you will never come to a consensus.
That said, Python is pretty flexible, it's the most popular on your list so the easiest to solve whatever sorts of problems you have by searching, so I'd recommend Python.
A:
Perl? Yikes.
As someone has observed, Perl is like a big explosion in a punctuation factory. Its terseness is not an advantage if the resultant code is not self-documenting.
Have used Groovy for some utility tasks, easy to get going. Full access to Java libraries, plus some cool addtions to it, like listing the files in a directory using a closure:
// process all files printing out full name (. and .. auto excluded)
new File(basedir).eachFile{ f->
    if (f.isFile()) println f.canonicalPath
}
A:
I know it's not on your list, but at least look at perl.
Richness of APIs/libraries to sink a ship.
Runs on more systems than most people realise exist.
Works well with Binary libraries.
Has a huge community.
Portability, See above.
Database manipulation: more ways to do it. ( Pick your favorite module )
And one of the most expressive/terse languages around.
|
Python vs Groovy vs Ruby? (based on criteria listed in question)
|
Considering the criteria listed below, which of Python, Groovy or Ruby would you use?
Criteria (Importance out of 10, 10 being most important)
Richness of API/libraries available (eg. maths, plotting, networking) (9)
Ability to embed in desktop (java/c++) applications (8)
Ease of deployment (8)
Ability to interface with DLLs/Shared Libraries (7)
Ability to generate GUIs (7)
Community/User support (6)
Portability (6)
Database manipulation (3)
Language/Semantics (2)
|
[
"I think it's going to be difficult to get an objective comparison. I personally prefer Python. To address one of your criteria, Python was designed from the start to be an embeddable language. It has a very rich C API, and the interpreter is modularized to make it easy to call from C. If Java is your host environment, you should look at Jython, an implementation of Python inside the Java environment (VM and libs).\n",
"Having worked with all 3 of them, this is what I can say:\n\nPython \n\nhas very mature libraries\nlibraries are documented\ndocumentation can be accessed from your debugger/shell at runtime through the docstrings\nyou can develop code without an IDE\n\nRuby\n\nhas some great libraries ( even though some are badly documented )\nRuby's instrospection mechanisms are great. They make writing code pretty easy ( even if documentation is not available )\nyou can develop code without an IDE\n\nGroovy\n\nyou can benefit from everything Java has to offer\nsyntax is somewhat inspired from Ruby\nit's hard to write code without an IDE. You have no way to debug stuff from your console ( this is something you can easily do in Python/Ruby ) and the available Groovy plugins have a lot of catching up to do. I wrote some apps using Groovy and as they get bigger I regret not going with Ruby/Python ( debugging would have been WAY more easier ). If you'll only develop from an IDE, Groovy's a cool language.\n\n\n",
"Just to muddy the waters...\nGroovy give you access to Java. Java has an extremely rich set of APIs/Libraries, applications, etc.\nGroovy is embeddable, although easiest in Java.\nDLLs/Libraries (if you're talking about non-Groovy/Java) may be somewhat problematic, although there are ways and some APIs to help.\nI've done some Python programming, but being more familiar with Java, Groovy comes a lot easier to me.\n",
"Groovy? I'm just picking it up; try this (inside the groovyconsole):\nFile.metaClass.invokeMethod = { String name, args ->\n System.out.print (\"Call to $name intercepted...\");\n File.metaClass.getMetaMethod(name, args).invoke(delegate, args);\n}\n\nnew File(\"c:/temp\").eachFile{\n if (it.isFile()) println it.canonicalPath\n}\n\nThe first code is AOP. All calls to any method of File object will be intercepted. No additional tools required. This is executed against existing Java class dynamically.\nIn the second block, you remove the 'f' closure parameter. Being just one parameter, it defaults to the built in \"it\" variable available to the closure context.\nHere is what you get:\n\"Call to isFile intercepted...C:\\temp\\img.jpg\"\netc.\n",
"try Groovy .. it has all features that you need there. You can use existing java lib without any modification on its classes. \nbasically .. groovy is java++, it is more dynamic and fun to learn (just like ruby)\nI dont like ruby or python syntax so I will put them behind. Groovy is just like C/C++ syntax so I like him lol :)\n",
"Python has all nine criteria. It scores a 56.\nI'm sure Ruby has everything Python has. It seems to have fewer libraries. So it scores a 51.\nI don't know if Groovy has every feature.\nSince Python is 56 and Ruby is a 51, Python just barely edges out Ruby.\nHowever, I think this kind of decision can still boil down to some subjective issues outside these nine criteria.\n",
"From your critera, I'd pick JRuby:\n\nRichness of API/libraries available (eg. maths, plotting, networking) (9)\n\nEverything the JVM has access to, which is a lot\n\nAbility to embed in desktop (java/c++) applications (8)\n\nExcellent Monkeybars framework, which lets you design a swing GUI in your GUI designer, and then wire it up using clean ruby code\n\nEase of deployment (8)\n\nRawr can package your app as an executable jar\n\nAbility to interface with DLLs/Shared Libraries (7)\n\nJava shared libraries easily, C ones via jna + libffi\n\nAbility to generate GUIs (7)\n\nSwing just works. Not sure how easy it is to use QtJambi, but it's definitely possible.\n\nCommunity/User support (6)\n\nLots. Ruby has an excellent community.\n\nPortability (6)\n\nEverywhere the JVM works\n\nDatabase manipulation (3)\n\nAll the ruby database libraries and all the java ones\n\nLanguage/Semantics (2)\n\nHere's where ruby takes the definite lead over groovy and python. The language has had some really beautiful design decisions taken early on, which shows up in the consistency and power of the standard library. Blocks, in particular, make it a joy to use.\n",
"This sort of adding-up-scores-by-features is not a good way to choose a programming language. You'd be better off choosing whichever you know the best. If you don't know any of them, try them out for a little while. If you have a really specific project in mind, then maybe some programming languages would be better, but if you just have general preferences you will never come to a consensus.\nThat said, Python is pretty flexible, it's the most popular on your list so the easiest to solve whatever sorts of problems you have by searching, so I'd recommend Python.\n",
"Perl? Yikes.\nAs someone has observed Perl is like a big explosion in a punctuation factory. It's terseness is not an advantage if the resultant code is not self documenting.\nHave used Groovy for some utility tasks, easy to get going. Full access to Java libraries, plus some cool addtions to it, like listing the files in a directory using a closure:\n// process all files printing out full name (. and .. auto excluded)\n\nnew File(basedir).eachFile{ f->\n\n if (f.isFile()) println f.canonicalPath\n}\n\n",
"I know it's not on your list, but at least look at perl.\n\nRichness of Api/Libraries to sink a ship. \nRuns on more systems than most people realise exists. \nWorks well with Binary libraries. \nHas a huge community.\nPortability, See above.\nDatabase manipulation: more ways to do it. ( Pick your favorite module ) \nAnd one of the most expressive/terse languages around. \n\n"
] |
[
34,
29,
24,
10,
8,
7,
6,
3,
2,
0
] |
[] |
[] |
[
"groovy",
"python",
"ruby",
"scripting"
] |
stackoverflow_0000257730_groovy_python_ruby_scripting.txt
|
Q:
Pythonic way to "flatten" object hierarchy to nested dicts?
I need to "flatten" objects into nested dicts of the object's properties. The objects I want to do this with are generally just containers for basic types or other objects which act in a similar way. For example:
class foo(object):
    bar = None
    baz = None

class spam(object):
    eggs = []

x = spam()
y = foo()
y.bar = True
y.baz = u"boz"
x.eggs.append(y)
What I need to "flatten" this to is:
{ 'eggs': [ { 'bar': True, 'baz': u'boz' } ] }
Is there anything in the stdlib which can do this for me? If not, would I have to test isinstance against all known base-types to ensure I don't try to convert an object which can't be converted (eg: bool)?
Edit:
These objects are being returned to my code from an external library and therefore I have no control over them. I could use them as-is in my methods, but it would be easier (safer?) to convert them to dicts - especially for unit testing.
A:
Code: You may need to handle other iterable types though:
def flatten(obj):
    if obj is None:
        return None
    elif hasattr(obj, '__dict__') and obj.__dict__:
        return dict([(k, flatten(v)) for (k, v) in obj.__dict__.items()])
    elif isinstance(obj, (dict,)):
        return dict([(k, flatten(v)) for (k, v) in obj.items()])
    elif isinstance(obj, (list,)):
        return [flatten(x) for x in obj]
    elif isinstance(obj, (tuple,)):
        return tuple([flatten(x) for x in obj])
    else:
        return obj
Bug?
In your code instead of:
class spam(object):
    eggs = []

x = spam()
x.eggs.append(...)
please do:
class spam(object):
    eggs = None # if you need this line at all though

x = spam()
x.eggs = []
x.eggs.append(...)
If you do not, then all instances of spam will share the same eggs list.
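Applied to the question's example (giving the instance its own list, per the bug note above):
x = spam()
x.eggs = []        # instance-level list, not the shared class attribute
y = foo()
y.bar = True
y.baz = u"boz"
x.eggs.append(y)
print flatten(x)   # {'eggs': [{'bar': True, 'baz': u'boz'}]}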
A:
No, there is nothing in the standard library. Yes, you would have to somehow test that the types are basic types like str, unicode, bool, int, float, long...
You could probably make a registry of methods to "serialize" different types, but that would only be useful if you have some types that should not have all their attributes serialized, for example, or if you also need to flatten class attributes, etc.
A:
Almost every object has a dictionary (called __dict__), with all its methods and members.
With some type checking, you can then write a function that filters out only the members you are interested in.
It is not a big task, but as chrispy said, it could be worth looking at your problem from a completely different perspective.
A:
Well, I'm not very proud of this, but it is possible to do the following:
Create a super class that has the serialization method and loops through the properties.
Extend your classes' __bases__ at runtime.
Call the serialization method from the new super class; it can access the children's __dict__ data and work.
Here is an example:
class foo(object):
    def __init__(self):
        self.bar = None
        self.baz = None

class spam(object):
    def __init__(self):
        self.eggs = []

class Serializable():
    def serialize(self):
        result = {}
        for property in self.__dict__.keys():
            result[property] = self.__dict__[property]
        return result

foo.__bases__ += (Serializable,)
spam.__bases__ += (Serializable,)

x = spam()
y = foo()
y.bar = True
y.baz = u"boz"
x.eggs.append(y)
y.serialize()
Things to point out: if you do not set the vars in __init__, the dict will not work, because it accesses the instance variables, not the class variables (I suppose you meant instance ones). Second, make sure Serializable DOES NOT inherit from object; if it does you will have a
TypeError: Error when calling the metaclass bases
Cannot create a consistent method resolution
Hope it helps!
Edit: if you are just copying the dict, use deepcopy from the copy module; this is just an example :P
|
Pythonic way to "flatten" object hierarchy to nested dicts?
|
I need to "flatten" objects into nested dicts of the object's properties. The objects I want to do this with are generally just containers for basic types or other objects which act in a similar way. For example:
class foo(object):
bar = None
baz = None
class spam(object):
eggs = []
x = spam()
y = foo()
y.bar = True
y.baz = u"boz"
x.eggs.append(y)
What I need to "flatten" this to is:
{ 'eggs': [ { 'bar': True, 'baz': u'boz' } ] }
Is there anything in the stdlib which can do this for me? If not, would I have to test isinstance against all known base-types to ensure I don't try to convert an object which can't be converted (eg: bool)?
Edit:
These objects are being returned to my code from an external library and therefore I have no control over them. I could use them as-is in my methods, but it would be easier (safer?) to convert them to dicts - especially for unit testing.
|
[
"Code: You may need to handle other iterable types though:\ndef flatten(obj):\n if obj is None:\n return None\n elif hasattr(obj, '__dict__') and obj.__dict__:\n return dict([(k, flatten(v)) for (k, v) in obj.__dict__.items()])\n elif isinstance(obj, (dict,)):\n return dict([(k, flatten(v)) for (k, v) in obj.items()])\n elif isinstance(obj, (list,)):\n return [flatten(x) for x in obj]\n elif isinstance(obj, (tuple,)):\n return tuple([flatten(x) for x in obj])\n else:\n return obj\n\nBug?\nIn your code instead of:\nclass spam(object):\n eggs = []\n\nx = spam()\nx.eggs.add(...)\n\nplease do:\nclass spam(object):\n eggs = None #// if you need this line at all though\n\nx = spam()\nx.eggs = []\nx.eggs.add(...)\n\nIf you do not, then all instances of spam will share the same eggs list.\n",
"No, there is nothing in the standardlib. Yes, you would have to somehow test that the types are basic types like str, unicode, bool, int, float, long...\nYou could probably make a registry of methods to \"serialize\" different types, but that would only be useful if you have some types that should not have all it's attributes serialized, for example, or if you also need to flatten class attributes, etc.\n",
"Almost every object has a dictionary (called __dict__), with all its methods and members.\nWith some type checking, you can then write a function that filters out only the members you are interested in.\nIt is not a big task, but as chrispy said, it could worth to try looking at your problem from a completely different perspective.\n",
"Well, I'm not very proud of this, but is possible to do the following:\n\nCreate a super class that has the serialization method and loop through its properties.\nAt runtime extend your classes using bases at runtime\nExecute the class from the new super class. It should be able to access the dict data from the children and work.\n\nHere is an example:\n \nclass foo(object):\n def __init__(self):\n self.bar = None\n self.baz = None\n\nclass spam(object):\n delf __init__(self):\n self.eggs = []\n\nclass Serializable():\n def serialize(self):\n result = {}\n for property in self.__dict__.keys():\n result[property] = self.__dict__[property]\n return result\nfoo.__bases__ += (Serializable,)\nspam.__bases__ += (Serializable,)\n\nx = spam()\ny = foo()\ny.bar = True\ny.baz = u\"boz\"\nx.eggs.append(y)\ny.serialize()\n\n\nThings to point out. If you do not set the var is init the dict willnot work 'cause it is accessing the instance variables not the class variables (I suppose you meant instance ones). Second, make sure Serializable DOES NOT inherit from object, it is does you will have a \n\nTypeError: Error when calling the metaclass bases\n Cannot create a consistent method resolution\n\nHope it helps!\nEdit: If you are just copying the dict use the deepcopy module, this is just an example :P\n"
] |
[
3,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001393010_python.txt
|
Q:
how to generate a many-to-many-relationship FORM in web2py?
Do I need a custom validator? Do I need a custom widget?
If this helps to clear the problem, the relationship is between member and language where a member can have multiple languages and a language is spoken by multiple members.
I would like to add a multi-select box in the "add member" form (that I generate using SQLFORM).
Thanks :)
A:
It depends and I suggest you take this to the web2py mailing list. One way to do it is
db.table.field.requires = IS_IN_DB(db, 'othertable.id', '%(otherfield)s', multiple=True)
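Spelled out for the member/language case (a hedged sketch: table and field names are invented, and the list:reference field type assumes a web2py version that supports it):
db.define_table('language', Field('name'))
db.define_table('member',
    Field('name'),
    Field('languages', 'list:reference language'))
db.member.languages.requires = IS_IN_DB(db, 'language.id', '%(name)s',
                                        multiple=True)
form = SQLFORM(db.member)   # renders a multi-select for languages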
A:
Another way to do this:
db.define_table( 'make', Field( 'name' ) )

db.define_table( 'model',
    Field( 'name' ),
    Field( 'make', db.make, requires = IS_IN_DB( db, 'make.id', '%(name)s' ) ) )
|
how to generate a many-to-many-relationship FORM in web2py?
|
Do I need a custom validator? Do I need a custom widget?
If this helps to clear the problem, the relationship is between member and language where a member can have multiple languages and a language is spoken by multiple members.
I would like to add a multi-select box in the "add member" form (that I generate using SQLFORM).
Thanks :)
|
[
"It depends and I suggest you take this on the web2py mailin list. One way to do it is\ndb.table.field.requires=IS_IN_DB(db,'othertable.id','%(otherfield)',multiple=True)\n\n",
"Another way to do this:\ndb.define_table( 'make', Field( 'name' ) )\n\ndb.define_table( 'model', \n Field( 'name' ), \n Field( 'make', db.make, requires = IS_IN_DB( db, 'make.id', '%(name)' ) ) )\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"web2py"
] |
stackoverflow_0001012179_python_web2py.txt
|
Q:
Problem with list of strings in python
Why on Earth doesn't the interpreter raise SyntaxError everytime I do this:
my_abc = ['a',
          'b',
          'c'
          'd',]
I just wanted to add 'c' to the list of strings and forgot to append the comma. I would expect this to cause some kind of error, as it's clearly incorrect.
Instead, what I got:
>>> my_abc
['a', 'b', 'cd']
And this is never what I want.
Why is it automatically concatenated? I can hardly count how many times I got bitten by this behavior.
Is there anything I can do with it?
Just to clarify: I don't actually mind auto-concatenation; my problem is ONLY with lists of strings, because they often do much more than just carry text: they're used to control flow, to pass field names and many other things.
A:
Is called "Implicit String Concatenation" and a PEP that proposed its removal was rejected: http://www.python.org/dev/peps/pep-3126/
A:
It's by design. It allows, for example, writing long string literals in several lines without using +.
A:
As others said, it's by design.
Why is it so? Mostly for historical reasons: C also does it.
In some cases it's handy because it reduces syntactic noise and avoids adding unwanted spaces (inline SQL queries, complex regexes, etc).
What can you do about it? Not much, but if it really happens often for you, try one of the following tricks.
indent your list with the comma at the beginning of the line. It's weird, but if you do so the missing commas become obvious.
assign strings to variables and use a list of variables whenever you can (and it's often a good idea for other reasons, like avoiding duplicate strings).
split your list: for a list of words you can put the whole list inside only one string and split it like below. For more than 5 elements it's also shorter.
'a b c d e'.split(' ')
A:
Because two string literals side-by-side, delimited by whitespace, are concatenated. Since the strings are within a list, they are 'side-by-side'.
See: http://docs.python.org/reference/lexical_analysis.html#string-literal-concatenation
A:
Because often people want to do something like this:
line = ("Here's a very long line, with no line breaks,"
" which should be displayed to the user (perhaps"
" as an error message or question box).")
It's easier to write this without having to manually concatenate strings. C, C++, and (I believe) Java and C# also have this behavior.
|
Problem with list of strings in python
|
Why on Earth doesn't the interpreter raise SyntaxError everytime I do this:
my_abc = ['a',
'b',
'c'
'd',]
I just wanted to add 'c' to the list of strings and forgot to append the comma. I would expect this to cause some kind of error, as it's clearly incorrect.
Instead, what I got:
>>> my_abc
['a', 'b', 'cd']
And this is never what I want.
Why is it automatically concatenated? I can hardly count how many times I got bitten by this behavior.
Is there anything I can do with it?
Just to clarify: I don't actually mind auto-concatenation; my problem is ONLY with lists of strings, because they often do much more than just carry text: they're used to control flow, to pass field names and many other things.
|
[
"Is called \"Implicit String Concatenation\" and a PEP that proposed its removal was rejected: http://www.python.org/dev/peps/pep-3126/\n",
"It's by design. It allows, for example, writing long string literals in several lines without using +.\n",
"As others said, it's by design.\nWhy is it so ? Mostly for historical reasons : C also does it.\nIn some cases it's handy because it reduce syntaxic noise and avoid adding unwanted spaces (inline SQL queries, complexes regexpes, etc).\nWhat you can do about it ? Not much, but if it really happens often for you, try one of the following tricks.\n\nindent your list with coma at the beginning of the line. It's weird, but if you do so the missing comas become obvious.\nassign strings to variables and use variables list whenever you can (and it's often a good idea for other reasons, like avoiding duplicate strings).\nsplit your list: for list of words you can put the whole list inside only one string and split it like below. For more than 5 elements it's also shorter.\n'a b c d e'.split(' ').\n\n",
"Because two string literals side-by-side, delimited by whitespace, are concatenated. Since the strings are within a list, they are 'side-by-side'.\nSee: http://docs.python.org/reference/lexical_analysis.html#string-literal-concatenation\n",
"Because often people want to do something like this:\nline = (\"Here's a very long line, with no line breaks,\"\n \" which should be displayed to the user (perhaps\"\n \" as an error message or question box).\")\n\nIt's easier to write this without having to manually concatenate strings. C, C++, and (I believe) Java and C# also have this behavior.\n"
] |
[
13,
6,
3,
2,
2
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0001401650_list_python.txt
|
Q:
How to get Django admin.TabularInline to NOT require some items
class LineItemInline(admin.TabularInline):
    model = LineItem
    extra = 10

class InvoiceAdmin(admin.ModelAdmin):
    model = Invoice
    inlines = (LineItemInline,)
and
class LineItem(models.Model):
    invoice = models.ForeignKey(Invoice)
    item_product_code = models.CharField(max_length=32)
    item_description = models.CharField(max_length=64)
    item_commodity_code = models.ForeignKey(CommodityCode)
    item_unit_cost = models.IntegerField()
    item_unit_of_measure = models.ForeignKey(UnitOfMeasure, default=0)
    item_quantity = models.IntegerField()
    item_total_cost = models.IntegerField()
    item_vat_amount = models.IntegerField(default=0)
    item_vat_rate = models.IntegerField(default=0)
When I have it set up like this, the admin interface requires me to add data to all ten LineItems. The LineItems have required fields, but I expected it not to require whole line items if no data was entered.
A:
That's strange, it's supposed not to do that - it shouldn't require any data in a row if you haven't entered anything.
I wonder if the default options are causing it to get confused. Again, Django should cope with this, but try removing those and see what happens.
Also note that this:
item_unit_of_measure = models.ForeignKey(UnitOfMeasure, default=0)
is not valid, since 0 can not be the ID of a UnitOfMeasure object. If you want FKs to not be required, use null=True, blank=True in the field declaration.
A:
Turns out the problem is default values. The one pointed out above about UnitOfMeasure isn't the actual problem though, any field with a default= causes it to require the rest of the data to be present. This to me seems like a bug since a default value should be subtracted out when determining if there is anything in the record that needs saving, but when I remove all the default values, it works.
In this code,
item_unit_of_measure = models.ForeignKey(UnitOfMeasure, default=0)
it was a sneaky way of letting the 0th entry in the database be the default value. Unfortunately that doesn't work, as pointed out above.
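For concreteness, a sketch of the model with the defaults removed, i.e. the workaround just described (Invoice, CommodityCode and UnitOfMeasure assumed defined as in the question):
from django.db import models

class LineItem(models.Model):
    invoice = models.ForeignKey(Invoice)
    item_product_code = models.CharField(max_length=32)
    item_description = models.CharField(max_length=64)
    item_commodity_code = models.ForeignKey(CommodityCode)
    item_unit_cost = models.IntegerField()
    item_unit_of_measure = models.ForeignKey(UnitOfMeasure)  # default=0 removed
    item_quantity = models.IntegerField()
    item_total_cost = models.IntegerField()
    item_vat_amount = models.IntegerField()  # default=0 removed
    item_vat_rate = models.IntegerField()    # default=0 removed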
|
How to get Django admin.TabularInline to NOT require some items
|
class LineItemInline(admin.TabularInline):
model = LineItem
extra = 10
class InvoiceAdmin(admin.ModelAdmin):
model = Invoice
inlines = (LineItemInline,)
and
class LineItem(models.Model):
invoice = models.ForeignKey(Invoice)
item_product_code = models.CharField(max_length=32)
item_description = models.CharField(max_length=64)
item_commodity_code = models.ForeignKey(CommodityCode)
item_unit_cost = models.IntegerField()
item_unit_of_measure = models.ForeignKey(UnitOfMeasure, default=0)
item_quantity = models.IntegerField()
item_total_cost = models.IntegerField()
item_vat_amount = models.IntegerField(default=0)
item_vat_rate = models.IntegerField(default=0)
When I have it setup like this, the admin interface is requiring me to add data to all ten LineItems. The LineItems have required fields, but I expected it to not require whole line items if there was no data entered.
|
[
"That's strange, it's supposed not to do that - it shouldn't require any data in a row if you haven't entered anything.\nI wonder if the default options are causing it to get confused. Again, Django should cope with this, but try removing those and see what happens.\nAlso note that this:\nitem_unit_of_measure = models.ForeignKey(UnitOfMeasure, default=0)\n\nis not valid, since 0 can not be the ID of a UnitOfMeasure object. If you want FKs to not be required, use null=True, blank=True in the field declaration.\n",
"Turns out the problem is default values. The one pointed out above about UnitOfMeasure isn't the actual problem though, any field with a default= causes it to require the rest of the data to be present. This to me seems like a bug since a default value should be subtracted out when determining if there is anything in the record that needs saving, but when I remove all the default values, it works.\nIn this code,\n item_unit_of_measure = models.ForeignKey(UnitOfMeasure, default=0)\nit was a sneaky way of letting the 0th entry in the database be the default value. That doesn't work unfortunately as he pointed out though.\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001400981_django_python.txt
|
Q:
Need advice how to represent a certain datastructure in Python
I'm not sure how to represent a certain datastructure in Python. It consists of groups and users, where each user must be a member of exactly one group and groups should in turn be contained in a container; groups and users will only be used within this container. Furthermore, I need random access to groups and users. A JSON representation of example data would look like this:
{
"groupa": {
"name": "groupa",
"description": "bla",
"members": {
"usera": {
"name": "usera",
"age": 38
},
"userb": {
"name": "userb",
"age": 20
}
}
},
"groupb": {
"name": "groupb",
"description": "bla bla",
"members": {
"userc": {
"name": "userc",
"age": 56
}
}
}
}
Simply using nested dicts seems unsuitable because users and groups all have well-defined attributes. Because Groups and Users are only used within the container, I came up with a nested class:
class AccountContainer:
class Group:
def __init__(self, container, group):
self.name = group
self.members = {}
self.container = container
self.container.groups[self.name] = self # add myself to container
class User:
def __init__(self, group, user, age=None):
self.name = user
self.age = age
self.group = group
self.group.members[self.name] = self # add myself to group
def __init__(self):
self.groups = {}
def add_user(self, group, username, age=None):
# possibly check if group exists
self.groups[group].members[username] = AccountContainer.User(self.groups[group], username, age=age)
def add_group(self, group):
self.groups[group] = AccountContainer.Group(self, group)
# creating
c = AccountContainer()
c.add_group("groupa")
c.add_user("groupa", "usera")
# access
c.groups["groupa"].members["usera"].age = 38
# deleting
del(c.groups["groupa"].members["usera"])
How would you represent such a datastructure?
Is this a reasonable approach?
To me it seems a bit unnatural using a method to create a group or user while otherwise referring to dicts.
A:
I think an abundance of behavior-less classes, in a multi-paradigm language (one like C++ or Python, that while supporting classes doesn't constrain you to use them when simpler structures will do), is a "design smell" -- the design equivalent of a "code smell", albeit a mild one.
If I was doing a code review of this, I'd point that out, although it's nowhere as bad as to have me insist on a re-factoring. Nested classes (that have no specific code-behavioral reason to be nested) compound this: they offer no specific benefits and can on the other hand "get in the way", for example, in Python, by interfering with serialization (pickling).
In addition to good old dicts and full-fledged classes, Python 2.6 offers the handy alternative of namedtuples for "structs" with a predefined set of attributes; they seem particularly suitable to this use case.
The handy "add this group/user to that container/group" functionality that's combined in your add... and __init__ methods can be refactored into standalone functions (so can accessors, even though that's less of a problem -- hiding internal structure into standalone accessors gets you closer to respecting the Law of Demeter).
A:
It's generally good practice to not have objects know about what contains them. Pass the user in to the group, not the group into the user. Just because your current application only has "users" used once, one per group, one group per account, doesn't mean you should hardcode all your classes with that knowledge. What if you want to reuse the User class elsewhere? What if you later need to support multiple AccountContainers with users in common?
You may also get some mileage out of named tuples, especially for your users:
User = collections.namedtuple('User', ('name', 'age'))
class Group:
def __init__(self, name, users=()):
self.name = name
self.members = dict((u.name, u) for u in users)
    def add(self, user):
self.members[user.name] = user
et cetera
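Usage would then look something like this:
usera = User('usera', 38)
groupa = Group('groupa', [usera])
groupa.add(User('userb', 20))
print groupa.members['userb'].age    # 20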
A:
I would feel comfortable using dicts. But I'd put the content in lists, as a list instead of a dict will keep it clean and less redundant:
[
{
"name": "groupa",
"description": "bla",
"members": [{"name": "usera", "age": 38},
{"name": "userb","age": 20}]
},
{
"name": "groupb",
"description": "bla bla",
"members": [{"name": "userc","age": 56}]
}
]
Update:
You can still pick random elements by using the random module:
groups_list[random.randrange(len(groups_list))]
A:
To echo Alex's answer.... these nested classes reek of code smell to me.
Simpler maybe:
def Group(name=None, description=None, members=None):
    if name is None:
        name = "UNK!"  # some reasonable default
    if members is None:
        members = dict()
    return dict(name=name, description=description, members=members)
In your original proposal, your objects are just glorified dicts anyway, and the only reason to use objects (in this code) is to get a cleaner init to handle empty attributes. Making them into functions that return actual dicts is nearly as clean, and much easier. Named tuples seem like an even better solution though, as previously pointed out.
This (nested dicts approach) has the benefit of being trivial to construct from / dump to JSON.
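For example, a sketch of building and dumping one group with the factory above:
import json

g = Group(name='groupa', description='bla')
g['members']['usera'] = dict(name='usera', age=38)
print json.dumps(g)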
|
Need advice how to represent a certain datastructure in Python
|
I'm not sure how to represent a certain datastructure in Python. It consists of groups and users where each user must be a member of exactly one group and groups should be in turn contained in a container, groups and users will only be used within this container. Furthermore I need random access to groups and users. A JSON representation of example data would look like this:
{
"groupa": {
"name": "groupa",
"description": "bla",
"members": {
"usera": {
"name": "usera",
"age": 38
},
"userb": {
"name": "userb",
"age": 20
}
}
},
"groupb": {
"name": "groupb",
"description": "bla bla",
"members": {
"userc": {
"name": "userc",
"age": 56
}
}
}
}
Simply using nested dict seems unsuited because users and groups all have well defined attributes. Because Groups and Users are only used within the container I came up with a nested class:
class AccountContainer:
class Group:
def __init__(self, container, group):
self.name = group
self.members = {}
self.container = container
self.container.groups[self.name] = self # add myself to container
class User:
def __init__(self, group, user, age=None):
self.name = user
self.age = age
self.group = group
self.group.members[self.name] = self # add myself to group
def __init__(self):
self.groups = {}
def add_user(self, group, username, age=None):
# possibly check if group exists
self.groups[group].members[username] = AccountContainer.User(self.groups[group], username, age=age)
def add_group(self, group):
self.groups[group] = AccountContainer.Group(self, group)
# creating
c = AccountContainer()
c.add_group("groupa")
c.add_user("groupa", "usera")
# access
c.groups["groupa"].members["usera"].age = 38
# deleting
del(c.groups["groupa"].members["usera"])
How would you represent such a datastructure?
Is this a reasonable approach?
To me it seems a bit unnatural using a method to create a group or user while otherwise referring to dicts.
|
[
"I think an abundance of behavior-less classes, in a multi-paradigm language (one like C++ or Python, that while supporting classes doesn't constrain you to use them when simpler structures will do), is a \"design smell\" -- the design equivalent of a \"code smell\", albeit a mild one.\nIf I was doing a code review of this, I'd point that out, although it's nowhere as bad as to have me insist on a re-factoring. Nested classes (that have no specific code-behavioral reason to be nested) compound this: they offer no specific benefits and can on the other hand \"get in the way\", for example, in Python, by interfering with serialization (pickling).\nIn addition to good old dicts and full-fledged classes, Python 2.6 offers the handy alternative of namedtuples for \"structs\" with a predefined set of attributes; they seem particularly suitable to this use case.\nThe handy \"add this group/user to that container/group\" functionality that's combined in your add... and __init__ methods can be refactored into standalone functions (so can accessors, even though that's less of a problem -- hiding internal structure into standalone accessors gets you closer to respecting the Law of Demeter).\n",
"It's generally good practice to not have objects know about what contains them. Pass the user in to the group, not the group into the user. Just because your current application only has \"users\" used once, one per group, one group per account, doesn't mean you should hardcode all your classes with that knowledge. What if you want to reuse the User class elsewhere? What if you later need to support multiple AccountContainers with users in common?\nYou may also get some mileage out of named tuples, especially for your users:\nUser = collections.namedtuple('User', ('name', 'age'))\n\nclass Group:\n def __init__(self, name, users=()):\n self.name = name\n self.members = dict((u.name, u) for u in users)\n\n def add(user):\n self.members[user.name] = user\n\net cetera\n",
"I would feel comfortable using dicts. But I'd put the content in lists as a list instead of a dict will keep it clean and less redundant:\n[\n {\n \"name\": \"groupa\",\n \"description\": \"bla\",\n \"members\": [{\"name\": \"usera\", \"age\": 38},\n {\"name\": \"userb\",\"age\": 20}]\n },\n {\n \"name\": \"groupb\",\n \"description\": \"bla bla\",\n \"members\": [{\"name\": \"userc\",\"age\": 56}]\n }\n]\n\nUpdate:\nYou can still use random elements by the use of the random module:\ngroups_list[random.randrange(len(group_list))] \n\n",
"To echo Alex's answer.... these nested classes reek of code smell to me. \nSimpler maybe:\ndef Group(name=None,description=None,members=None):\n if name is None: \n name = \"UNK!\" # some reasonable default\n if members is None:\n members = dict()\n return dict(name = ...., members = ....)\n\nIn your original proposal, your objects are just glorified dicts anyway, and the only reason to use objects (in this code) are to get a cleaner init to handle empty attributes. Making them into functions that return actual dicts is nearly as clean, and much easier. Named-tuples seem like an even better solution though, as previously pointed out. \nThis (nested dicts approach) has the benefit of being trivial to construct from /dump to json.\n"
] |
[
5,
2,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001393849_python.txt
|
Q:
Convert list of objects to a list of integers and a lookup table
To illustrate what I mean by this, here is an example
messages = [
('Ricky', 'Steve', 'SMS'),
('Steve', 'Karl', 'SMS'),
('Karl', 'Nora', 'Email')
]
I want to convert this list and a definition of groups to a list of integers and a lookup dictionary so that each element in the group gets a unique id. That id should map to the element in the lookup table like this
messages_int, lookup_table = create_lookup_list(
messages, ('person', 'person', 'medium'))
print messages_int
[ (0, 1, 0),
(1, 2, 0),
(2, 3, 1) ]
print lookup_table
{ 'person': ['Ricky', 'Steve', 'Karl', 'Nora'],
'medium': ['SMS', 'Email']
}
I wonder if there is an elegant and pythonic solution to this problem.
I am also open to better terminology than create_lookup_list etc
A:
defaultdict combined with the itertools.count().next method is a good way to assign identifiers to unique items. Here's an example of how to apply this in your case:
from itertools import count
from collections import defaultdict
def create_lookup_list(data, domains):
domain_keys = defaultdict(lambda:defaultdict(count().next))
out = []
for row in data:
out.append(tuple(domain_keys[dom][val] for val, dom in zip(row, domains)))
lookup_table = dict((k, sorted(d, key=d.get)) for k, d in domain_keys.items())
return out, lookup_table
Edit: note that count().next becomes count().__next__ in Python 3 (or, equivalently, functools.partial(next, count()); a bare lambda: next(count()) would create a fresh counter on every call, always yielding 0).
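A quick demonstration with the data from the question (the lists in the lookup table are ordered by first appearance):
messages = [
    ('Ricky', 'Steve', 'SMS'),
    ('Steve', 'Karl', 'SMS'),
    ('Karl', 'Nora', 'Email'),
]
out, table = create_lookup_list(messages, ('person', 'person', 'medium'))
print out    # [(0, 1, 0), (1, 2, 0), (2, 3, 1)]
print table  # {'person': ['Ricky', 'Steve', 'Karl', 'Nora'], 'medium': ['SMS', 'Email']}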
A:
Mine's about the same length and complexity:
import collections
def create_lookup_list(messages, labels):
# Collect all the values
lookup = collections.defaultdict(set)
for msg in messages:
for l, v in zip(labels, msg):
lookup[l].add(v)
# Make the value sets lists
for k, v in lookup.items():
lookup[k] = list(v)
# Make the lookup_list
lookup_list = []
for msg in messages:
lookup_list.append([lookup[l].index(v) for l, v in zip(labels, msg)])
return lookup_list, lookup
A:
In Otto's answer (or anyone else's with string->id dicts), I'd replace (if obsessing over speed is your thing):
# create the lookup table
lookup_dict = {}
for group in indices:
lookup_dict[group] = sorted(indices[group].keys(),
lambda e1, e2: indices[group][e1]-indices[group][e2])
by
# k2i must map keys to consecutive ints [0,len(k2i)-1)
def inverse_indices(k2i):
inv=[0]*len(k2i)
for k,i in k2i.iteritems():
inv[i]=k
return inv
lookup_table = dict((g,inverse_indices(gi)) for g,gi in indices.iteritems())
This is better because direct assignment to each item in the inverse array is faster than sorting.
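For instance, assuming the 'medium' mapping built by the question's code:
>>> inverse_indices({'SMS': 0, 'Email': 1})
['SMS', 'Email']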
A:
Here is my own solution - I doubt it's the best
def create_lookup_list(input_list, groups):
# use a dictionary for the indices so that the index lookup
# is fast (not necessarily a requirement)
indices = dict((group, {}) for group in groups)
output = []
# assign indices by iterating through the list
for row in input_list:
newrow = []
for group, element in zip(groups, row):
if element in indices[group]:
index = indices[group][element]
else:
index = indices[group][element] = len(indices[group])
newrow.append(index)
output.append(newrow)
# create the lookup table
lookup_dict = {}
for group in indices:
lookup_dict[group] = sorted(indices[group].keys(),
lambda e1, e2: indices[group][e1]-indices[group][e2])
return output, lookup_dict
A:
This is a bit simpler, and more direct.
from collections import defaultdict
def create_lookup_list( messages, schema ):
def mapped_rows( messages ):
for row in messages:
newRow= []
for col, value in zip(schema,row):
if value not in lookups[col]:
lookups[col].append(value)
code= lookups[col].index(value)
newRow.append(code)
yield newRow
lookups = defaultdict(list)
return list( mapped_rows(messages) ), dict(lookups)
If the lookups were proper dictionaries, not lists, this could be simplified further.
Make your "lookup table" have the following structure
{ 'person': {'Ricky':0, 'Steve':1, 'Karl':2, 'Nora':3},
'medium': {'SMS':0, 'Email':1}
}
And it can be further reduced in complexity.
You can turn this working copy of the lookups into its inverse as follows:
>>> lookups = { 'person': {'Ricky':0, 'Steve':1, 'Karl':2, 'Nora':3},
'medium': {'SMS':0, 'Email':1}
}
>>> dict( ( d, dict( (v,k) for k,v in lookups[d].items() ) ) for d in lookups )
{'person': {0: 'Ricky', 1: 'Steve', 2: 'Karl', 3: 'Nora'}, 'medium': {0: 'SMS', 1: 'Email'}}
A:
Here is my solution, it's not better - it's just different :)
def create_lookup_list(data, keys):
encoded = []
table = dict([(key, []) for key in keys])
for record in data:
msg_int = []
for key, value in zip(keys, record):
if value not in table[key]:
table[key].append(value)
msg_int.append(table[key].index(value))
encoded.append(tuple(msg_int))
return encoded, table
A:
Here is mine, the inner function lets me write the index-tuple as a generator.
def create_lookup_list( data, format):
table = {}
indices = []
def get_index( item, form ):
row = table.setdefault( form, [] )
try:
return row.index( item )
except ValueError:
n = len( row )
row.append( item )
return n
for row in data:
indices.append( tuple( get_index( item, form ) for item, form in zip( row, format ) ))
return table, indices
|
Convert list of objects to a list of integers and a lookup table
|
To illustrate what I mean by this, here is an example
messages = [
('Ricky', 'Steve', 'SMS'),
('Steve', 'Karl', 'SMS'),
('Karl', 'Nora', 'Email')
]
I want to convert this list and a definition of groups to a list of integers and a lookup dictionary so that each element in the group gets a unique id. That id should map to the element in the lookup table like this
messages_int, lookup_table = create_lookup_list(
messages, ('person', 'person', 'medium'))
print messages_int
[ (0, 1, 0),
(1, 2, 0),
(2, 3, 1) ]
print lookup_table
{ 'person': ['Ricky', 'Steve', 'Karl', 'Nora'],
'medium': ['SMS', 'Email']
}
I wonder if there is an elegant and pythonic solution to this problem.
I am also open to better terminology than create_lookup_list etc
|
[
"defaultdict combined with the itertools.count().next method is a good way to assign identifiers to unique items. Here's an example of how to apply this in your case:\nfrom itertools import count\nfrom collections import defaultdict\n\ndef create_lookup_list(data, domains):\n domain_keys = defaultdict(lambda:defaultdict(count().next))\n out = []\n for row in data:\n out.append(tuple(domain_keys[dom][val] for val, dom in zip(row, domains)))\n lookup_table = dict((k, sorted(d, key=d.get)) for k, d in domain_keys.items())\n return out, lookup_table\n\nEdit: note that count().next becomes count().__next__ or lambda: next(count()) in Python 3.\n",
"Mine's about the same length and complexity:\nimport collections\n\ndef create_lookup_list(messages, labels):\n\n # Collect all the values\n lookup = collections.defaultdict(set)\n for msg in messages:\n for l, v in zip(labels, msg):\n lookup[l].add(v)\n\n # Make the value sets lists\n for k, v in lookup.items():\n lookup[k] = list(v)\n\n # Make the lookup_list\n lookup_list = []\n for msg in messages:\n lookup_list.append([lookup[l].index(v) for l, v in zip(labels, msg)])\n\n return lookup_list, lookup\n\n",
"In Otto's answer (or anyone else's with string->id dicts), I'd replace (if obsessing over speed is your thing):\n# create the lookup table\nlookup_dict = {}\nfor group in indices:\n lookup_dict[group] = sorted(indices[group].keys(),\n lambda e1, e2: indices[group][e1]-indices[group][e2])\n\nby\n# k2i must map keys to consecutive ints [0,len(k2i)-1)\ndef inverse_indices(k2i):\n inv=[0]*len(k2i)\n for k,i in k2i.iteritems():\n inv[i]=k\n return inv\n\nlookup_table = dict((g,inverse_indices(gi)) for g,gi in indices.iteritems()) \n\nThis is better because direct assignment to each item in the inverse array directly is faster than sorting.\n",
"Here is my own solution - I doubt it's the best\ndef create_lookup_list(input_list, groups):\n # use a dictionary for the indices so that the index lookup \n # is fast (not necessarily a requirement)\n indices = dict((group, {}) for group in groups) \n output = []\n\n # assign indices by iterating through the list\n for row in input_list:\n newrow = []\n for group, element in zip(groups, row):\n if element in indices[group]:\n index = indices[group][element]\n else:\n index = indices[group][element] = len(indices[group])\n newrow.append(index)\n output.append(newrow)\n\n # create the lookup table\n lookup_dict = {}\n for group in indices:\n lookup_dict[group] = sorted(indices[group].keys(),\n lambda e1, e2: indices[group][e1]-indices[group][e2])\n\n return output, lookup_dict\n\n",
"This is a bit simpler, and more direct.\nfrom collections import defaultdict\n\ndef create_lookup_list( messages, schema ):\n def mapped_rows( messages ):\n for row in messages:\n newRow= []\n for col, value in zip(schema,row):\n if value not in lookups[col]:\n lookups[col].append(value)\n code= lookups[col].index(value)\n newRow.append(code)\n yield newRow\n lookups = defaultdict(list)\n return list( mapped_rows(messages) ), dict(lookups) \n\nIf the lookups were proper dictionaries, not lists, this could be simplified further.\nMake your \"lookup table\" have the following structure\n{ 'person': {'Ricky':0, 'Steve':1, 'Karl':2, 'Nora':3},\n 'medium': {'SMS':0, 'Email':1}\n}\n\nAnd it can be further reduced in complexity. \nYou can turn this working copy of the lookups into it's inverse as follows:\n>>> lookups = { 'person': {'Ricky':0, 'Steve':1, 'Karl':2, 'Nora':3},\n 'medium': {'SMS':0, 'Email':1}\n }\n>>> dict( ( d, dict( (v,k) for k,v in lookups[d].items() ) ) for d in lookups )\n{'person': {0: 'Ricky', 1: 'Steve', 2: 'Karl', 3: 'Nora'}, 'medium': {0: 'SMS', 1: 'Email'}}\n\n",
"Here is my solution, it's not better - it's just different :)\ndef create_lookup_list(data, keys):\n encoded = []\n table = dict([(key, []) for key in keys])\n\n for record in data:\n msg_int = []\n for key, value in zip(keys, record):\n if value not in table[key]:\n table[key].append(value)\n msg_int.append(table[key].index(value)) \n encoded.append(tuple(msg_int))\n\n return encoded, table\n\n",
"Here is mine, the inner function lets me write the index-tuple as a generator.\ndef create_lookup_list( data, format):\n table = {}\n indices = []\n def get_index( item, form ):\n row = table.setdefault( form, [] )\n try:\n return row.index( item )\n except ValueError:\n n = len( row )\n row.append( item )\n return n\n for row in data:\n indices.append( tuple( get_index( item, form ) for item, form in zip( row, format ) ))\n\n return table, indices\n\n"
] |
[
3,
2,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"lookup",
"python"
] |
stackoverflow_0001401721_lookup_python.txt
|
Q:
"deprecated" status on Google App Engine Django
I'm looking at Google App Engine Django on google code but the latest release (May 15/09) has been deprecated.
I'd like to know why that is? Are they discouraging us from using it? What deprecated it? Is there a better way to get set up with django?
A:
None of the packaged downloads are recommended by the project's owners -- deprecated basically means the same thing as "NOT recommended". There have been several changes since the May 15 upload of the last (now-deprecated) downloads, and I imagine the project owners are working to get a new enhanced download rounded up and ready.
Django 1.0 is now natively supported in App Engine (see here for details) and I imagine the project's owners are simply deciding how best to support their small but important bits of added value on top of that support. If you DO need such little extras, your best bet may be to either wait for a new download to be prepared and blessed, OR try an SVN checkout as explained here and see if that meets your needs (possibly w/some tweaking -- be sure to offer the tweaks as a patch to the project owners if you make any!-).
Sounds chaotic...? Welcome to open source!-)
|
"deprecated" status on Google App Engine Django
|
I'm looking at Google App Engine Django on google code but the latest release (May 15/09) has been deprecated.
I'd like to know why that is? Are they discouraging us from using it? What deprecated it? Is there a better way to get set up with django?
|
[
"None of the packaged downloads are recommended by the project's owners -- deprecated basically means the same thing as \"NOT recommended\". There have been several changes since the May 15 upload of the last (now-deprecated) downloads, and I imagine the project owners are working to get a new enhanced download rounded up and ready.\nDjango 1.0 is now natively supported in App Engine (see here for details) and I imagine the project's owners are simply deciding how best to support their small but important bits of added value on top of that support. If you DO need such little extras, your best best may be to either wait for a new download to be prepared and blessed, OR try an SVN checkout as explained here and see if that meets your needs (possibly w/some tweaking -- be sure to offer the tweaks as a patch to the project owners if you make any!-).\nSounds chaotic...? Welcome to open source!-)\n"
] |
[
4
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001403413_google_app_engine_python.txt
|
Q:
Error using python doctest
I am trying to use doctest, following the example from http://docs.python.org/library/doctest.html
But when I run
python example.py -v
I get this
Traceback (most recent call last):
File "example.py", line 61, in <module>
doctest.testmod()
AttributeError: 'module' object has no attribute 'testmod'
But I can import doctest in the Python interactive shell and am able to use doctest.testmod() there as well. I searched in Google and didn't find the solution.
Python version is 2.5.1 on Mac OS X
A:
Clearly the doctest module object you have at hand at that point is NOT the normal, unadulterated one you get from an import doctest from the standard library. Printing doctest.__file__ (and sys.stdout.flush()ing after that, just to make sure you do get to see the results;-) before the line-61 exception will let you know WHERE that stray doctest module is coming from.
If you show us example.py as well as that output we can probably point out what exactly you may be doing wrong, if it doesn't already become obvious to you.
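Concretely, a minimal sketch of that diagnostic, inserted just above line 61 of example.py:
import sys
print doctest.__file__   # which file is this 'doctest' module really loaded from?
sys.stdout.flush()       # make sure the output appears before the traceback
doctest.testmod()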
A:
Try inserting a
print doctest, dir(doctest)
before line 61. This will tell you the location of the doctest module, and what attributes it has. You can do this to make sure that there's nothing wrong with your doctest module.
|
Error using python doctest
|
I try to use doctest from example from http://docs.python.org/library/doctest.html
But when I run
python example.py -v
I get this
Traceback (most recent call last):
File "example.py", line 61, in <module>
doctest.testmod()
AttributeError: 'module' object has no attribute 'testmod'
But I can import doctest in python interactive shell and enable to use doctest.testmod() as well. I searched in google and didn't find the solution.
Python version is 2.5.1 on Max OSX
|
[
"Clearly the doctest module object you have at hand at that point is NOT the normal, unadulterated one you get from an import doctest from the standard library. Printing doctest.__file__ (and sys.stdout.flush()ing after that, just to make sure you do get to see the results;-) before the line-61 exception will let you know WHERE that stray doctest module is coming from.\nIf you show us example.py as well as that output we can probably point out what exactly you may be doing wrong, if it doesn't already become obvious to you.\n",
"Try inserting a \nprint doctest, dir(doctest)\n\nbefore line 61. This will tell you the location of the doctest module, and what attributes it has. You can do this to make sure that there's nothing wrong with your doctest module.\n"
] |
[
5,
0
] |
[] |
[] |
[
"doctest",
"python"
] |
stackoverflow_0001403408_doctest_python.txt
|
Q:
What are some small, fast and lightweight open source applications (µTorrent -esque)?
Possible duplicate
What is the best open source example of a lightweight
Windows Application?
µTorrent is a small bit-torrent client, a really small one. It doesn't come with an installer, just an exe you drop in your PATH somewhere. It's super lightweight and yet feature rich. Plus it is the work of one man. It's also closed-source.
Many people have been curious about how it has been written, and there are hints here and there about a custom library etc. But the question is, are there any programs with attributes like µTorrent that are available with source code--attributes like speed, small size, awesomeness.
Possible related question (/questions/9603/what-is-some-great-source-code-to-read), but think smaller than something like the Linux kernel.
Clarification: I don't want examples of bit-torrent source code, but anything which is used by tons of people (validation of awesomeness) and also fast, small and awesome!
A:
I think you should take a look at Notepad++ if you want to see a feature-rich low-consumption of power software :)
A:
Netcat
It's the program that started all of the curiosity behind networks and how things WORK.
Everyone's looked at this source code.
A:
rTorrent is a lightweight, feature-rich, console-only open-source torrent client.
A:
I like Frhed, a simple open-source Windows hex editor.
A:
FRESHMEAT is a great place to start. There are lots of small open source programs available that you can study.
Examples:
XML-RPC specification. C implementation for Python. It's easy to learn and it's fun.
Heapq [\Lib\heapq.py], xml-rpc [\Lib\xmlrpc] and lots of other code in the Python library are very well written.
|
What are some small, fast and lightweight open source applications (µTorrent -esque)?
|
Possible duplicate
What is the best open source example of a lightweight
Windows Application?
µTorrent is a small bit-torrent client, a really small one. It doesn't come with an installer, just a exe, you drop in your PATH somewhere. It's super lightweight and yet feature rich. Plus it is the work of one man. It's also closed-source.
Many people have been curious about how it has been written, and there are hints here and there about a custom library etc. But the question is, are there any programs with attributes like µTorrent that are available with source code--attributes like speed, small size, awesomeness.
Possible related question (/questions/9603/what-is-some-great-source-code-to-read), but think smaller than something like the Linux kernel.
Clarification: I don't want examples of bit-torrent source code, but anything which is used by tons of people (validation of awesomeness) and also fast, small and awesome!
|
[
"I think you should take a look at Notepad++ if you want to see a feature-rich low-consumption of power software :)\n",
"Netcat\nIt's the program that started all of the curiousity behind networks and how things WORK.\nEveryone's looked at this source code.\n",
"rTorrent is a lightweight, feature-rich, console-only open-source torrent client.\n",
"I like Frhed, a simple open-source Windows hex editor.\n",
"FRESHMEAT is a great place to start. There are lots of small open source programs available that you can study.\nExamples:\n\nXML-RPC specification.C implementation for Python. Its easy to learn and its fun.\nHeapq [\\Lib\\heapq.py] , xml-rpc [\\Lib\\xmlrpc] and lots of other codes in Python library are very well written.\n\n"
] |
[
7,
2,
1,
1,
0
] |
[] |
[] |
[
"c++",
"performance",
"python"
] |
stackoverflow_0001391756_c++_performance_python.txt
|
Q:
Efficient way to determine whether a particular function is on the stack in Python
For debugging, it is often useful to tell if a particular function is higher up on the call stack. For example, we often only want to run debugging code when a certain function called us.
One solution is to examine all of the stack entries higher up, but if this is in a function that is deep in the stack and repeatedly called, this leads to excessive overhead. The question is to find a method that allows us to determine if a particular function is higher up on the call stack in a way that is reasonably efficient.
Similar
Obtaining references to function objects on the execution stack from the frame object? - This question focuses on obtaining the function objects, rather than determining if we are in a particular function. Although the same techniques could be applied, they may end up being extremely inefficient.
A:
Unless the function you're aiming for does something very special to mark "one instance of me is active on the stack" (IOW: if the function is pristine and untouchable and can't possibly be made aware of this peculiar need of yours), there is no conceivable alternative to walking frame by frame up the stack until you hit either the top (and the function is not there) or a stack frame for your function of interest. As several comments to the question indicate, it's extremely doubtful whether it's worth striving to optimize this. But, assuming for the sake of argument that it was worthwhile...:
Edit: the original answer (by the OP) had many defects, but some have since been fixed, so I'm editing to reflect the current situation and why certain aspects are important.
First of all, it's crucial to use try/except, or with, in the decorator, so that ANY exit from a function being monitored is properly accounted for, not just normal ones (as the original version of the OP's own answer did).
Second, every decorator should ensure it keeps the decorated function's __name__ and __doc__ intact -- that's what functools.wraps is for (there are other ways, but wraps makes it simplest).
Third, just as crucial as the first point, a set, which was the data structure originally chosen by the OP, is the wrong choice: a function can be on the stack several times (direct or indirect recursion). We clearly need a "multi-set" (also known as "bag"), a set-like structure which keeps track of "how many times" each item is present. In Python, the natural implementation of a multiset is as a dict mapping keys to counts, which in turn is most handily implemented as a collections.defaultdict(int).
Fourth, a general approach should be threadsafe (when that can be accomplished easily, at least;-). Fortunately, threading.local makes it trivial, when applicable -- and here, it should surely be (each thread having its own separate stack of calls).
Fifth, an interesting issue that has been broached in some comments is how badly the decorators offered in some answers play with other decorators: the monitoring decorator appears to have to be the LAST (outermost) one, otherwise the checking breaks. This comes from the natural but unfortunate choice of using the function object itself as the key into the monitoring dict.
I propose to solve this by a different choice of key: make the decorator take a (string, say) identifier argument that must be unique (in each given thread) and use the identifier as the key into the monitoring dict. The code checking the stack must of course be aware of the identifier and use it as well.
At decorating time, the decorator can check for the uniqueness property (by using a separate set). The identifier may be left to default to the function name (so it's only explicitly required to keep the flexibility of monitoring homonymous functions in the same namespace); the uniqueness property may be explicitly renounced when several monitored functions are to be considered "the same" for monitoring purposes (this may be the case if a given def statement is meant to be executed multiple times in slightly different contexts to make several function objects that the programmer wants to consider "the same function" for monitoring purposes). Finally, it should be possible to optionally revert to the "function object as identifier" for those rare cases in which further decoration is KNOWN to be impossible (since in those cases it may be the handiest way to guarantee uniqueness).
So, putting these many considerations together, we could have (including a threadlocal_var utility function that will probably already be in a toolbox module of course;-) something like the following...:
import collections
import functools
import threading
threadlocal = threading.local()
def threadlocal_var(varname, factory, *a, **k):
v = getattr(threadlocal, varname, None)
if v is None:
v = factory(*a, **k)
setattr(threadlocal, varname, v)
return v
def monitoring(identifier=None, unique=True, use_function=False):
    def inner(f):
        assert (not use_function) or (identifier is None)
        # Bind a new local name: assigning to 'identifier' inside inner
        # would make it local to inner and break the 'is None' test above
        # with an UnboundLocalError.
        if identifier is None:
            ident = f if use_function else f.__name__
        else:
            ident = identifier
        if unique:
            monitored = threadlocal_var('uniques', set)
            if ident in monitored:
                raise ValueError('Duplicate monitoring identifier %r' % ident)
            monitored.add(ident)
        counts = threadlocal_var('counts', collections.defaultdict, int)
        @functools.wraps(f)
        def wrapper(*a, **k):
            counts[ident] += 1
            try:
                return f(*a, **k)
            finally:
                counts[ident] -= 1
        return wrapper
    return inner
I have not tested this code, so it might contain some typo or the like, but I'm offering it because I hope it does cover all the important technical points I explained above.
Is it all worth it? Probably not, as previously explained. However, I think along the lines of "if it's worth doing at all, then it's worth doing right";-).
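The matching stack check, left implicit above, could be a minimal sketch along these lines (same untested caveat):
def is_monitored_function_active(identifier):
    # True while at least one call registered under this identifier
    # is active on the current thread's stack.
    counts = threadlocal_var('counts', collections.defaultdict, int)
    return counts[identifier] > 0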
A:
I don't really like this approach, but here's a fixed-up version of what you were doing:
from collections import defaultdict
import threading
functions_on_stack = threading.local()
def record_function_on_stack(f):
def wrapped(*args, **kwargs):
if not getattr(functions_on_stack, "stacks", None):
functions_on_stack.stacks = defaultdict(int)
functions_on_stack.stacks[wrapped] += 1
try:
result = f(*args, **kwargs)
finally:
functions_on_stack.stacks[wrapped] -= 1
if functions_on_stack.stacks[wrapped] == 0:
del functions_on_stack.stacks[wrapped]
return result
wrapped.orig_func = f
return wrapped
def function_is_on_stack(f):
    # Guard with getattr: this thread may not have run any wrapped function yet.
    return f in getattr(functions_on_stack, "stacks", ())
def nested():
if function_is_on_stack(test):
print "nested"
@record_function_on_stack
def test():
nested()
test()
This handles recursion, threading and exceptions.
I don't like this approach for two reasons:
It doesn't work if the function is decorated further: this must be the final decorator.
If you're using this for debugging, it means you have to edit code in two places to use it; one to add the decorator, and one to use it. It's much more convenient to just examine the stack, so you only have to edit code in the code you're debugging.
A better approach would be to examine the stack directly (possibly as a native extension for speed), and if possible, find a way to cache the results for the lifetime of the stack frame. (I'm not sure if that's possible without modifying the Python core, though.)
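For reference, a sketch of that direct examination (sys._getframe is CPython-specific, and the walk is O(stack depth) per call):
import sys

def code_is_on_stack(func):
    # Compare code objects frame by frame, starting from our caller.
    target = func.func_code   # func.__code__ on Python 3
    frame = sys._getframe(1)
    while frame is not None:
        if frame.f_code is target:
            return True
        frame = frame.f_back
    return False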
|
Efficient way to determine whether a particular function is on the stack in Python
|
For debugging, it is often useful to tell if a particular function is higher up on the call stack. For example, we often only want to run debugging code when a certain function called us.
One solution is to examine all of the stack entries higher up, but it this is in a function that is deep in the stack and repeatedly called, this leads to excessive overhead. The question is to find a method that allows us to determine if a particular function is higher up on the call stack in a way that is reasonably efficient.
Similar
Obtaining references to function objects on the execution stack from the frame object? - This question focuses on obtaining the function objects, rather than determining if we are in a particular function. Although the same techniques could be applied, they may end up being extremely inefficient.
|
[
"Unless the function you're aiming for does something very special to mark \"one instance of me is active on the stack\" (IOW: if the function is pristine and untouchable and can't possibly be made aware of this peculiar need of yours), there is no conceivable alternative to walking frame by frame up the stack until you hit either the top (and the function is not there) or a stack frame for your function of interest. As several comments to the question indicate, it's extremely doubtful whether it's worth striving to optimize this. But, assuming for the sake of argument that it was worthwhile...:\nEdit: the original answer (by the OP) had many defects, but some have since been fixed, so I'm editing to reflect the current situation and why certain aspects are important.\nFirst of all, it's crucial to use try/except, or with, in the decorator, so that ANY exit from a function being monitored is properly accounted for, not just normal ones (as the original version of the OP's own answer did).\nSecond, every decorator should ensure it keeps the decorated function's __name__ and __doc__ intact -- that's what functools.wraps is for (there are other ways, but wraps makes it simplest).\nThird, just as crucial as the first point, a set, which was the data structure originally chosen by the OP, is the wrong choice: a function can be on the stack several times (direct or indirect recursion). We clearly need a \"multi-set\" (also known as \"bag\"), a set-like structure which keeps track of \"how many times\" each item is present. In Python, the natural implementation of a multiset is as a dict mapping keys to counts, which in turn is most handily implemented as a collections.defaultdict(int).\nFourth, a general approach should be threadsafe (when that can be accomplished easily, at least;-). Fortunately, threading.local makes it trivial, when applicable -- and here, it should surely be (each stack having its own separate thread of calls).\nFifth, an interesting issue that has been broached in some comments (noticing how badly the offered decorators in some answers play with other decorators: the monitoring decorator appears to have to be the LAST (outermost) one, otherwise the checking breaks. This comes from the natural but unfortunate choice of using the function object itself as the key into the monitoring dict.\nI propose to solve this by a different choice of key: make the decorator take a (string, say) identifier argument that must be unique (in each given thread) and use the identifier as the key into the monitoring dict. The code checking the stack must of course be aware of the identifier and use it as well.\nAt decorating time, the decorator can check for the uniqueness property (by using a separate set). The identifier may be left to default to the function name (so it's only explicitly required to keep the flexibility of monitoring homonymous functions in the same namespace); the uniqueness property may be explicitly renounced when several monitored functions are to be considered \"the same\" for monitoring purposes (this may be the case if a given def statement is meant to be executed multiple times in slightly different contexts to make several function objects that the programmers wants to consider \"the same function\" for monitoring purposes). 
Finally, it should be possible to optionally revert to the \"function object as identifier\" for those rare cases in which further decoration is KNOWN to be impossible (since in those cases it may be the handiest way to guarantee uniqueness).\nSo, putting these many considerations together, we could have (including a threadlocal_var utility function that will probably already be in a toolbox module of course;-) something like the following...:\nimport collections\nimport functools\nimport threading\n\nthreadlocal = threading.local()\n\ndef threadlocal_var(varname, factory, *a, **k):\n v = getattr(threadlocal, varname, None)\n if v is None:\n v = factory(*a, **k)\n setattr(threadlocal, varname, v)\n return v\n\ndef monitoring(identifier=None, unique=True, use_function=False):\n def inner(f):\n assert (not use_function) or (identifier is None)\n if identifier is None:\n if use_function:\n identifier = f\n else:\n identifier = f.__name__\n if unique:\n monitored = threadlocal_var('uniques', set)\n if identifier in monitored:\n raise ValueError('Duplicate monitoring identifier %r' % identifier)\n monitored.add(identifier)\n counts = threadlocal_var('counts', collections.defaultdict, int)\n @functools.wraps(f)\n def wrapper(*a, **k):\n counts[identifier] += 1\n try:\n return f(*a, **k)\n finally:\n counts[identifier] -= 1\n return wrapper\n return inner\n\nI have not tested this code, so it might contain some typo or the like, but I'm offering it because I hope it does cover all the important technical points I explained above.\nIs it all worth it? Probably not, as previously explained. However, I think along the lines of \"if it's worth doing at all, then it's worth doing right\";-).\n",
"I don't really like this approach, but here's a fixed-up version of what you were doing:\nfrom collections import defaultdict\nimport threading\nfunctions_on_stack = threading.local()\n\ndef record_function_on_stack(f):\n def wrapped(*args, **kwargs):\n if not getattr(functions_on_stack, \"stacks\", None):\n functions_on_stack.stacks = defaultdict(int)\n functions_on_stack.stacks[wrapped] += 1\n\n try:\n result = f(*args, **kwargs)\n finally:\n functions_on_stack.stacks[wrapped] -= 1\n if functions_on_stack.stacks[wrapped] == 0:\n del functions_on_stack.stacks[wrapped]\n return result\n\n wrapped.orig_func = f\n return wrapped\n\ndef function_is_on_stack(f):\n return f in functions_on_stack.stacks\n\ndef nested():\n if function_is_on_stack(test):\n print \"nested\"\n\n@record_function_on_stack\ndef test():\n nested()\n\ntest()\n\nThis handles recursion, threading and exceptions.\nI don't like this approach for two reasons:\n\nIt doesn't work if the function is decorated further: this must be the final decorator.\nIf you're using this for debugging, it means you have to edit code in two places to use it; one to add the decorator, and one to use it. It's much more convenient to just examine the stack, so you only have to edit code in the code you're debugging.\n\nA better approach would be to examine the stack directly (possibly as a native extension for speed), and if possible, find a way to cache the results for the lifetime of the stack frame. (I'm not sure if that's possible without modifying the Python core, though.)\n"
] |
[
14,
1
] |
[] |
[] |
[
"callstack",
"python"
] |
stackoverflow_0001403471_callstack_python.txt
|
Q:
Pyqt GroupBox parenting
In Python and Pyqt - I've got a simple class which instantiates a Label class and a GroupBox class.
According to docs, passing the Groupbox to the Label upon creation should make the Groupbox the parent of Label. However, I must be missing something simple here. When I create the GroupBox it's fine, when I create the Label however - it appears distorted (or perhaps behind the GroupBox?)
Cheers -
from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys
class FileBrowser(QMainWindow):
def __init__(self):
QMainWindow.__init__(self)
self.setGeometry(0, 0, 920, 780)
self.initClasses()
def initClasses(self):
# GroupBox
self.groupBox1 = GroupBox(self, QRect(20, 10, 191, 131), 'Shot Info')
# Label
self.labelGroup1_ShotInfo = Label(self, QRect(10, 26, 52, 15), 'Film')
class GroupBox(QWidget):
def __init__(self, parent, geo, title):
QWidget.__init__(self, parent)
obj = QGroupBox(parent)
obj.setGeometry(geo)
obj.setTitle(title)
class Label(QWidget):
def __init__(self, parent, geo, text):
QWidget.__init__(self, parent)
obj = QLabel(parent)
obj.setGeometry(geo)
obj.setText(text)
def main():
app = QApplication(sys.argv)
w = FileBrowser()
w.show()
sys.exit(app.exec_())
if __name__ == "__main__":
main()
A:
The problem is that you are not using a layout. Because you are not using one, both widgets are being rendered one on top of the other one. It of course depends on what you are trying to do, but the following should be a good example:
class FileBrowser(QMainWindow):
def __init__(self):
QMainWindow.__init__(self)
self.setGeometry(0, 0, 920, 780)
self.initClasses()
# changes
layout = QVBoxLayout(self) # create layout out
layout.addWidget(self.groupBox1) # add widget
layout.addWidget(self.labelGroup1_ShotInfo) # add widget
# set my layout to make sure contents are correctly rendered
self.setLayout(layout)
def initClasses(self):
# GroupBox
self.groupBox1 = GroupBox(self, QRect(20, 10, 191, 131), 'Shot Info')
# Label
self.labelGroup1_ShotInfo = Label(self, QRect(10, 26, 52, 15), 'Film')
The above example uses a vertical layout and solves the problem.
|
Pyqt GroupBox parenting
|
In Python and Pyqt - I've got a simple class which instantiates a Label class and a GroupBox class.
According to docs, passing the Groupbox to the Label upon creation should make the Groupbox the parent of Label. However, I must be missing something simple here. When I create the GroupBox it's fine, when I create the Label however - it appears distorted (or perhaps behind the GroupBox?)
Cheers -
from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys
class FileBrowser(QMainWindow):
def __init__(self):
QMainWindow.__init__(self)
self.setGeometry(0, 0, 920, 780)
self.initClasses()
def initClasses(self):
# GroupBox
self.groupBox1 = GroupBox(self, QRect(20, 10, 191, 131), 'Shot Info')
# Label
self.labelGroup1_ShotInfo = Label(self, QRect(10, 26, 52, 15), 'Film')
class GroupBox(QWidget):
def __init__(self, parent, geo, title):
QWidget.__init__(self, parent)
obj = QGroupBox(parent)
obj.setGeometry(geo)
obj.setTitle(title)
class Label(QWidget):
def __init__(self, parent, geo, text):
QWidget.__init__(self, parent)
obj = QLabel(parent)
obj.setGeometry(geo)
obj.setText(text)
def main():
app = QApplication(sys.argv)
w = FileBrowser()
w.show()
sys.exit(app.exec_())
if __name__ == "__main__":
main()
|
[
"The problem is that you are not using a layout. Because you are not using one, both widgets are being rendered one on top of the other one. It of course depends on what you are trying to do, but the following should be a good example:\nclass FileBrowser(QMainWindow):\n def __init__(self):\n QMainWindow.__init__(self)\n\n self.setGeometry(0, 0, 920, 780)\n self.initClasses()\n # changes\n layout = QVBoxLayout(self) # create layout out\n layout.addWidget(self.groupBox1) # add widget\n layout.addWidget(self.labelGroup1_ShotInfo) # add widget\n # set my layout to make sure contents are correctly rendered\n self.setLayout(layout) \n\n def initClasses(self):\n # GroupBox\n self.groupBox1 = GroupBox(self, QRect(20, 10, 191, 131), 'Shot Info')\n\n # Label\n self.labelGroup1_ShotInfo = Label(self, QRect(10, 26, 52, 15), 'Film')\n\nThe above example uses a vertical layout and solves the problem. \n"
] |
[
2
] |
[] |
[] |
[
"oop",
"pyqt",
"python",
"qt"
] |
stackoverflow_0001391174_oop_pyqt_python_qt.txt
|
Q:
Pythonic way to return list of every nth item in a larger list
Say we have a list of numbers from 0 to 1000. Is there a pythonic/efficient way to produce a list of the first and every subsequent 10th item, i.e. [0, 10, 20, 30, ... ]?
Yes, I can do this using a for loop, but I'm wondering if there is a neater way to do this, perhaps even in one line?
A:
>>> lst = list(range(165))
>>> lst[0::10]
[0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160]
Note that this is around 100 times faster than looping and checking a modulus for each element:
$ python -m timeit -s "lst = list(range(1000))" "lst1 = [x for x in lst if x % 10 == 0]"
1000 loops, best of 3: 525 usec per loop
$ python -m timeit -s "lst = list(range(1000))" "lst1 = lst[0::10]"
100000 loops, best of 3: 4.02 usec per loop
A:
source_list[::10] is the most obvious, but it only works on sequences (not arbitrary iterables) and is not memory efficient for large lists.
itertools.islice(source_sequence, 0, None, 10) works for any iterable and is memory-efficient, but is probably not the fastest solution for large lists and big steps.
(source_list[i] for i in xrange(0, len(source_list), 10))
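For example, a sketch of the islice version (the second option above):
from itertools import islice

def every_nth(iterable, n):
    # Lazily yield item 0, n, 2n, ... from any iterable.
    return islice(iterable, 0, None, n)

print list(every_nth(xrange(50), 10))   # [0, 10, 20, 30, 40]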
A:
Use range(start, end, step)
li = list(range(0, 1000, 10))
[0, 10, 20, 30, 40, 50, 60, 70, 80, 90 ... 990]
Or, if you have a list use slice: From manual: s[i:j:k] slice of s from i to j with step k
yourlist = [0, ... ,10 ...]
sub = yourlist[::10] # same as yourlist[0:100:10]
>>> sub
[0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
A:
You can use the slice operator like this:
l = [1,2,3,4,5]
l2 = l[::2] # get subsequent 2nd item
A:
newlist = oldlist[::10]
This picks out every 10th element of the list.
A:
Why not just use the step parameter of the range function as well to get:
l = range(0, 1000, 10)
For comparison, on my machine:
H:\>python -m timeit -s "l = range(1000)" "l1 = [x for x in l if x % 10 == 0]"
10000 loops, best of 3: 90.8 usec per loop
H:\>python -m timeit -s "l = range(1000)" "l1 = l[0::10]"
1000000 loops, best of 3: 0.861 usec per loop
H:\>python -m timeit -s "l = range(0, 1000, 10)"
100000000 loops, best of 3: 0.0172 usec per loop
A:
existing_list = range(0, 1001)
filtered_list = [i for i in existing_list if i % 10 == 0]
A:
Here is a better implementation of an "every 10th item" list comprehension, that does not use the list contents as part of the membership test:
>>> l = range(165)
>>> [ item for i,item in enumerate(l) if i%10==0 ]
[0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160]
>>> l = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
>>> [ item for i,item in enumerate(l) if i%10==0 ]
['A', 'K', 'U']
But this is still far slower than just using list slicing.
|
Pythonic way to return list of every nth item in a larger list
|
Say we have a list of numbers from 0 to 1000. Is there a pythonic/efficient way to produce a list of the first and every subsequent 10th item, i.e. [0, 10, 20, 30, ... ]?
Yes, I can do this using a for loop, but I'm wondering if there is a neater way to do this, perhaps even in one line?
|
[
">>> lst = list(range(165))\n>>> lst[0::10]\n[0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160]\n\nNote that this is around 100 times faster than looping and checking a modulus for each element:\n$ python -m timeit -s \"lst = list(range(1000))\" \"lst1 = [x for x in lst if x % 10 == 0]\"\n1000 loops, best of 3: 525 usec per loop\n$ python -m timeit -s \"lst = list(range(1000))\" \"lst1 = lst[0::10]\"\n100000 loops, best of 3: 4.02 usec per loop\n\n",
"\nsource_list[::10] is the most obvious, but this doesn't work for any iterable and is not memory efficient for large lists.\nitertools.islice(source_sequence, 0, None, 10) works for any iterable and is memory-efficient, but probably is not the fastest solution for large list and big step.\n(source_list[i] for i in xrange(0, len(source_list), 10))\n\n",
"Use range(start, end, step)\nli = list(range(0, 1000, 10))\n\n[0, 10, 20, 30, 40, 50, 60, 70, 80, 90 ... 990]\n\nOr, if you have a list use slice: From manual: s[i:j:k] slice of s from i to j with step k\nyourlist = [0, ... ,10 ...] \nsub = yourlist[::10] # same as yourlist[0:100:10]\n\n>>> sub\n[0, 10, 20, 30, 40, 50, 60, 70, 80, 90]\n\n",
"You can use the slice operator like this: \nl = [1,2,3,4,5]\nl2 = l[::2] # get subsequent 2nd item\n\n",
"newlist = oldlist[::10]\n\nThis picks out every 10th element of the list.\n",
"Why not just use a step parameter of range function as well to get:\nl = range(0, 1000, 10)\n\nFor comparison, on my machine:\nH:\\>python -m timeit -s \"l = range(1000)\" \"l1 = [x for x in l if x % 10 == 0]\"\n10000 loops, best of 3: 90.8 usec per loop\nH:\\>python -m timeit -s \"l = range(1000)\" \"l1 = l[0::10]\"\n1000000 loops, best of 3: 0.861 usec per loop\nH:\\>python -m timeit -s \"l = range(0, 1000, 10)\"\n100000000 loops, best of 3: 0.0172 usec per loop\n\n",
"existing_list = range(0, 1001)\nfiltered_list = [i for i in existing_list if i % 10 == 0]\n\n",
"Here is a better implementation of an \"every 10th item\" list comprehension, that does not use the list contents as part of the membership test:\n>>> l = range(165)\n>>> [ item for i,item in enumerate(l) if i%10==0 ]\n[0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160]\n>>> l = list(\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\")\n>>> [ item for i,item in enumerate(l) if i%10==0 ]\n['A', 'K', 'U']\n\nBut this is still far slower than just using list slicing.\n"
] |
[
365,
71,
31,
30,
13,
4,
3,
1
] |
[
"List comprehensions are exactly made for that:\nsmaller_list = [x for x in range(100001) if x % 10 == 0]\n\nYou can get more info about them in the python official documentation:\nhttp://docs.python.org/tutorial/datastructures.html#list-comprehensions\n"
] |
[
-9
] |
[
"list",
"python"
] |
stackoverflow_0001403674_list_python.txt
|
Q:
Setting up/Inserting into Many-to-Many Database with Python, SQLALchemy, Sqlite
I am learning Python, and as a first project am taking Twitter RSS feeds, parsing the data, and inserting the data into a sqlite database. I have been able to successfully parse each feed entry into a content variable (e.g., "You should buy low..."), a url variable (e.g., u'http://bit.ly/HbFwL'), and a hashtag list (e.g., [u'#stocks', u'#stockmarket', u'#finance', u'#money', u'#mkt']). I have also been successful at inserting these three pieces of information into three separate columns in a sqlite "RSSEntries" table, where each row is a different rss entry/tweet.
However, I want to set up a database where there is a many-to-many relation between the individual rss feed entries (i.e., individual tweets) and the hashtags that are associated with each entry. So, I set up the following tables using sqlalchemy (the first table just includes the Twitterers' rss feed urls that I want to download and parse):
RSSFeeds = schema.Table('feeds', metadata,
    schema.Column('id', types.Integer,
        schema.Sequence('feeds_seq_id', optional=True), primary_key=True),
    schema.Column('url', types.VARCHAR(1000), default=u''),
)
RSSEntries = schema.Table('entries', metadata,
    schema.Column('id', types.Integer,
        schema.Sequence('entries_seq_id', optional=True), primary_key=True),
    schema.Column('feed_id', types.Integer, schema.ForeignKey('feeds.id')),
    schema.Column('short_url', types.VARCHAR(1000), default=u''),
    schema.Column('content', types.Text(), nullable=False),
    schema.Column('hashtags', types.Unicode(255)),
)
tag_table = schema.Table('tag', metadata,
    schema.Column('id', types.Integer,
        schema.Sequence('tag_seq_id', optional=True), primary_key=True),
    schema.Column('tagname', types.Unicode(20), nullable=False, unique=True)
)
entrytag_table = schema.Table('entrytag', metadata,
    schema.Column('id', types.Integer,
        schema.Sequence('entrytag_seq_id', optional=True), primary_key=True),
    schema.Column('entryid', types.Integer, schema.ForeignKey('entries.id')),
    schema.Column('tagid', types.Integer, schema.ForeignKey('tag.id')),
)
So far, I've been able to successfully enter just the three main pieces of information into the RSSEntries table using the following code (abbreviated where...)
engine = create_engine('sqlite:///test.sqlite', echo=True)
conn = engine.connect()
.........
conn.execute('INSERT INTO entries (feed_id, short_url, content, hashtags) VALUES (?,?,?,?)',
    (id, tinyurl, content, hashtags))
Now, here's the huge question. How do I insert the data into the feedtag and tagname tables? This is a real sticking point for me, since, to start, the hashtag variable is currently a list, and each feed entry could contain anywhere between 0 and, say, 6 hashtags. I know how to insert the whole list into a single column but not how to insert just the elements of the list into separate columns (or, in this example, rows). A bigger sticking point is the general question of how to insert the individual hashtags into the tagname table when a tagname could be used in numerous different feed entries, and then how to have the "associations" appear properly in the feedtag table.
In brief, I know exactly how each of the tables should look when they're all done, but I have no idea how to write the code to get the data into the tagname and feedtag tables. The whole "many-to-many" set-up is new to me.
I could really use your help on this. Thanks in advance for any suggestions.
-Greg
P.S. - Edit - Thanks to Ants Aasma's excellent suggestions, I've been able to almost get the whole thing to work. Specifically, the 1st and 2nd suggested blocks of code now work fine, but I'm having a problem implementing the 3rd block of code. I am getting the following error:
Traceback (most recent call last):
File "RSS_sqlalchemy.py", line 242, in <module>
store_feed_items(id, entries)
File "RSS_sqlalchemy.py", line 196, in store_feed_items
[{'feedid': entry_id, 'tagid': tag_ids[tag]} for tag in hashtags2])
NameError: global name 'entry_id' is not defined
Then, because I couldn't tell where Ants Aasma got the "entry_id" part from, I tried replacing it with "entries.id", thinking this might insert the "id" from the "entries" table. However, in that case I get this error:
Traceback (most recent call last):
File "RSS_sqlalchemy.py", line 242, in <module>
store_feed_items(id, entries)
File "RSS_sqlalchemy.py", line 196, in store_feed_items
[{'feedid': entries.id, 'tagid': tag_ids[tag]} for tag in hashtags2])
AttributeError: 'list' object has no attribute 'id'
I'm not quite sure where the problem is, and I don't really understand where the "entry_id" part fits in, so I've pasted in below all of my relevant "insertion" code. Can somebody help me see what's wrong? Note that I also just noticed that I was incorrectly calling my last table "feedtag_table" instead of "entrytag_table". This didn't match with my initially stated goal of relating individual feed entries to hashtags, rather than feeds to hashtags. I've since corrected the code above.
feeds = conn.execute('SELECT id, url FROM feeds').fetchall()
def store_feed_items(id, items):
    """ Takes a feed_id and a list of items and stored them in the DB """
    for entry in items:
        conn.execute('SELECT id from entries WHERE short_url=?', (entry.link,))
        s = unicode(entry.summary)
        test = s.split()
        tinyurl2 = [i for i in test if i.startswith('http://')]
        hashtags2 = [i for i in s.split() if i.startswith('#')]
        content2 = ' '.join(i for i in s.split() if i not in tinyurl2+hashtags2)
        content = unicode(content2)
        tinyurl = unicode(tinyurl2)
        hashtags = unicode(hashtags2)
        date = strftime("%Y-%m-%d %H:%M:%S", entry.updated_parsed)
        conn.execute(RSSEntries.insert(), {'feed_id': id, 'short_url': tinyurl,
            'content': content, 'hashtags': hashtags, 'date': date})
        tags = tag_table
        tag_id_query = select([tags.c.tagname, tags.c.id], tags.c.tagname.in_(hashtags))
        tag_ids = dict(conn.execute(tag_id_query).fetchall())
        for tag in hashtags:
            if tag not in tag_ids:
                result = conn.execute(tags.insert(), {'tagname': tag})
                tag_ids[tag] = result.last_inserted_ids()[0]
        conn.execute(entrytag_table.insert(),
            [{'feedid': id, 'tagid': tag_ids[tag]} for tag in hashtags2])
A:
First, you should use the SQLAlchemy SQL builder for the inserts to give SQLAlchemy more insight into what you're doing.
result = conn.execute(RSSEntries.insert(), {'feed_id': id, 'short_url': tinyurl,
    'content': content, 'hashtags': hashtags, 'date': date})
entry_id = result.last_inserted_ids()[0]
To insert tag associations to your schema you need to first look up your tag identifiers and create any that don't exist:
tags = tag_table
tag_id_query = select([tags.c.tagname, tags.c.id], tags.c.tagname.in_(hashtags))
tag_ids = dict(conn.execute(tag_id_query).fetchall())
for tag in hashtags:
    if tag not in tag_ids:
        result = conn.execute(tags.insert(), {'tagname': tag})
        tag_ids[tag] = result.last_inserted_ids()[0]
Then just insert the associated id's into the feedtag_table. You can use the executemany support by passing a list of dicts to the execute method.
conn.execute(feedtag_table.insert(),
    [{'feedid': entry_id, 'tagid': tag_ids[tag]} for tag in hashtags])
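For completeness, here is how the pieces fit inside the questioner's loop; a minimal sketch assuming the 0.5-era last_inserted_ids() API, and using the entryid/tagid column names from the entrytag table above (elided parsing steps are marked with comments):
for entry in items:
    # ... parse tinyurl, content, hashtags, hashtags2 from the entry as above ...
    result = conn.execute(RSSEntries.insert(), {'feed_id': id,
        'short_url': tinyurl, 'content': content, 'hashtags': hashtags})
    entry_id = result.last_inserted_ids()[0]  # id of the row just inserted
    # ... look up / create tag_ids as above ...
    conn.execute(entrytag_table.insert(),
        [{'entryid': entry_id, 'tagid': tag_ids[tag]} for tag in hashtags2])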
|
Setting up/Inserting into Many-to-Many Database with Python, SQLALchemy, Sqlite
|
I am learning Python, and as a first project am taking Twitter RSS feeds, parsing the data, and inserting the data into a sqlite database. I have been able to successfully parse each feed entry into a content variable (e.g., "You should buy low..."), a url variable (e.g., u'http://bit.ly/HbFwL'), and a hashtag list (e.g., [u'#stocks', u'#stockmarket', u'#finance', u'#money', u'#mkt']). I have also been successful at inserting these three pieces of information into three separate columns in a sqlite "RSSEntries" table, where each row is a different rss entry/tweet.
However, I want to set up a database where there is a many-to-many relation between the individual rss feed entries (i.e., individual tweets) and the hashtags that are associated with each entry. So, I set up the following tables using sqlalchemy (the first table just includes the Twitterers' rss feed urls that I want to download and parse):
RSSFeeds = schema.Table('feeds', metadata,
    schema.Column('id', types.Integer,
        schema.Sequence('feeds_seq_id', optional=True), primary_key=True),
    schema.Column('url', types.VARCHAR(1000), default=u''),
)
RSSEntries = schema.Table('entries', metadata,
    schema.Column('id', types.Integer,
        schema.Sequence('entries_seq_id', optional=True), primary_key=True),
    schema.Column('feed_id', types.Integer, schema.ForeignKey('feeds.id')),
    schema.Column('short_url', types.VARCHAR(1000), default=u''),
    schema.Column('content', types.Text(), nullable=False),
    schema.Column('hashtags', types.Unicode(255)),
)
tag_table = schema.Table('tag', metadata,
    schema.Column('id', types.Integer,
        schema.Sequence('tag_seq_id', optional=True), primary_key=True),
    schema.Column('tagname', types.Unicode(20), nullable=False, unique=True)
)
entrytag_table = schema.Table('entrytag', metadata,
    schema.Column('id', types.Integer,
        schema.Sequence('entrytag_seq_id', optional=True), primary_key=True),
    schema.Column('entryid', types.Integer, schema.ForeignKey('entries.id')),
    schema.Column('tagid', types.Integer, schema.ForeignKey('tag.id')),
)
So far, I've been able to successfully enter just the three main pieces of information into the RSSEntries table using the following code (abbreviated where...)
engine = create_engine('sqlite:///test.sqlite', echo=True)
conn = engine.connect()
.........
conn.execute('INSERT INTO entries (feed_id, short_url, content, hashtags) VALUES (?,?,?,?)',
    (id, tinyurl, content, hashtags))
Now, here's the huge question. How do I insert the data into the feedtag and tagname tables? This is a real sticking point for me, since, to start, the hashtag variable is currently a list, and each feed entry could contain anywhere between 0 and, say, 6 hashtags. I know how to insert the whole list into a single column but not how to insert just the elements of the list into separate columns (or, in this example, rows). A bigger sticking point is the general question of how to insert the individual hashtags into the tagname table when a tagname could be used in numerous different feed entries, and then how to have the "associations" appear properly in the feedtag table.
In brief, I know exactly how each of the tables should look when they're all done, but I have no idea how to write the code to get the data into the tagname and feedtag tables. The whole "many-to-many" set-up is new to me.
I could really use your help on this. Thanks in advance for any suggestions.
-Greg
P.S. - Edit - Thanks to Ants Aasma's excellent suggestions, I've been able to almost get the whole thing to work. Specifically, the 1st and 2nd suggested blocks of code now work fine, but I'm having a problem implementing the 3rd block of code. I am getting the following error:
Traceback (most recent call last):
File "RSS_sqlalchemy.py", line 242, in <module>
store_feed_items(id, entries)
File "RSS_sqlalchemy.py", line 196, in store_feed_items
[{'feedid': entry_id, 'tagid': tag_ids[tag]} for tag in hashtags2])
NameError: global name 'entry_id' is not defined
Then, because I couldn't tell where Ants Aasma got the "entry_id" part from, I tried replacing it with "entries.id", thinking this might insert the "id" from the "entries" table. However, in that case I get this error:
Traceback (most recent call last):
File "RSS_sqlalchemy.py", line 242, in <module>
store_feed_items(id, entries)
File "RSS_sqlalchemy.py", line 196, in store_feed_items
[{'feedid': entries.id, 'tagid': tag_ids[tag]} for tag in hashtags2])
AttributeError: 'list' object has no attribute 'id'
I'm not quite sure where the problem is, and I don't really understand where the "entry_id" part fits in, so I've pasted in below all of my relevant "insertion" code. Can somebody help me see what's wrong? Note that I also just noticed that I was incorrectly calling my last table "feedtag_table" instead of "entrytag_table". This didn't match with my initially stated goal of relating individual feed entries to hashtags, rather than feeds to hashtags. I've since corrected the code above.
feeds = conn.execute('SELECT id, url FROM feeds').fetchall()
def store_feed_items(id, items):
    """ Takes a feed_id and a list of items and stored them in the DB """
    for entry in items:
        conn.execute('SELECT id from entries WHERE short_url=?', (entry.link,))
        s = unicode(entry.summary)
        test = s.split()
        tinyurl2 = [i for i in test if i.startswith('http://')]
        hashtags2 = [i for i in s.split() if i.startswith('#')]
        content2 = ' '.join(i for i in s.split() if i not in tinyurl2+hashtags2)
        content = unicode(content2)
        tinyurl = unicode(tinyurl2)
        hashtags = unicode(hashtags2)
        date = strftime("%Y-%m-%d %H:%M:%S", entry.updated_parsed)
        conn.execute(RSSEntries.insert(), {'feed_id': id, 'short_url': tinyurl,
            'content': content, 'hashtags': hashtags, 'date': date})
        tags = tag_table
        tag_id_query = select([tags.c.tagname, tags.c.id], tags.c.tagname.in_(hashtags))
        tag_ids = dict(conn.execute(tag_id_query).fetchall())
        for tag in hashtags:
            if tag not in tag_ids:
                result = conn.execute(tags.insert(), {'tagname': tag})
                tag_ids[tag] = result.last_inserted_ids()[0]
        conn.execute(entrytag_table.insert(),
            [{'feedid': id, 'tagid': tag_ids[tag]} for tag in hashtags2])
|
[
"First, you should use the SQLAlchemy SQL builder for the inserts to give SQLAlcehemy more insight into what you're doing.\n result = conn.execute(RSSEntries.insert(), {'feed_id': id, 'short_url': tinyurl,\n 'content': content, 'hashtags': hashtags, 'date': date})\n entry_id = result.last_insert_ids()[0]\n\nTo insert tag associations to your schema you need to fist look up your tag identifiers and create any that don't exist:\ntags = tag_table\ntag_id_query = select([tags.c.tagname, tags.c.id], tags.c.tagname.in_(hashtags))\ntag_ids = dict(conn.execute(tag_id_query).fetchall())\nfor tag in hashtags:\n if tag not in tag_ids:\n result = conn.execute(tags.insert(), {'tagname': tag})\n tag_ids[tag] = result.last_inserted_ids()[0]\n\nThen just insert the associated id's into the feedtag_table. You can use the executemany support by passing a list of dicts to the execute method.\nconn.execute(feedtag_table.insert(),\n [{'feedid': entry_id, 'tagid': tag_ids[tag]} for tag in hashtags])\n\n"
] |
[
4
] |
[] |
[] |
[
"insert",
"many_to_many",
"python",
"sqlalchemy",
"sqlite"
] |
stackoverflow_0001403084_insert_many_to_many_python_sqlalchemy_sqlite.txt
|
Q:
Combine tab-separated value (TSV) files into an Excel 2007 (XLSX) spreadsheet
I need to combine several tab-separated value (TSV) files into an Excel 2007 (XLSX) spreadsheet, preferably using Python. There is not much cleverness needed in combining them - just copying each TSV file onto a separate sheet in Excel will do. Of course, the data needs to be split into columns and rows same as Excel does when I manually copy-paste the data into the UI.
I've had a look at the raw XML file Excel 2007 generates and it's huge and complex, so writing that from scratch doesn't seem realistic. Are there any libraries available for this?
A:
Looks like xlwt may serve your needs -- you can read each TSV file with Python's standard library csv module (which DOES do tab-separated as well as comma-separated etc, don't worry!-) and use xlwt (maybe via this cheatsheet;-) to create an XLS file, make sheets in it, build each sheet from the data you read via csv, etc. Not sure about XLSX vs plain XLS support but maybe the XLS might be enough...?
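A minimal sketch of that csv + xlwt combination (the file names are placeholders, and note that xlwt writes legacy XLS, not XLSX):
import csv
import glob
import xlwt

book = xlwt.Workbook()
for path in glob.glob('*.tsv'):              # hypothetical input files
    sheet = book.add_sheet(path[:31])        # Excel caps sheet names at 31 chars
    for r, row in enumerate(csv.reader(open(path, 'rb'), delimiter='\t')):
        for c, value in enumerate(row):
            sheet.write(r, c, value.decode('utf-8'))
book.save('combined.xls')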
A:
The best python module for directly creating Excel files is xlwt, but it doesn't support XLSX.
As I see it, your options are:
1. If you only have "several", you could just do it by hand.
2. Use pythonwin to control Excel through COM. This requires you to run the code on a Windows machine with Excel 2007 installed.
3. Use python to do some preprocessing on the TSV to produce a format that will make step (1) easier. I'm not sure if Excel reads TSV, but it will certainly read CSV files directly.
A:
Note that Excel 2007 will quite happily read "legacy" XLS files (those written by Excel 97-2003 and by xlwt). You need XLSX files because .....?
If you want to go with the defaults that Excel will choose when deciding whether each piece of your data is a number, a date, or some text, use pythonwin to drive Excel 2007. If the data is in a fixed layout such that other than a possible heading row, each column contains data that is all of one known type, consider using xlwt.
You may wish to approach xlwt via http://www.python-excel.org which contains an up-to-date tutorial for xlrd, xlwt, and xlutils.
|
Combine tab-separated value (TSV) files into an Excel 2007 (XLSX) spreadsheet
|
I need to combine several tab-separated value (TSV) files into an Excel 2007 (XLSX) spreadsheet, preferably using Python. There is not much cleverness needed in combining them - just copying each TSV file onto a separate sheet in Excel will do. Of course, the data needs to be split into columns and rows same as Excel does when I manually copy-paste the data into the UI.
I've had a look at the raw XML file Excel 2007 generates and it's huge and complex, so writing that from scratch doesn't seem realistic. Are there any libraries available for this?
|
[
"Looks like xlwt may serve your needs -- you can read each TSV file with Python's standard library csv module (which DOES do tab-separated as well as comma-separated etc, don't worry!-) and use xlwt (maybe via this cheatsheet;-) to create an XLS file, make sheets in it, build each sheet from the data you read via csv, etc. Not sure about XLSX vs plain XLS support but maybe the XLS might be enough...?\n",
"The best python module for directly creating Excel files is xlwt, but it doesn't support XLSX.\nAs I see it, your options are:\n\nIf you only have \"several\", you could just do it by hand.\nUse pythonwin to control Excel through COM. This requires you to run the code on a Windows machine with Excel 2007 installed.\nUse python to do some preprocessing on the TSV to produce a format that will make step (1) easier. I'm not sure if Excel reads TSV, but it will certainly read CSV files directly.\n\n",
"Note that Excel 2007 will quite happily read \"legacy\" XLS files (those written by Excel 97-2003 and by xlwt). You need XLSX files because .....?\nIf you want to go with the defaults that Excel will choose when deciding whether each piece of your data is a number, a date, or some text, use pythonwin to drive Excel 2007. If the data is in a fixed layout such that other than a possible heading row, each column contains data that is all of one known type, consider using xlwt.\nYou may wish to approach xlwt via http://www.python-excel.org which contains an up-to-date tutorial for xlrd, xlwt, and xlutils.\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"excel",
"excel_2007",
"python"
] |
stackoverflow_0001403468_excel_excel_2007_python.txt
|
Q:
In bash, what is the simplest way to configure lighttpd to call a local python script based on a particular URL?
In bash, what is the simplest way to configure lighttpd to call a local python script while passing any query string or name-value pairs included with the URL as a command line option for the local python app to parse?
Example:
www.myapp.com/sendtopython/app1.py?Foo=Bar
results in the following occurring on the system.
>python app1.py Foo=Bar
www.myapp.com/sendtopython/app2.py?-h
results in the following occurring on the system.
>python app2.py -h
Here is an example lighttpd install and config script.
#!/bin/bash
# Install and configure web console managed by lighttpd
# Suggested Amazon EC2 AMI : ami-0d729464
#
# The console installed into /opt/web-console and
# available on the http://_the_server_dns_/web-console
set -e -x
export DEBIAN_FRONTEND=noninteractive
function die()
{
echo -e "$@" >> /dev/console
exit 1
}
apt-get update && apt-get upgrade -y
apt-get -y install python
apt-get -y install unzip
apt-get -y install lighttpd
# web directory defaults to /var/www.
WEBDIR=/var/www/logs
mkdir $WEBDIR || die "Cannot create log directory."
PYTHON=`which python`
echo $?
if [ ! $? ]
then
echo "Python interpreter not installed or not found in system path!!!" >> /dev/console
echo "Exiting setup-instance..."
exit 1
fi
#Download web-console
FILE_DOWNLOAD_URL=http://downloads.sourceforge.net/web-console/web-console_v0.2.5_beta.zip
wget $FILE_DOWNLOAD_URL -O web-console.zip || die "Error downloading file web-console.zip"
# Install the web-console
INSTALL_DIR=/opt/web-console
mkdir $INSTALL_DIR
unzip -u -d $INSTALL_DIR web-console.zip || die "Error extracting web-console.zip"
chown www-data:www-data $INSTALL_DIR
# Configure lighttpd
cat > $INSTALL_DIR/webconsole.conf <<EOF
server.modules += ( "mod_cgi" )
alias.url += ( "/web-console/wc.pl" => "/opt/web-console/wc.pl" )
alias.url += ( "/web-console/" => "/opt/web-console/wc.pl" )
\$HTTP["url"] =~ "^/web-console/" {
cgi.assign = ( ".pl" => "/usr/bin/perl" )
}
EOF
ln -s $INSTALL_DIR/webconsole.conf /etc/lighttpd/conf-enabled/
/etc/init.d/lighttpd force-reload
exit 0
A:
Mh, for one thing I wouldn't mess with the install script, but run it once and then edit the resulting lighttpd configuration file (webconsole.conf in your case).
You then need to register Python scripts for CGI, like is done for Perl in the install script. You could add a line
cgi.assign = ( ".py" => "/usr/bin/python" )
under the corresponding .pl line which would make Python another CGI option for the /web-console/ path (look up the lighttpd docs if you want to register .py as CGI in any path).
Then, your Python CGI scripts app1.py, app2.py, ... have to comply with the CGI spec, which if I recall correctly passes URL parameters as environment variables. So you cannot simply use sys.argv. I'm sure there is a Python module that does the parameter extraction for you. (In Perl, Lincoln Stein's CGI module is capable of both env and command line args, but I'm not sure about Python's.)
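The standard library's cgi module does that parameter extraction; a minimal sketch (the output format is illustrative), though note it reads parameters from the CGI environment rather than sys.argv as the question originally asked:
#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()  # parses QUERY_STRING (and POST bodies) per the CGI spec
print "Content-Type: text/plain"
print
for key in form.keys():
    print "%s=%s" % (key, form.getfirst(key))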
|
In bash, what is the simplest way to configure lighttpd to call a local python script based on a particular URL?
|
In bash, what is the simplest way to configure lighttpd to call a local python script while passing any query string or name-value pairs included with the URL as a command line option for the local python app to parse?
Example:
www.myapp.com/sendtopython/app1.py?Foo=Bar
results in the following occurring on the system.
>python app1.py Foo=Bar
www.myapp.com/sendtopython/app2.py?-h
results in the following occurring on the system.
>python app2.py -h
Here is an example lighttpd install and config script.
#!/bin/bash
# Install and configure web console managed by lighttpd
# Suggested Amazon EC2 AMI : ami-0d729464
#
# The console installed into /opt/web-console and
# available on the http://_the_server_dns_/web-console
set -e -x
export DEBIAN_FRONTEND=noninteractive
function die()
{
echo -e "$@" >> /dev/console
exit 1
}
apt-get update && apt-get upgrade -y
apt-get -y install python
apt-get -y install unzip
apt-get -y install lighttpd
# web directory defaults to /var/www.
WEBDIR=/var/www/logs
mkdir $WEBDIR || die "Cannot create log directory."
PYTHON=`which python`
echo $?
if [ ! $? ]
then
echo "Python interpreter not installed or not found in system path!!!" >> /dev/console
echo "Exiting setup-instance..."
exit 1
fi
#Download web-console
FILE_DOWNLOAD_URL=http://downloads.sourceforge.net/web-console/web-console_v0.2.5_beta.zip
wget $FILE_DOWNLOAD_URL -O web-console.zip || die "Error downloading file web-console.zip"
# Install the web-console
INSTALL_DIR=/opt/web-console
mkdir $INSTALL_DIR
unzip -u -d $INSTALL_DIR web-console.zip || die "Error extracting web-console.zip"
chown www-data:www-data $INSTALL_DIR
# Configure lighttpd
cat > $INSTALL_DIR/webconsole.conf <<EOF
server.modules += ( "mod_cgi" )
alias.url += ( "/web-console/wc.pl" => "/opt/web-console/wc.pl" )
alias.url += ( "/web-console/" => "/opt/web-console/wc.pl" )
\$HTTP["url"] =~ "^/web-console/" {
cgi.assign = ( ".pl" => "/usr/bin/perl" )
}
EOF
ln -s $INSTALL_DIR/webconsole.conf /etc/lighttpd/conf-enabled/
/etc/init.d/lighttpd force-reload
exit 0
|
[
"Mh, for one thing I wouldn't mess with the install script, but run it once and then edit the resulting lighttpd configuration file (webconsole.conf in your case).\nYou then need to register Python scripts for CGI, like is done for Perl in the install script. You could add a line\ncgi.assign = ( \".py\" => \"/usr/bin/python\" )\n\nunder the corresponding .pl line which would make Python another CGI option for the /web-console/ path (look up the lighttpd docs if you want to register .py as CGI in any path).\nThen, your Python CGI script app1.py, app2.py, ... have to comply to the CGI spec, which if I recall correclty passes URL parameters as environment variables. So you cannot simply use sys.argv. I'm sure there is a Python module that does the parameter extraction for you. (In Perl, Lincoln Stein's CGI module is capable of both env and command line args, but I'm not sure about Python's).\n"
] |
[
3
] |
[] |
[] |
[
"bash",
"lighttpd",
"python"
] |
stackoverflow_0001403672_bash_lighttpd_python.txt
|
Q:
Plotting two graphs that share an x-axis in matplotlib
I have to plot 2 graphs in a single screen. The x-axis remains the same but the y-axis should be different.
How can I do that in 'matplotlib'?
A:
twinx is the function you're looking for; here's an example of how to use it.
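Since the linked example may not be handy, here is a minimal twinx sketch (the sample data is my own, not from the original answer):
import matplotlib.pyplot as plt
from numpy import arange, exp, pi, sin

t = arange(0.01, 5.0, 0.01)
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.plot(t, sin(2*pi*t), 'b-')
ax1.set_xlabel('time (s)')
ax1.set_ylabel('sin', color='b')

ax2 = ax1.twinx()            # second y-axis sharing the same x-axis
ax2.plot(t, exp(-t), 'r-')
ax2.set_ylabel('exp', color='r')
plt.show()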
A:
subplot will let you plot more than one figure on the same canvas. See the example on the linked documentation page.
There is an example of a shared axis plot in the examples directory, called shared_axis_demo.py:
from pylab import *
t = arange(0.01, 5.0, 0.01)
s1 = sin(2*pi*t)
s2 = exp(-t)
s3 = sin(4*pi*t)
ax1 = subplot(311)
plot(t,s1)
setp( ax1.get_xticklabels(), fontsize=6)
## share x only
ax2 = subplot(312, sharex=ax1)
plot(t, s2)
# make these tick labels invisible
setp( ax2.get_xticklabels(), visible=False)
# share x and y
ax3 = subplot(313, sharex=ax1, sharey=ax1)
plot(t, s3)
xlim(0.01,5.0)
show()
|
Plotting two graphs that share an x-axis in matplotlib
|
I have to plot 2 graphs in a single screen. The x-axis remains the same but the y-axis should be different.
How can I do that in 'matplotlib'?
|
[
"twinx is the function you're looking for; here's an example of how to use it.\n\n",
"subplot will let you plot more than one figure on the same canvas. See the example on the linked documentation page.\nThere is an example of a shared axis plot in the examples directory, called shared_axis_demo.py:\nfrom pylab import *\n\nt = arange(0.01, 5.0, 0.01)\ns1 = sin(2*pi*t)\ns2 = exp(-t)\ns3 = sin(4*pi*t)\nax1 = subplot(311)\nplot(t,s1)\nsetp( ax1.get_xticklabels(), fontsize=6)\n\n## share x only\nax2 = subplot(312, sharex=ax1)\nplot(t, s2)\n# make these tick labels invisible\nsetp( ax2.get_xticklabels(), visible=False)\n\n# share x and y\nax3 = subplot(313, sharex=ax1, sharey=ax1)\nplot(t, s3)\nxlim(0.01,5.0)\nshow() \n\n"
] |
[
19,
7
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0001404502_matplotlib_python.txt
|
Q:
Path separator char in python 2.4
Just out of curiosity - is there another way to obtain the platform's path separator char than os.path.normcase('/') in Python 2.4?
I was expecting something like a os.path.separator constant...
A:
That would be os.sep.
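For example (output depends on platform):
import os

print os.sep                      # '/' on POSIX, '\\' on Windows
print os.path.join('pkg', 'mod')  # usually preferable to concatenating os.sep by hand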
|
Path separator char in python 2.4
|
Just out of curiosity - is there another way to obtain the platform's path separator char than os.path.normcase('/') in Python 2.4?
I was expecting something like a os.path.separator constant...
|
[
"That would be os.sep.\n"
] |
[
41
] |
[] |
[] |
[
"path_separator",
"python"
] |
stackoverflow_0001404749_path_separator_python.txt
|
Q:
Socket programming
Please take a look at my code:
from twisted.internet.protocol import ServerFactory
from twisted.internet import reactor
from twisted.protocols import basic
class ThasherProtocol(basic.LineReceiver):
    def lineReceived(self, line):
        print line
        self.transport.write( 1 )
        self.transport.loseConnection()

class ThasherFactory(ServerFactory):
    protocol = ThasherProtocol
reactor.listenUNIX( "/home/disappearedng/Desktop/test.sock" , ThasherFactory() )
reactor.run()
===
import socket
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM )
s.connect( "/home/disappearedng/Desktop/test.sock")
s.sendall('hello')
print s.recv(4096)
# Hangs
Why does it hang? Why doesn't it return 1?
A:
You should send a line, and not just 'hello', in order to have lineReceived called,
e.g. s.sendall('hello\r\n')
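Note also that even after that fix, the server's transport.write(1) will fail because Twisted transports expect strings, not integers; a minimal sketch of both fixes, assuming the default LineReceiver delimiter:
# client: terminate the request with the line delimiter
s.sendall('hello\r\n')

# server: sendLine() appends the delimiter and requires a string
def lineReceived(self, line):
    print line
    self.sendLine(str(1))
    self.transport.loseConnection()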
|
Socket programming
|
Please take a look at my code:
from twisted.internet.protocol import ServerFactory
from twisted.internet import reactor
from twisted.protocols import basic
class ThasherProtocol(basic.LineReceiver):
    def lineReceived(self, line):
        print line
        self.transport.write( 1 )
        self.transport.loseConnection()

class ThasherFactory(ServerFactory):
    protocol = ThasherProtocol
reactor.listenUNIX( "/home/disappearedng/Desktop/test.sock" , ThasherFactory() )
reactor.run()
===
import socket
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM )
s.connect( "/home/disappearedng/Desktop/test.sock")
s.sendall('hello')
print s.recv(4096)
# Hangs
Why does it hang? Why doesn't it return 1?
|
[
"you should send a line and not just hello in order to have lineReceived called\ne.g. s.sendall('hello\\r\\n')\n"
] |
[
2
] |
[] |
[] |
[
"python",
"sockets"
] |
stackoverflow_0001404724_python_sockets.txt
|
Q:
Good way to write a lightweight client function to be imported Twisted Python
I have the following server running:
class ThasherProtocol(basic.LineReceiver):
    def lineReceived(self, line):
        dic = simplejson.loads( line)
        ret = self.factory.d[ dic['method'] ]( dic['args'] )
        self.transport.write( simplejson.dumps( ret) )
        self.transport.loseConnection()

class ThasherFactory(ServerFactory):
    protocol = ThasherProtocol
    def __init__(self):
        self.thasher = Thasher()
        self.d = {
            'getHash': self.thasher.getHash,
            'sellHash': self.thasher.sellHash
        }
reactor.listenUNIX( c.LOCATION_THASHER, ThasherFactory() )
reactor.run()
I have multiple files importing a special function called "getHash" from a particular file.
Note that getHash's arguments are only gonna be a dictionary of texts (strings).
How do I write a client function (getHash) that can be simply:
from particular file import getHash
i = getHash( { 'type':'url', 'url':'http://www.stackoverflow.com' } )
Note that ALL I WANT TO DO is:
1) dump a dict into json,
2) dump that json into the particular socket,
3) wait for that to come back and unpack the json
A:
You want getHash to return a Deferred, not a synchronous value.
The way to do this is to create a Deferred and associate it with the connection that performs a particular request.
The following is untested and probably won't work, but it should give you a rough idea:
import simplejson
from twisted.internet.protocol import ClientFactory
from twisted.internet.defer import Deferred
from twisted.internet import reactor
from twisted.protocols.basic import LineReceiver

class BufferingJSONRequest(LineReceiver):
    buf = ''

    def connectionMade(self):
        self.sendLine(simplejson.dumps(self.factory.params))

    def dataReceived(self, data):
        self.buf += data

    def connectionLost(self, reason):
        deferred = self.factory.deferred
        try:
            result = simplejson.loads(self.buf)
        except:
            deferred.errback()
        else:
            deferred.callback(result)

class BufferingRequestFactory(ClientFactory):
    protocol = BufferingJSONRequest

    def __init__(self, params, deferred):
        self.params = params
        self.deferred = deferred

    def clientConnectionFailed(self, connector, reason):
        self.deferred.errback(reason)

def getHash(params):
    result = Deferred()
    reactor.connectUNIX(LOCATION_THASHER,
                        BufferingRequestFactory(params, result))
    return result
Now, in order to use this function, you will already need to be familiar with Deferreds, and you will need to write a callback function to run when the result eventually arrives. But an explanation of those belongs on a separate question ;).
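For illustration, a hypothetical caller (the callback names are my own):
from twisted.internet import reactor

def on_hash(result):
    print 'got:', result

d = getHash({'type': 'url', 'url': 'http://www.stackoverflow.com'})
d.addCallback(on_hash)
d.addErrback(lambda failure: failure.printTraceback())
reactor.run()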
|
Good way to write a lightweight client function to be imported Twisted Python
|
I have the following server running:
class ThasherProtocol(basic.LineReceiver):
    def lineReceived(self, line):
        dic = simplejson.loads( line)
        ret = self.factory.d[ dic['method'] ]( dic['args'] )
        self.transport.write( simplejson.dumps( ret) )
        self.transport.loseConnection()

class ThasherFactory(ServerFactory):
    protocol = ThasherProtocol
    def __init__(self):
        self.thasher = Thasher()
        self.d = {
            'getHash': self.thasher.getHash,
            'sellHash': self.thasher.sellHash
        }
reactor.listenUNIX( c.LOCATION_THASHER, ThasherFactory() )
reactor.run()
I have multiple files importing a special function called "getHash" from a particular file.
Note that getHash's arguments are only gonna be a dictionary of texts (strings).
How do I write a client function (getHash) that can be simply:
from particular file import getHash
i = getHash( { 'type':'url', 'url':'http://www.stackoverflow.com' } )
Note that ALL I WANT TO DO is:
1) dump a dict into json,
2) dump that json into the particular socket,
3) wait for that to come back and unpack the json
|
[
"You want getHash to return a Deferred, not a synchronous value.\nThe way to do this is to create a Deferred and associate it with the connection that performs a particular request.\nThe following is untested and probably won't work, but it should give you a rough idea:\nimport simplejson\nfrom twisted.python.protocol import ClientFactory\nfrom twisted.internet.defer import Deferred\nfrom twisted.internet import reactor\nfrom twisted.protocols.basic import LineReceiver\n\nclass BufferingJSONRequest(LineReceiver):\n buf = ''\n\n def connectionMade(self):\n self.sendLine(simplejson.dumps(self.factory.params))\n\n def dataReceived(self, data):\n self.buf += data\n\n def connectionLost(self, reason):\n deferred = self.factory.deferred\n try:\n result = simplejson.load(self.buf)\n except:\n deferred.errback()\n else:\n deferred.callback(result)\n\nclass BufferingRequestFactory(ClientFactory):\n protocol = BufferingJSONRequest\n\n def __init__(self, params, deferred):\n self.params = params\n self.deferred = deferred\n\n def clientConnectionFailed(self, connector, reason):\n self.deferred.errback(reason)\n\ndef getHash(params):\n result = Deferred()\n reactor.connectUNIX(LOCATION_THASHER,\n BufferingRequestFactory(params, result))\n return result\n\nNow, in order to use this function, you will already need to be familiar with Deferreds, and you will need to write a callback function to run when the result eventually arrives. But an explanation of those belongs on a separate question ;).\n"
] |
[
2
] |
[
"I managed to solve my own problem.\nUse sockets (Unix sockets in particular) it speed up my app 4x and it's not difficult to use at all.\nso now my solution is simplejson + socket\n"
] |
[
-1
] |
[
"python",
"twisted"
] |
stackoverflow_0001404066_python_twisted.txt
|
Q:
Is it possible to intercept attribute getting/setting in ActionScript 3?
When developing in ActionScript 3, I often find myself looking for a way to achieve something similar to what is offered by python's __getattr__ / __setattr__ magic methods i.e. to be able to intercept attribute lookup on an instance, and do something custom.
Is there some acceptable way to achieve this in ActionScript 3? In AS3 attribute lookup behaves a little differently for normal (sealed) and dynamic classes -- ideally this would work in the same way for both cases. In python this works beautifully for all kinds of objects (of course!) even for subclasses of dict itself!
A:
Look at the flash.utils.Proxy object.
The Proxy class lets you override the
default behavior of ActionScript
operations (such as retrieving and
modifying properties) on an object.
A:
In AS3 you can code explicit variable accessors.
Example Class1:
private var __myvar:String;
public function get myvar():String { return __myvar; }
public function set myvar(value:String):void { __myvar = value; }
Now as you create an instance of Class1 you can access __myvar through the accessor functions.
If you want to make that var bindable, you have to put the [Bindable] metadata tag on one of its accessors.
Further, you can also implement the getter or the setter only, so your var will be read or write only.
I hope it helps.
|
Is it possible to intercept attribute getting/setting in ActionScript 3?
|
When developing in ActionScript 3, I often find myself looking for a way to achieve something similar to what is offered by python's __getattr__ / __setattr__ magic methods i.e. to be able to intercept attribute lookup on an instance, and do something custom.
Is there some acceptable way to achieve this in ActionScript 3? In AS3 attribute lookup behaves a little differently for normal (sealed) and dynamic classes -- ideally this would work in the same way for both cases. In python this works beautifully for all kinds of objects (of course!) even for subclasses of dict itself!
|
[
"Look a the flash.utils.Proxy object.\n\nThe Proxy class lets you override the\n default behavior of ActionScript\n operations (such as retrieving and\n modifying properties) on an object.\n\n",
"In AS3 you can code explicit variables accessors.\nExample Class1:\nprivate var __myvar:String;\n\npublic function get myvar():String { return __myvar; }\npublic function set myvar(value:String):void { __myvar = value; }\n\nNow as you create an instance of Class1 you can access __myvar through the accessor functions.\nif you want to set bindable that var you have to put the [Bindable] keyword upon one of its accessors.\nFurther, you can also implement the getter or the setter only, so your var will be read or write only.\nI hope it helps.\n"
] |
[
0,
0
] |
[] |
[] |
[
"actionscript_3",
"python"
] |
stackoverflow_0001398890_actionscript_3_python.txt
|
Q:
How to get colored emails from crontab?
I call a Python script from crontab. The script generates colored output using ANSI escapes, but when crontab sends the mail with the output I see the escapes instead of colors.
What is happening is logical, but I would like to know if it would be possible to generate an HTML message instead.
I would like a solution that does not require to implement the email notification myself.
A:
Maybe you can try some txt-to-html converter, for example http://txt2html.sourceforge.net/; you can also use Pygments with some modifications.
|
How to get colored emails from crontab?
|
I call a Python script from crontab. The script generates colored output using ANSI escapes, but when crontab sends the mail with the output I see the escapes instead of colors.
What is happening is logical, but I would like to know if it would be possible to generate an HTML message instead.
I would like a solution that does not require to implement the email notification myself.
|
[
"Maybe you can try with some txt to html converter, for example, http://txt2html.sourceforge.net/, you can also use pygments with some modifications.\n"
] |
[
0
] |
[] |
[] |
[
"ansi_escape",
"colors",
"console",
"crontab",
"python"
] |
stackoverflow_0001405108_ansi_escape_colors_console_crontab_python.txt
|
Q:
gVim and multiple programming languages
My day job involves coding with Perl. At home I play around with Python and Erlang. For Perl I want to indent my code with two spaces. Whereas for Python the standard is 4. Also I have some key bindings to open function declarations which I would like to use with all programming languages. How can this be achieved in gVim?
As in, is there a way to maintain a configuration file for each programming language or something of that sort?
A:
In your $HOME, make .vim/ directory (or vimfiles/ on Windows), in it make ftplugin/ directory, and in it keep files named "perl.vim" or "python.vim" or "html.vim" or ...
These should be loaded automatically when you open/create a new file of a given filetype, as long as you don't forget to add :filetype plugin on in your .vimrc (or _vimrc under Windows)
Then, vim options should be defined with :setlocal (and not :set, otherwise their definition will override the default global setting).
Mappings are defined with :n/i/v(nore)map <buffer>, as well as the abbreviations. Commands are defined with the -b option. Menus can't be made local without the help of a plugin.
local, <buffer>, and -b are important to prevent side effects.
A:
You should be able to do with by leveraging filetypes ... e.g., add this to your vimrc (and modify appropriately for different languages):
autocmd FileType python set tabstop=4|set shiftwidth=4|set expandtab
A:
In addition to rangerchris's answer, you might consider using modelines. Modelines tell the editor how to configure itself:
#!/usr/bin/perl
# vi: ts=4 sw=4 ht=4 et textwidth=76 :
use strict;
use warnings;
print "hello world\n";
That modeline tells vi to use 4 character tabs and autoindents, to use spaces instead of tabs, and that it should insert a newline when the cursor gets to 76 characters.
You can control how Vim reads modelines with two variables (most likely set in your .vimrc):
set modeline
set modelines=5
The modeline variable tells Vim to look for modelines if it is set. The modelines variable tells Vim how many lines from the top and bottom to scan looking for the modeline (in this case it will find the modeline if it is in the first or last five lines of the file).
Like any system that takes instructions from untrusted sources, modelines can be a security threat, so the root user should never use modelines and you should keep your copy of Vim up-to-date.
The real benefit to modelines is that they are per file. Most Perl people are four spaces as indent people, but I am an eight character tab person. When working with other people's code, I use a modeline that reflects their usage. The rest of the time I use my own.
A:
Here's how I do it. The below is an excerpt from my .vimrc, and I maintain further configs per language, and load those when a new buffer is loaded.
" HTML
autocmd BufNewFile,BufRead *.html,*.htm,*.xhtml source ~/.vimhtml
" XML
autocmd BufNewFile,BufRead *.xml,*.xmi source ~/.vimxml
" Perl
autocmd BufNewFile,BufRead *.pl,*.pm source ~/.vimperl
Note that although I source a file, I can execute any VIM command, or call a function. e.g. for loading a new Java file I do this:
autocmd BufNewFile *.java call GeneratePackage()
where GeneratePackage() is a VIM function.
|
gVim and multiple programming languages
|
My day job involves coding with Perl. At home I play around with Python and Erlang. For Perl I want to indent my code with two spaces. Whereas for Python the standard is 4. Also I have some key bindings to open function declarations which I would like to use with all programming languages. How can this be achieved in gVim?
As in, is there a way to maintain a configuration file for each programming language or something of that sort?
|
[
"In your $HOME, make .vim/ directory (or vimfiles/ on Windows), in it make ftplugin/ directory, and in it keep files named \"perl.vim\" or \"python.vim\" or \"html.vim\" or ...\nThese should be loaded automatically when you open/create new file of given filetype as long as you don't forget to add :filetype plugin on in your .vimrc (or _vimrc under windows)\nThen, vim options should be defined with :setlocal (and not :set, otherwise their definition will override the default global setting).\nMappings are defined with :n/i/v(nore)map <buffer>, as well as the abbreviations. Commands are defined with the -b option. Menus can't be made local without the help of a plugin.\nlocal, <buffer>, and -b are important to prevent side effects.\n",
"You should be able to do with by leveraging filetypes ... e.g., add this to your vimrc (and modify appropriately for different languages):\nautocmd FileType python set tabstop=4|set shiftwidth=4|set expandtab\n\n",
"In addition to rangerchris's answer, you might consider using modelines. Modelines tell the editor how to configure itself:\n#!/usr/bin/perl\n# vi: ts=4 sw=4 ht=4 et textwidth=76 :\n\nuse strict;\nuse warnings;\n\nprint \"hello world\\n\";\n\nThat modeline tells vi to use 4 character tabs and autoindents, to use spaces instead of tabs, and that it should insert a newline when the cursor gets to 76 characters.\nYou can control how Vim reads modelines with two variables (most likely set in your .vimrc):\nset modeline\nset modelines=5\n\nThe modeline variable tells Vim to look for modelines if it is set. The modelines variable tells Vim how many lines from the top and bottom to scan looking for the modeline (in this case it will find the modeline if it is in the first or last five lines of the file).\nLike any system that takes instructions from untrusted sources, modelines can be a security threat, so the root user should never use modelines and you should keep your copy of Vim up-to-date.\nThe real benefit to modelines is that they are per file. Most Perl people are four spaces as indent people, but I am an eight character tab person. When working with other people's code, I use a modeline that reflects their usage. The rest of the time I use my own. \n",
"Here's how I do it. The below is an excerpt from my .vimrc, and I maintain further configs per language, and load those when a new buffer is loaded.\n\" HTML\nautocmd BufNewFile,BufRead *.html,*.htm,*.xhtml source ~/.vimhtml\n\" XML\nautocmd BufNewFile,BufRead *.xml,*.xmi source ~/.vimxml\n\" Perl\nautocmd BufNewFile,BufRead *.pl,*.pm source ~/.vimperl\n\nNote that although I source a file, I can execute any VIM command, or call a function. e.g. for loading a new Java file I do this:\nautocmd BufNewFile *.java call GeneratePackage()\n\nwhere GeneratePackage() is a VIM function.\n"
] |
[
25,
23,
7,
3
] |
[] |
[] |
[
"editor",
"perl",
"python",
"vim"
] |
stackoverflow_0001404515_editor_perl_python_vim.txt
|
Q:
Special considerations for using Python in init.d script?
Are there any special considerations for using Python in an 'init.d' script being run through init? (i.e. booting Ubuntu)
From what I understand through googling/testing on Ubuntu, the environment variables provided to an 'init.d' script are scarce and so using "#!/usr/bin/env python" might not work.
Anything else?
A:
That just highlights the biggest problem with python in an init.d script -- added complexity.
Python has no specification, and the env doesn't even have to point to cpython. If you upgrade and python breaks, you'll have to bite your tongue. And there is a much greater chance that python will break than sh (the safe bet for init.d scripts). Reason being, simple utility:
ecarroll@x60s:/etc/init.d$ ldd /usr/bin/python
linux-gate.so.1 => (0xb7ff7000)
libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7fc9000)
libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb7fc5000)
libutil.so.1 => /lib/tls/i686/cmov/libutil.so.1 (0xb7fc0000)
libz.so.1 => /lib/libz.so.1 (0xb7faa000)
libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7f84000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7e21000)
/lib/ld-linux.so.2 (0xb7ff8000)
ecarroll@x60s:/etc/init.d$ ldd /bin/sh
linux-gate.so.1 => (0xb803f000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7ec7000)
/lib/ld-linux.so.2 (0xb8040000)
Python is linking into libpthread, libdl, libutil, libz, libm amongst other things that can possibly break. Python is simply doing more.
-rwxr-xr-x 1 root root 86K 2008-11-05 01:51 /bin/dash
-rwxr-xr-x 1 root root 2.2M 2009-04-18 21:53 /usr/bin/python2.6
You can read up more about what you're specifically talking about with env variables here:
http://www.debian.org/doc/debian-policy/ch-opersys.html#s9.9
The main problem is that the defaults for env can be set in /etc/profile which would only run if the script is being run under a shell that supports reading it.
A:
I am assuming this is running some kind of daemon written in python, if not then this may not apply.
You will (probably) want to do the standard unix double fork and redirect file descriptors thing. This is the one I use (adapted from an ActiveState code recipe whose URL eludes me at the moment).
import grp, os, pwd, sys  # imports this snippet relies on; log is the poster's logger object

def daemonize(stdin, stdout, stderr, pidfile):
    if os.path.isfile(pidfile):
        p = open(pidfile, "r")
        oldpid = p.read().strip()
        p.close()
        if os.path.isdir("/proc/%s"%oldpid):
            log.err("Server already running with pid %s"%oldpid)
            sys.exit(1)
    try:
        pid = os.fork()
        if pid > 0:
            sys.exit(0)
    except OSError, e:
        log.err("Fork #1 failed: (%d) %s"%(e.errno, e.strerror))
        sys.exit(1)
    os.chdir("/")
    os.umask(0)
    os.setsid()
    try:
        pid = os.fork()
        if pid > 0:
            if os.getuid() == 0:
                pidfile = open(pidfile, "w+")
                pidfile.write(str(pid))
                pidfile.close()
            sys.exit(0)
    except OSError, e:
        log.err("Fork #2 failed: (%d) %s"%(e.errno, e.strerror))
        sys.exit(1)
    try:
        os.setgid(grp.getgrnam("nogroup").gr_gid)
    except KeyError, e:
        log.err("Failed to get GID: %s"%e)
        sys.exit(1)
    except OSError, e:
        log.err("Failed to set GID: (%s) %s"%(e.errno, e.strerror))
        sys.exit(1)
    try:
        os.setuid(pwd.getpwnam("oracle").pw_uid)
    except KeyError, e:
        log.err("Failed to get UID: %s"%e)
        sys.exit(1)
    except OSError, e:
        log.err("Failed to set UID: (%s) %s"%(e.errno, e.strerror))
        sys.exit(1)
    for f in sys.stdout, sys.stderr:
        f.flush()
    si = open(stdin, "r")
    so = open(stdout, "a+")
    se = open(stderr, "a+", 0)
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())
Just run this before starting up your daemon loop and it will probably do the right thing.
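For instance, a hypothetical call before entering the main loop (the paths and do_work are placeholders):
daemonize('/dev/null', '/var/log/mydaemon.log', '/var/log/mydaemon.err',
          '/var/run/mydaemon.pid')
while True:
    do_work()  # the daemon's actual workload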
As a side note, I am using #!/usr/bin/env python as the shebang line in a script on ubuntu and it is working fine for me.
You will probably still want to redirect stdout/stderr to a file even if you are not running a daemon to provide debug info.
|
Special considerations for using Python in init.d script?
|
Are there any special considerations for using Python in an 'init.d' script being run through init? (i.e. booting Ubuntu)
From what I understand through googling/testing on Ubuntu, the environment variables provided to an 'init.d' script are scarce and so using "#!/usr/bin/env python" might not work.
Anything else?
|
[
"That just highlights the biggest problem with python in an init.d script -- added complexity. \nPython has no specification, and the env doesn't even have to point to cpython. If you upgrade and python breaks, you'll have to bite your tongue. And there is a much greater chance that python will break than sh (the safe bet for init.d scripts). Reason being, simple utility:\n\necarroll@x60s:/etc/init.d$ ldd /usr/bin/python\n linux-gate.so.1 => (0xb7ff7000)\n libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7fc9000)\n libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb7fc5000)\n libutil.so.1 => /lib/tls/i686/cmov/libutil.so.1 (0xb7fc0000)\n libz.so.1 => /lib/libz.so.1 (0xb7faa000)\n libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7f84000)\n libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7e21000)\n /lib/ld-linux.so.2 (0xb7ff8000)\necarroll@x60s:/etc/init.d$ ldd /bin/sh\n linux-gate.so.1 => (0xb803f000)\n libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7ec7000)\n /lib/ld-linux.so.2 (0xb8040000)\n\nPython is linking into libpthread, libdl, libutil, libz, libm amongst other things that can possibly break. Python is simply doing more.\n\n-rwxr-xr-x 1 root root 86K 2008-11-05 01:51 /bin/dash\n-rwxr-xr-x 1 root root 2.2M 2009-04-18 21:53 /usr/bin/python2.6\n\nYou can read up more about what you're specifically talking about with env variables here:\nhttp://www.debian.org/doc/debian-policy/ch-opersys.html#s9.9\nThe main problem is that the defaults for env can be set in /etc/profile which would only run if the script is being run under a shell that supports reading it.\n",
"I am assuming this is running some kind of daemon written in python, if not then this may not apply.\nYou will (probably) want to do the standard unix double fork and redirect file descriptors thing. This is the one I use (Adapted from an ActiveState code recepie whose url eludes me at the moment).\ndef daemonize(stdin, stdout, stderr, pidfile):\n if os.path.isfile(pidfile):\n p = open(pidfile, \"r\")\n oldpid = p.read().strip()\n p.close()\n if os.path.isdir(\"/proc/%s\"%oldpid):\n log.err(\"Server already running with pid %s\"%oldpid)\n sys.exit(1)\n try:\n pid = os.fork()\n if pid > 0:\n sys.exit(0)\n except OSError, e:\n log.err(\"Fork #1 failed: (%d) %s\"%(e.errno, e.strerror))\n sys.exit(1)\n os.chdir(\"/\")\n os.umask(0)\n os.setsid()\n try:\n pid = os.fork()\n if pid > 0:\n if os.getuid() == 0:\n pidfile = open(pidfile, \"w+\")\n pidfile.write(str(pid))\n pidfile.close()\n sys.exit(0)\n except OSError, e:\n log.err(\"Fork #2 failed: (%d) %s\"%(e.errno, e.strerror))\n sys.exit(1)\n try:\n os.setgid(grp.getgrnam(\"nogroup\").gr_gid)\n except KeyError, e:\n log.err(\"Failed to get GID: %s\"%e)\n sys.exit(1)\n except OSError, e:\n log.err(\"Failed to set GID: (%s) %s\"%(e.errno, e.strerror))\n sys.exit(1)\n try:\n os.setuid(pwd.getpwnam(\"oracle\").pw_uid)\n except KeyError, e:\n log.err(\"Failed to get UID: %s\"%e)\n sys.exit(1)\n except OSError, e:\n log.err(\"Failed to set UID: (%s) %s\"%(e.errno, e.strerror))\n sys.exit(1)\n for f in sys.stdout, sys.stderr:\n f.flush()\n si = open(stdin, \"r\")\n so = open(stdout, \"a+\")\n se = open(stderr, \"a+\", 0)\n os.dup2(si.fileno(), sys.stdin.fileno())\n os.dup2(so.fileno(), sys.stdout.fileno())\n os.dup2(se.fileno(), sys.stderr.fileno())\n\nJust run this before starting up your daemon loop and it will probably do the right thing.\nAs a side note, I am using #!/usr/bin/env python as the shebang line in a script on ubuntu and it is working fine for me.\nYou will probably still want to redirect stdout/stderr to a file even if you are not running a daemon to provide debug info. \n"
] |
[
4,
1
] |
[] |
[] |
[
"init.d",
"linux",
"python",
"ubuntu"
] |
stackoverflow_0001405555_init.d_linux_python_ubuntu.txt
|
Q:
How to catch error in Django project on apache: 10048 "Address already in use"
Python 2.5.2, Apache 2.2, Django 1.0.2 final
My Django app tries to connect to a certain port. When that port is busy, I get the error 10048 "Address already in use" from apache. I know where the error is coming from. How do I catch this apache error?
More info:
error at /report/5/2009/08/01/
(10048, 'Address already in use')
Request Method: GET
Request URL: http://192.168.0.21/report/5/2009/08/01/
Exception Type: error
Exception Value:
(10048, 'Address already in use')
Exception Location: C:\Python25\Lib\httplib.py in connect, line 683
Python Executable: C:\Program Files\Apache Software Foundation\Apache2.2\bin\httpd.exe
Python Version: 2.5.2
Python Path: ['D:\\django\\system', 'C:\\Python25\\lib\\site-packages\\setuptools-0.6c9-py2.5.egg', 'C:\\Python25\\lib\\site-packages\\python_memcached-1.44-py2.5.egg', 'C:\\Program Files\\Apache Software Foundation\\Apache2.2\\htdocs', 'c:\\mapnik_0_6_0\\site-packages', 'C:\\Program Files\\Apache Software Foundation\\Apache2.2', 'C:\\WINDOWS\\system32\\python25.zip', 'C:\\Python25\\Lib', 'C:\\Python25\\DLLs', 'C:\\Python25\\Lib\\lib-tk', 'C:\\Program Files\\Apache Software Foundation\\Apache2.2\\bin', 'C:\\Python25', 'C:\\Python25\\lib\\site-packages', 'C:\\Python25\\lib\\site-packages\\PIL', 'C:\\Program Files\\GeoDjango\\Django-1.0.2-final']
Server time: don, 10 Sep 2009 11:20:45 +0200
A:
If you're calling httplib.connect directly from your code, then the try/except should be around that direct call. Or is the call happening indirectly...? Unfortunately Apache is not giving you a stack trace, so if you're having problems locating exactly what call sequence is involved you could put a broad try/except at the highest level available to you, and save a printout of the traceback so you can identify the call site involved (or, edit your question to show the traceback, and we can help you locate it). See the traceback module in Python's standard library for more on how to format a traceback in an except clause.
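As a minimal sketch of that direct try/except (the host, port, and URL here are placeholders standing in for the actual call site):
import httplib
import socket
import traceback

try:
    conn = httplib.HTTPConnection('192.168.0.21', 8080)  # hypothetical call site
    conn.request('GET', '/report/5/2009/08/01/')
except socket.error, e:
    if e.args[0] == 10048:  # WSAEADDRINUSE on Windows
        print traceback.format_exc()  # or render a friendly error page instead
    else:
        raise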
|
How to catch error in Django project on apache: 10048 "Address already in use"
|
Python 2.5.2, Apache 2.2, Django 1.0.2 final
My Django app tries to connect to a certain port. When that port is busy, I get the error 10048 "Address already in use" from apache. I know where the error is coming from. How do I catch this apache error?
More info:
error at /report/5/2009/08/01/
(10048, 'Address already in use')
Request Method: GET
Request URL: http://192.168.0.21/report/5/2009/08/01/
Exception Type: error
Exception Value:
(10048, 'Address already in use')
Exception Location: C:\Python25\Lib\httplib.py in connect, line 683
Python Executable: C:\Program Files\Apache Software Foundation\Apache2.2\bin\httpd.exe
Python Version: 2.5.2
Python Path: ['D:\\django\\system', 'C:\\Python25\\lib\\site-packages\\setuptools-0.6c9-py2.5.egg', 'C:\\Python25\\lib\\site-packages\\python_memcached-1.44-py2.5.egg', 'C:\\Program Files\\Apache Software Foundation\\Apache2.2\\htdocs', 'c:\\mapnik_0_6_0\\site-packages', 'C:\\Program Files\\Apache Software Foundation\\Apache2.2', 'C:\\WINDOWS\\system32\\python25.zip', 'C:\\Python25\\Lib', 'C:\\Python25\\DLLs', 'C:\\Python25\\Lib\\lib-tk', 'C:\\Program Files\\Apache Software Foundation\\Apache2.2\\bin', 'C:\\Python25', 'C:\\Python25\\lib\\site-packages', 'C:\\Python25\\lib\\site-packages\\PIL', 'C:\\Program Files\\GeoDjango\\Django-1.0.2-final']
Server time: don, 10 Sep 2009 11:20:45 +0200
|
[
"If you're calling httplib.connect directly from your code, then the try/except should be around that direct call. Or is the call happening indirectly...? Unfortunately Apache is not giving you a stack trace, so if you're having problems locating exactly what call sequence is involved you could put a broad try/except at the highest level available to you, and save a printout of the traceback so you can identify the call site involved (or, edit your question to show the traceback, and we can help you locate it). See the traceback module in Python's standard library for more on how to format a traceback in an except clause.\n"
] |
[
1
] |
[] |
[] |
[
"apache",
"django",
"python"
] |
stackoverflow_0001404259_apache_django_python.txt
|
Q:
Python sched.scheduler exceeds max recursion depth
I have recently started learning Python and part of the simple app I am making includes a timer with a hh:mm:ss display running in its own thread.
Looking around the web I found two ways of implementing this:
Using sched.scheduler
Using threading.Timer
The way I did it looks similar for both implementations:
sched:
def tick(self, display, alarm_time):
    # Schedule this function to run every minute
    s = sched.scheduler(time.time, time.sleep)
    s.enter(1, 1, self.tick, ([display, alarm_time]))
    # Update the time
    self.updateTime(display)
Timer:
def tick(self, display):
    # Schedule this function to run every second
    t = Timer(1, self.tick, (display, alarm_time))
    t.start()
    # Update the time
    self.updateTime(display)
With sched: works fine with regard to ticking correctly, but generates the following error after a few minutes: RuntimeError: maximum recursion depth exceeded. I know you can increase the max recursion limit manually, but surely this should not be necessary here?
With Timer: no error, but occasionally the seconds will skip, or tick irregularly.
Can someone please point me in the right direction as to how to do this correctly? Thank you.
A:
Here's how to make a one-shot into a periodic event, e.g. with sched: if the function must make its own scheduler and be the only thing running on its thread:
def tick(self, display, alarm_time, scheduler=None):
    # make a new scheduler only once & schedule this function immediately
    if scheduler is None:
        scheduler = sched.scheduler(time.time, time.sleep)
        scheduler.enter(0, 1, self.tick, (display, alarm_time, scheduler))
        scheduler.run()

    # reschedule this function to run again in a minute
    scheduler.enter(60, 1, self.tick, (display, alarm_time, scheduler))

    # do whatever actual work this function requires, e.g.:
    self.updateTime(display)
If other events must also be scheduled in the same thread then the scheduler must be made and owned "elsewhere" -- the if part above can get refactored into another method, e.g.:
def scheduleperiodic(self, method, *args):
    self.scheduler = sched.scheduler(time.time, time.sleep)
    self.scheduler.enter(0, 1, method, args)
    # whatever else needs to be scheduled at start, if any, can go here
    self.scheduler.run()

def tick(self, display, alarm_time):
    # reschedule this function to run again in a minute
    self.scheduler.enter(60, 1, self.tick, (display, alarm_time))

    # do whatever actual work this function requires, e.g.:
    self.updateTime(display)
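Usage is then a single call at startup, e.g. self.scheduleperiodic(self.tick, display, alarm_time) (a hypothetical call matching the signatures above); note that the call blocks, since run() keeps draining the event queue.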
Again, of course and as always with sched, while the scheduler is running, it (and the scheduled event callbacks) will "take over" the thread in question (so you'll need to hive off a separate thread for it if you need other things to be happening at the same time).
If you need to use this kind of idiom in many functions it could be refactored into a decorator, but that would somewhat mask the underlying simplicity of the idiom, so I prefer this simple, overt use. BTW, note that time.time and time.sleep use seconds, not minutes, as their unit of time, so you need 60, not one, to indicate "a minute from now";-).
A:
A Timer is a one-shot event. It cannot be made to loop this way.
Using a Timer to call a function which then creates another Timer which calls a function that creates a Timer which calls a function which creates a Timer, ..., must reach the recursion limit.
You don't mention your OS, but the "skipping" or "ticking irregularly" is for two reasons.
Your computer is busy, and "1 second" means "pretty close to 1 second, depending on what else is going on".
If you start your timer at 0.9999 seconds, and wait 1 second, you might be at 1.9999 (rounds down to 1) or 2.00000. It may appear to duplicate a time or skip a time. Your computer's internal hardware clock is very accurate, and rounding things off to the nearest second will (always) lead to the remote possibility of duplicates or skips.
Use sched correctly. http://docs.python.org/library/sched.html#module-sched
Your code snippet makes no sense for sched, either. You do not need to create a new scheduler object. You only need to create a new event.
Read http://docs.python.org/library/sched.html#sched.scheduler.enter on creating a new event for an existing scheduler instance.
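For illustration, here is a minimal standalone sketch of that pattern: one scheduler instance, with each event re-registering itself from inside the callback. There is no recursion, because run() invokes the callback from its own loop:
import sched
import time

s = sched.scheduler(time.time, time.sleep)

def tick():
    # register the next event on the same scheduler instance
    s.enter(1, 1, tick, ())
    print time.strftime("%H:%M:%S")

s.enter(0, 1, tick, ())
s.run()  # blocks, firing tick roughly once per second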
|
Python sched.scheduler exceeds max recursion depth
|
I have recently started learning Python and part of the simple app I am making includes a timer with a hh:mm:ss display running in its own thread.
Looking around the web I found two ways of implementing this:
Using sched.scheduler
Using threading.Timer
The way I did it looks similar for both implementations:
sched:
def tick(self, display, alarm_time):
    # Schedule this function to run every minute
    s = sched.scheduler(time.time, time.sleep)
    s.enter(1, 1, self.tick, ([display, alarm_time]))
    # Update the time
    self.updateTime(display)
Timer:
def tick(self, display):
    # Schedule this function to run every second
    t = Timer(1, self.tick, (display, alarm_time))
    t.start()
    # Update the time
    self.updateTime(display)
With sched: works fine with regard to ticking correctly, but generates the following error after a few minutes: RuntimeError: maximum recursion depth exceeded. I know you can increase the max recursion limit manually, but surely this should not be necessary here?
With Timer: no error, but occasionally the seconds will skip, or tick irregularly.
Can someone please point me in the right direction as to how to do this correctly? Thank you.
|
[
"Here's how to make a one-shot into a periodic event, e.g. with sched: if the function must make its own scheduler and be the only thing running on its thread:\ndef tick(self, display, alarm_time, scheduler=None):\n # make a new scheduler only once & schedule this function immediately\n if scheduler is None:\n scheduler = sched.scheduler(time.time, time.sleep)\n scheduler.enter(0, 1, self.tick, ([display, alarm_time, scheduler]))\n scheduler.run()\n\n # reschedule this function to run again in a minute\n scheduler.enter(1, 1, self.tick, (display, alarm_time, scheduler]))\n\n # do whatever actual work this function requires, e.g.:\n self.updateTime(display)\n\nIf other events must also be scheduled in the same thread then the scheduler must be made and owned \"elsewhere\" -- the if part above can get refactored into another method, e.g.:\ndef scheduleperiodic(self, method, *args):\n self.scheduler = sched.scheduler(time.time, time.sleep)\n self.scheduler.enter(0, 1, method, args)\n # whatever else needs to be scheduled at start, if any, can go here\n self.scheduler.run()\n\ndef tick(self, display, alarm_time):\n # reschedule this function to run again in a minute\n self.scheduler.enter(60, 1, self.tick, (display, alarm_time))\n\n # do whatever actual work this function requires, e.g.:\n self.updateTime(display)\n\nAgain, of course and as always with sched, while the scheduler is running, it (and the scheduled event callbacks) will \"take over\" the thread in question (so you'll need to hive off a separate thread for it if you need other things to be happening at the same time).\nIf you need to use this kind of idiom in many functions it could be refactored into a decorator, but that would somewhat mask the underlying simplicity of the idiom, so I prefer this simple, overt use. BTW, note that time.time and time.sleep use seconds, not minutes, as their unit of time, so you need 60, not one, to indicate \"a minute from now\";-).\n",
"A Timer is a one-shot event. It cannot be made to loop this way.\nUsing a Timer to call a function which then creates another Timer which calls a function that creates a Timer which calls a function which creates a Timer, ..., must reach the recursion limit.\nYou don't mention your OS, but the \"skipping\" or \"ticking irregularly\" is for two reasons.\n\nYou computer is busy and \"1 second\" means \"pretty close to 1 second, depending on what else is going on\"\nIf you start your timer at 0.9999 seconds, and wait 1 second, you might be at 1.9999 (rounds down to 1) or 2.00000. It may appear to duplicate a time or skip a time. Your computer's internal hardware clock is very accurate, and rounding things off to the nearest second will (always) lead to the remote possibility of duplicates or skips. \n\nUse sched correctly. http://docs.python.org/library/sched.html#module-sched\nYour code snippet makes no sense for sched, either. You do not need to create a new scheduler object. You only need to create a new event.\nRead http://docs.python.org/library/sched.html#sched.scheduler.enter on creating a new event for an existing scheduler instance.\n"
] |
[
6,
3
] |
[] |
[] |
[
"clock",
"python",
"scheduler",
"timer"
] |
stackoverflow_0001404580_clock_python_scheduler_timer.txt
|
Q:
Why do I have this TypeError when using tkinter?
So I upgraded to Python 3.1.1 from 2.6 and ran an old program of mine which uses tkinter.
I get the following error message which I don't recall getting in the 2.6 version.
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python31\lib\tkinter\__init__.py", line 1399, in __call__
return self.func(*args)
File "C:\myprog.py", line 77, in <lambda>
self.canvas.bind("<Button-3>", lambda event: myfunc_sub(event))
File "C:\myprog.py", line 65, in myfunc_sub
temp_ids = self.canvas.find_overlapping(self.canvas.coords(name)[0], self.canvas.coords(name)[1], self.canvas.coords(name)[2],self.canvas.coords(name)[3])
TypeError: 'map' object is not subscriptable
I'm pretty sure the line
temp_ids = self.canvas.find_overlapping(self.canvas.coords(name)[0], self.canvas.coords(name)[1], self.canvas.coords(name)[2],self.canvas.coords(name)[3])
was OK in the older version. I'm not sure what has changed such that the way I get each coordinate no longer works.
from the tkinter docs (pdf)
".find_enclosed ( x1, y1, x2, y2 )
Returns a list of the object IDs of all objects that occur completely within the rectangle whose top left corner is (x1, y1) and bottom right corner is (x2, y2).
.find_overlapping ( x1, y1, x2, y2 )
Like the previous method, but returns a list of the object IDs of all the objects that share at least one point with the given rectangle."
Any ideas on how to fix this? Please let me know if you need more info. The tkinter version I have is 8.5; I have IDLE 3.1.1 and Python 3.1.1. I know the PDF link I provided is for 8.4, but I can't imagine there was a change in these functions.
thanks!
A:
There were several breaking changes from Python 2.X to Python 3.X -- among them, map's functionality.
Have you run your script through 2to3 yet?
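For example, 2to3 ships with Python 3, and the -w flag rewrites the file in place (keeping a .bak backup by default):
2to3 -w myprog.py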
A:
self.canvas.coords(name)
returns a map object, and as the error states, a map object is not subscriptable in Python 3. You need to convert the coords to a tuple or a list.
You need to change your code to:
temp_ids = self.canvas.find_overlapping(*tuple(self.canvas.coords(name)))
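Equivalently, to avoid calling coords() four times as in the original line, you can materialize the result once; a small sketch:
coords = list(self.canvas.coords(name))  # force the map object into a real list
temp_ids = self.canvas.find_overlapping(*coords)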
|
Why do I have this TypeError when using tkinter?
|
So I upgraded to Python 3.1.1 from 2.6 and ran an old program of mine which uses tkinter.
I get the following error message which I don't recall getting in the 2.6 version.
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python31\lib\tkinter\__init__.py", line 1399, in __call__
return self.func(*args)
File "C:\myprog.py", line 77, in <lambda>
self.canvas.bind("<Button-3>", lambda event: myfunc_sub(event))
File "C:\myprog.py", line 65, in myfunc_sub
temp_ids = self.canvas.find_overlapping(self.canvas.coords(name)[0], self.canvas.coords(name)[1], self.canvas.coords(name)[2],self.canvas.coords(name)[3])
TypeError: 'map' object is not subscriptable
I'm pretty sure the line
temp_ids = self.canvas.find_overlapping(self.canvas.coords(name)[0], self.canvas.coords(name)[1], self.canvas.coords(name)[2],self.canvas.coords(name)[3])
was OK in the older version. I'm not sure what has changed such that the way I get each coordinate no longer works.
from the tkinter docs (pdf)
".find_enclosed ( x1, y1, x2, y2 )
Returns a list of the object IDs of all objects that occur completely within the rectangle whose top left corner is (x1, y1) and bottom right corner is (x2, y2).
.find_overlapping ( x1, y1, x2, y2 )
Like the previous method, but returns a list of the object IDs of all the objects that share at least one point with the given rectangle."
Any ideas on how to fix this? Please let me know if you need more info. The tkinter version I have is 8.5; I have IDLE 3.1.1 and Python 3.1.1. I know the PDF link I provided is for 8.4, but I can't imagine there was a change in these functions.
thanks!
|
[
"There were several breaking changes from Python 2.X to Python 3.X -- among them, map's functionality.\nHave you run your script through 2to3 yet?\n",
"self.canvas.coords(name)\n\nreturn a map object, and as the error states map object is unsubscriptable in python 3. you need to change coords to be a tuple or a list.\nyou need to change your code to be:\ntemp_ids = self.canvas.find_overlapping(*tuple(self.canvas.coords(name)))\n\n"
] |
[
3,
2
] |
[] |
[] |
[
"python",
"python_3.x",
"tkinter",
"typeerror"
] |
stackoverflow_0001406371_python_python_3.x_tkinter_typeerror.txt
|
Q:
Splitting a string with no line breaks into a list of lines with a maximum column count
I have a long string (multiple paragraphs) which I need to split into a list of line strings. The determination of what makes a "line" is based on:
The number of characters in the line is less than or equal to X (where X is a fixed number of columns per line)
OR, there is a newline in the original string (that will force a new "line" to be created).
I know I can do this algorithmically, but I was wondering if Python has something that can handle this case. It's essentially word-wrapping a string.
And, by the way, the output lines must be broken on word boundaries, not character boundaries.
Here's an example of input and output:
Input:
"Within eight hours of Wilson's outburst, his Democratic opponent, former-Marine Rob Miller, had received nearly 3,000 individual contributions raising approximately $100,000, the Democratic Congressional Campaign Committee said.
Wilson, a conservative Republican who promotes a strong national defense and reining in the size of government, won a special election to the House in 2001, succeeding the late Rep. Floyd Spence, R-S.C. Wilson had worked on Spence's staff on Capitol Hill and also had served as an intern for Sen. Strom Thurmond, R-S.C."
Output:
"Within eight hours of Wilson's outburst, his"
"Democratic opponent, former-Marine Rob Miller,"
" had received nearly 3,000 individual "
"contributions raising approximately $100,000,"
" the Democratic Congressional Campaign Committee"
" said."
""
"Wilson, a conservative Republican who promotes a "
"strong national defense and reining in the size "
"of government, won a special election to the House"
" in 2001, succeeding the late Rep. Floyd Spence, "
"R-S.C. Wilson had worked on Spence's staff on "
"Capitol Hill and also had served as an intern"
" for Sen. Strom Thurmond, R-S.C."
A:
EDIT
What you are looking for is textwrap, but that's only part of the solution, not the complete one. To take newlines into account you need to do this:
from textwrap import wrap
'\n'.join(['\n'.join(wrap(block, width=50)) for block in text.splitlines()])
>>> print '\n'.join(['\n'.join(wrap(block, width=50)) for block in text.splitlines()])
Within eight hours of Wilson's outburst, his
Democratic opponent, former-Marine Rob Miller, had
received nearly 3,000 individual contributions
raising approximately $100,000, the Democratic
Congressional Campaign Committee said.
Wilson, a conservative Republican who promotes a
strong national defense and reining in the size of
government, won a special election to the House in
2001, succeeding the late Rep. Floyd Spence,
R-S.C. Wilson had worked on Spence's staff on
Capitol Hill and also had served as an intern for
Sen. Strom Thurmond
A:
You probably want to use the textwrap module in the standard library:
http://docs.python.org/library/textwrap.html
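For example, a minimal sketch, assuming text holds the input string: wrap() returns the lines as a list, while fill() returns them joined into a single string:
import textwrap

lines = textwrap.wrap(text, width=50)  # list of lines, broken on word boundaries
print textwrap.fill(text, width=50)    # the same lines joined with newlines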
|
Splitting a string with no line breaks into a list of lines with a maximum column count
|
I have a long string (multiple paragraphs) which I need to split into a list of line strings. The determination of what makes a "line" is based on:
The number of characters in the line is less than or equal to X (where X is a fixed number of columns per line)
OR, there is a newline in the original string (that will force a new "line" to be created).
I know I can do this algorithmically, but I was wondering if Python has something that can handle this case. It's essentially word-wrapping a string.
And, by the way, the output lines must be broken on word boundaries, not character boundaries.
Here's an example of input and output:
Input:
"Within eight hours of Wilson's outburst, his Democratic opponent, former-Marine Rob Miller, had received nearly 3,000 individual contributions raising approximately $100,000, the Democratic Congressional Campaign Committee said.
Wilson, a conservative Republican who promotes a strong national defense and reining in the size of government, won a special election to the House in 2001, succeeding the late Rep. Floyd Spence, R-S.C. Wilson had worked on Spence's staff on Capitol Hill and also had served as an intern for Sen. Strom Thurmond, R-S.C."
Output:
"Within eight hours of Wilson's outburst, his"
"Democratic opponent, former-Marine Rob Miller,"
" had received nearly 3,000 individual "
"contributions raising approximately $100,000,"
" the Democratic Congressional Campaign Committee"
" said."
""
"Wilson, a conservative Republican who promotes a "
"strong national defense and reining in the size "
"of government, won a special election to the House"
" in 2001, succeeding the late Rep. Floyd Spence, "
"R-S.C. Wilson had worked on Spence's staff on "
"Capitol Hill and also had served as an intern"
" for Sen. Strom Thurmond, R-S.C."
|
[
"EDIT\nWhat you are looking for is textwrap, but that's only part of the solution not the complete one. To take newline into account you need to do this:\nfrom textwrap import wrap\n'\\n'.join(['\\n'.join(wrap(block, width=50)) for block in text.splitlines()])\n\n>>> print '\\n'.join(['\\n'.join(wrap(block, width=50)) for block in text.splitlines()])\n\nWithin eight hours of Wilson's outburst, his\nDemocratic opponent, former-Marine Rob Miller, had\nreceived nearly 3,000 individual contributions\nraising approximately $100,000, the Democratic\nCongressional Campaign Committee said.\n\nWilson, a conservative Republican who promotes a\nstrong national defense and reining in the size of\ngovernment, won a special election to the House in\n2001, succeeding the late Rep. Floyd Spence,\nR-S.C. Wilson had worked on Spence's staff on\nCapitol Hill and also had served as an intern for\nSen. Strom Thurmond\n\n",
"You probably want to use the textwrap function in the standard library:\nhttp://docs.python.org/library/textwrap.html\n"
] |
[
14,
4
] |
[] |
[] |
[
"python",
"text_manipulation"
] |
stackoverflow_0001406493_python_text_manipulation.txt
|
Q:
Deploying a Web.py application with WSGI, several servers
I've created a web.py application, and now that it is ready to be deployed, I want to run it on something other than web.py's built-in webserver. I want to be able to run it on different webservers, Apache or IIS, without having to change my application code. This is where WSGI is supposed to come in, if I understand it correctly.
However, I don't understand what exactly I have to do to make my application deployable on a WSGI server. Most examples assume you are using Pylons/Django/other-framework, on which you simply run some magic command which fixes everything for you.
From what I understand of the (quite brief) web.py documentation, instead of running web.application(...).run(), I should use web.application(...).wsgifunc(). And then what?
A:
Exactly what you need to do to host it with a specific WSGI hosting mechanism varies with the server.
For the case of Apache/mod_wsgi and Phusion Passenger, you just need to provide a WSGI script file which contains an object called 'application'. For web.py 0.2, this is the result of calling web.wsgifunc() with appropriate arguments. For web.py 0.3, you instead use the wsgifunc() member function of the object returned by web.application(). For details of these, see the mod_wsgi documentation:
http://code.google.com/p/modwsgi/wiki/IntegrationWithWebPy
If instead you are having to use FastCGI, SCGI or AJP adapters for a server such as Lighttpd, nginx or Cherokee, then you need to use the 'flup' package to provide a bridge between those language-agnostic interfaces and WSGI. This involves calling a flup function with the same WSGI application object above that something like mod_wsgi or Phusion Passenger would use directly without the need for a bridge. For details of this see:
http://trac.saddi.com/flup/wiki/FlupServers
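For instance, a minimal flup/FastCGI bridge would look something like this sketch, where application is the same WSGI callable described above:
from flup.server.fcgi import WSGIServer
WSGIServer(application).run()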
The important thing is to structure your web application so that it is its own self-contained set of modules. To work with a particular server, create a separate script file as necessary to bridge between what that server requires and your application code. Your application code should always be outside of the web server document directory; only the script file that acts as the bridge would be in the server document directory, if appropriate.
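As a concrete illustration, the mod_wsgi script file for a web.py 0.3 app could be as small as this sketch; the sys.path entry is a placeholder for wherever your self-contained application modules actually live:
import sys
sys.path.insert(0, '/usr/local/myapp')  # hypothetical app directory, outside the document root

import web

urls = ('/.*', 'hello')

class hello:
    def GET(self):
        return 'Hello, world!'

application = web.application(urls, globals()).wsgifunc()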
A:
As of July 21, 2009, there is a much fuller installation guide at the webpy install site that discusses flup, fastcgi, apache and more. I haven't yet tried it, but it seems much more detailed.
A:
Here is an example of two hosted apps using cherrypy wsgi server:
#!/usr/bin/python
from web import wsgiserver
import web

# webpy wsgi app
urls = (
    '/test.*', 'index'
)

class index:
    def GET(self):
        web.header("content-type", "text/html")
        return "Hello, world1!"

application = web.application(urls, globals(), autoreload=False).wsgifunc()

# generic wsgi app
def my_blog_app(environ, start_response):
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return ['Hello world! - blog\n']

"""
# single hosted app
server = wsgiserver.CherryPyWSGIServer(
    ('0.0.0.0', 8070), application,
    server_name='www.cherrypy.example')
"""

# multiple hosted apps with WSGIPathInfoDispatcher
d = wsgiserver.WSGIPathInfoDispatcher({'/test': application, '/blog': my_blog_app})
server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8070), d)
server.start()
|
Deploying a Web.py application with WSGI, several servers
|
I've created a web.py application, and now that it is ready to be deployed, I want to run it on something other than web.py's built-in webserver. I want to be able to run it on different webservers, Apache or IIS, without having to change my application code. This is where WSGI is supposed to come in, if I understand it correctly.
However, I don't understand what exactly I have to do to make my application deployable on a WSGI server. Most examples assume you are using Pylons/Django/other-framework, on which you simply run some magic command which fixes everything for you.
From what I understand of the (quite brief) web.py documentation, instead of running web.application(...).run(), I should use web.application(...).wsgifunc(). And then what?
|
[
"Exactly what you need to do to host it with a specific WSGI hosting mechanism varies with the server.\nFor the case of Apache/mod_wsgi and Phusion Passenger, you just need to provide a WSGI script file which contains an object called 'application'. For web.py 0.2, this is the result of calling web.wsgifunc() with appropriate arguments. For web.py 0.3, you instead use wsgifunc() member function of object returned by web.application(). For details of these see mod_wsgi documentation:\nhttp://code.google.com/p/modwsgi/wiki/IntegrationWithWebPy\nIf instead you are having to use FASTCGI, SCGI or AJP adapters for a server such as Lighttpd, nginx or Cherokee, then you need to use 'flup' package to provide a bridge between those language agnostic interfaces and WSGI. This involves calling a flup function with the same WSGI application object above that something like mod_wsgi or Phusion Passenger would use directly without the need for a bridge. For details of this see:\nhttp://trac.saddi.com/flup/wiki/FlupServers\nImportant thing is to structure your web application so that it is in its own self contained set of modules. To work with a particular server, then create a separate script file as necessary to bridge between what that server requires and your application code. Your application code should always be outside of the web server document directory and only the script file that acts as bridge would be in server document directory if appropriate.\n",
"As of July 21 2009, there is a much fuller installation guide at the webpy install site, that discusses flup, fastcgi, apache and more. I haven't yet tried it, but it seems like it's much more detailed. \n",
"Here is an example of two hosted apps using cherrypy wsgi server:\n\n\n#!/usr/bin/python\nfrom web import wsgiserver\nimport web\n\n# webpy wsgi app\nurls = (\n '/test.*', 'index'\n)\n\nclass index:\n def GET(self):\n web.header(\"content-type\", \"text/html\")\n return \"Hello, world1!\"\n\napplication = web.application(urls, globals(), autoreload=False).wsgifunc() \n\n\n# generic wsgi app\ndef my_blog_app(environ, start_response):\n status = '200 OK'\n response_headers = [('Content-type','text/plain')]\n start_response(status, response_headers)\n return ['Hello world! - blog\\n']\n\n\n\"\"\"\n# single hosted app\nserver = wsgiserver.CherryPyWSGIServer(\n ('0.0.0.0', 8070), application,\n server_name='www.cherrypy.example')\n\n\"\"\"\n\n# multiple hosted apps with WSGIPathInfoDispatcher\nd = wsgiserver.WSGIPathInfoDispatcher({'/test': application, '/blog': my_blog_app})\nserver = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8070), d) \nserver.start()\n\n"
] |
[
6,
0,
0
] |
[] |
[] |
[
"python",
"web.py",
"wsgi"
] |
stackoverflow_0001078599_python_web.py_wsgi.txt
|
Q:
How do I keep state between requests in AppEngine (Python)?
I'm writing a simple app with AppEngine, using Python. After a successful insert by a user and redirect, I'd like to display a flash confirmation message on the next page.
What's the best way to keep state between one request and the next? Or is this not possible because AppEngine is distributed? I guess, the underlying question is whether AppEngine provides a persistent session object.
Thanks
Hannes
A:
No session support is included in App Engine itself, but you can add your own session support.
GAE Utilities is one library made specifically for this; a more heavyweight alternative is to use django sessions through App Engine Patch.
A:
The ways to reliably keep state between requests are memcache, the datastore, or through the user (cookies or post/get).
You can use the runtime cache too, but this is very unreliable, as you don't know if a request will end up in the same runtime, and the runtime can drop its entire cache if it feels like it.
I really wouldn't use the runtime cache except for very specific situations; for example, I use it to cache the serialization of objects to JSON, as that is pretty slow, and if the cache is gone I can regenerate the result easily.
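As one sketch of the cookie route, here is a one-shot "flash" message set just before the redirect and cleared when displayed; the handler names and paths are hypothetical, and webapp exposes cookies via the WebOb request object:
from google.appengine.ext import webapp

class SaveHandler(webapp.RequestHandler):
    def post(self):
        # ... perform the datastore insert here ...
        self.response.headers.add_header('Set-Cookie', 'flash=Saved; Path=/')
        self.redirect('/')

class MainHandler(webapp.RequestHandler):
    def get(self):
        flash = self.request.cookies.get('flash')
        if flash:
            # expire the cookie immediately so the message shows only once
            self.response.headers.add_header(
                'Set-Cookie', 'flash=; Path=/; expires=Thu, 01-Jan-1970 00:00:00 GMT')
            self.response.out.write('<p>%s</p>' % flash)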
|
How do I keep state between requests in AppEngine (Python)?
|
I'm writing a simple app with AppEngine, using Python. After a successful insert by a user and redirect, I'd like to display a flash confirmation message on the next page.
What's the best way to keep state between one request and the next? Or is this not possible because AppEngine is distributed? I guess, the underlying question is whether AppEngine provides a persistent session object.
Thanks
Hannes
|
[
"No session support is included in App Engine itself, but you can add your own session support.\nGAE Utilities is one library made specifically for this; a more heavyweight alternative is to use django sessions through App Engine Patch.\n",
"The ways to reliable keep state between requests are memcache, the datastore or through the user (cookies or post/get).\nYou can use the runtime cache too, but this is very unreliable as you don't know if a request will end up in the same runtime or the runtime can drop it's entire cache if it feels like it.\nI really wouldn't use the runtime cache except for very specific situations, for example I use it to cache the serialization of objects to json as that is pretty slow and if the caching is gone I can regenerate the result easily.\n"
] |
[
3,
3
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001406636_google_app_engine_python.txt
|