Q:
Use value of variable in lambda expression
a = []
a.append(lambda x:x**0)
a.append(lambda x:x**1)

a[0](2), a[1](2), a[2](2)... spits out 1, 2, 4, ...
b=[]
for i in range(4):
    b.append(lambda x:x**i)

b[0](2), b[1](2), b[2](2)... spits out 8, 8, 8, ...
In the for loop, i is being passed to the lambda as a variable, so when I call it, the last value of i is used instead of the code behaving as it does with a[] (i.e. b[0] should use x^0, b[1] should use x^1, ...).
How can I tell the lambda to pick up the value of i instead of the variable i itself?
A:
Ugly, but one way:
for i in range(4):
    b.append(lambda x, copy=i: x**copy)
You might prefer
def raiser(power):
    return lambda x: x**power

for i in range(4):
    b.append(raiser(i))
(All code untested.)
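For what it's worth, here is a quick sanity check of the default-argument variant (a sketch in the same Python 2 style as the question):
b = []
for i in range(4):
    b.append(lambda x, copy=i: x**copy)  # copy=i freezes i's current value

print [f(2) for f in b]  # prints [1, 2, 4, 8], not [8, 8, 8, 8]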
A:
Define a factory
def power_function_factory(value):
    def new_power_function(base):
        return base ** value
    return new_power_function
b = []
for i in range(4):
    b.append(power_function_factory(i))
or
b = [power_function_factory(i) for i in range(4)]
A:
b=[]
f=(lambda p:(lambda x:x**p))
for i in range(4):
    b.append(f(i))
for g in b:
    print g(2)
Q:
Is there a fini routine for a python module written in C?
I have a python module written in C, and I would like to add a function that is called when the module is unloaded. I obviously have an initfoo function to initialize the module -- is there a way to tell python to call a finifoo function when it's uninitializing the module?
Is atexit my only option?
A:
Not in Python 2, but Python 3 seems to. If you need to manage some resource, I would advise putting it in a module-level object -- I'm pretty sure those will be garbage-collected when the module is unloaded.
From the link:
Currently, extension modules are
initialized usually once and then
"live" forever. The only exception is
when Py_Finalize() is called: then the
initialization routine is invoked a
second time.
This suggests that you could set a static boolean in your initializer that gets flipped on every call. Check its status to see whether or not the module is being finalized.
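If you do fall back on atexit, the Python-side registration is a one-liner; a minimal sketch (cleanup_foo is a hypothetical name for whatever teardown your module needs):
import atexit

def cleanup_foo():
    # hypothetical teardown: release whatever the module holds
    print "finalizing foo"

atexit.register(cleanup_foo)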
Q:
sys.getrefcount continuation
link text
I got the concept of reference count
So when I do a "del astrd", the reference count drops to zero and astrd gets collected by the gc?
This is the sample code, which I developed after my question yesterday: link text
one.py:
def abc():
    print "Hello"
    print "123"
    print '345'
two.py:
import one
#reload(one)
#def defg():
one.abc()
three.py:
import os,sys,gc
from time import sleep
import two
#reload(two)
#two.defg()
sleep(20)
directory = os.listdir('.')
for filename in directory:
    if filename[-3:] == 'pyc':
        print '- ' + filename
        print sys.getrefcount(filename)
        file_name = os.path.splitext(filename)[0]
        del file_name # remove the local reference
        del sys.modules[os.path.splitext(filename)[0]] # removes import
        gc.collect() # garbage collect
        #del sys.modules[filename]
        #del filename
        #os.remove(filename)
Is what I did in three.py correct or not?
Is there any unnecessary step? If yes, why?
Please help me out with this.
A:
I believe that memory is automatically freed the moment the refcount reaches zero. The GC is not involved.
The Python GC is optional, and is only used when there are unreachable objects that have reference cycles. In fact, you can call gc.disable() if you are sure your program does not create reference cycles.
As for the original question:
When you do del astrd, you remove from the local namespace the binding of astrd, which is a reference to an object (whatever astrd references).
If this means that the refcount is zero, the memory used by the object is freed.
So del does not delete objects, it unbinds references. The deletion of objects is a side effect that occurs if unbinding a reference causes the refcount to reach zero.
Note that the above is only true for CPython. Jython and IronPython use the JVM/CLR GC mechanism, and do not use refcounting at all, I believe.
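A minimal sketch of that distinction, using sys.getrefcount (which reports one extra reference because its own argument temporarily counts too):
import sys

astrd = ['some', 'object']
alias = astrd                 # a second reference to the same list
print sys.getrefcount(astrd)  # 3: astrd, alias, and getrefcount's argument

del astrd                     # unbinds the name only
print alias                   # the object is still alive via alias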
The handy gc.get_objects returns a list of all object instances tracked by the python interpreter. Example:
import gc

class test(object):
    pass

def number_of_test_instances():
    return len([obj for obj in gc.get_objects() if isinstance(obj, test)])

for i in range(100):
    t = test()

print "Created and abandoned 100 instances, there are now", \
    number_of_test_instances(), \
    "instances known to the python interpreter."

# note that in normal operation, the GC would
# detect the unreachable objects and start
# collecting them right away
gc.disable()

for i in range(100):
    t = test()
    t.t = t

print "Created and abandoned 100 instances with circular ref, there are now", \
    number_of_test_instances(), \
    "instances known to the python interpreter."

gc.collect()
print "After manually doing gc.collect(), there are now", \
    number_of_test_instances(), \
    "instances known to the python interpreter."
Running this program gives:
Created and abandoned 100 instances, there are now 1 instances known to the python interpreter.
Created and abandoned 100 instances with circular ref, there are now 100 instances known to the python interpreter.
After manually doing gc.collect(), there are now 1 instances known to the python interpreter.
A:
It gets a chance to get collected on the next GC collection run.
See: http://docs.python.org/library/gc.html
A:
Could you give some background as to what you are doing?
There's rarely any reason for explicitly using del on variables other than to clean up a namespace of things you don't want to expose. I'm not sure why you are calling del file_name or running gc.collect(). (del sys.modules[filename] is fine - that's a different use of del)
For objects where the exact time they get finalised doesn't matter (e.g. strings like file_name), you may as well let the variable drop out of scope - when your function finishes, it will be collected, and it won't cause any harm till then. Manually calling del for such variables just clutters up your code.
For objects which need to be finalised immediately (e.g. an open file, or a held lock), you shouldn't be relying on the garbage collector anyway - it is not guaranteed to immediately collect such objects. It happens to do so in the standard C Python implementation, but not in Jython or IronPython, and so is not guaranteed. Instead, you should explicitly clean up such objects by calling close or using the new with construct.
The only other reason might be that you have a very large amount of memory allocated, and want to signal you are done with it before the variable that refers to it goes out of scope naturally.
Your example doesn't seem to fit either of these circumstances however, so I'm not sure why you're manually invoking the garbage collector at all.
Q:
Why csv.reader is not pythonic?
I started to use the csv.reader in Python 2.6 but you can't use len on it, or slice it, etc. What's the reason behind this? It certainly feels very limiting.
Or is this just an abandoned module in later versions?
A:
I'm pretty sure you can't use len or slice because it is an iterator. Try this instead.
import csv
r = csv.reader(...)
lines = [line for line in r]
print len(lines) #number of lines
for odd in lines[1::2]: print odd # print odd lines
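If the file is too large to load into memory, itertools.islice can slice the iterator lazily instead; a sketch, assuming a hypothetical data.csv:
import csv
import itertools

with open('data.csv', 'rb') as f:   # 'rb' as the Python 2 csv docs recommend
    r = csv.reader(f)
    for odd in itertools.islice(r, 1, None, 2):  # every second row, lazily
        print odd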
Q:
Overriding 'to boolean' operator in python?
I'm using a class that is inherited from list as a data structure:
class CItem( list ) :
    pass

oItem = CItem()
oItem.m_something = 10
oItem += [ 1, 2, 3 ]
All is perfect, but if I use an object of my class inside an 'if', Python evaluates it to False if the underlying list has no elements. Since my class is not just a list, I really want it to evaluate to False only if it's None, and to evaluate to True otherwise:
a = None
if a :
    print "this is not called, as expected"

a = CItem()
if a :
    print "and this is not called too, since CItem is empty list. How to fix it?"
A:
In 2.x: override __nonzero__(). In 3.x, override __bool__().
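Applied to the CItem class from the question, a minimal sketch (__nonzero__ is the Python 2 hook; rename it __bool__ on Python 3):
class CItem(list):
    def __nonzero__(self):
        return True   # truthy even when the underlying list is empty

a = CItem()
if a:
    print "now this prints, even though the CItem holds no elements"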
Q:
Program Control-Flow in Python
I have some data that I have stored in a list and if I print out the list I see the following:
.
.
.
007 A000000 Y
007 B000000 5
007 C010100 1
007 C020100 ACORN FUND
007 C030100 N
007 C010200 2
007 C020200 ACORN INTERNATIONAL
007 C030200 N
007 C010300 3
007 C020300 ACORN USA
007 C030300 N
007 C010400 4
.
.
.
The dots before and after the sequence represent other data that is similarly structured but may or may not be part of this seventh item (007). If the first value in the seventh item is '007 A000000 Y' then I want to create a dictionary listing of some of the data items. I can do this, and have done so, by just running through all of the items in my list and comparing their values to some test values for the variables. For instance, a line of code like:
if dataLine.find('007 B')==0:
    numberOfSeries=int(dataLine.split()[2])
What I want to do though is
if dataLine.find('007 A000000 Y')==0:
    READ THE NEXT LINE RIGHT HERE
Right now I am having to iterate through the entire list for each cycle. I want to shorten the processing because I have about 60K files that have between 500 and 5,000 lines each.
I have thought about creating another reference to the list and counting the datalines until dataLine.find('007 A000000 Y')==0. But that does not seem like the most elegant solution.
A:
You can use itertools.groupby() to segment your sequence into multiple sub-sequences.
import itertools
for key, subseq in itertools.groupby(tempans, lambda s: s.partition(' ')[0]):
if key == '007':
for dataLine in subseq:
if dataLine.startswith('007 B'):
numberOfSeries = int(dataLine.split()[2])
itertools.dropwhile() would also work if you really just want to seek up to that line,
list(itertools.dropwhile(lambda s: s != '007 A000000 Y', tempans))
['007 A000000 Y',
'007 B000000 5',
'007 C010100 1',
'007 C020100 ACORN FUND',
'007 C030100 N',
'007 C010200 2',
'007 C020200 ACORN INTERNATIONAL',
'007 C030200 N',
'007 C010300 3',
'007 C020300 ACORN USA',
'007 C030300 N',
'007 C010400 4',
'.',
'.',
'.',
'']
A:
You could read the data into a dictionary. Assuming you are reading from a file-like object infile:
from collections import defaultdict
data = defaultdict(list)
for line in infile:
    elements = line.strip().split()
    data[elements[0]].append(tuple(elements[1:]))
Now if you want to read the line after '007 A000000 Y', you can do so as:
# find the index of ('A000000', 'Y')
idx = data['007'].index(('A000000', 'Y'))
# get the next line
print data['007'][idx+1]
A:
The only difficulty with using all the data in a dictionary is that a really big dictionary can become troublesome. (It's what we used to call the "Big Ole Matrix" approach.)
A solution to this is to construct an index dictionary instead, creating a mapping of key->offset and using the tell method to get the file offset value. Then you can refer to the line again by seeking to it with the seek method.
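A rough sketch of such an index, assuming infile is a seekable file object whose lines start with keys like '007':
index = {}                    # key -> list of file offsets
offset = infile.tell()
line = infile.readline()
while line:
    fields = line.split(None, 1)
    if fields:
        index.setdefault(fields[0], []).append(offset)
    offset = infile.tell()
    line = infile.readline()

infile.seek(index['007'][0])  # jump back to the first '007' line
print infile.readline()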
A:
Okay, while I was Googling to make sure I had covered my bases, I came across a solution:
I find that I forget to think in Lists and Dictionaries even though I use them. Python has some powerful tools to work with these types to speed your ability to manipulate them.
I need a slice so the slice references are easily obtained by
beginPosit = tempans.index('007 A000000 Y')
endPosit = min([i for i, item in enumerate(tempans) if '008 ' in item])
where tempans is the datalist
now I can write
for line in tempans[beginPosit:endPosit]:
    process each line
I think I answered my own question. I learned a lot from the other answers and appreciate them, but I think this is what I needed.
Okay, I am going to further edit my answer. I have learned a lot here, but some of this stuff is still over my head and I want to get some code written while I learn more about this fantastic tool.
from itertools import takewhile
beginPosit = tempans.index('007 A000000 Y')
new=takewhile(lambda x: '007 ' in x, tempans[beginPosit:])
This is based on an earlier answer to a similar question and Steven Huwig's answer
A:
You said you wanted to do this:
if dataLine.find('007 A000000 Y')==0:
    READ THE NEXT LINE RIGHT HERE
Presumably this is within a "for dataLine in data" loop.
Alternatively, you could use an iterator directly instead of in a for loop:
>>> i = iter(data)
>>> while i.next() != '007 A000000 Y': pass # find your starting line
>>> i.next() # read the next line
'007 B000000 5'
You also mention having 60K files to process. Are they all formatted similarly? Do they need to be processed differently? If they can all be processed the same way, you could consider chaining them together in a single flow:
import os
import fnmatch

def gfind( directory, pattern="*" ):
    for name in fnmatch.filter( os.listdir( directory ), pattern ):
        yield os.path.join( directory, name )

def gopen( names ):
    for name in names:
        yield open(name, 'rb')

def gcat( files ):
    for file in files:
        for line in file:
            yield line

data = gcat( gopen( gfind( 'C:\datafiles', '*.dat' ) ) )
This lets you lazily process all your files in a single iterator. Not sure if that helps your current situation but I thought it worth mentioning.
Q:
Send headers along in python
I have the following Python script and I would like to send "fake" header information along so that my application acts as if it is Firefox. How could I do that?
import urllib, urllib2, cookielib
username = '****'
password = '****'
login_user = urllib.urlencode({'password' : password, 'username' : username})
jar = cookielib.FileCookieJar("cookies")
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
response = opener.open("http://www.***.com")
response = opener.open("http://www.***.com/login.php")
response = opener.open("http://www.***.com/welcome.php", login_user)
A:
Use the addheaders attribute on your opener object.
Just add this one line after you create your opener, before you start opening pages:
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
http://docs.python.org/library/urllib2.html (it's at the bottom of this document)
A:
You have to get a bit more low-level to be able to do that.
request = urllib2.Request('http://stackoverflow.com')
request.add_header('User-Agent', 'FIREFOX LOL')
opener = urllib2.build_opener()
data = opener.open(request).read()
print data
Not tested.
A:
FWIW, depending on just how precisely you want to mimic Firefox, setting the User-Agent may not be enough (though that is probably sufficient for most cases). To make your script look like 'normal' web browsing, you might want to set an appropriate Referer and make additional requests for the rest of the page content (Javascript/CSS/Images/Flash/etc). Something to think about, though perhaps not appropriate to your particular situation.
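Putting the two answers together with the original script, the extra headers can simply be set on the opener (a sketch; the User-Agent and Referer values are only examples, and jar and login_user come from the question's code):
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
opener.addheaders = [('User-agent', 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)'),
                     ('Referer', 'http://www.example.com/login.php')]
response = opener.open("http://www.example.com/welcome.php", login_user)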
Q:
Python Regexp problem
I'm trying to regexp a line from a webpage. The line is as follows:
<tr><td width=60 bgcolor='#ffffcc'><b>random Value</b></td><td align=center width=80>
This is what I tried, but it doesn't seem to work, can anyone help me out? 'htmlbody' contains the html page and no, I did not forget to import 're'.
reg = re.compile("<tr><td width=60 bgcolor='#ffffcc'><b>([^<]*)</b></td><td align=center width=80>")
value = reg.search(htmlbody)
print 'Value is', value
A:
There is no surefire way to do this with a regex. See Can you provide some examples of why it is hard to parse XML and HTML with a regex? for why. What you need is an HTML parser like HTMLParser:
#!/usr/bin/python
from HTMLParser import HTMLParser
class FindTDs(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.level = 0

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self.level = self.level + 1

    def handle_endtag(self, tag):
        if tag == 'td':
            self.level = self.level - 1

    def handle_data(self, data):
        if self.level > 0:
            print data

find = FindTDs()

html = "<table>\n"
for i in range(3):
    html += "\t<tr>"
    for j in range(5):
        html += "<td>%s.%s</td>" % (i, j)
    html += "</tr>\n"
html += "</table>"

find.feed(html)
A:
This
import re
htmlbody = "<tr><td width=60 bgcolor='#ffffcc'><b>random Value</b></td><td align=center width=80>"
reg = re.compile("<tr><td width=60 bgcolor='#ffffcc'><b>([^<]*)</b></td><td align=center width=80>")
value = reg.search(htmlbody).group(1)
print 'Value is', value
prints out
Value is random Value
Is this what you want?
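One caveat: search() returns None when nothing matches, so it is safer to guard before calling group():
match = reg.search(htmlbody)
if match is not None:
    print 'Value is', match.group(1)
else:
    print 'no match'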
A:
It sounds like you may want to use findall rather than search:
reg = re.compile("<tr><td width=60 bgcolor='#ffffcc'><b>([^<]*)</b></td><td align=center width=80>")
value = reg.findall(htmlbody)
print 'Found %i match(es)' % len(value)
I have to caution you, though, that regular expressions are notoriously poor at handling HTML. You're better off using a proper parser, such as the HTMLParser module built into Python.
Q:
How can I get useful information from flash swf files?
I'm doing some crawling with Python, and would like to be able to identify (however imperfectly) the flash I come across - is it a video, an ad, a game, or whatever.
I assume I would have to decompile the swf, which seems doable. But what sort of processing would I do with the decompiled Actionscript to figure out what its purpose is?
Edit: or any better ideas would be most welcome also.
A:
I think your best bet would be to check the context where you see the swf file. Usually they're embedded within web pages, so if that page has 100 occurrences of the word "game", then it might be a game, as an example.
To detect an ad it might be trickier, but I think checking the domain name where the swf is hosted might do the trick; the html tags around the swf will also be of great use.
A:
It might help to look at the arguments passed to the Flash movie. If there's reference to an FLV file then there's a good chance the SWF is being used to play a movie.
The path to the SWF might help too. If it's under, say, an /ads directory then it's probably just a banner ad. Or if it's under /games then it's probably a game.
Other than using heuristics like this there's probably not much you can do. SWFs can be used for a lot of different things, and there's really nothing in the SWF itself that would tell you what "type" it is.
A:
Tough one. I guess you should try to find a scope for a swf's context.
As you said, swfs can be ads, games, video players; they can also contain experimental art.
Who knows. Once you know what exactly you're after, it should be easier to figure out how to look for that kind of data.
I think it would be easier to get started with commercial websites. Those need promotion, so they might have promotional RIAs set up with a little bit of SEO in mind; look for things like swfobject, swfaddress and tracking stuff (Omniture and who knows what else). They should have keywords in the embedding html.
Google and Yahoo are working with Adobe as far as I know to make SWFs indexable. There is something mentioned about a custom FlashPlayer used for Flash indexing in the Flash Internals presentation from Adobe MAX.
Hope it helps.
Q:
pyQT QNetworkManager and ProgressBars
I'm trying to code something that downloads a file from a webserver and saves it, showing the download progress in a QProgressBar.
Now, there are ways to do this in regular Python and it's easy. The problem is that it locks up the refresh of the progressBar. The solution is to use PyQt's QNetworkAccessManager class. I can download stuff just fine with it, I just can't get the setup to show the progress on the progressBar. Here's an example:
class Form(QDialog):
    def __init__(self,parent=None):
        super(Form,self).__init__(parent)
        self.progressBar = QProgressBar()
        self.reply = None
        layout = QHBoxLayout()
        layout.addWidget(self.progressBar)
        self.setLayout(layout)
        self.manager = QNetworkAccessManager(self)
        self.connect(self.manager,SIGNAL("finished(QNetworkReply*)"),self.replyFinished)
        self.Down()

    def Down(self):
        address = QUrl("http://stackoverflow.com") #URL from the remote file.
        self.manager.get(QNetworkRequest(address))

    def replyFinished(self, reply):
        self.connect(reply,SIGNAL("downloadProgress(int,int)"),self.progressBar, SLOT("setValue(int)"))
        self.reply = reply
        self.progressBar.setMaximum(reply.size())
        alltext = self.reply.readAll()
        #print alltext

    def updateBar(self, read, total):
        print "read", read
        print "total", total
        #self.progressBar.setMinimum(0)
        #self.progressBar.setMask(total)
        #self.progressBar.setValue(read)
In this case, my method "updateBar" is never called... any ideas?
A:
Well you haven't connected any of the signals to your updateBar() method.
change
def replyFinished(self, reply):
    self.connect(reply,SIGNAL("downloadProgress(int,int)"),self.progressBar, SLOT("setValue(int)"))

to

def replyFinished(self, reply):
    self.connect(reply,SIGNAL("downloadProgress(int,int)"),self.updateBar)
Note that in Python you don't have to explicitly use the SLOT() syntax; you can just pass the reference to your method or function.
Update:
I just wanted to point out that if you want to use a Progress bar in any situation where your GUI locks up during processing, one solution is to run your processing code in another thread so your GUI receives repaint events. Consider reading about the QThread class, in case you come across another reason for a progress bar that does not have a pre-built solution for you.
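With the signal connected, updateBar() just needs to forward the values to the widget; a minimal sketch (note that Qt reports the total as -1 when the download size is unknown):
def updateBar(self, read, total):
    if total > 0:
        self.progressBar.setMaximum(total)
        self.progressBar.setValue(read)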
Q:
How do i convert WMD markdown syntax to HTML on my site?
I'm using Django and am implementing WMD on my site. I'm just wondering how I convert the Markdown syntax to HTML for display purposes; is there some sort of function I should call to do this conversion?
What is the best way to handle Markdown, i.e. do I save the Markdown as-is to the database and then parse it when displaying it, or should I save the converted HTML and then convert it back to markup during editing?
A:
Check out the markup add-on which comes with Django. That is what you are looking for.
To activate these filters, add 'django.contrib.markup' to your INSTALLED_APPS setting. Once you’ve done that, use {% load markup %} in a template, and you’ll have access to these filters. For more documentation, read the source code in django/contrib/markup/templatetags/markup.py.
Also check out this article for some more details if you're still stuck.
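Under the hood the markdown filter wraps the third-party python-markdown module, which you can also call directly if you store raw Markdown in the database and convert at display time (a sketch, assuming the markdown package is installed):
import markdown

def render_body(raw_text):
    # convert stored Markdown to HTML for display
    return markdown.markdown(raw_text)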
Q:
If you were to clone Monopoly Tycoon in Python, what libraries would you use?
Ever played the game Monopoly Tycoon? I think it's great.
I would love to remake it. Unfortunately, I have no experience when it comes to 3D programming. I imagine there's a relatively steep learning curve when it comes to openGL stuff, figuring out what is being clicked on and so on...
If you were to undertake this task, what libraries would you need?
A:
pyGame seems quite mature and builds on top of the proven SDL library.
A:
I'd use pyglet. It's all OpenGL from the start, doesn't build on top of the ugly SDL library, and has better interfaces than what I've seen in other Python multimedia libraries.
import pyglet
from pyglet.gl import *
class Application(object):
    def __init__(self):
        self.window = window = pyglet.window.Window()
        window.push_handlers(self)

    def on_draw(self):
        self.window.clear()
        glBegin(GL_TRIANGLES)
        glVertex2f(0,0)
        glVertex2f(200,0)
        glVertex2f(200,200)
        glEnd()

if __name__=='__main__':
    app = Application()
    pyglet.app.run()
I wrote this from scratch to show you a reference. You can pretty much start from that.
There are a couple of useful things in the library, like vertex lists, textures, scheduling, unicode fonts, a little bit of UI components, event dispatching, and audio. The library itself is messy inside and I didn't like it too much. But then, that's my opinion of every widespread library I've looked into.
Personally, I'm dissatisfied with the OpenGL namespacing. It'd be better with a non-C namespace in front. This'd leave you some flexibility:
from pyglet.gl import Begin, Vertex2f, TRIANGLES, End
...
    Begin(TRIANGLES)
    Vertex2f(0,0)
    Vertex2f(200,0)
    Vertex2f(200,200)
    End()
Q:
Auto GET to argument of view
some_view?param1=10&param2=20
def some_view(request, param1, param2):
Is such possible in Django?
A:
You could always write a decorator. Eg. something like (untested):
def map_params(func):
    def decorated(request):
        return func(request, **request.GET)
    return decorated

@map_params
def some_view(request, param1, param2):
    ...
A:
I'm not sure it's possible to get it to pass them as arguments to the view function, but why can't you access the GET variables from request.GET? Given that URL, Django would have request.GET['param1'] be 10 and request.GET['param2'] be 20. Otherwise, you'd have to come up with some kind of weird regular expression to try and do what you want.
A:
I agree with Paolo... the stuff after the '?' are GET parameters and should probably be treated as such. That said, if you really want to keep the definition of some_view() as you've stated in the question, you could do something like:
from django.http import Http404

def some_view_proxy(request):
    if 'param1' in request.GET and 'param2' in request.GET:
        return some_view(request, request.GET['param1'],
                         request.GET['param2'])
    raise Http404
Or you could just define some_view() like this and use the GET params. Just curious, why do you want that?
A:
Instead of fighting Django, why not just request some_view/10/20 and then set up urls.py to extract the arguments?
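A sketch of that URLconf approach, using the Django 1.0-era patterns() syntax and a hypothetical myapp.views module:
from django.conf.urls.defaults import patterns

urlpatterns = patterns('',
    (r'^some_view/(?P<param1>\d+)/(?P<param2>\d+)/$', 'myapp.views.some_view'),
)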
Q:
Python Beginner: How to Prevent 'finally' from executing?
The function code:
# Connect to the DB
try:
    dbi = MySQLdb.connect(host='localhost', \
                          user='user', \
                          passwd='pass', \
                          db='dbname', \
                          port=3309)
    print "Connected to DB ..."
except MySQLdb.Error, e:
    apiErr = 2
    apiErrMsg = "Error %d: %s" % (e.args[0], e.args[1])
    return
# To prevent try..finally bug in python2.4,
# one has to nest the "try: except:" part.
try:
    try:
        sql = dbi.cursor()
        sql.execute("""
            SELECT *
            FROM table
            WHERE idClient = %s
            """, (key, ))
        access = sql.fetchall()
        # [some more code here]
    except MySQLdb.Error, e:
        apiErr = 2
        apiErrMsg = "Error %d: %s" % (e.args[0], e.args[1])
        return
finally:
    sql.close()
    dbi.close()
I understand that in a try .. except .. finally, the finally block will always execute.
In the above code, I don't want the finally in the second try block to execute if there is an exception in the first try block. What am I doing wrong?
(Note: Using python 2.4)
Clarification: I am not aware if MySQLdb closes connections automatically when an error occurs. The problem I am facing with the above code is, when there is an error in establishing a connection (the first try block of the code), calling dbi.close() in the finally block raises "AttributeError: 'NoneType' object has no attribute 'close'" with reference to dbi ...
Solution:
This worked as desired -
# define at the start
dbi = None
sql = None
In the finally block,
if sql is not None:
    sql.close()
if dbi is not None:
    dbi.close()
Thanks to those who replied. I learned something new from all of you. (I'll try to phrase my questions more clearly the next time :).
A:
Use else: instead of finally:. See the Exception Handling part of the docs:
The try ... except statement has an optional else clause, which, when present, must follow all except clauses. It is useful for code that must be executed if the try clause does not raise an exception.
for arg in sys.argv[1:]:
    try:
        f = open(arg, 'r')
    except IOError:
        print 'cannot open', arg
    else:
        print arg, 'has', len(f.readlines()), 'lines'
        f.close()
..basically:
try:
    [code that might error]
except IOError:
    [This code is only run when IOError is raised]
else:
    [This code is only run when NO exception is raised]
finally:
    [This code is always run, whether an exception is raised or not]
A:
I think in this case you do want to use finally, because you want to close those connections.
I disagree with the notion that you should have two try blocks in the same method.
I think the flaw in the design is acquiring the connection and performing the query in the same method. I would recommend separating the two. A service class or method knows about the unit of work. It should acquire the connection, pass it to another class that performs the query, and closes the connection when it's done. That way the query method can throw any exception it encounters and leave the cleanup to the class or method that's responsible for the connection.
A:
Don't use finally. If you don't want the code to always be executed then you should find another flow control structure that meets your needs.
One way to accomplish this behavior is to move the statements in your 'finally' block to the bottom of your 'try' block. That way they won't get executed when an exception is thrown but will get executed, after all other statements, otherwise.
EDIT:
After further discussion it appears that in your case you do actually want to use 'finally'. What I recommend is checking to see if your connection has already been closed before you attempt to close it.
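A compact sketch of that guard, matching the solution the questioner eventually settled on (connection parameters copied from the question):
import MySQLdb

dbi = None
sql = None
try:
    dbi = MySQLdb.connect(host='localhost', user='user',
                          passwd='pass', db='dbname', port=3309)
    sql = dbi.cursor()
    sql.execute("SELECT 1")
finally:
    if sql is not None:
        sql.close()
    if dbi is not None:
        dbi.close()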
Q:
Importing In Python
Is it possible to import modules based on location?
(e.g. do all modules I import have to be in /usr/lib64/python2.5/ or a similar dir?)
I'd like to import a module that's local to the current script.
A:
You can extend the path at runtime like this:
import os, sys
sys.path.extend(map(os.path.abspath, ['other1/', 'other2/', 'yourlib/']))
A:
You can edit your PYTHONPATH to add or remove locations that python will search whenever you attempt an import.
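For example, in a bash shell (~/mylibs is a hypothetical directory):
export PYTHONPATH="$HOME/mylibs:$PYTHONPATH"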
A:
python will import from the current directory by default.
sys.path is the variable that controls where python searches for imports.
A:
You can import modules that are in the same path as the module you are importing into. For example:
Directory contains: mod1.py, mod2.py
mod2.py
--------
import mod1
Or you can add any directory to your sys.path variable at runtime:
import sys
sys.path.append('/user/some/other/directory')
import mod1
A:
It searches in ./lib by default.
A:
For low-level control over the import process, the imp module lets you import modules from arbitrary open files under arbitrary names.
For example, if this is foo.py:
def x():
print 'hello, world'
Then this code:
import imp
with open('foo.py', 'r') as module_file:
imp.load_module('module_name', module_file, '', ('', 'r', imp.PY_SOURCE))
import module_name
module_name.x()
prints "hello, world".
A:
Use __init__.py
The only problem with doing dynamic modification of sys.path is that you need to repeat it in every script and hard-code the pathnames. That gets messy and non-DRY if you have even two or three files.
Instead, if your file structure looks like this:
~/foo/__init__.py
~/foo/foo.py
~/foo/bar/__init__.py
~/foo/bar/baz.py
Here the __init__.py's are blank files created with touch, while foo.py and baz.py are actual python scripts. Then you can do something like this:
import os, sys
try:
from foo import foo
from foo.bar import baz
except ImportError:
"%s is not in %s. Add to your PYTHONPATH in ~/.bashrc" % \
(os.path.expanduser("~/foo"),sys.path)
Structuring your stuff as a package from the beginning is a little more work but makes it much easier to scale the project later and to see where imports are coming from. Moreover, if you move stuff around, you can use a single symlink rather than doing a find/replace through your codebase. E.g. if you moved '~/foo' to '~/downloads/foo', just do this:
cd ~
ln -s ~/downloads/foo foo
And all your imports will still work.
|
Importing In Python
|
Is it possible to import modules based on location?
(e.g. do all modules I import have to be in /usr/lib64/python2.5/ or a similar dir?)
I'd like to import a module that's local to the current script.
|
[
"You can extend the path at runtime like this:\nsys.path.extend(map(os.path.abspath, ['other1/', 'other2/', 'yourlib/']))\n\n",
"You can edit your PYTHONPATH to add or remove locations that python will search whenever you attempt an import.\n",
"\npython will import from the current directory by default.\nsys.path is the variable that controls where python searches for imports.\n\n",
"You can import module that are in the same path the module you are importing to. For example:\nDirectory contains: mod1.py, mod2.py\nmod2.py\n--------\nimport mod1\n\nOr you can add any directory to your PYTHON_PATH variable:\nimport sys\nsys.path.extend('/user/some/other/directory')\nimport mod1\n\n",
"It searches in ./lib by default.\n",
"For low-level control over the import process, the imp module lets you import modules from arbitrary open files under arbitrary names.\nFor example, if this is foo.py:\ndef x():\n print 'hello, world'\n\nThen this code:\nimport imp\n\nwith open('foo.py', 'r') as module_file:\n imp.load_module('module_name', module_file, '', ('', 'r', imp.PY_SOURCE))\n\nimport module_name\n\nmodule_name.x()\n\nprints \"hello, world\".\n",
"Use init.py\nThe only problem with doing dynamic modification of sys.path is that you need to repeat it in every script and hard-code the pathnames. That gets messy and non DRY if you have even two or three files. \nInstead, if your file structure looks like this: \n~/foo/__init__.py\n~/foo/foo.py\n~/foo/bar/__init__.py\n~/foo/bar/baz.py\n\nHere the init.py's are blank files created with touch, while foo.py and baz.py are actual python scripts. Then you can do something like this: \nimport sys\ntry:\n from foo import foo\n from foo.bar import baz\nexcept ImportError:\n \"%s is not in %s. Add to your PYTHONPATH in ~/.bashrc\" % \\\n (os.path.expanduser(\"~/foo\"),sys.path)\n\nStructuring your stuff as a package from the beginning is a little more work but makes it much easier to scale the project later and to see where imports are coming from. Moreover, if you move stuff around, you can use a single symlink rather than doing a find/replace through your codebase. E.g. if you moved '~/foo' to '~/downloads/foo', just do this: \ncd ~\nln -s ~/downloads/foo foo\n\nAnd all your imports will still work. \n"
] |
[
9,
3,
3,
0,
0,
0,
0
] |
[] |
[] |
[
"import",
"python"
] |
stackoverflow_0000762111_import_python.txt
|
Q:
Active Directory - Django/Rails
I'm thinking about re-writing a web app in Django or Rails and wondering about authenticating against AD. Is one ecosystem better suited for this (libraries, etc) or is it a toss-up?
(The app will be hosted on Linux)
I have lots of reasons for the re-write, one of them is to make myself more marketable. Anyone care to comment on which of these frameworks has a better long-term outlook for a new programmer? (I've read the StackOverflow threads already, but ask just in case something new has come up).
Thanks in advance.
A:
A quick google to give you some pointers on using Active Directory in these environments.
http://www.djangosnippets.org/snippets/501/
http://www.zorched.net/2007/06/04/active-directory-authentication-for-ruby-on-rails/
A:
I did Active Directory auth in Rails about a year ago. I did it similarly to the article Daniel linked to. It felt hacky, but it was an internal app, so it was acceptable.
Since then Passenger (mod_rails) has come out which could be a better alternative than FastCGI.
|
Active Directory - Django/Rails
|
I'm thinking about re-writing a web app in Django or Rails and wondering about authenticating against AD. Is one ecosystem better suited for this (libraries, etc) or is it a toss-up?
(The app will be hosted on Linux)
I have lots of reasons for the re-write, one of them is to make myself more marketable. Anyone care to comment on which of these frameworks has a better long-term outlook for a new programmer? (I've read the StackOverflow threads already, but ask just in case something new has come up).
Thanks in advance.
|
[
"A quick google to give you some pointers on using Active Directory in these environments.\n\nhttp://www.djangosnippets.org/snippets/501/\nhttp://www.zorched.net/2007/06/04/active-directory-authentication-for-ruby-on-rails/\n\n",
"I did Active Directory auth in Rails about a year ago. I did it similarly to the article Daniel linked to. It felt hacky, but it was an internal app, so it was acceptable.\nSince then Passenger (mod_rails) has come out which could be a better alternative than FastCGI. \n"
] |
[
3,
0
] |
[] |
[] |
[
"active_directory",
"django",
"python",
"ruby",
"ruby_on_rails"
] |
stackoverflow_0000761820_active_directory_django_python_ruby_ruby_on_rails.txt
|
Q:
How to interact through vim?
I am writing an editor which has a lot of parameters that could easily be interacted with through text. I find it inconvenient to implement a separate text editor or lots of UI code for every little parameter. The usual buttons, boxes and gadgets would be burdensome and clumsy. I'd much rather let the user interact with those parameters through vim.
The preferable way for me would be for the editor to open vim with my text buffer. Then, when one saves the text buffer in vim, my editor would get notified of that and update its view.
A:
Write your intermediate results (what you want the user to edit) to a temp file. Then use the $EDITOR environment variable in a system call to make the user edit the temp file, and read the results when the process finishes.
This lets users configure which editor they want to use in a pseudo-standard fashion.
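A minimal sketch of that pattern (assuming a Unix-like system; 'vi' is only a fallback for when $EDITOR is unset):
import os, subprocess, tempfile

def edit_text(initial):
    # Dump the current value to a temp file, hand it to $EDITOR,
    # and read the (possibly changed) contents back when it exits.
    fd, path = tempfile.mkstemp(suffix='.txt')
    try:
        os.write(fd, initial)
        os.close(fd)
        editor = os.environ.get('EDITOR', 'vi')
        subprocess.call([editor, path])
        return open(path).read()
    finally:
        os.remove(path)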
A:
Check out It's All Text!. It's a Firefox add-in that does something similar for textareas on web pages, except the editor in question is configurable.
A:
You can also think about integrating VIM into your app. Pida does this.
|
How to interact through vim?
|
I am writing an editor which has a lot of parameters that could easily be interacted with through text. I find it inconvenient to implement a separate text editor or lots of UI code for every little parameter. The usual buttons, boxes and gadgets would be burdensome and clumsy. I'd much rather let the user interact with those parameters through vim.
The preferable way for me would be for the editor to open vim with my text buffer. Then, when one saves the text buffer in vim, my editor would get notified of that and update its view.
|
[
"Write your intermediate results (what you want the user to edit) to a temp file. Then use the $EDITOR environment variable in a system call to make the user edit the temp file, and read the results when the process finishes.\nThis lets users configure which editor they want to use in a pseudo-standard fashion.\n",
"Check out It's All Text!. It's a Firefox add-in that does something similar for textareas on web pages, except the editor in question is configurable.\n",
"You can also think about integrating VIM in to your app. Pida does this\n"
] |
[
6,
2,
1
] |
[] |
[] |
[
"linux",
"python",
"vim"
] |
stackoverflow_0000763372_linux_python_vim.txt
|
Q:
Determine if a function is available in a Python module
I am working on some Python socket code that's using the socket.fromfd() function.
However, this method is not available on all platforms, so I am writing some fallback code in the case that the method is not defined.
What's the best way to determine if a method is defined at runtime? Is the following sufficient or is there a better idiom?
if 'fromfd' in dir(socket):
sock = socket.fromfd(...)
else:
sock = socket.socket(...)
I'm slightly concerned that the documentation for dir() seems to discourage its use. Would getattr() be a better choice, as in:
if getattr(socket, 'fromfd', None) is not None:
sock = socket.fromfd(...)
else:
sock = socket.socket(...)
Thoughts?
EDIT As Paolo pointed out, this question is nearly a duplicate of a question about determining attribute presence. However, since the terminology used is disjoint (lk's "object has an attribute" vs my "module has a function") it may be helpful to preserve this question for searchability unless the two can be combined.
A:
hasattr() is the best choice. Go with that. :)
if hasattr(socket, 'fromfd'):
pass
else:
pass
EDIT: Actually, according to the docs all hasattr is doing is calling getattr and catching the exception. So if you want to cut out the middle man you should go with marcog's answer.
EDIT: I also just realized this question is actually a duplicate. One of the answers there discusses the merits of the two options you have: catching the exception ("easier to ask for forgiveness than permission") or simply checking before hand ("look before you leap"). Honestly, I am more of the latter, but it seems like the Python community leans towards the former school of thought.
A:
Or simply use a try..except block:
try:
sock = socket.fromfd(...)
except AttributeError:
sock = socket.socket(...)
A:
hasattr(obj, 'attributename') is probably a better one. hasattr will try to access the attribute, and if it's not there, it'll return false.
It's possible to have dynamic methods in python, i.e. methods that are created when you try to access them. They would not be in dir(...). However hasattr would check for it.
>>> class C(object):
... def __init__(self):
... pass
... def mymethod1(self):
... print "In #1"
... def __getattr__(self, name):
... if name == 'mymethod2':
... def func():
... print "In my super meta #2"
... return func
... else:
... raise AttributeError
...
>>> c = C()
>>> 'mymethod1' in dir(c)
True
>>> hasattr(c, 'mymethod1')
True
>>> c.mymethod1()
In #1
>>> 'mymethod2' in dir(c)
False
>>> hasattr(c, 'mymethod2')
True
>>> c.mymethod2()
In my super meta #2
|
Determine if a function is available in a Python module
|
I am working on some Python socket code that's using the socket.fromfd() function.
However, this method is not available on all platforms, so I am writing some fallback code in the case that the method is not defined.
What's the best way to determine if a method is defined at runtime? Is the following sufficient or is there a better idiom?
if 'fromfd' in dir(socket):
sock = socket.fromfd(...)
else:
sock = socket.socket(...)
I'm slightly concerned that the documentation for dir() seems to discourage its use. Would getattr() be a better choice, as in:
if getattr(socket, 'fromfd', None) is not None:
sock = socket.fromfd(...)
else:
sock = socket.socket(...)
Thoughts?
EDIT As Paolo pointed out, this question is nearly a duplicate of a question about determining attribute presence. However, since the terminology used is disjoint (lk's "object has an attribute" vs my "module has a function") it may be helpful to preserve this question for searchability unless the two can be combined.
|
[
"hasattr() is the best choice. Go with that. :)\nif hasattr(socket, 'fromfd'):\n pass\nelse:\n pass\n\nEDIT: Actually, according to the docs all hasattr is doing is calling getattr and catching the exception. So if you want to cut out the middle man you should go with marcog's answer.\nEDIT: I also just realized this question is actually a duplicate. One of the answers there discusses the merits of the two options you have: catching the exception (\"easier to ask for forgiveness than permission\") or simply checking before hand (\"look before you leap\"). Honestly, I am more of the latter, but it seems like the Python community leans towards the former school of thought.\n",
"Or simply use a try..except block:\ntry:\n sock = socket.fromfd(...)\nexcept AttributeError:\n sock = socket.socket(...)\n\n",
"hasattr(obj, 'attributename') is probably a better one. hasattr will try to access the attribute, and if it's not there, it'll return false. \nIt's possible to have dynamic methods in python, i.e. methods that are created when you try to access them. They would not be in dir(...). However hasattr would check for it.\n>>> class C(object):\n... def __init__(self):\n... pass\n... def mymethod1(self):\n... print \"In #1\"\n... def __getattr__(self, name):\n... if name == 'mymethod2':\n... def func():\n... print \"In my super meta #2\"\n... return func\n... else:\n... raise AttributeError\n... \n>>> c = C()\n>>> 'mymethod1' in dir(c)\nTrue\n>>> hasattr(c, 'mymethod1')\nTrue\n>>> c.mymethod1()\nIn #1\n>>> 'mymethod2' in dir(c)\nFalse\n>>> hasattr(c, 'mymethod2')\nTrue\n>>> c.mymethod2()\nIn my super meta #2\n\n"
] |
[
60,
26,
3
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000763971_python.txt
|
Q:
Trying to get enum style choices= working for django but the whole tuples are appearing in the drop down
I'm using appengine and the appenginepatch (so my issue could be related to that)
I have set up a model with a property that has several choices, but when trying to display it on a form or via the admin interface I am getting an error:
Property mode is 'o'; must be one of (('s', 'Single'), ('m', 'Multi'), ('o', 'Ordered'))
This is my code:
MODES = (
('s', 'Single'),
('m', 'Multi'),
('o', 'Ordered')
)
class X(search.SearchableModel):
mode = models.StringProperty( default='s', choices=MODES )
if I set it to use Integers (as below) the admin form (and my own ModelForm) shows each option for the property as the whole tuple, so that when I select one and try to save I get the error that I'm not saving an Integer value
MODES = (
(0, 'Single'),
(1, 'Multi'),
(2, 'Ordered')
)
class X(search.SearchableModel):
mode = models.IntegerProperty( default=0, choices=MODES )
Is there something special I have to do?
A:
It looks like this is an issue in Django/appengine support. It's documented here on the google-app-engine-django bug tracker, but it's closed as "wontfix" there. It is also documented here on the googleappengine bug tracker and is closed as invalid.
According to the docs, the appengine choices parameter works different than the Django one. You do not appear to be able to do what you want without creating a custom widget. According to Guido's comment closing the googleappengine ticket,
I realize that this may cause problems
when you're trying to create a form
from the model, but the solution is to
override the form field using a custom
widget and passing the list of desired
choices to the widget. (There's an
example of this in Rietveld, in
codereview/views.py, class
SettingForm.)
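Following that suggestion, a minimal sketch for the model in the question might look like this (assuming your ModelForm wrapper accepts field overrides the way plain Django does; ChoiceField stands in for a fully custom widget):
from django import forms

class XForm(forms.ModelForm):
    # Override the generated field so the widget only sees the labels;
    # MODES is the same choices tuple defined for the model.
    mode = forms.ChoiceField(choices=MODES, initial='s')

    class Meta:
        model = X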
|
Trying to get enum style choices= working for django but the whole tuples are appearing in the drop down
|
I'm using appengine and the appenginepatch (so my issue could be related to that)
I have set up a model with a property that has several choices, but when trying to display it on a form or via the admin interface I am getting an error:
Property mode is 'o'; must be one of (('s', 'Single'), ('m', 'Multi'), ('o', 'Ordered'))
This is my code:
MODES = (
('s', 'Single'),
('m', 'Multi'),
('o', 'Ordered')
)
class X(search.SearchableModel):
mode = models.StringProperty( default='s', choices=MODES )
if I set it to use Integers (as below) the admin form (and my own ModelForm) shows each option for the property as the whole tuple, so that when I select one and try to save I get the error that I'm not saving an Integer value
MODES = (
(0, 'Single'),
(1, 'Multi'),
(2, 'Ordered')
)
class X(search.SearchableModel):
mode = models.IntegerProperty( default=0, choices=MODES )
Is there something special I have to do?
|
[
"It looks like this is an issue in Django/appengine support. It's documented here on the google-app-engine-django bug tracker, but it's closed as \"wontfix\" there. It is also documented here on the googleappengine bug tracker and is closed as invalid.\nAccording to the docs, the appengine choices parameter works different than the Django one. You do not appear to be able to do what you want without creating a custom widget. According to Guido's comment closing the googleappengine ticket, \n\nI realize that this may cause problems\n when you're trying to create a form\n from the model, but the solution is to\n override the form field using a custom\n widget and passing the list of desired\n choices to the widget. (There's an\n example of this in Rietveld, in\n codereview/views.py, class\n SettingForm.)\n\n"
] |
[
2
] |
[] |
[] |
[
"django",
"google_app_engine",
"python"
] |
stackoverflow_0000764177_django_google_app_engine_python.txt
|
Q:
Pythonic macro syntax
I've been working on an alternative compiler front-end for Python where all syntax is parsed via macros. I'm finally to the point with its development that I can start work on a superset of the Python language where macros are an integral component.
My problem is that I can't come up with a pythonic macro definition syntax. I've posted several examples in two different syntaxes in answers below. Can anyone come up with a better syntax? It doesn't have to build off the syntax I've proposed in any way -- I'm completely open here. Any comments, suggestions, etc would be helpful, as would alternative syntaxes showing the examples I've posted.
A note about the macro structure, as seen in the examples I've posted: The use of MultiLine/MLMacro and Partial/PartialMacro tell the parser how the macro is applied. If it's multiline, the macro will match multiple line lists; generally used for constructs. If it's partial, the macro will match code in the middle of a list; generally used for operators.
A:
After thinking about it a while a few days ago, and coming up with nothing worth posting, I came back to it now and came up with some syntax I rather like, because it nearly looks like python:
macro PrintMacro:
syntax:
"print", OneOrMore(Var(), name='vars')
return Printnl(vars, None)
Make all the macro "keywords" look like creating python objects (Var() instead of simple Var)
Pass the name of elements as a "keyword parameter" to items we want a name for.
It should still be easy to find all the names in the parser, since this syntax definition needs to be interpreted anyway to fill the syntax variable of the resulting macro class.
The internal syntax representation could also look the same:
class PrintMacro(Macro):
syntax = 'print', OneOrMore(Var(), name='vars')
...
The internal syntax classes like OneOrMore would follow this pattern to allow subitems and an optional name:
class MacroSyntaxElement(object):
def __init__(self, *p, name=None):
self.subelements = p
self.name = name
When the macro matches, you just collect all items that have a name and pass them as keyword parameters to the handler function:
class Macro():
...
def parse(self, ...):
syntaxtree = []
nameditems = {}
# parse, however this is done
# store all elements that have a name as
# nameditems[name] = parsed_element
self.handle(syntaxtree, **nameditems)
The handler function would then be defined like this:
class PrintMacro(Macro):
...
def handle(self, syntaxtree, vars):
return Printnl(vars, None)
I added the syntaxtree as a first parameter that is always passed, so you wouldn't need to have any named items if you just want to do very basic stuff on the syntax tree.
Also, if you don't like the decorators, why not add the macro type like a "base class"? IfMacro would then look like this:
macro IfMacro(MultiLine):
syntax:
Group("if", Var(), ":", Var(), name='if_')
ZeroOrMore("elif", Var(), ":", Var(), name='elifs')
Optional("else", Var(name='elseBody'))
return If(
[(cond, Stmt(body)) for keyword, cond, colon, body in [if_] + elifs],
None if elseBody is None else Stmt(elseBody)
)
And in the internal representation:
class IfMacro(MultiLineMacro):
syntax = (
Group("if", Var(), ":", Var(), name='if_'),
ZeroOrMore("elif", Var(), ":", Var(), name='elifs'),
Optional("else", Var(name='elseBody'))
)
def handle(self, syntaxtree, if_=None, elifs=None, elseBody=None):
# Default parameters in case there is no such named item.
# In this case this can only happen for 'elseBody'.
return If(
[(cond, Stmt(body)) for keyword, cond, body in [if_] + elifs],
            None if elseBody is None else Stmt(elseBody)
)
I think this would give a quite flexible system. Main advantages:
Easy to learn (looks like standard python)
Easy to parse (parses like standard python)
Optional items can be easily handled, since you can have a default parameter None in the handler
Flexible use of named items:
You don't need to name any items if you don't want, because the syntax tree is always passed in.
You can name any subexpressions in a big macro definition, so it's easy to pick out specific stuff you're interested in
Easily extensible if you want to add more features to the macro constructs. For example Several("abc", min=3, max=5, name="a"). I think this could also be used to add default values to optional elements like Optional("step", Var(), name="step", default=1).
I'm not sure about the quote/unquote syntax with "quote:" and "$", but some syntax for this is needed, since it makes life much easier if you don't have to manually write syntax trees. Probably it's a good idea to require (or just permit?) parentheses for "$", so that you can insert more complicated syntax parts, if you want. Like $(Stmt(a, b, c)).
The ToMacro would look something like this:
# macro definition
macro ToMacro(Partial):
syntax:
Var(name='start'), "to", Var(name='end'), Optional("inclusive", name='inc'), Optional("step", Var(name='step'))
if step == None:
step = quote(1)
    if inc:
return quote:
xrange($(start), $(end)+1, $(step))
else:
return quote:
xrange($(start), $(end), $(step))
# resulting macro class
class ToMacro(PartialMacro):
syntax = Var(name='start'), "to", Var(name='end'), Optional("inclusive", name='inc'), Optional("step", Var(name='step'))
def handle(syntaxtree, start=None, end=None, inc=None, step=None):
if step is None:
step = Number(1)
        if inc:
return ['xrange', ['(', start, [end, '+', Number(1)], step, ')']]
return ['xrange', ['(', start, end, step, ')']]
A:
You might consider looking at how Boo (a .NET-based language with a syntax largely inspired by Python) implements macros, as described at http://boo.codehaus.org/Syntactic+Macros.
A:
You should take a look at MetaPython to see if it accomplishes what you're looking for.
A:
Incorporating BNF
class IfMacro(Macro):
syntax: "if" expression ":" suite ("elif" expression ":" suite )* ["else" ":" suite]
def handle(self, if_, elifs, elseBody):
return If(
[(expression, Stmt(suite)) for expression, suite in [if_] + elifs],
elseBody != None and Stmt(elseBody) or None
)
A:
I'm posting a bit of floating ideas to see if it inspires.
I don't know much python, and I'm not using real python syntax, but it beats nothing :p
macro PrintMacro:
syntax:
print $a
rules:
a: list(String), as vars
handle:
# do something with 'vars'
macro IfMacro:
syntax:
if $a :
$b
$c
rules:
a: 1 boolean as if_cond
b: 1 coderef as if_code
c: optional macro(ElseIf) as else_if_block
if( if_cond ):
if_code();
elsif( defined else_if_block ):
else_if_block();
More ideas:
Implementing Perl quote style, but in Python! ( it's a very bad implementation, and note: whitespace is significant in the rule )
macro stringQuote:
syntax:
q$open$content$close
rules:
open: anyOf('[{(/_') or anyRange('a','z') or anyRange('0','9');
content: string
close: anyOf(']})/_') or anyRange('a','z') or anyRange('0','9');
detect:
return 1 if open == '[' and close == ']'
return 1 if open == '{' and close == '}'
return 1 if open == '(' and close == ')'
return 1 if open == close
return 0
handle:
return content;
A:
This is a new macro syntax I've come up with based on Kent Fredric's ideas. It parses the syntax into a list just like the code is parsed.
Print macro:
macro PrintMacro:
syntax:
print $stmts
if not isinstance(stmts, list):
stmts = [stmts]
return Printnl(stmts, None)
If macro:
@MultiLine
macro IfMacro:
syntax:
@if_ = if $cond: $body
@elifs = ZeroOrMore(elif $cond: $body)
Optional(else: $elseBody)
return If(
[(cond, Stmt(body)) for cond, body in [if_] + elifs],
elseBody != None and Stmt(elseBody) or None
)
X to Y [inclusive] [step Z] macro:
@Partial
macro ToMacro:
syntax:
$start to $end Optional(inclusive) Optional(step $step)
if step == None:
step = quote 1
if inclusive:
return quote:
xrange($start, $end+1, $step)
else:
return quote:
xrange($start, $end, $step)
Aside from the minor issue of using decorators to identify macro type, my only real issue with this is the way you can name groups, e.g. in the if case.
I'm using @name = ..., but this just reeks of Perl. I don't want to just use name = ... because that could conflict with a macro pattern to match. Any ideas?
A:
If you're only asking about the syntax (not implementation) of macros within Python, then I believe the answer is obvious. The syntax should closely match what Python already has (i.e., the "def" keyword).
Whether you implement this as one of the following is up to you:
def macro largest(lst):
defmac largest(lst):
macro largest(lst):
but I believe it should be exactly the same as a normal function with respect to the rest so that:
def twice_second(a,b):
glob_i = glob_i + 1
return b * 2
x = twice_second (1,7);
and
defmac twice_second(a,b):
glob_i = glob_i + 1
return b * 2
x = twice_second (1,7);
are functionally equivalent.
The way I would implement this is with a pre-processor (a la C) which would:
replace all defmac's with defs in the input file.
pass it through Python to check syntax (sneaky bit, this).
put the defmac's back in.
find all uses of each macro and "inline" them, using your own reserved variables, such as converting local var a to __macro_second_local_a.
return value should be a special variable as well (__macro_second_retval).
global variables would keep their real names.
parameters can be given __macro_second_param_XXX names.
once all inlining is done, remove the defmac 'functions' entirely.
pass the resultant file through Python.
No doubt there'll be some nigglies to take care of (like tuples or multiple return points) but Python is sufficiently robust to handle that in my opinion.
So:
x = twice_second (1,7);
becomes:
# These lines are the input params.
__macro_second_param_a = 1
__macro_second_param_b = 7
# These lines are the inlined macro.
glob_i = glob_i + 1
__macro_second_retval = __macro_second_param_b * 2
# Modified call to macro.
x = __macro_second_retval
A:
This is the current mechanism for defining syntax by use of a standard Python class.
Print macro:
class PrintMacro(Macro):
syntax = 'print', Var
def handle(self, stmts):
if not isinstance(stmts, list):
stmts = [stmts]
return Printnl(stmts, None)
If/elif/else macro class:
class IfMacro(MLMacro):
syntax = (
('if', Var, Var),
ZeroOrMore('elif', Var, Var),
Optional('else', Var)
)
def handle(self, if_, elifs, elseBody):
return If(
[(cond, Stmt(body)) for cond, body in [if_] + elifs],
elseBody != None and Stmt(elseBody) or None
)
X to Y [inclusive] [step Z] macro class:
class ToMacro(PartialMacro):
syntax = Var, 'to', Var, Optional('inclusive'), Optional('step', Var)
def handle(self, start, end, inclusive, step):
if inclusive:
end = ['(', end, '+', Number(1), ')']
if step == None: step = Number(1)
return ['xrange', ['(', start, end, step, ')']]
My issue with this design is that things are very verbose and don't feel pythonic in the least. In addition, the lack of quotation ability makes complex macros difficult.
A:
This is the macro syntax I've come up with for my Python superset.
Print macro:
macro PrintMacro:
syntax:
stmts = 'print', Var
if not isinstance(stmts, list):
stmts = [stmts]
return Printnl(stmts, None)
If macro:
@MultiLine
macro IfMacro:
syntax:
if_ = 'if', Var, Var
elifs = ZeroOrMore('elif', Var, Var)
else_ = Optional('else', Var)
return If(
[(cond, Stmt(body)) for cond, body in [if_] + elifs],
elseBody != None and Stmt(elseBody) or None
)
X to Y [inclusive] [step Z] macro:
@Partial
macro ToMacro:
syntax:
start = Var
'to'
end = Var
inclusive = Optional('inclusive')
step = Optional('step', Var)
if step == None:
step = quote 1
if inclusive:
return quote:
xrange($start, $end+1, $step)
else:
return quote:
xrange($start, $end, $step)
My primary issue with this is that the syntax block is unclear, particularly the 'to' line in the last example. I'm also not a big fan of using decorators to differentiate macro types.
|
Pythonic macro syntax
|
I've been working on an alternative compiler front-end for Python where all syntax is parsed via macros. I'm finally to the point with its development that I can start work on a superset of the Python language where macros are an integral component.
My problem is that I can't come up with a pythonic macro definition syntax. I've posted several examples in two different syntaxes in answers below. Can anyone come up with a better syntax? It doesn't have to build off the syntax I've proposed in any way -- I'm completely open here. Any comments, suggestions, etc would be helpful, as would alternative syntaxes showing the examples I've posted.
A note about the macro structure, as seen in the examples I've posted: The use of MultiLine/MLMacro and Partial/PartialMacro tell the parser how the macro is applied. If it's multiline, the macro will match multiple line lists; generally used for constructs. If it's partial, the macro will match code in the middle of a list; generally used for operators.
|
[
"After thinking about it a while a few days ago, and coming up with nothing worth posting, I came back to it now and came up with some syntax I rather like, because it nearly looks like python:\nmacro PrintMacro:\n syntax:\n \"print\", OneOrMore(Var(), name='vars')\n\n return Printnl(vars, None)\n\n\nMake all the macro \"keywords\" look like creating python objects (Var() instead of simple Var)\nPass the name of elements as a \"keyword parameter\" to items we want a name for. \nIt should still be easy to find all the names in the parser, since this syntax definition anyway needs to be interpreted in some way to fill the macro classes syntax variable.\nneeds to be converted to fill the syntax variable of the resulting macro class.\n\nThe internal syntax representation could also look the same:\nclass PrintMacro(Macro):\n syntax = 'print', OneOrMore(Var(), name='vars')\n ...\n\nThe internal syntax classes like OneOrMore would follow this pattern to allow subitems and an optional name:\nclass MacroSyntaxElement(object):\n def __init__(self, *p, name=None):\n self.subelements = p\n self.name = name\n\nWhen the macro matches, you just collect all items that have a name and pass them as keyword parameters to the handler function:\nclass Macro():\n ...\n def parse(self, ...):\n syntaxtree = []\n nameditems = {}\n # parse, however this is done\n # store all elements that have a name as\n # nameditems[name] = parsed_element\n self.handle(syntaxtree, **nameditems)\n\nThe handler function would then be defined like this:\nclass PrintMacro(Macro):\n ...\n def handle(self, syntaxtree, vars):\n return Printnl(vars, None)\n\nI added the syntaxtree as a first parameter that is always passed, so you wouldn't need to have any named items if you just want to do very basic stuff on the syntax tree.\nAlso, if you don't like the decorators, why not add the macro type like a \"base class\"? IfMacro would then look like this:\nmacro IfMacro(MultiLine):\n syntax:\n Group(\"if\", Var(), \":\", Var(), name='if_')\n ZeroOrMore(\"elif\", Var(), \":\", Var(), name='elifs')\n Optional(\"else\", Var(name='elseBody'))\n\n return If(\n [(cond, Stmt(body)) for keyword, cond, colon, body in [if_] + elifs],\n None if elseBody is None else Stmt(elseBody)\n )\n\nAnd in the internal representation:\nclass IfMacro(MultiLineMacro):\n syntax = (\n Group(\"if\", Var(), \":\", Var(), name='if_'),\n ZeroOrMore(\"elif\", Var(), \":\", Var(), name='elifs'),\n Optional(\"else\", Var(name='elseBody'))\n )\n\n def handle(self, syntaxtree, if_=None, elifs=None, elseBody=None):\n # Default parameters in case there is no such named item.\n # In this case this can only happen for 'elseBody'.\n return If(\n [(cond, Stmt(body)) for keyword, cond, body in [if_] + elifs],\n None if elseNody is None else Stmt(elseBody)\n )\n\nI think this would give a quite flexible system. Main advantages:\n\nEasy to learn (looks like standard python)\nEasy to parse (parses like standard python)\nOptional items can be easily handled, since you can have a default parameter None in the handler\nFlexible use of named items:\n\n\nYou don't need to name any items if you don't want, because the syntax tree is always passed in.\nYou can name any subexpressions in a big macro definition, so it's easy to pick out specific stuff you're interested in\n\nEasily extensible if you want to add more features to the macro constructs. For example Several(\"abc\", min=3, max=5, name=\"a\"). 
I think this could also be used to add default values to optional elements like Optional(\"step\", Var(), name=\"step\", default=1).\n\nI'm not sure about the quote/unquote syntax with \"quote:\" and \"$\", but some syntax for this is needed, since it makes life much easier if you don't have to manually write syntax trees. Probably its a good idea to require (or just permit?) parenthesis for \"$\", so that you can insert more complicated syntax parts, if you want. Like $(Stmt(a, b, c)).\nThe ToMacro would look something like this:\n# macro definition\nmacro ToMacro(Partial):\n syntax:\n Var(name='start'), \"to\", Var(name='end'), Optional(\"inclusive\", name='inc'), Optional(\"step\", Var(name='step'))\n\n if step == None:\n step = quote(1)\n if inclusive:\n return quote:\n xrange($(start), $(end)+1, $(step))\n else:\n return quote:\n xrange($(start), $(end), $(step))\n\n# resulting macro class\nclass ToMacro(PartialMacro):\n syntax = Var(name='start'), \"to\", Var(name='end'), Optional(\"inclusive\", name='inc'), Optional(\"step\", Var(name='step'))\n\n def handle(syntaxtree, start=None, end=None, inc=None, step=None):\n if step is None:\n step = Number(1)\n if inclusive:\n return ['xrange', ['(', start, [end, '+', Number(1)], step, ')']]\n return ['xrange', ['(', start, end, step, ')']]\n\n",
"You might consider looking at how Boo (a .NET-based language with a syntax largely inspired by Python) implements macros, as described at http://boo.codehaus.org/Syntactic+Macros.\n",
"You should take a look at MetaPython to see if it accomplishes what you're looking for.\n",
"Incorporating BNF\nclass IfMacro(Macro):\n syntax: \"if\" expression \":\" suite (\"elif\" expression \":\" suite )* [\"else\" \":\" suite] \n\n def handle(self, if_, elifs, elseBody):\n return If(\n [(expression, Stmt(suite)) for expression, suite in [if_] + elifs],\n elseBody != None and Stmt(elseBody) or None\n )\n\n",
"I'm posting a bit of floating ideas to see if it inspires. \nI don't know much python, and I'm not using real python syntax, but it beats nothing :p\nmacro PrintMacro:\n syntax: \n print $a\n rules: \n a: list(String), as vars\n handle:\n # do something with 'vars' \n\nmacro IfMacro:\n syntax: \n if $a :\n $b\n $c \n rules: \n a: 1 boolean as if_cond \n b: 1 coderef as if_code \n c: optional macro(ElseIf) as else_if_block \n\n if( if_cond ):\n if_code();\n elsif( defined else_if_block ): \n else_if_block(); \n\nMore ideas: \nImplementing Perl quote style, but in Python! ( its a very bad implementation, and note: whitespace is significant in the rule ) \nmacro stringQuote:\n syntax: \n q$open$content$close\n rules: \n open: anyOf('[{(/_') or anyRange('a','z') or anyRange('0','9');\n content: string\n close: anyOf(']})/_') or anyRange('a','z') or anyRange('0','9');\n detect: \n return 1 if open == '[' and close == ']' \n return 1 if open == '{' and close == '}'\n return 1 if open == '(' and close == ')'\n return 1 if open == close \n return 0\n handle: \n return content;\n\n",
"This is a new macro syntax I've come up with based on Kent Fredric's ideas. It parses the syntax into a list just like the code is parsed.\nPrint macro:\nmacro PrintMacro:\n syntax:\n print $stmts\n\n if not isinstance(stmts, list):\n stmts = [stmts]\n return Printnl(stmts, None)\n\nIf macro:\n@MultiLine\nmacro IfMacro:\n syntax:\n @if_ = if $cond: $body\n @elifs = ZeroOrMore(elif $cond: $body)\n Optional(else: $elseBody)\n\n return If(\n [(cond, Stmt(body)) for cond, body in [if_] + elifs],\n elseBody != None and Stmt(elseBody) or None\n )\n\nX to Y [inclusive] [step Z] macro:\n@Partial\nmacro ToMacro:\n syntax:\n $start to $end Optional(inclusive) Optional(step $step)\n\n if step == None:\n step = quote 1\n if inclusive:\n return quote:\n xrange($start, $end+1, $step)\n else:\n return quote:\n xrange($start, $end, $step)\n\nAside from the minor issue of using decorators to identify macro type, my only real issue with this is the way you can name groups, e.g. in the if case.\nI'm using @name = ..., but this just reeks of Perl. I don't want to just use name = ... because that could conflict with a macro pattern to match. Any ideas?\n",
"If you're only asking about the syntax (not implementation) of macros within Python, then I believe the answer is obvious. The syntax should closely match what Python already has (i.e., the \"def\" keyword).\nWhether you implement this as one of the following is up to you:\ndef macro largest(lst):\ndefmac largest(lst):\nmacro largest(lst):\n\nbut I believe it should be exactly the same as a normal function with respect to the rest so that:\ndef twice_second(a,b):\n glob_i = glob_i + 1\n return b * 2\nx = twice_second (1,7);\n\nand\ndefmac twice_second(a,b):\n glob_i = glob_i + 1\n return b * 2\nx = twice_second (1,7);\n\nare functionally equivalent.\nThe way I would implement this is with a pre-processor (a la C) which would:\n\nreplace all defmac's with defs in the input file.\npass it through Python to check syntax (sneaky bit, this).\nput the defmac's back in.\nfind all uses of each macro and \"inline\" them, using your own reserved variables, such as converting local var a with __macro_second_local_a.\nreturn value should be a special variable as well (macro_second_retval).\nglobal variables would keep their real names.\nparameter can be given _macro_second_param_XXX names.\nonce all inlining is done, remove the defmac 'functions' entirely.\npass the resultant file through Python.\n\nNo doubt there'll be some nigglies to take care of (like tuples or multiple return points) but Python is sufficiently robust to handle that in my opinion.\nSo:\nx = twice_second (1,7);\n\nbecomes:\n# These lines are the input params.\n__macro_second_param_a = 1\n__macro_second_param_b = 7\n\n# These lines are the inlined macro.\nglob_i = glob_i + 1\n__macro_second_retval = __macro_second_param_b * 2\n\n# Modified call to macro.\nx = __macro_second_retval\n\n",
"This is the current mechanism for defining syntax by use of a standard Python class.\nPrint macro:\nclass PrintMacro(Macro):\n syntax = 'print', Var\n def handle(self, stmts):\n if not isinstance(stmts, list):\n stmts = [stmts]\n return Printnl(stmts, None)\n\nIf/elif/else macro class:\nclass IfMacro(MLMacro):\n syntax = (\n ('if', Var, Var),\n ZeroOrMore('elif', Var, Var),\n Optional('else', Var)\n )\n def handle(self, if_, elifs, elseBody):\n return If(\n [(cond, Stmt(body)) for cond, body in [if_] + elifs],\n elseBody != None and Stmt(elseBody) or None\n )\n\nX to Y [inclusive] [step Z] macro class:\nclass ToMacro(PartialMacro):\n syntax = Var, 'to', Var, Optional('inclusive'), Optional('step', Var)\n def handle(self, start, end, inclusive, step):\n if inclusive:\n end = ['(', end, '+', Number(1), ')']\n if step == None: step = Number(1)\n return ['xrange', ['(', start, end, step, ')']]\n\nMy issues with this design is that things are very verbose and don't feel pythonic in the least. In addition, the lack of quotation ability makes complex macros difficult.\n",
"This is the macro syntax I've come up with for my Python superset.\nPrint macro:\nmacro PrintMacro:\n syntax:\n stmts = 'print', Var\n\n if not isinstance(stmts, list):\n stmts = [stmts]\n return Printnl(stmts, None)\n\nIf macro:\n@MultiLine\nmacro IfMacro:\n syntax:\n if_ = 'if', Var, Var\n elifs = ZeroOrMore('elif', Var, Var)\n else_ = Optional('else', Var)\n\n return If(\n [(cond, Stmt(body)) for cond, body in [if_] + elifs],\n elseBody != None and Stmt(elseBody) or None\n )\n\nX to Y [inclusive] [step Z] macro:\n@Partial\nmacro ToMacro:\n syntax:\n start = Var\n 'to'\n end = Var\n inclusive = Optional('inclusive')\n step = Optional('step', Var)\n\n if step == None:\n step = quote 1\n if inclusive:\n return quote:\n xrange($start, $end+1, $step)\n else:\n return quote:\n xrange($start, $end, $step)\n\nMy primary issue with this is that the syntax block is unclear, particularly the 'to' line in the last example. I'm also not a big fan of using decorators to differentiate macro types.\n"
] |
[
11,
3,
3,
2,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"macros",
"python",
"syntax"
] |
stackoverflow_0000454648_macros_python_syntax.txt
|
Q:
How do I modify sys.path from .htaccess to allow mod_python to see Django?
The host I'm considering for hosting a Django site has mod_python installed, but does not have Django. Django's INSTALL file indicates that I can simply copy the django directory to Python's site-packages directory to install Django, so I suspect that it might be possible to configure Python / mod_python to look for it elsewhere (namely my user space) by modifying sys.path, but I don't know how to change it from .htaccess or mod_python.
How do I modify sys.path from .htaccess to allow mod_python to see Django?
P.S. I can only access the site via FTP (i.e. no shell access). I realize that it sounds like I should just switch hosts, but there are compelling reasons for me to make this work so I'm at least going to try.
A:
According to ticket #2255 for Django, you need admin access to httpd.conf in order to use Django with mod_python, and this is not going to change, so you may be dead in the water. To answer the basic question of how to modify sys.path from .htaccess, you can use the PythonPath directive in .htaccess.
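For example, if Django were uploaded to a hypothetical /home/youruser/python directory, the directive would look something like this (assuming the server's configuration allows mod_python directives in .htaccess):
PythonPath "['/home/youruser/python'] + sys.path"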
A:
Is the PythonPath setting what you are looking for? I haven't tried it with Django, but I would assume that it should do the job for you.
A:
You're using mod_python wrong. It was never intended to serve python web applications. You should be using WSGI for this... or at least FastCGI.
|
How do I modify sys.path from .htaccess to allow mod_python to see Django?
|
The host I'm considering for hosting a Django site has mod_python installed, but does not have Django. Django's INSTALL file indicates that I can simply copy the django directory to Python's site-packages directory to install Django, so I suspect that it might be possible to configure Python / mod_python to look for it elsewhere (namely my user space) by modifying sys.path, but I don't know how to change it from .htaccess or mod_python.
How do I modify sys.path from .htaccess to allow mod_python to see Django?
P.S. I can only access the site via FTP (i.e. no shell access). I realize that it sounds like I should just switch hosts, but there are compelling reasons for me to make this work so I'm at least going to try.
|
[
"According to ticket #2255 for Django, you need admin access to httpd.conf in order to use Django with mod_python, and this is not going to change, so you may be dead in the water. To answer the basic question of how to modify sys.path from .htaccess, you can use the PythonPath directive in .htaccess.\n",
"Is the PythonPath setting what you are looking for? I haven't tried it with Django, but I would assume that it should do the job for you.\n",
"You're using mod_python wrong. It was never intended to serve python web applications. You should be using WSGI for this... or at least FastCGI.\n"
] |
[
3,
1,
1
] |
[] |
[] |
[
".htaccess",
"apache",
"django",
"mod_python",
"python"
] |
stackoverflow_0000764312_.htaccess_apache_django_mod_python_python.txt
|
Q:
Is there a Python library that allows to build user interfaces without writing much code?
I have been writing editing front ends in Python for several years now, and I am fed up with micromanaging every UI detail of a window or dialog every single time.
Is there a technology that allows me to, say, specify the relations between a GTK+ Glade-designed interface and the tables and records of an SQLite database to do all the middle man work? It should spare me the work of manually writing event handlers, input evaluators and view updates.
The technologies in question are just examples, but I want to stick with Python as far as possible.
A:
Besides the ones already mentioned I can add:
Kiwi
uxpython
pygtk
treethon
I've never used any of them so have no recommendations but, for what it's worth, I have used at least 2 complex programs built directly on pygtk that worked in both Windows and Linux.
I think Kiwi is the only one of these with baked in support for db (through interface with SQLAlchemy, SQLObject, or Storm) but I would be surprised if you couldn't use one of those ORM's inside any of the other frameworks.
A:
PyQt and its models can automate some of these tasks for you (to some amount off course, e.g. filling widgets with data from a database and handling most of the widgets behaviour, buffering etc.).
If you want a more object-oriented approach to handling SQL you could look into an ORM-oriented solution (for example SQLAlchemy).
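For a sense of what that looks like, here is a minimal SQLAlchemy sketch (the table, class and file names are made up):
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Client(Base):
    __tablename__ = 'clients'
    id = Column(Integer, primary_key=True)
    name = Column(String)

# Map the class onto an SQLite file and hand out sessions to the widgets.
engine = create_engine('sqlite:///app.db')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()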
A:
Dabo is built on top of wxPython, so you may not prefer it, but it's designed to make it easy to tie a GUI to a database, so I'd recommend you check it out if you haven't already. In particular, it's got good facilities for tying widgets to data, and handling a lot of the common cases of GUI development.
A:
wxGlade may help, although I haven't used it myself so I don't speak from experience.
Boa Constructor apparently has a wxPython GUI builder in it, and there is also PythonCard, though development on these two projects seems to have stalled.
A:
Traits might be a good option for you.
http://code.enthought.com/projects/traits/docs/html/TUIUG/index.html
As simple as it is to map a UI to an object, it doesn't seem too far-fetched to incorporate SQLAlchemy for persistence.
A:
There is a good book on wxPython, "wxPython in Action", which can't be said for some of the other solutions. No knock on the others. I've had success developing with wxPython in the past and it comes with a great set of demo applications with source code from which you can borrow liberally.
The best UI designer I found for wxPython applications is a commercial one, Anthemion DialogBlocks. It's by one of the wxPython programmers and is worth the money. Other solutions for UI design include wxGlade (I found it usable but not featureful) and Boa Constructor (haven't used it). Wing IDE might also have one. Stani's Python Editor bundles wxGlade, I believe. There are a lot of other projects that don't really work or are fairly old.
As far as SQL automation goes, as another answerer says, I'd look at SQL alchemy, but the learning curve for a small application might be too much and you'd be better off just going straight to odbc. The best odbc api is the one used by Django, pyodbc.
It's been a while since I developed with these tools, so there may be something newer for each, but at the time these were definitely the best of breed in my opinion.
A:
I had lots of success with wxPython, but that was some years ago now and there may be better new solutions...
A:
OK, this is an unconventional solution, but write yourself a code generator. I have done this several times using Mako. In my case I auto-inspect a table for the columns and types it contains and generate classes from that. It's more work upfront but does exactly what you want and is reusable in subsequent projects.
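A minimal sketch of the idea against a SQLite table (the 'clients' table and app.db file are hypothetical):
import sqlite3
from mako.template import Template

# Render a bare data class from the columns PRAGMA table_info reports.
TEMPLATE = Template("""\
class ${name.capitalize()}:
% for col in columns:
    ${col} = None
% endfor
""")

conn = sqlite3.connect('app.db')
cols = [row[1] for row in conn.execute('PRAGMA table_info(clients)')]
print TEMPLATE.render(name='clients', columns=cols)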
|
Is there a Python library that allows to build user interfaces without writing much code?
|
I have been writing editing front ends in Python for several years now, and I am fed up with micromanaging every UI detail of a window or dialog every single time.
Is there a technology that allows me to, say, specify the relations between a GTK+ Glade-designed interface and the tables and records of an SQLite database to do all the middle man work? It should spare me the work of manually writing event handlers, input evaluators and view updates.
The technologies in question are just examples, but I want to stick with Python as far as possible.
|
[
"Besides the ones already mentioned I can add:\n\nKiwi\nuxpython\npygtk\ntreethon\n\nI've never used any of them so have no recommendations but, for what it's worth, I have used at least 2 complex programs built directly on pygtk that worked in both Windows and Linux.\nI think Kiwi is the only one of these with baked in support for db (through interface with SQLAlchemy, SQLObject, or Storm) but I would be surprised if you couldn't use one of those ORM's inside any of the other frameworks.\n",
"PyQt and its models can automate some of these tasks for you (to some amount off course, e.g. filling widgets with data from a database and handling most of the widgets behaviour, buffering etc.).\nIf you want a more object-oriented approach to handling SQL you could look into an ORM-oriented solution (for example SQLAlchemy).\n",
"Dabo is built on top of wxPython, so you may not prefer it, but it's designed to make it easy to tie a GUI to a database, so I'd recommend you check it out if you haven't already. In particular, it's got good facilities for tying widgets to data, and handling a lot of the common cases of GUI development.\n",
"wxGlade may help, although I haven't used it myself so I don't speak from experience.\nBoa Constructor apparently has a wxPython GUI builder in it, and there is also PythonCard, though development on these two projects seems to have stalled.\n",
"Traits might be a good option for you.\nhttp://code.enthought.com/projects/traits/docs/html/TUIUG/index.html\nAS simple as it is to map a UI to an object, it doesn't seem too far fetched to incorporate SQLAlchemy for persistence. \n",
"There is a good book on wxPython, \"wxPython in Action\", which can't be said for some of the other solutions. No knock on the others. I've had success developing with wxPython in the past and it comes with a great set of demo applications with source code from which you can borrow liberally.\nThe best UI designer I found for wxPython applications is a commercial one, Anthemion DialogBlocks. It's by one of the wxPython programmers and is worth the money. Other solutions for UI design include wxGlade (I found it usable but not featureful) and Boa Constructor (haven't used it). Wing IDE might also have one. Stani's Python Editor bundles wxGlade, I believe. There are a lot of other projects that don't really work or are fairly old.\nAs far as SQL automation goes, as another answerer says, I'd look at SQL alchemy, but the learning curve for a small application might be too much and you'd be better off just going straight to odbc. The best odbc api is the one used by Django, pyodbc.\nIt's been a while since I developed with these tools, so there may be something newer for each, but at the time these were definitely the best of breed in my opinion.\n",
"I had lots of success with wxPython, but that was some years ago now and there may be better new solutions...\n",
"Ok this is an unconventional solution but write yourself a code generator. I have done this several times using Mako. So in my case I auto inspect a table which columns it contains and types and generate classes from that. It's more work upfront but does exactly what you want and is reusable in subsequent projects.\n"
] |
[
5,
4,
4,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"glade",
"gtk",
"python",
"sqlite",
"user_interface"
] |
stackoverflow_0000671741_glade_gtk_python_sqlite_user_interface.txt
|
Q:
How's Python Multiprocessing Implemented on Windows?
Given the absence of a Windows fork() call, how's the multiprocessing package in Python 2.6 implemented under Windows? On top of Win32 threads or some sort of fake fork or just compatibility on top of the existing multithreading?
A:
It's done using a subprocess call to sys.executable (i.e. start a new Python process) followed by serializing all of the globals, and sending those over the pipe. A poor man's cloning of the current process. This is the cause of the extra restrictions found when using multiprocessing on the Windows platform.
You may also be interested in viewing Jesse Noller's talk from PyCon about multiprocessing where he discusses its use.
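A minimal sketch of the practical consequence (illustrative code, not from the talk): because the Windows child is a freshly started interpreter rather than a fork, worker functions must live at module top level so they can be pickled, and process creation must sit behind the __main__ guard:
import multiprocessing

def worker(x):
    # Defined at module level so the child process, which re-imports
    # this module instead of inheriting memory, can unpickle it.
    return x * x

if __name__ == '__main__':
    # Required on Windows: the child re-runs this module on import,
    # and without the guard it would spawn children recursively.
    pool = multiprocessing.Pool(processes=2)
    print pool.map(worker, range(5))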
|
How's Python Multiprocessing Implemented on Windows?
|
Given the absence of a Windows fork() call, how's the multiprocessing package in Python 2.6 implemented under Windows? On top of Win32 threads or some sort of fake fork or just compatibility on top of the existing multithreading?
|
[
"It's done using a subprocess call to sys.executable (i.e. start a new Python process) followed by serializing all of the globals, and sending those over the pipe. A poor man's cloning of the current process. This is the cause of the extra restrictions found when using multiprocessing on Windows plaform.\nYou may also be interested in viewing Jesse Noller's talk from PyCon about multiprocessing where he discusses its use.\n"
] |
[
30
] |
[] |
[] |
[
"fork",
"multithreading",
"python"
] |
stackoverflow_0000765129_fork_multithreading_python.txt
|
Q:
How do I store a string with a `"` in it?
I want to have a JSON object with the value of an attribute as a string with the character ".
For example:
{
"Dimensions" : " 12.0" x 9.6" "
}
Obviously this is not possible. How do I do this?
With Python.
A:
Isaac is correct.
As for how to do it in python, you need to provide a more detailed explanation of how you are building your JSON object. For example, let's say you're using no external libraries and are doing it manually (ridiculous, I know), you would do this:
>>> string = "{ \"Dimensions\" : \" 12.0\\\" x 9.6\\\" \" }"
>>> print string
{ "Dimensions" : " 12.0\" x 9.6\" " }
Obviously this is kind of silly. If you are using the standard python json module, try this:
from json import JSONEncoder
encoder = JSONEncoder()
string = encoder.encode({ "Dimensions":" 12.0\" x 9.6\" " })
>>> print string
{"Dimensions": " 12.0\" x 9.6\" "}
which is the desired result.
A:
Python has two symbols you can use to specify string literals, the single quote and the double quote.
For example:
my_string = "I'm home!"
Or, more relevant to you,
>>> string = '{ "Dimensions" : " 12.0\\\" x 9.6\\\" " }'
>>> print string
{ "Dimensions" : " 12.0\" x 9.6\" " }
You can also prefix the string with 'r' to specify it is a raw string, so backslash escaped sequences are not processed, making it cleaner.
>>> string = r'{ "Dimensions" : " 12.0\" x 9.6\" " }'
>>> print string
{ "Dimensions" : " 12.0\" x 9.6\" " }
A:
JSON.stringify, if using Javascript, will escape it for you.
If not, you can escape them like \" (put a \ in front)
Edit: in Python, try re.escape() or just replace all " with \":
"json string".replace("\"","\\\"");
|
How do I store a string with a `"` in it?
|
I want to have a JSON object with the value of an attribute as a string with the character ".
For example:
{
"Dimensions" : " 12.0" x 9.6" "
}
Obviously this is not possible. How do I do this?
With Python.
|
[
"Isaac is correct.\nAs for how to do it in python, you need to provide a more detailed explanation of how you are building your JSON object. For example, let's say you're using no external libraries and are doing it manually (ridiculous, I know), you would do this:\n>>> string = \"{ \\\"Dimensions\\\" : \\\" 12.0\\\\\\\" x 9.6\\\\\\\" \\\" }\"\n>>> print string\n{ \"Dimensions\" : \" 12.0\\\" x 9.6\\\" \" }\n\nObviously this is kind of silly. If you are using the standard python json module, try this:\nfrom json import JSONEncoder\nencoder = JSONEncoder()\nstring = encoder.encode({ \"Dimensions\":\" 12.0\\\" x 9.6\\\" \" })\n\n>>> print string\n{\"Dimensions\": \" 12.0\\\" x 9.6\\\" \"}\n\nwhich is the desired result. \n",
"Python has two symbols you can use to specify string literals, the single quote and the double quote. \nFor example:\n my_string = \"I'm home!\"\nOr, more relevant to you, \n>>> string = '{ \"Dimensions\" : \" 12.0\\\\\\\" x 9.6\\\\\\\" \" }'\n>>> print string\n{ \"Dimensions\" : \" 12.0\\\" x 9.6\\\" \" }\n\nYou can also prefix the string with 'r' to specify it is a raw string, so backslash escaped sequences are not processed, making it cleaner.\n>>> string = r'{ \"Dimensions\" : \" 12.0\\\" x 9.6\\\" \" }'\n>>> print string\n{ \"Dimensions\" : \" 12.0\\\" x 9.6\\\" \" }\n\n",
"JSON.stringify, if using Javascript, will escape it for you.\nIf not, you can escape them like \\\" (put a \\ in front)\nEdit: in Python, try re.escape() or just replace all \" with \\\":\n\"json string\".replace(\"\\\"\",\"\\\\\\\"\");\n\n"
] |
[
7,
3,
1
] |
[] |
[] |
[
"json",
"python"
] |
stackoverflow_0000763654_json_python.txt
|
Q:
Standard Django way for letting users edit rich content
I have a Django website in which I want site administrators to be able to edit rich content.
Suppose we're talking about an organizational info page, which might include some pictures, and some links, where the page is not as structured as a news page (which updates with news pieces every few days), but still needs the ability to be easily edited by site admins who do not necessarily want to mess with HTML (or rather, I do not want them to).
So where do I put this dynamic content? In the database? In which format? How do I make it accessible in the Django default admin?
A:
Use one of the existing rich-text editors
The lightest weight would be to use something at the js level like DojoEditor:
http://code.djangoproject.com/wiki/AddDojoEditor
See also this thread:
Replace textarea with rich text editor in Django Admin?
A:
For what you're describing I'd use flatpages, which is a django app that lets users create and edit pages in the admin panel.
As for formatting, I'd use TinyMCE. Integrating it is pretty easy, here is a walkthrough (do steps 1 and 2 and jump to the bottom, "Using TinyMCE with flatpages (newforms)")
|
Standard Django way for letting users edit rich content
|
I have a Django website in which I want site administrators to be able to edit rich content.
Suppose we're talking about an organizational info page, which might include some pictures, and some links, where the page is not as structured as a news page (which updates with news pieces every few days), but still needs the ability to be easily edited by site admins who do not necessarily want to mess with HTML (or rather, I do not want them to).
So where do I put this dynamic content? In the database? In which format? How do I make it accessible in the Django default admin?
|
[
"Use one of the existing rich-text editors\nThe lightest weight would be to use something at the js level like DojoEditor: \nhttp://code.djangoproject.com/wiki/AddDojoEditor\nSee also this thread: \nReplace textarea with rich text editor in Django Admin?\n",
"For what you're describing I'd use flatpages, which is a django app that lets users create and edit pages in the admin panel.\nAs for formatting, I'd use TinyMCE. Integrating it is pretty easy, here is a walkthrough (do steps 1 and 2 and jump to the bottom, \"Using TinyMCE with flatpages (newforms)\")\n"
] |
[
4,
1
] |
[] |
[] |
[
"django",
"django_admin",
"django_models",
"python"
] |
stackoverflow_0000765066_django_django_admin_django_models_python.txt
|
Q:
Is this Python code thread safe?
import time
import threading
class test(threading.Thread):
def __init__ (self):
threading.Thread.__init__(self)
self.doSkip = False
self.count = 0
def run(self):
while self.count<9:
self.work()
def skip(self):
self.doSkip = True
def work(self):
self.count+=1
time.sleep(1)
if(self.doSkip):
print "skipped"
self.doSkip = False
return
print self.count
t = test()
t.start()
while t.count<9:
time.sleep(2)
t.skip()
A:
Thread-safe in which way? I don't see any part you might want to protect here.
skip may reset the doSkip at any time, so there's not much point in locking it. You don't have any resources that are accessed at the same time - so IMHO nothing can be corrupted / unsafe in this code.
The only part that might run differently depending on locking / counting is how many "skip"s do you expect on every call to .skip(). If you want to ensure that every skip results in a skipped call to .work(), you should change doSkip into a counter that is protected by a lock on both increment and compare/decrement. Currently one thread might turn doSkip on after the check, but before the doSkip reset. It doesn't matter in this example, but in some real situation (with more code) it might make a difference.
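A hedged sketch of that counter-plus-lock variant (class and method names are illustrative, not from the question's code):
import threading

class SkipCounter(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = 0

    def skip(self):
        self._lock.acquire()
        try:
            self._pending += 1
        finally:
            self._lock.release()

    def consume_skip(self):
        # Test and decrement atomically, closing the window between
        # the check and the reset that a plain boolean leaves open.
        self._lock.acquire()
        try:
            if self._pending:
                self._pending -= 1
                return True
            return False
        finally:
            self._lock.release()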
A:
Whenever the test of a mutex boolean ( e.g. if(self.doSkip) ) is separate from the set of the mutex boolean you will probably have threading problems.
The rule is that your thread will get swapped out at the most inconvenient time. That is, after the test and before the set. Moving them closer together reduces the window for screw-ups but does not eliminate them. You almost always need a specially created mechanism from the language or kernel to fully close that window.
The threading library has Semaphores that can be used to synchronize threads and/or create critical sections of code.
A:
Apparently there isn't any critical resource, so I'd say it's thread-safe.
But as usual you can't predict in which order the two threads will be blocked/run by the scheduler.
A:
This is and will thread safe as long as you don't share data between threads.
If an other thread needs to read/write data to your thread class, then this won't be thread safe unless you protect data with some synchronization mechanism (like locks).
A:
To elaborate on DanM's answer, conceivably this could happen:
Thread 1: t.skip()
Thread 2: if self.doSkip: print 'skipped'
Thread 1: t.skip()
Thread 2: self.doSkip = False
etc.
In other words, while you might expect to see one "skipped" for every call to t.skip(), this sequence of events would violate that.
However, because of your sleep() calls, I think this sequence of events is actually impossible.
(unless your computer is running really slowly)
|
Is this Python code thread safe?
|
import time
import threading
class test(threading.Thread):
def __init__ (self):
threading.Thread.__init__(self)
self.doSkip = False
self.count = 0
def run(self):
while self.count<9:
self.work()
def skip(self):
self.doSkip = True
def work(self):
self.count+=1
time.sleep(1)
if(self.doSkip):
print "skipped"
self.doSkip = False
return
print self.count
t = test()
t.start()
while t.count<9:
time.sleep(2)
t.skip()
|
[
"Thread-safe in which way? I don't see any part you might want to protect here.\nskip may reset the doSkip at any time, so there's not much point in locking it. You don't have any resources that are accessed at the same time - so IMHO nothing can be corrupted / unsafe in this code.\nThe only part that might run differently depending on locking / counting is how many \"skip\"s do you expect on every call to .skip(). If you want to ensure that every skip results in a skipped call to .work(), you should change doSkip into a counter that is protected by a lock on both increment and compare/decrement. Currently one thread might turn doSkip on after the check, but before the doSkip reset. It doesn't matter in this example, but in some real situation (with more code) it might make a difference.\n",
"Whenever the test of a mutex boolean ( e.g. if(self.doSkip) ) is separate from the set of the mutex boolean you will probably have threading problems.\nThe rule is that your thread will get swapped out at the most inconvenient time. That is, after the test and before the set. Moving them closer together reduces the window for screw-ups but does not eliminate them. You almost always need a specially created mechanism from the language or kernel to fully close that window.\nThe threading library has Semaphores that can be used to synchronize threads and/or create critical sections of code.\n",
"Apparently there isn't any critical resource, so I'd say it's thread-safe.\nBut as usual you can't predict in which order the two threads will be blocked/run by the scheduler.\n",
"This is and will thread safe as long as you don't share data between threads.\nIf an other thread needs to read/write data to your thread class, then this won't be thread safe unless you protect data with some synchronization mechanism (like locks).\n",
"To elaborate on DanM's answer, conceivably this could happen:\n\nThread 1: t.skip()\nThread 2: if self.doSkip: print 'skipped'\nThread 1: t.skip()\nThread 2: self.doSkip = False\netc.\n\nIn other words, while you might expect to see one \"skipped\" for every call to t.skip(), this sequence of events would violate that.\nHowever, because of your sleep() calls, I think this sequence of events is actually impossible.\n(unless your computer is running really slowly)\n"
] |
[
2,
1,
0,
0,
0
] |
[] |
[] |
[
"multithreading",
"python"
] |
stackoverflow_0000762448_multithreading_python.txt
|
Q:
mod_wsgi/python sys.path.extend problems
I'm working on a mod_wsgi script.. at the beginning is:
sys.path.extend(map(os.path.abspath, ['/media/server/www/webroot/']))
But I've noticed, that every time I update the script the sys.path var keeps growing with duplicates of this extension:
['/usr/lib64/python25.zip'
'/usr/lib64/python2.5'
'/usr/lib64/python2.5/plat-linux2'
'/usr/lib64/python2.5/lib-tk'
'/usr/lib64/python2.5/lib-dynload'
'/usr/lib64/python2.5/site-packages'
'/usr/lib64/python2.5/site-packages/Numeric'
'/usr/lib64/python2.5/site-packages/gtk-2.0'
'/usr/lib64/python2.5/site-packages/scim-0.1'
'/usr/lib/python2.5/site-packages'
'/media/server/www/webroot'
'/media/server/www/webroot'
'/media/server/www/webroot'
'/media/server/www/webroot']
It resets every time I restart apache.. is there any way to make sure this doesn't happen? I want the module path to be loaded only once..
A:
No need to worry about checking or using abspath yourself. Use the ‘site’ module's built-in addsitedir function. It will take care of these issues and others (eg. pth files) automatically:
import site
site.addsitedir('/media/server/www/webroot/')
(This function is only documented in Python 2.6, but it has pretty much always existed.)
A:
One fairly simple way to do this is to check to see if the path has already been extended before extending it::
path_extension = map(os.path.abspath,['/media/server/www/webroot/'])
if path_extension[0] not in sys.path:
sys.path.extend(path_extension)
This has the disadvantage, however, of always scanning through most of sys.path when checking to see if it's been extended. A faster, though more complex, version is below::
path_extension = map(os.path.abspath,['/media/server/www/webroot/'])
if path_extension[-1] not in reversed(sys.path):
sys.path.extend(path_extension)
A better solution, however, is probably to either add the path extensions to your PYTHONPATH environment variable or put a .pth file into your site-packages directory:
http://docs.python.org/install/index.html
A:
The mod_wsgi documentation on code reloading covers this.
|
mod_wsgi/python sys.path.extend problems
|
I'm working on a mod_wsgi script.. at the beginning is:
sys.path.extend(map(os.path.abspath, ['/media/server/www/webroot/']))
But I've noticed, that every time I update the script the sys.path var keeps growing with duplicates of this extension:
['/usr/lib64/python25.zip'
'/usr/lib64/python2.5'
'/usr/lib64/python2.5/plat-linux2'
'/usr/lib64/python2.5/lib-tk'
'/usr/lib64/python2.5/lib-dynload'
'/usr/lib64/python2.5/site-packages'
'/usr/lib64/python2.5/site-packages/Numeric'
'/usr/lib64/python2.5/site-packages/gtk-2.0'
'/usr/lib64/python2.5/site-packages/scim-0.1'
'/usr/lib/python2.5/site-packages'
'/media/server/www/webroot'
'/media/server/www/webroot'
'/media/server/www/webroot'
'/media/server/www/webroot']
It resets every time I restart apache.. is there any way to make sure this doesn't happen? I want the module path to be loaded only once..
|
[
"No need to worry about checking or using abspath yourself. Use the ‘site’ module's built-in addsitedir function. It will take care of these issues and others (eg. pth files) automatically:\nimport site\nsite.addsitedir('/media/server/www/webroot/')\n\n(This function is only documented in Python 2.6, but it has pretty much always existed.)\n",
"One fairly simple way to do this is to check to see if the path has already been extended before extending it::\npath_extension = map(os.path.abspath,['/media/server/www/webroot/']) \nif path_extension[0] not in sys.path:\n sys.path.extend(path_extension)\n\nThis has the disadvantage, however, of always scanning through most of sys.path when checking to see if it's been extended. A faster, though more complex, version is below::\npath_extension = map(os.path.abspath,['/media/server/www/webroot/']) \nif path_extension[-1] not in reversed(sys.path):\n sys.path.extend(path_extension)\n\nA better solution, however, is probably to either add the path extensions to your PYTHONPATH environment variable or put a .pth file into your site-packages directory:\nhttp://docs.python.org/install/index.html\n",
"The mod_wsgi documentation on code reloading covers this.\n"
] |
[
7,
3,
2
] |
[] |
[] |
[
"apache",
"mod_wsgi",
"python"
] |
stackoverflow_0000764081_apache_mod_wsgi_python.txt
|
Q:
Python Collections.DefaultDict Sort + Output Top X Custom Class Object
Problem: I need to output the TOP X Contributors determined by the amount of messages posted.
Data: I have a collection of the messages posted. This is not a Database/SQL question; the sample query below just gives an overview of the code.
tweetsSQL = db.GqlQuery("SELECT * FROM TweetModel ORDER BY date_created DESC")
My Model:
class TweetModel(db.Model):
# Model Definition
# Tweet Message ID is the Key Name
to_user_id = db.IntegerProperty()
to_user = db.StringProperty(multiline=False)
message = db.StringProperty(multiline=False)
date_created = db.DateTimeProperty(auto_now_add=False)
user = db.ReferenceProperty(UserModel, collection_name = 'tweets')
From examples on SO, I was able to find the TOP X Contributors by doing this:
visits = defaultdict(int)
for t in tweetsSQL:
visits[t.user.from_user] += 1
Now I can then sort it using:
c = sorted(visits.iteritems(), key=operator.itemgetter(1), reverse=True)
But the only way now to retrieve the original objects is to loop through c, find the KeyName and then look in tweetsSQL for it to obtain the TweetModel object.
Is there a better way?
*** Sorry I should have added that Count(*) is not available due to using google app engine
[EDIT 2]
In Summary, given a List of Messages, how do I order them by User's message Count.
IN SQL, it would be:
SELECT * FROM TweetModel GROUP BY Users ORDER BY Count(*)
But I cannot do it in SQL and need to duplicate this functionality in code. My starting point is "SELECT * FROM TweetModel"
A:
Use heapq.nlargest() instead of sorted(), for efficiency; it's what it's for. I don't know the answer about the DB part of your question.
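For example, a sketch of how it could replace the sorted() call from the question, assuming the visits dict built earlier:
import heapq
import operator

# Top 10 (user, count) pairs without fully sorting the dictionary.
c = heapq.nlargest(10, visits.iteritems(), key=operator.itemgetter(1))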
A:
I think your job would be a lot easier if you change the SQL query to something like:
SELECT top 100 userId FROM TweetModel GROUP BY userId ORDER BY count(*)
I wouldn't bother with the TweetModel class if you only need the data to solve the stated problem.
A:
Why not invert the dictionary, once you have constructed it, so that the keys are the message counts and the values are the users? Then you can sort the keys and easily get to the users.
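A sketch of that inversion (illustrative code; note that users with equal counts must share a list, since dict keys are unique):
from collections import defaultdict

by_count = defaultdict(list)
for user, count in visits.iteritems():
    by_count[count].append(user)

for count in sorted(by_count, reverse=True):
    print count, by_count[count]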
|
Python Collections.DefaultDict Sort + Output Top X Custom Class Object
|
Problem: I need to output the TOP X Contributors determined by the amount of messages posted.
Data: I have a collection of the messages posted. This is not a Database/SQL question; the sample query below just gives an overview of the code.
tweetsSQL = db.GqlQuery("SELECT * FROM TweetModel ORDER BY date_created DESC")
My Model:
class TweetModel(db.Model):
# Model Definition
# Tweet Message ID is the Key Name
to_user_id = db.IntegerProperty()
to_user = db.StringProperty(multiline=False)
message = db.StringProperty(multiline=False)
date_created = db.DateTimeProperty(auto_now_add=False)
user = db.ReferenceProperty(UserModel, collection_name = 'tweets')
From examples on SO, I was able to find the TOP X Contributors by doing this:
visits = defaultdict(int)
for t in tweetsSQL:
visits[t.user.from_user] += 1
Now I can then sort it using:
c = sorted(visits.iteritems(), key=operator.itemgetter(1), reverse=True)
But the only way now to retrieve the original objects is to loop through c, find the KeyName and then look in tweetsSQL for it to obtain the TweetModel object.
Is there a better way?
*** Sorry I should have added that Count(*) is not available due to using google app engine
[EDIT 2]
In Summary, given a List of Messages, how do I order them by User's message Count.
IN SQL, it would be:
SELECT * FROM TweetModel GROUP BY Users ORDER BY Count(*)
But I cannot do it in SQL and need to duplicate this functionality in code. My starting point is "SELECT * FROM TweetModel"
|
[
"Use heapq.nlargest() instead of sorted(), for efficiency; it's what it's for. I don't know the answer about the DB part of your question.\n",
"I think your job would be a lot easier if you change the SQL query to something like: \nSELECT top 100 userId FROM TweetModel GROUP BY userId ORDER BY count(*)\n\nI wouldn't bother with the TweetModel class if you only need the data to solve the stated problem. \n",
"Why not invert the dictionary, once you have constructed it, so that the keys are the message counts and the values are the users? Then you can sort the keys and easily get to the users.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000766546_python.txt
|
Q:
Attributes not available when overriding __init__?
I'm trying to override an __init__ method, but when I call the super method the attributes created in that method are not available.
I can see that it's not an inheritance problem since class B still has the attributes available.
I think the code sample will explain it better :-)
Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class A(object):
... def __init__(self, *args, **kwargs):
... self.args = args
... self.kwargs = kwargs
...
>>> a = A('a', 'b', key='value')
>>> print a.args, a.kwargs
('a', 'b') {'key': 'value'}
>>> class B(A):
... pass
...
>>> b = B('b', 'c', key_b='value_b')
>>> print b.args, b.kwargs
('b', 'c') {'key_b': 'value_b'}
>>> class C(A):
... def __init__(self, *args, **kwargs):
... print 'class C'
... super(A, self).__init__(*args, **kwargs)
...
>>> c = C('c', 'd', key_c='value_C')
class C
>>> print c.args, c.kwargs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'C' object has no attribute 'args'
>>> class D(A):
... def __init__(self, *args, **kwargs):
... super(A, self).__init__(*args, **kwargs)
... print 'D'
...
>>> d = D('d', 'e', key_d='value D')
D
>>> print d.args, d.kwargs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'D' object has no attribute 'args'
>>>
A:
Your call to the superclass needs to use its own type
super(D, self).__init__(*args,**kwargs)
rather than
super(A...
I believe calling super(A, self).__init__ will call the superclass of A, which is object. Rather, you want to call the superclass of D, which is A.
A:
You're using super() incorrectly. In your "C" class the second line of the init() method should pass C as the first argument like so...
super(C, self).__init__(*args, **kwargs)
And really you shouldn't even need to use super here. You could just call
A.__init__(self, *args, **kwargs)
|
Attributes not available when overriding __init__?
|
I'm trying to override an __init__ method, but when I call the super method the attributes created in that method are not available.
I can see that it's not an inheritance problem since class B still has the attributes available.
I think the code sample will explain it better :-)
Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class A(object):
... def __init__(self, *args, **kwargs):
... self.args = args
... self.kwargs = kwargs
...
>>> a = A('a', 'b', key='value')
>>> print a.args, a.kwargs
('a', 'b') {'key': 'value'}
>>> class B(A):
... pass
...
>>> b = B('b', 'c', key_b='value_b')
>>> print b.args, b.kwargs
('b', 'c') {'key_b': 'value_b'}
>>> class C(A):
... def __init__(self, *args, **kwargs):
... print 'class C'
... super(A, self).__init__(*args, **kwargs)
...
>>> c = C('c', 'd', key_c='value_C')
class C
>>> print c.args, c.kwargs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'C' object has no attribute 'args'
>>> class D(A):
... def __init__(self, *args, **kwargs):
... super(A, self).__init__(*args, **kwargs)
... print 'D'
...
>>> d = D('d', 'e', key_d='value D')
D
>>> print d.args, d.kwargs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'D' object has no attribute 'args'
>>>
|
[
"Your call to the superclass needs to use its own type\nsuper(D, self).__init__(*args,**kwargs)\n\nrather than \nsuper(A...\n\nI believe calling super(A, self).__init__ will call the superclass of A, which is object. Rather, you want to call the superclass of D, which is A.\n",
"You're using super() incorrectly. In your \"C\" class the second line of the init() method should pass C as the first argument like so...\nsuper(C, self).__init__(*args, **kwargs)\n\nAnd really you shouldn't even need to use super here. You could just call \nA.__init__(self, *args, **kwargs)\n\n"
] |
[
5,
3
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000767241_python.txt
|
Q:
Some internals of Django auth middleware
In the django.contrib.auth middleware
I see the code:
class AuthenticationMiddleware(object):
def process_request(self, request):
assert hasattr(request, 'session'), "requires session middleware"
request.__class__.user = LazyUser()
return None
Please advise me why such a form
request.__class__.user = LazyUser()
used?
Why not just
request.user = LazyUser()
?
I know what the __class__ attribute means, but as I understand it, direct assignment to an instance variable would be better. Where am I wrong?
A:
LazyUser is descriptor-class. According to documentation it can be only class attribute not instance one:
For instance, a.x has a lookup chain starting with a.__dict__['x'], then type(a).__dict__['x'], and continuing through the base classes of type(a) excluding metaclasses.
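A toy illustration of that rule (a minimal descriptor of my own, not Django's actual LazyUser):
class Lazy(object):
    def __get__(self, instance, owner):
        return 'computed on attribute access'

class Request(object):
    pass

Request.user = Lazy()   # class attribute: descriptor protocol applies
print Request().user    # -> computed on attribute access

r = Request()
r.other = Lazy()        # instance attribute: __get__ is never consulted
print r.other           # -> <__main__.Lazy object at 0x...>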
|
Some internals of Django auth middleware
|
In the django.contrib.auth middleware
I see the code:
class AuthenticationMiddleware(object):
def process_request(self, request):
assert hasattr(request, 'session'), "requires session middleware"
request.__class__.user = LazyUser()
return None
Please advise me why such a form
request.__class__.user = LazyUser()
used?
Why not just
request.user = LazyUser()
?
I know what the __class__ attribute means, but as I understand it, direct assignment to an instance variable would be better. Where am I wrong?
|
[
"LazyUser is descriptor-class. According to documentation it can be only class attribute not instance one:\n\nFor instance, a.x has a lookup chain starting with a.__dict__['x'], then type(a).__dict__['x'], and continuing through the base classes of type(a) excluding metaclasses.\n\n"
] |
[
9
] |
[
"This is going to affect how requests are created. All such instances will have their user attribute as that particular LazuUser without the need to make that change after each individual request is instantiated.\n"
] |
[
-1
] |
[
"django",
"python"
] |
stackoverflow_0000766733_django_python.txt
|
Q:
Current working directory no longer inherited from calling process from python 2.5 onwards?
I updated my python version on windows 2003 server from 2.4 to 2.5.
In 2.4 I could import a file "sub1.py" from a subdirectory c:\application\subdir\ like this:
import sub1
as long as the calling script main.py that lives in c:\application was started like this:
c:\application\subdir>python ..\main.py
But in 2.5 it no longer works for me:
C:\application\subdir>python ..\main.py
Traceback (most recent call last):
File "main.py", line 3, in <module>
import sub1
ImportError: No module named sub1
Now I can put an empty file
__init__.py
into subdir and import like this:
import subdir.sub1 as sub1
Was there a change in python 2.5? This would mean the current working directory in python 2.4 was inherited from the calling process, and in python 2.5 it is set to where the main script lives.
[edit3]
I corrected the question now. I must apologize that I had over-simplified the example at first and removed the cause of the error without checking my simplified example.
[/edit3]
A:
to import sub.py you need to:
import sub # not sub1
A:
You can check where python searches for modules. A list of locations is contained in variable sys.path.
You can create a simple script (or execute it interactively) that shows this:
import sys
for x in sys.path:
print x
By default, python will search the directory in which it is being executed and where the original script resides.
Also, try setting the PYTHONPATH environment variable to include ".\" directory.
A:
I assume sub1 is a typo? In your question you sometimes refer to sub, sometimes to sub1.
I would first of all check that the sub.py file exists in c:\application.
Check the permissions of the sub.py file and the application directory. Can the user read the sub.py file? Can the python interpreter create the *.pyc file?
Also, manually delete the sub.pyc file, just in case an old version of the pyc is causing the problem.
A:
You need to make the following changes:
turn subdir into a package by adding an empty __init__.py file to the directory
change the import to: from subdir import sub1
A:
add this directory to your python path
|
Current working directory no longer inherited from calling process from python 2.5 onwards?
|
I updated my python version on windows 2003 server from 2.4 to 2.5.
In 2.4 I could import a file "sub1.py" from a subdirectory c:\application\subdir\ like this:
import sub1
as long as the calling script main.py that lives in c:\application was started like this:
c:\application\subdir>python ..\main.py
But in 2.5 it no longer works for me:
C:\application\subdir>python ..\main.py
Traceback (most recent call last):
File "main.py", line 3, in <module>
import sub1
ImportError: No module named sub1
Now I can put an empty file
__init__.py
into subdir and import like this:
import subdir.sub1 as sub1
Was there a change in python 2.5? This would mean the current working directory in python 2.4 was inherited from the calling process, and in python 2.5 it is set to where the main script lives.
[edit3]
I corrected the question now. I must apologize that I had over-simplified the example at first and removed the cause of the error without checking my simplified example.
[/edit3]
|
[
"to import sub.py you need to:\nimport sub # not sub1\n\n",
"You can check where python searches for modules. A list of locations is contained in variable sys.path.\nYou can create a simple script (or execute it interactively) that shows this:\nimport sys\n\nfor x in sys.path:\n print x\n\nBy default, python will search the directory in which it is being executed and where the original script resides.\nAlso, try setting the PYTHONPATH environment variable to include \".\\\" directory.\n",
"I assume sub1 is a typo? In your question you sometimes refer to sub, sometimes to sub1.\nI would first of all check that the sub.py file exists in c:\\application.\n\nCheck the permissions of the sub.py file and the application directory. Can the user read the sub.py file? Can the python interpreter create the *.pyc file?\nAlso, manually delete the sub.pyc file, just in case an old version of the pyc is causing the problem.\n\n",
"You need to make the following changes:\n\nturn subdir into a package by adding an empty __init__.py file to the directory\nchange the import to: from subdir import sub1\n\n",
"add this directory to your python path\n"
] |
[
4,
2,
1,
1,
0
] |
[] |
[] |
[
"import",
"python"
] |
stackoverflow_0000767531_import_python.txt
|
Q:
Estimating zip size/creation time
I need to create ZIP archives on demand, using either Python zipfile module or unix command line utilities.
Resources to be zipped are often > 1GB and not necessarily compression-friendly.
How do I efficiently estimate its creation time / size?
A:
Extract a bunch of small parts from the big file. Maybe 64 chunks of 64k each. Randomly selected.
Concatenate the data, compress it, measure the time and the compression ratio. Since you've randomly selected parts of the file chances are that you have compressed a representative subset of the data.
Now all you have to do is to estimate the time for the whole file based on the time of your test-data.
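A hedged sketch of this sampling approach, assuming zlib's speed and ratio are representative of what the zip tool will achieve on the same data (the function name and constants are illustrative):
import os
import random
import time
import zlib

def estimate_zip(path, chunks=64, chunk_size=64 * 1024):
    # Assumes a non-empty file; reads `chunks` random slices of it.
    size = os.path.getsize(path)
    f = open(path, 'rb')
    parts = []
    for _ in xrange(chunks):
        f.seek(random.randrange(max(1, size - chunk_size)))
        parts.append(f.read(chunk_size))
    f.close()
    sample = ''.join(parts)
    start = time.time()
    compressed = zlib.compress(sample)
    elapsed = time.time() - start
    ratio = float(len(compressed)) / len(sample)
    # Scale both measurements from the sample up to the whole file.
    return size * ratio, elapsed * (float(size) / len(sample))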
A:
I suggest you measure the average time it takes to produce a zip of a certain size. Then you calculate the estimate from that measure. However I think the estimate will be very rough in any case if you don't know how well the data compresses. If the data you want to compress had a very similar "profile" each time you could probably make better predictions.
A:
If it's possible to get progress callbacks from the python module I would suggest finding out how many bytes are processed per second (by simply storing where in the file you were at the start of the second, and where you are at the end). Once you have data on how fast your computer is, you can of course save it and use it as a basis for your next zip file. (I normally collect about 5 samples before showing a time prognosis.)
Using this method can give you Microsoft minutes, so as you get more samples you would need to average it out. This would especially be the case if you're making a zip file that contains a lot of files, as ZIP tends to slow down when compressing many small files compared to 1 large file.
A:
If you're using the ZipFile.write() method to write your files into the archive, you could do the following:
Get a list of the files you want to zip and their relative sizes
Write one file to the archive and time how long it took
Calculate ETA based on the number of files written, their size, and how much is left.
This won't work if you're only zipping one really big file though. I've never used the zip module myself, so I'm not sure if it would work, but for small numbers of large files, maybe you could use the ZipFile.writestr() function and read in / zip up your files in chunks?
|
Estimating zip size/creation time
|
I need to create ZIP archives on demand, using either Python zipfile module or unix command line utilities.
Resources to be zipped are often > 1GB and not necessarily compression-friendly.
How do I efficiently estimate its creation time / size?
|
[
"Extract a bunch of small parts from the big file. Maybe 64 chunks of 64k each. Randomly selected.\nConcatenate the data, compress it, measure the time and the compression ratio. Since you've randomly selected parts of the file chances are that you have compressed a representative subset of the data.\nNow all you have to do is to estimate the time for the whole file based on the time of your test-data.\n",
"I suggest you measure the average time it takes to produce a zip of a certain size. Then you calculate the estimate from that measure. However I think the estimate will be very rough in any case if you don't know how well the data compresses. If the data you want to compress had a very similar \"profile\" each time you could probably make better predictions.\n",
"If its possible to get progress callbacks from the python module i would suggest finding out how many bytes are processed pr second ( By simply storing where in the file you where at start of the second, and where you are at the end ). When you have the data on how fast the computer your on you can off course save it, and use it as a basis for your next zip file. ( I normally collect about 5 samples before showing a time prognosses )\nUsing this method can give you Microsoft minutes so as you get more samples you would need to average it out. This would esp be the case if your making a zip file that contains a lot of files, as ZIP tends to slow down when compressing many small files compared to 1 large file.\n",
"If you're using the ZipFile.write() method to write your files into the archive, you could do the following:\n\nGet a list of the files you want to zip and their relative sizes\nWrite one file to the archive and time how long it took\nCalculate ETA based on the number of files written, their size, and how much is left.\n\nThis won't work if you're only zipping one really big file though. I've never used the zip module myself, so I'm not sure if it would work, but for small numbers of large files, maybe you could use the ZipFile.writestr() function and read in / zip up your files in chunks?\n"
] |
[
16,
3,
1,
0
] |
[] |
[] |
[
"python",
"time_estimation",
"zip"
] |
stackoverflow_0000767684_python_time_estimation_zip.txt
|
Q:
how to send mail in python ssmtp vs smtplib
I need to send email on Debian Linux. How do I send it? I run my server on a 256 MB Linux box and I have heard that postfix and sendmail are overkill.
Recently I came across ssmtp, which seems to be an executable that needs to be run as a process and called through Python using the os module.
Alternatively, Python already provides smtplib, which is working fine for me.
What is the advantage of using ssmtp over python's smtplib?
A:
In a Python program, there is no advantage.
The only purpose of ssmtp is to wrap the SMTP protocol in the sendmail API. That is, it provides a program /usr/sbin/sendmail that accepts the same options, arguments, and inputs as the full-blown sendmail (though most of the options do nothing); but behind the scenes, instead of processing the email itself, it sends the message to an SMTP server. This is for systems that need to have a sendmail program present, perhaps because they don't understand SMTP - for example, I think older versions of PHP had this requirement, and even in recent versions it might still be easier to configure PHP to use the so-called sendmail interface (i.e. the program sendmail) than to use SMTP directly. (I haven't used PHP in a little while, I'm not sure about the current status)
However, in Python the situation is reversed: you have a builtin library that makes it easy to use SMTP directly, whereas using sendmail requires you to invoke the subprocess module which is somewhat clunky and also very dependent on things that are not part of Python. So basically there is no reason not to use smtplib.
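For reference, the direct smtplib route the question mentions amounts to only a few lines (the server name and addresses below are placeholders):
import smtplib
from email.mime.text import MIMEText

msg = MIMEText('message body')
msg['Subject'] = 'hello'
msg['From'] = 'me@example.com'
msg['To'] = 'you@example.com'

server = smtplib.SMTP('localhost')
server.sendmail('me@example.com', ['you@example.com'], msg.as_string())
server.quit()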
A:
Additionally, postfix is very easy to install in "satellite" mode, where all it does is queue and deliver email for you. Way easier than implementing your own email queue. Most decent package management systems will let you configure it this way.
A:
There are other lightweight SMTP senders, such as msmtp, the one I prefer.
But Postfix is fine for a 256 Mb machine. The good thing about a full MTA like Postfix is that it keeps the message and retries if the destination server is down. With smtplib and the server on a remote machine, you program now depends on the network...
|
how to send mail in python ssmtp vs smtplib
|
I need to send email on Debian Linux. How do I send it? I run my server on a 256 MB Linux box and I have heard that postfix and sendmail are overkill.
Recently I came across ssmtp, which seems to be an executable that needs to be run as a process and called through Python using the os module.
Alternatively, Python already provides smtplib, which is working fine for me.
What is the advantage of using ssmtp over python's smtplib?
|
[
"In a Python program, there is no advantage.\nThe only purpose of ssmtp is to wrap the SMTP protocol in the sendmail API. That is, it provides a program /usr/sbin/sendmail that accepts the same options, arguments, and inputs as the full-blown sendmail (though most of the options do nothing); but behind the scenes, instead of processing the email itself, it sends the message to an SMTP server. This is for systems that need to have a sendmail program present, perhaps because they don't understand SMTP - for example, I think older versions of PHP had this requirement, and even in recent versions it might still be easier to configure PHP to use the so-called sendmail interface (i.e. the program sendmail) than to use SMTP directly. (I haven't used PHP in a little while, I'm not sure about the current status) \nHowever, in Python the situation is reversed: you have a builtin library that makes it easy to use SMTP directly, whereas using sendmail requires you to invoke the subprocess module which is somewhat clunky and also very dependent on things that are not part of Python. So basically there is no reason not to use smtplib.\n",
"Additionally, postfix is very easy to install in \"satellite\" mode, where all it does is queue and deliver email for you. Way easier than implementing your own email queue. Most decent package management systems will let you configure it this way.\n",
"There are other lightweight SMTP senders, such as msmtp, the one I prefer.\nBut Postfix is fine for a 256 Mb machine. The good thing about a full MTA like Postfix is that it keeps the message and retries if the destination server is down. With smtplib and the server on a remote machine, you program now depends on the network...\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"python",
"smtplib",
"ssmtp"
] |
stackoverflow_0000764778_python_smtplib_ssmtp.txt
|
Q:
Blocks of code in Python
Can you elaborate on the current state of "blocks" (in the Ruby sense) in Python?
What are the language constructs that exist in Python? How do they compare to other languages (like Ruby, Smalltalk, [insert more])? Or does Python lack such constructs?
I have so far understood the lambda thing; it is only one-line, but maybe it comes close. What about "decorators" and yield in this context?
I am also using old Python versions in some projects. Which constructs were introduced in which Python version (2.5, 2.6, etc.) or are planned in future versions?
Can you link interesting articles on the subject that explain this stuff for Python and also comparing to other languages and could be interesting for someone who wants to extend basic Python knowledge?
A:
Functions are the first-class members in Python:
def add(x, y):
return x + y
a = add # Bind
b = a(34, 1) # Call
So you can pass functions around all you want. You can do the same with any callable object in Python.
A:
lambda is the closest equivalent to a Ruby block, and the restriction to one line is intentional. It is typically argued that multiline anonymous functions (blocks, in Ruby) are usually less readable than defining the function somewhere with a name and passing that, as illustrated in SilentGhost's answer.
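A trivial side-by-side sketch of the two spellings (a made-up example, not from the thread):
items = [('a', 3), ('b', 1), ('c', 2)]

# One-line anonymous function -- the closest Python gets to a block:
by_count = sorted(items, key=lambda pair: pair[1])

# The named alternative usually argued to be more readable:
def second(pair):
    return pair[1]

by_count = sorted(items, key=second)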
A:
There are good discussions on comp.lang.python that compare to other languages:
Let's Talk About Lambda Functions!
code blocks in Python
A:
The def statement is equivalent to an assignment statement that simply binds the function object to a name.
That name can then be used to call the function object.
|
Blocks of code in Python
|
Can you elaborate on the current state of "blocks" (in the Ruby sense) in Python?
What are the language constructs that exist in Python? How do they compare to other languages (like Ruby, Smalltalk, [insert more])? Or does Python lack such constructs?
I have so far understood the lambda thing; it is only one-line, but maybe it comes close. What about "decorators" and yield in this context?
I am also using old Python versions in some projects. Which constructs were introduced in which Python version (2.5, 2.6, etc.) or are planned in future versions?
Can you link interesting articles on the subject that explain this stuff for Python and also comparing to other languages and could be interesting for someone who wants to extend basic Python knowledge?
|
[
"Functions are the first-class members in Python:\ndef add(x, y):\n return x + y\n\na = add # Bind\nb = a(34, 1) # Call\n\nSo you can pass functions around all you want. You can do the same with any callable object in Python.\n",
"lambda is the closest equivalent to a Ruby block, and the restriction to one line is intentional. It is typically argued that multiline anonymous functions (blocks, in Ruby) are usually less readable than defining the function somewhere with a name and passing that, as illustrated in SilentGhost's answer.\n",
"There are good discussions on comp.lang.python that compare to other languages:\n\nLet's Talk About Lambda Functions!\ncode blocks in Python\n\n",
"The def is equivalent of an assignment statement which only binds the function object to the object reference variable.\nThe object reference variable can then be used to call the function object to execute.\n"
] |
[
10,
3,
3,
0
] |
[] |
[] |
[
"lambda",
"python",
"ruby"
] |
stackoverflow_0000767519_lambda_python_ruby.txt
|
Q:
Markup-based GUI for python
I want to get myself into programming some serious GUI based applications, but when I look at things like Swing/SWT from Java, I can't help but HATE programming a GUI interface by creating "widget" objects and populating them and calling methods on them.
I think GUI design should be done in a separate text-based file in some markup format, which is read and rendered (e.g. HTML), so that the design of the interface is not tightly coupled with the rest of the code.
I've seen HTMLayout and I love the idea, but so far it seems to be available only in C++.
I'm looking for a python library (or even a WIP project) for doing markup-based gui.
UPDATE
The reason I can't accept Qt's XML is the same reason I hate the programmatic approach; you're assembling each widget separately, and specifying each of its properties on a separate line. It doesn't provide any advantage over doing it the programmatic way.
A:
You can try Mozilla's XUL. It supports Python via XPCOM.
See this project: pyxpcomext
XUL isn't compiled, it is packaged and loaded at runtime. Firefox and many other great applications use it, but most of them use Javascript for scripting instead of Python. There are one or 2 using Python though.
A:
You should look into Qt, which you can use from Python using the excellent PyQt interface (why they didn't name it QtPy --- cutiepie, get it ? --- I will never understand).
With Qt, you can have the choice of constructing your GUI's programmatically (which you don't want), or using XML markup. This XML file can either be compiled to code beforehand, or loaded with a short command. The latter is the normal way to work using PyQt.
Qt is versatile, high-quality, cross-platform, and you're likely using it already without knowing it. The official Skype client application is written in Qt if I remember correctly.
Edit: Just adding some links so the OP get get some feel for it ...
Short intro to Qt programming with C++ -- notice the use of the Qt Designer, which outputs .ui files, which is an XML format which I remember was also quite easy to work with by hand. PyQt programming is very similar, except for being in a much easier programming language of course :-)
Info about PyQt on the Python wiki
Online book on PyQt programming
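A minimal sketch of the runtime-loading workflow mentioned above, assuming a mainwindow.ui file produced by Qt Designer or written by hand (the file name is hypothetical):
import sys
from PyQt4 import QtGui, uic

app = QtGui.QApplication(sys.argv)
# The interface lives entirely in the XML .ui file; no generated
# widget-assembly code is needed in the Python source.
window = uic.loadUi('mainwindow.ui')
window.show()
sys.exit(app.exec_())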
A:
How about wxPython? I'm just now beginning to work with it, but there's a tool -- XRC Resource Editor -- that allows you to assemble your GUI, which is then written to an XML file. As I understand it, your Python application loads the XML file, rather than having a whole bunch of GUI-layout code mixed in with your Python code.
A:
If you choose a language like Tcl or Python and Tk for your application development it becomes fairly trivial to write your own DSL for describing the interface. You can, for instance, write a DSL that lets you create menus like this:
menubar {
File => {
Open => cmd.open
Save => cmd.save
Exit => cmd.exit
}
Edit => {
Cut => cmd.cut
Copy => cmd.copy
Paste => cmd.paste
}
}
... and your main GUI forms like this:
form PropertiesForm {
Font: [fontchooser]
Foreground: [foregroundChooser]
Background: [backgroundChooser]
}
form NewUserForm {
username [_____________________]
[] administrator
enable the following features:
() feature 1
() feature 2
() feature 3
}
notebook {
Properties => PropertiesForm
New User => NewUserForm
}
... and so on. Tcl really excels at letting you write DSLs like this. Note that this capability isn't built in to Tcl per se, but the language makes DSLs trivial. Some of this type of thing exists on the Tcler's wiki, for example there's code to create menus similar to what I described at Menus Made Easy.
I think, though, that after a while you'll find it really, really hard to make professional grade UIs in this manner.
A:
If you use GTK, you can use Glade, which is an XML file.
|
Markup-based GUI for python
|
I want to get myself into programming some serious GUI based applications, but when I look at things like Swing/SWT from Java, I can't help but HATE programming a GUI interface by creating "widget" objects and populating them and calling methods on them.
I think GUI design should be done in a separate text-based file in some markup format, which is read and rendered (e.g. HTML), so that the design of the interface is not tightly coupled with the rest of the code.
I've seen HTMLayout and I love the idea, but so far it seems to be available only in C++.
I'm looking for a python library (or even a WIP project) for doing markup-based gui.
UPDATE
The reason I can't accept Qt's XML is the same reason I hate the programmatic approach; you're assembling each widget separately, and specifying each of its properties on a separate line. It doesn't provide any advantage over doing it the programmatic way.
|
[
"You can try Mozilla's XUL. It supports Python via XPCOM.\nSee this project: pyxpcomext\nXUL isn't compiled, it is packaged and loaded at runtime. Firefox and many other great applications use it, but most of them use Javascript for scripting instead of Python. There are one or 2 using Python though.\n",
"You should look into Qt, which you can use from Python using the excellent PyQt interface (why they didn't name it QtPy --- cutiepie, get it ? --- I will never understand).\nWith Qt, you can have the choice of constructing your GUI's programmatically (which you don't want), or using XML markup. This XML file can either be compiled to code beforehand, or loaded with a short command. The latter is the normal way to work using PyQt.\nQt is versatile, high-quality, cross-platform, and you're likely using it already without knowing it. The official Skype client application is written in Qt if I remember correctly.\nEdit: Just adding some links so the OP get get some feel for it ...\n\nShort intro to Qt programming with C++ -- notice the use of the Qt Designer, which outputs .ui files, which is an XML format which I remember was also quite easy to work with by hand. PyQt programming is very similar, except for being in a much easier programming language of course :-)\nInfo about PyQt on the Python wiki\nOnline book on PyQt programming\n\n",
"How about wxPython? I'm just now beginning to work with it, but there's a tool -- XRC Resource Editor -- that allows you to assemble your GUI, which is then written to an XML file. As I understand it, your Python application loads the XML file, rather than having a whole bunch of GUI-layout code mixed in with your Python code.\n",
"If you choose a language like Tcl or Python and Tk for your application development it becomes fairly trivial to write your own DSL for describing the interface. You can, for instance, write a DSL that lets you create menus like this:\nmenubar {\n File => {\n Open => cmd.open\n Save => cmd.save\n Exit => cmd.exit\n }\n Edit => {\n Cut => cmd.cut\n Copy => cmd.copy\n Paste => cmd.paste\n }\n}\n\n... and your main GUI forms like this:\nform PropertiesForm {\n Font: [fontchooser]\n Foreground: [foregroundChooser]\n Background: [backgroundChooser]\n}\nform NewUserForm {\n username [_____________________]\n [] administrator\n enable the following features:\n () feature 1\n () feature 2\n () feature 3\n}\nnotebook {\n Properties => PropertiesForm\n New User => NewUserForm\n}\n\n... and so on. Tcl really excels at letting you write DSLs like this. Note that this capability isn't built in to Tcl per se, but the language makes DSLs trivial. Some of this type of thing exists on the Tcler's wiki, for example there's code to create menus similar to what I described at Menus Made Easy. \nI think, though, that after a while you'll find it really, really hard to make professional grade UIs in this manner.\n",
"If you use GTK, you can use Glade, which is an XML file. \n"
] |
[
7,
2,
2,
2,
1
] |
[
"It's XML, not Python, but look at Open Laszlo\n",
"windows?\nyou can use the WinForms editor in Visual Studio and then talk to the assembly from IronPython.\n"
] |
[
-1,
-2
] |
[
"markup",
"python",
"user_interface"
] |
stackoverflow_0000364327_markup_python_user_interface.txt
|
Q:
How do I prevent execution of arbitrary commands from a Django app making system calls?
I have a Django application I'm developing that must make a system call to an external program on the server. In creating the command for the system call, the application takes values from a form and uses them as parameters for the call. I suppose this means that one can essentially use bogus parameters and write arbitrary commands for the shell to execute (e.g., just place a semicolon and then rm -rf *).
This is bad. While most users aren't malicious, it is a potential security problem. How does one handle these potential points of exploit?
EDIT (for clarification): The users will see a form that is split up with various fields for each of the parameters and options. However some fields will be available as open text fields. All of these fields are combined and fed to subprocess.check_call(). Technically, though, this isn't separated too far from just handing the users a command prompt. This has got to be fairly common, so what do other developers do to sanitize input so that they don't get a Bobby Tables.
A:
Based on my understanding of the question, I'm assuming you aren't letting the users specify commands to run on the shell, but just arguments to those commands. In this case, you can avoid shell injection attacks by using the subprocess module and not using the shell (i.e., use the default shell=False parameter in the subprocess.Popen constructor).
Oh, and never use os.system() for any strings containing any input coming from a user.
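A hedged sketch of the difference (the command name and hostile input are placeholders):
import subprocess

user_value = 'photo.jpg; rm -rf *'   # hostile form input

# Dangerous: the whole string is handed to the shell, so the
# semicolon would start a second command.
#   subprocess.check_call('convert %s out.png' % user_value, shell=True)

# Safe: each list item reaches the program as one literal argument;
# no shell ever interprets the string.
subprocess.check_call(['convert', user_value, 'out.png'])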
A:
By never trusting users. Any data coming from the web browser should be considered tainted. And absolutely do not try to validate the data via JS or by limiting what can be entered in the FORM fields. You need to do the tests on the server before passing it to your external application.
Update after your edit: no matter how you present the form to users on your front-end the backend should treat it as though it came from a set of text boxes with big flashing text around them saying "insert whatever you want here!"
A:
To do this, you must do the following. If you don't know what "options" and "arguments" are, read the optparse background.
Each "Command" or "Request" is actually an instance of a model. Define your Request model with all of the parameters someone might provide.
For simple options, you must provide a field with a specific list of CHOICES. For options that are "on" or "off" (-x in the command-line) you should provide a CHOICE list with two human-understandable values ("Do X" and "Do not do X".)
For options with a value, you must provide a field that takes the option's value. You must write a Form with the validation for this field. We'll return to option value validation in a bit.
For arguments, you have a second Model (with an FK to the first). This may be as simple as a single FilePath field, or may be more complex. Again, you may have to provide a Form to validate instances of this Model, also.
Option validation varies by what kind of option it is. You must narrow the acceptable values to be narrowest possible set of characters and write a parser that is absolutely sure of passing only valid characters.
Your options will fall into the same categories as the option types in optparse -- string, int, long, choice, float and complex. Note that int, long, float and complex have validation rules already defined by Django's Models and Forms. Choice is a special kind of string, already supported by Django's Models and Forms.
What's left are "strings". Define the allowed strings. Write a regex for those strings. Validate using the regex. Most of the time, you can never accept quotes (", ' or `) in any form.
Final step. Your Model has a method which emits the command as a sequence of strings all ready for subprocess.Popen.
Edit
This is the backbone of our app. It's so common, we have a single Model with numerous Forms, each for a special batch command that gets run. The Model is pretty generic. The Forms are pretty specific ways to build the Model object. That's the way Django is designed to work, and it helps to fit with Django's well-thought-out design patterns.
Any field that is "available as open text fields" is a mistake. Each field that's "open" must have a regex to specify what is permitted. If you can't formalize a regex, you have to rethink what you're doing.
A field that cannot be constrained with a regex absolutely cannot be a command-line parameter. Period. It must be stored to a file or database column before being used.
Edit
Like this.
class MySubprocessCommandClass( models.Model ):
    myOption_1 = models.CharField( choices=OPTION_1_CHOICES, max_length=2 )
    myOption_2 = models.CharField( max_length=20 )
    # ... further validated option fields ...
    def theCommand( self ):
        return [ "theCommand", "-p", self.myOption_1, "-r", self.myOption_2 ]  # plus any further options
Your form is a ModelForm for this Model.
You don't have to save() the instances of the model. We save them so that we can create a log of precisely what was run.
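For illustration, the ModelForm can be as small as this (field-level checks like the regex validation above would hang off it):
from django import forms

class MySubprocessCommandForm(forms.ModelForm):
    class Meta:
        model = MySubprocessCommandClass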
A:
The answer is, don't let users type in shell commands! There is no excuse for allowing arbitrary shell commands to be executed.
Also, if you really must allow users to supply arguments to external commands, don't use a shell. In C, you could use execvp() to supply arguments directly to the command, but with django, I'm not sure how you would do this (but I'm sure there is a way). Of course, you should still do some argument sanitization, especially if the command has the potential to cause any harm.
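For what it's worth, the Python analogue of execvp() is an argument list handed to the subprocess module; a rough sketch (external_prog and its flag are hypothetical):
import subprocess

validated_value = "user-input-after-validation"  # assume already sanitized

# Each argument is its own list element, so nothing is shell-parsed,
# much like execvp() in C.
proc = subprocess.Popen(["external_prog", "-o", validated_value],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()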
A:
Depending on the range of your commands, you could customize the form so that parameters are entered in separate form fields, which you can then parse and validate for acceptable values more easily.
Also, beware of backticks and other shell specific stuff.
|
How do I prevent execution of arbitrary commands from a Django app making system calls?
|
I have a Django application I'm developing that must make a system call to an external program on the server. In creating the command for the system call, the application takes values from a form and uses them as parameters for the call. I suppose this means that one can essentially use bogus parameters and write arbitrary commands for the shell to execute (e.g., just place a semicolon and then rm -rf *).
This is bad. While most users aren't malicious, it is a potential security problem. How does one handle these potential points of exploit?
EDIT (for clarification): The users will see a form that is split up with various fields for each of the parameters and options. However, some fields will be available as open text fields. All of these fields are combined and fed to subprocess.check_call(). Technically, though, this isn't separated too far from just handing the users a command prompt. This has got to be fairly common, so what do other developers do to sanitize input so that they don't get a Bobby Tables?
|
[
"Based on my understanding of the question, I'm assuming you aren't letting the users specify commands to run on the shell, but just arguments to those commands. In this case, you can avoid shell injection attacks by using the subprocess module and not using the shell (i.e. specify use the default shell=False parameter in the subprocess.Popen constructor.\nOh, and never use os.system() for any strings containing any input coming from a user.\n",
"By never trusting users. Any data coming from the web browser should be considered tainted. And absolutely do not try to validate the data via JS or by limiting what can be entered in the FORM fields. You need to do the tests on the server before passing it to your external application.\nUpdate after your edit: no matter how you present the form to users on your front-end the backend should treat it as though it came from a set of text boxes with big flashing text around them saying \"insert whatever you want here!\" \n",
"To do this, you must do the following. If you don't know what \"options\" and \"arguments\" are, read the optparse background.\nEach \"Command\" or \"Request\" is actually an instance of a model. Define your Request model with all of the parameters someone might provide.\n\nFor simple options, you must provide a field with a specific list of CHOICES. For options that are \"on\" or \"off\" (-x in the command-line) you should provide a CHOICE list with two human-understandable values (\"Do X\" and \"Do not do X\".)\nFor options with a value, you must provide a field that takes the option's value. You must write a Form with the validation for this field. We'll return to option value validation in a bit.\nFor arguments, you have a second Model (with an FK to the first). This may be as simple as a single FilePath field, or may be more complex. Again, you may have to provide a Form to validate instances of this Model, also.\n\nOption validation varies by what kind of option it is. You must narrow the acceptable values to be narrowest possible set of characters and write a parser that is absolutely sure of passing only valid characters.\nYour options will fall into the same categories as the option types in optparse -- string, int, long, choice, float and complex. Note that int, long, float and complex have validation rules already defined by Django's Models and Forms. Choice is a special kind of string, already supported by Django's Models and Forms.\nWhat's left are \"strings\". Define the allowed strings. Write a regex for those strings. Validate using the regex. Most of the time, you can never accept quotes (\", ' or `) in any form. \nFinal step. Your Model has a method which emits the command as a sequence of strings all ready for subprocess.Popen.\n\nEdit\nThis is the backbone of our app. It's so common, we have a single Model with numerous Forms, each for a special batch command that gets run. The Model is pretty generic. The Forms are pretty specific ways to build the Model object. That's the way Django is designed to work, and it helps to fit with Django's well-thought-out design patterns.\nAny field that is \"available as open text fields\" is a mistake. Each field that's \"open\" must have a regex to specify what is permitted. If you can't formalize a regex, you have to rethink what you're doing.\nA field that cannot be constrained with a regex absolutely cannot be a command-line parameter. Period. It must be stored to a file to database column before being used.\n\nEdit\nLike this.\nclass MySubprocessCommandClass( models.Model ):\n myOption_1 = models.CharField( choice = OPTION_1_CHOICES, max_length=2 )\n myOption_2 = models.CharField( max_length=20 )\n etc.\n def theCommand( self ):\n return [ \"theCommand\", \"-p\", self.myOption_1, \"-r\", self.myOption_2, etc. ]\n\nYour form is a ModelForm for this Model.\nYou don't have to save() the instances of the model. We save them so that we can create a log of precisely what was run.\n",
"The answer is, don't let users type in shell commands! There is no excuse for allowing arbitrary shell commands to be executed.\nAlso, if you really must allow users to supply arguments to external commands, don't use a shell. In C, you could use execvp() to supply arguments directly to the command, but with django, I'm not sure how you would do this (but I'm sure there is a way). Of course, you should still do some argument sanitation, especially if the command has the potential to cause any harm.\n",
"Depending on the range of your commands, you could customize the form, so that parameters are entered in separate form-fields. Those you can parse for fitting values more easily.\nAlso, beware of backticks and other shell specific stuff.\n"
] |
[
11,
6,
4,
3,
0
] |
[] |
[] |
[
"django",
"python",
"security"
] |
stackoverflow_0000768677_django_python_security.txt
|
Q:
Overriding class member variables in Python (Django/Satchmo)
I'm using Satchmo and Django and am trying to extend Satchmo's Product model. I'd like to make one of the fields in Satchmo's Product model have a default value in the admin without changing Satchmo's source code. Here is an abbreviated version of Satchmo's Product model:
class Product(models.Model):
site = models.ForeignKey(Site, verbose_name='Site')
This is what I attempted to do to extend it...
class MyProduct(Product):
Product.site = models.ForeignKey(Site, verbose_name='Site', editable=False, default=1)
This does not work, any ideas on why?
A:
For two reasons. First, the way you are trying to override a class variable just isn't how it works in Python: you simply define it in the subclass as normal, the same way that def __init__(self): overrides the super-class initializer. Second, Django model inheritance simply doesn't support this anyway. If you want to add constraints, you could do so in the save() method.
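A sketch of that save() approach, using the fixed site id 1 from the question:
class MyProduct(Product):
    def save(self, *args, **kwargs):
        # Enforce the value instead of trying to redefine the inherited field.
        self.site_id = 1
        super(MyProduct, self).save(*args, **kwargs)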
A:
You could probably monkeypatch it if you really wanted to:
site_field = Product._meta.get_field('site')
site_field.editable = False
site_field.default = 1
But this is a nasty habit and could cause problems; arguably less maintainable than just patching Satchmo's source directly.
|
Overriding class member variables in Python (Django/Satchmo)
|
I'm using Satchmo and Django and am trying to extend Satchmo's Product model. I'd like to make one of the fields in Satchmo's Product model have a default value in the admin without changing Satchmo's source code. Here is an abbreviated version of Satchmo's Product model:
class Product(models.Model):
site = models.ForeignKey(Site, verbose_name='Site')
This is what I attempted to do to extend it...
class MyProduct(Product):
Product.site = models.ForeignKey(Site, verbose_name='Site', editable=False, default=1)
This does not work, any ideas on why?
|
[
"For two reasons, firstly the way you are trying to override a class variable just isn't how it works in Python. You just define it in the class as normal, the same way that def __init__(self): is overriding the super-class initializer. But, Django model inheritance simply doesn't support this. If you want to add constraints, you could do so in the save() method.\n",
"You could probably monkeypatch it if you really wanted to:\nsite_field = Product._meta.get_field('site')\nsite_field.editable = False\nsite_field.default = 1\n\nBut this is a nasty habit and could cause problems; arguably less maintainable than just patching Satchmo's source directly.\n"
] |
[
1,
1
] |
[
"You can't change the superclass from a subclass. \nYou have the source. Use subversion. Make the change. When Satchmo is updated merge the updates around your change. \n"
] |
[
-2
] |
[
"django",
"python",
"satchmo"
] |
stackoverflow_0000762165_django_python_satchmo.txt
|
Q:
What is the most Pythonic way to provide a fall-back value in an assignment?
In Perl, it's often nice to be able to assign an object, but specify some fall-back value if the variable being assigned from is 'undef'. For instance:
my $x = undef;
my $y = 2;
my $a = $x || $y;
After this,
$a == 2
Is there a concise way to achieve this in Python if the value x is None, or would a full-on ...
if x is not None:
    a = x
else:
    a = y
... be the most Pythonic way to achieve this?
EDIT: Apologies, as has been pointed out by several commenters, I wasn't really talking about the value being undefined, but 'undef' in Perl, which is not really the same thing. But the question as originally worded didn't make this clear.
A:
Since 2.5:
If you want to fall back only on None:
a = x if x is not None else y
If you want to fall back also on empty string, false, 0 etc.:
a = x if x else y
or
a = x or y
As for undefined (as never defined, a.k.a. not bound):
try:
a = x
except NameError:
a = y
or a bit more hackish (I'd not really recommend that, but it's short):
a = vars().get('x',y)
A:
First, you can do your full-on version with a ternary:
a = y if x is None else x
but it doesn't solve your problem. What you want to do is more closely implemented with:
try:
a = x
except NameError:
a = y
A:
Just some nitpicking with your Perl example:
my $x = undef;
This redundant code can be shortened to:
my $x;
And the following code doesn't do what you say it does:
my $a = $x || $y;
This actually assigns $y to $a when $x is false. False values include things like undef, zero, and the empty string. To only test for definedness, you could do the following (as of Perl 5.10):
my $a = $x // $y;
A:
I am quite convinced that there is no 'pythonic' way to do this, because this is not a pattern that is pythonic. Control should not reach an undefined variable reference in elegant code. There are similar ideas that are pythonic. Most obvious:
def myRange(start, stop=None):
start, stop = (0, start) if stop is None else (start, stop)
...
What's important is that stop is defined in scope even though the caller didn't have to pass it explicitly; it simply took its default value. The trick alters the semantics of the arguments, in effect making the first argument optional instead of the second, which the language does not allow directly.
That being said, something like this might follow the premise without using a try-catch block.
a = locals().get('x', y)
A:
There's python's ternary operation:
a = x if x is not None else y
Available in 2.5 and up.
A:
I think this would help, since the problem comes down to check whether a variable is defined or not:
Easy way to check that variable is defined in python?
A:
Most of the solutions relying on simple truth tests don't work for the case where x is 0 or another false value.
>>> x = 0
>>> y = 2
>>> a = x or y
>>> a
2
>>>
If you knew the name of the variable ahead of time you could check for it like so:
if 'x' in dir():
    a = x
else:
    a = y
However that solution seems kind of sloppy to me. I believe the best method is to use a try/except block like so:
try:
    a = x
except NameError:
    a = y
A:
One way to rewrite...
if x is not None:
    a = x
else:
    a = y
..is:
x = myfunction()
if x is None:
x = y
print x
Or, using exceptions (possibly more Python'y, depending on what the code is doing - if it returns None because there was an error, using an exception is probably the correct way):
try:
x = myfunction()
except AnException:
x = "fallback"
print x
All that said, there really isn't anything wrong with your original code:
if x is not None:
    a = x
else:
    a = y
It's long, but I find that far easier to read (and much more Pythonic) than either of the following one-liners:
a = x if x is not None else y
a = x or y
|
What is the most Pythonic way to provide a fall-back value in an assignment?
|
In Perl, it's often nice to be able to assign an object, but specify some fall-back value if the variable being assigned from is 'undef'. For instance:
my $x = undef;
my $y = 2;
my $a = $x || $y;
After this,
$a == 2
Is there a concise way to achieve this in Python if the value x is None, or would a full-on ...
if x is not None:
    a = x
else:
    a = y
... be the most Pythonic way to achieve this?
EDIT: Apologies, as has been pointed out by several commenters, I wasn't really talking about the value being undefined, but 'undef' in Perl, which is not really the same thing. But the question as originally worded didn't make this clear.
|
[
"Since 2.5:\nIf you want to fall back only on None:\na = x if x is not None else y \n\nIf you want to fall back also on empty string, false, 0 etc.: \na = x if x else y \n\nor\na = x or y \n\n\nAs for undefined (as never defined, a.k.a. not bound):\ntry:\n a = x \nexcept NameError:\n a = y\n\nor a bit more hackish (I'd not really recommend that, but it's short):\na = vars().get('x',y)\n\n",
"first you can do your full-on with a ternary:\na = y if x is None else x\n\nbut it doesn't solve your problem. what you want to do is more closely implemented with:\ntry:\n a = x\nexcept:\n a = y\n\n",
"Just some nitpicking with your Perl example:\nmy $x = undef;\n\nThis redundant code can be shortened to:\nmy $x;\n\nAnd the following code doesn't do what you say it does:\nmy $a = $x || $y;\n\nThis actually assigns $y to $a when $x is false. False values include things like undef, zero, and the empty string. To only test for definedness, you could do the following (as of Perl 5.10):\nmy $a = $x // $y;\n\n",
"I am quite convinced that there is no 'pythonic' way to do this, because this is not a pattern that is pythonic. Control should not reach an undefined variable reference in elegant code. There are similar ideas that are pythonic. Most obvious:\ndef myRange(start, stop=None):\n start, stop = (0, start) if stop is None else (start, stop)\n ...\n\nWhat's important is that stop is defined in scope, but the caller didn't have to pass it explicitly, only that it has taken it's default value, altering the semantics of the arguments, which in effect causes the first argument to be optional instead of the second, even where the language does not allow that without this clever trick.\nThat being said, something like this might follow the premise without using a try-catch block.\na = locals().get('x', y)\n\n",
"There's python's ternary operation:\na = x if x is not None else y\n\nAvailable in 2.5 and up.\n",
"I think this would help, since the problem comes down to check whether a variable is defined or not:\nEasy way to check that variable is defined in python?\n",
"Most of the solutions relying on if statements don't work for the case where x is 0 or negative.\n>>> x = 0\n>>> y = 2\n>>> a = x or y\n>>> a\n2\n>>> \n\nIf you knew the name of the variable ahead of time you could look for like so:\nif 'x' in dir():\n a = x \nexcept:\n a =y\n\nHowever that solution seems kind of sloppy to me. I believe the best method is to use a try except block like so:\ntry:\n a = x\nelse:\n a = y\n\n",
"One way to rewrite...\nif x is not None\n a = x\nelse\n a = y\n\n..is:\nx = myfunction()\n\nif x is None:\n x = y\n\nprint x\n\nOr, using exceptions (possibly more Python'y, depending on the what the code is doing - if it returns None because there was an error, using an exception is probably the correct way):\ntry:\n x = myfunction()\nexcept AnException:\n x = \"fallback\"\n\nprint x\n\nAll that said, there really isn't anything wrong with you original code:\nif x is not None\n a = x\nelse\n a = y\n\nIt's long, but I find that far easier to read (and much more Pythonic) than either of the following one-liners:\na = x if x is not None else y\na = x or y\n\n"
] |
[
57,
6,
4,
3,
1,
1,
0,
0
] |
[
"If it's an argument to a function you can do this:\ndef MyFunc( a=2 ):\n print \"a is %d\"%a\n\n>>> MyFunc()\n...a is 2\n>>> MyFunc(5)\n...a is 5\n\n[Edit] For the downvoters.. the if/else bit is unnecessary for the solution - just added to make the results clear. Edited it to remove the if statement if that makes it clearer?\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0000768175_python.txt
|
Q:
How to find all built in libraries in Python
I've recently started with Python, and am enjoying the "batteries included" design. I've already found out I can import time, math, re, urllib, but don't know how to know that something is builtin rather than writing it from scratch.
What's included, and where can I get other good quality libraries from?
A:
Firstly, the Python library reference gives a blow-by-blow of what's actually included. And the global module index contains a neat, alphabetized summary of those same modules. If you have dependencies on a library, you can trivially test for its presence with a construct like:
try:
    import foobar
except ImportError:
    print 'No foobar module'
If you do this on startup for modules not necessarily present in the distribution you can bail with a sensible diagnostic.
The Python Package Index plays a role similar to that of CPAN in the perl world and has a list of many third party modules of one sort or another. Browsing and searching this should give you a feel for what's about. There are also utilities such as Yolk which allow you to query the Python Package Index and the installed packages on Python.
Other good online Python resources are:
www.python.org
The comp.lang.python newsgroup - this is still very active.
Various of the items linked off the Python home page.
Various home pages and blogs by python luminaries such as The Daily Python URL, effbot.org, The Python Cookbook, Ian Bicking's blog (the guy responsible for SQLObject), and the many blogs and sites off planet.python.org.
A:
run
pydoc -p 8080
and point your browser to http://localhost:8080/
You'll see everything that's installed and can spend lots of time discovering new things. :)
A:
The Python Global Module Index (http://docs.python.org/modindex.html) lists out every module included in Python 2.6.
Sourceforge has all sorts of good Python modules - one that came in handy for me recently was PyExcelerator, a module for writing straight to MS Excel workbooks. The Python Package Index (http://pypi.python.org/) is also a good source of Python modules.
A:
Doug Hellman's blog covers lots of built-in libraries in depth. If you want to learn more about the standard library you should definitely read through his articles.
A:
This is not directly related to your question, but when you're in the python console, you can call help() on any function and it will print its documentation.
Also, you can call dir() on any module or object and it will list all of its attributes, including functions.
This is useful for inspecting the contents of a module after you've imported it.
>>> import math
>>> dir(math)
['__doc__', '__name__', 'acos', 'asin', 'atan', 'atan2', 'ceil', 'cos', 'cosh', 'degrees', 'e', 'exp', 'fabs', 'floor', 'fmod', 'frexp', 'hypot', 'ldexp', 'log', 'log10', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh']
>>> help( math.log )
Help on built-in function log in module math:
log(...)
log(x[, base]) -> the logarithm of x to the given base.
If the base not specified, returns the natural logarithm (base e) of x.
|
How to find all built in libraries in Python
|
I've recently started with Python, and am enjoying the "batteries included" design. I've already found out I can import time, math, re, urllib, but don't know how to know that something is builtin rather than writing it from scratch.
What's included, and where can I get other good quality libraries from?
|
[
"Firstly, the python libary reference gives a blow by blow of what's actually included. And the global module index contains a neat, alphabetized summary of those same modules. If you have dependencies on a library, you can trivially test for the presence with a construct like:\ntry:\n import foobar\nexcept:\n print 'No foobar module'\n\nIf you do this on startup for modules not necessarily present in the distribution you can bail with a sensible diagnostic.\nThe Python Package Index plays a role similar to that of CPAN in the perl world and has a list of many third party modules of one sort or another. Browsing and searching this should give you a feel for what's about. There are also utilities such as Yolk which allow you to query the Python Package Index and the installed packages on Python.\nOther good online Python resources are:\n\nwww.python.org\nThe comp.lang.python newsgroup - this is still very active.\nVarious of the items linked off the Python home page.\nVarious home pages and blogs by python luminaries such as The Daily Python URL, effbot.org, The Python Cookbook, Ian Bicking's blog (the guy responsible for SQLObject), and the Many blogs and sites off planet.python.org.\n\n",
"run\npydoc -p 8080\n\nand point your browser to http://localhost:8080/\nYou'll see everything that's installed and can spend lots of time discovering new things. :)\n",
"The Python Global Module Index (http://docs.python.org/modindex.html) lists out every module included in Python 2.6. \nSourceforge has all sorts of good Python modules - one that came in handy for me recently was PyExcelerator, a module for writing straight to MS Excel workbooks. The Python Package Index, (http://pypi.python.org/) is also a good source of Python modules.\n",
"Doug Hellman's blog covers lots of built-in libraries in depth. If you want to learn more about the standard library you should definitely read through his articles.\n",
"This is not directly related to your question, but when you're in the python console, you can call help() on any function and it will print its documentation.\nalso, you can call dir() on any module or object and it will list all of its attributes, including functions.\nThis useful for inspecting contents of a module after you've imported it.\n>>> import math\n>>> dir(math)\n['__doc__', '__name__', 'acos', 'asin', 'atan', 'atan2', 'ceil', 'cos', 'cosh', 'degrees', 'e', 'exp', 'fabs', 'floor', 'fmod', 'frexp', 'hypot', 'ldexp', 'log', 'log10', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh']\n>>> help( math.log )\nHelp on built-in function log in module math:\n\nlog(...)\n log(x[, base]) -> the logarithm of x to the given base.\n If the base not specified, returns the natural logarithm (base e) of x.\n\n"
] |
[
17,
12,
3,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000329498_python.txt
|
Q:
How do I use AND in a Django filter?
How do I create an "AND" filter to retrieve objects in Django? E.g. I would like to retrieve a row which has a combination of two words in a single field.
For example the following SQL query does exactly that when I run it on mysql database:
select * from myapp_question
where ((question like '%software%') and (question like '%java%'))
How do you accomplish this in Django using filters?
A:
For thoroughness sake, let's just mention the Q object method:
from django.db.models import Q
criterion1 = Q(question__contains="software")
criterion2 = Q(question__contains="java")
q = Question.objects.filter(criterion1 & criterion2)
Note the other answers here are simpler and better adapted for your use case, but if anyone with a similar but slightly more complex problem (such as needing "not" or "or") sees this, it's good to have the reference right here.
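For that reference, the same criteria compose with | for OR and ~ for NOT:
# OR: matches questions containing either word
Question.objects.filter(criterion1 | criterion2)

# AND NOT: contains "software" but not "java"
Question.objects.filter(criterion1 & ~criterion2)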
A:
(Update: this answer no longer works and gives the syntax error "keyword argument repeated".)
mymodel.objects.filter(first_name__icontains="Foo", first_name__icontains="Bar")
Update: it's been a long time since I wrote this answer and I've done some django since, but I am sure that to this day the best approach is to use the Q object method like David Berger shows here: How do I use AND in a Django filter?
A:
You can chain filter expressions in Django:
q = Question.objects.filter(question__contains='software').filter(question__contains='java')
You can find more info in the Django docs at "Chaining Filters".
|
How do I use AND in a Django filter?
|
How do I create an "AND" filter to retrieve objects in Django? E.g. I would like to retrieve a row which has a combination of two words in a single field.
For example the following SQL query does exactly that when I run it on mysql database:
select * from myapp_question
where ((question like '%software%') and (question like '%java%'))
How do you accomplish this in Django using filters?
|
[
"For thoroughness sake, let's just mention the Q object method:\nfrom django.db.models import Q\ncriterion1 = Q(question__contains=\"software\")\ncriterion2 = Q(question__contains=\"java\")\nq = Question.objects.filter(criterion1 & criterion2)\n\nNote the other answers here are simpler and better adapted for your use case, but if anyone with a similar but slightly more complex problem (such as needing \"not\" or \"or\") sees this, it's good to have the reference right here.\n",
"(update: this answer will not work anymore and give the syntax error keyword argument repeated)\nmymodel.objects.filter(first_name__icontains=\"Foo\", first_name__icontains=\"Bar\")\n\nupdate: Long time since I wrote this answer and done some django, but I am sure to this days the best approach is to use the Q object method like David Berger shows here: How do I use AND in a Django filter?\n",
"You can chain filter expressions in Django:\nq = Question.objects.filter(question__contains='software').filter(question__contains='java')\n\nYou can find more info in the Django docs at \"Chaining Filters\".\n"
] |
[
165,
112,
16
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000769843_django_python.txt
|
Q:
Multiple projects from one setup.py?
My current setup.py (using setuptools) installs two things: one is tvdb_api (an API wrapper), the other is tvnamer (a command-line script).
I wish to make the two available separately, so a user can do..
easy_install tvdb_api
..to only get the API wrapper, or..
easy_install tvnamer
..to install tvnamer (and tvdb_api, as a requirement)
Is this possible without having two separate setup.py scripts? Can you have two separate PyPi packages that come from the same python setup.py upload command..?
A:
setup.py is just a regular Python file. By convention, it contains a call to the setuptools or distutils setup() function, which sets up the package. If you want to use one setup.py for two packages, you can call a different setup() function based on a command-line argument:
import sys
if len(sys.argv) > 1 and sys.argv[1] == 'script':
sys.argv = [sys.argv[0]] + sys.argv[2:]
setup(name='tvnamer', ...)
else:
setup(name='tvdb_api', ...)
Practically, though, I'd recommend just writing two scripts.
|
Multiple projects from one setup.py?
|
My current setup.py (using setuptools) installs two things: one is tvdb_api (an API wrapper), the other is tvnamer (a command-line script).
I wish to make the two available separately, so a user can do..
easy_install tvdb_api
..to only get the API wrapper, or..
easy_install tvnamer
..to install tvnamer (and tvdb_api, as a requirement)
Is this possible without having two separate setup.py scripts? Can you have two separate PyPi packages that come from the same python setup.py upload command..?
|
[
"setup.py is just a regular Python file, which by convention sets up packages. By convention, setup.py contains a call to the setuptools or distutils setup() function. If you want to use one setup.py for two packages, you can call a different setup() function based on a command-line argument:\nimport sys\nif len(sys.argv) > 1 and sys.argv[1] == 'script':\n sys.argv = [sys.argv[0]] + sys.argv[2:]\n setup(name='tvnamer', ...)\nelse:\n setup(name='tvdb_api', ...)\n\nPractically, though, I'd recommend just writing two scripts.\n"
] |
[
11
] |
[] |
[] |
[
"python",
"setuptools"
] |
stackoverflow_0000769793_python_setuptools.txt
|
Q:
The lines that stand out in a file, but aren't exact duplicates
I'm combing a webapp's log file for statements that stand out.
Most of the lines are similar and uninteresting. I'd pass them through Unix uniq; however, that filters nothing, as all the lines are slightly different: they all have a different timestamp, similar statements might print a different user ID, etc.
What's a way and/or tool to get just the lines that are notably different from any other? (But, again, not precise duplicates)
I was thinking about playing with Python's difflib but that seems geared toward diffing two files, rather than all pairs of lines in the same file.
[EDIT]
I assumed the solution would give a uniqueness score for each line. So by "notably different" I meant, I choose a threshold that the uniqueness score must exceed for any line to be included in the output.
Based on that, if there are other viable ways to define it, please discuss. Also, the method doesn't have to have 100% accuracy and recall.
[/EDIT]
Examples:
I'd prefer answers that are as general purpose as possible. I know I can strip away the timestamp at the beginning. Stripping the end is more challenging, as its language may be absolutely unlike anything else in the file. These sorts of details are why I shied away from concrete examples before, but because some people asked...
Similar 1:
2009-04-20 00:03:57 INFO com.foo.Bar - URL:/graph?id=1234
2009-04-20 00:04:02 INFO com.foo.Bar - URL:/graph?id=asdfghjk
Similar 2:
2009-04-20 00:05:59 INFO com.baz.abc.Accessor - Cache /path/to/some/dir hits: 3466 / 16534, 0.102818% misses
2009-04-20 00:06:00 INFO com.baz.abc.Accessor - Cache /path/to/some/different/dir hits: 4352685 / 271315, 0.004423% misses
Different 1:
2009-04-20 00:03:57 INFO com.foo.Bar - URL:/graph?id=1234
2009-04-20 00:05:59 INFO com.baz.abc.Accessor - Cache /path/to/some/dir hits: 3466 / 16534, 0.102818% misses
In the Different 1 case, I'd like both lines returned but not other lines like them. In other words, those 2 lines are distinct types (then I can later ask for only statistically rare line types). The edit distance is much bigger between those two, for one thing.
A:
Define "notably different". Then have a look at "edit distance" measures.
A:
You could try a bit of code that counts words, and then sorts lines by those having the least common words.
If that doesn't do the trick, you can add in some smarts to filter out time stamps and numbers.
Your problem is similar to an earlier question on generating summaries of news stories.
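A rough sketch of that idea; the tokenizing regex and the inverse-frequency scoring are my own assumptions:
import re

def rare_lines(lines, top=10):
    tokenized = [re.findall(r'[A-Za-z]+', line) for line in lines]
    freq = {}
    for words in tokenized:
        for w in words:
            freq[w] = freq.get(w, 0) + 1
    # Score a line by the average inverse frequency of its words.
    def rarity(words):
        if not words:
            return 0.0
        return sum(1.0 / freq[w] for w in words) / len(words)
    ranked = sorted(zip(lines, tokenized), key=lambda p: rarity(p[1]), reverse=True)
    return [line for line, words in ranked[:top]]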
A:
I don't know a tool for you but if I were going to roll my own, I'd approach it like this:
Presumably the log lines have a well defined structure, no? So
parse the lines on that structure
write a number of very basic relevance filters (functions that just return a simple number from the parsed structure)
run the parsed lines through a set of filters, and cut on the basis of the total score
possibly sort the remaining lines into various bins by the results of more filters
generate reports, dump bins to files, or other output
If you are familiar with the unix tool procmail, I'm suggesting a similar treatment customized for your data.
As zacherates notes in the comments, your filters will typically ignore time stamps (and possibly IP address), and just concentrate on the content: for example really long http requests might represent an attack...or whatever applies to your domain.
Your binning filters might be as simple as a hash on a few selected fields, or you might try to do something with Charlie Martin's suggestion and used edit distance measures.
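A bare-bones sketch of that pipeline against the log format shown in the question; the two filters are placeholders for your own domain knowledge:
def parse(line):
    # date time LEVEL class - message  (format taken from the question)
    parts = line.split(None, 4)
    if len(parts) < 5:
        return None
    return {'level': parts[2], 'cls': parts[3], 'msg': parts[4]}

def long_message(rec):       # illustrative relevance filter
    return 1 if len(rec['msg']) > 120 else 0

def bad_level(rec):          # illustrative relevance filter
    return 2 if rec['level'] in ('WARN', 'ERROR') else 0

FILTERS = [long_message, bad_level]

def interesting(lines, threshold=1):
    for line in lines:
        rec = parse(line)
        if rec and sum(f(rec) for f in FILTERS) >= threshold:
            yield line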
A:
Perhaps you could do a basic calculation of "words the same"/"all words"?
e.g. (including an offset to allow you to ignore the timestamp and the word 'INFO', if that's always the same):
import re

def score(s1, s2, offset=26):
    words1 = re.findall(r'\w+', s1[offset:])
    words2 = re.findall(r'\w+', s2[offset:])
    return float(len(set(words1) & set(words2)))/max(len(set(words1)), len(set(words2)))
Given:
>>> s1
'2009-04-20 00:03:57 INFO com.foo.Bar - URL:/graph?id=1234'
>>> s2
'2009-04-20 00:04:02 INFO com.foo.Bar - URL:/graph?id=asdfghjk'
>>> s3
'2009-04-20 00:05:59 INFO com.baz.abc.Accessor - Cache /path/to/some/dir hits: 3466 / 16534, 0.102818% misses'
>>> s4
'2009-04-20 00:06:00 INFO com.baz.abc.Accessor - Cache /path/to/some/different/dir hits: 4352685 / 271315, 0.004423% misses'
This yields:
>>> score(s1,s2)
0.8571428571428571
>>> score(s3,s4)
0.75
>>> score(s1,s3)
0.066666666666666666
You've still got to decide which lines to compare. Also the use of set() may distort the scores slightly – the price of a simple algorithm :-)
A:
I wonder if you could just focus on the part that defines uniqueness for you. In this case, it seems that the part defining uniqueness is just the middle part:
2009-04-20 00:03:57 INFO com.foo.Bar - URL:/graph?id=1234
^---------------------^
2009-04-20 00:05:59 INFO com.baz.abc.Accessor - Cache /path/to/some/dir hits: 3466 / 16534, 0.102818% misses
^--------------------------------^
I would then compare exactly this part, perhaps using a regular expression (just the parenthesized group; how to access sub-matches like this is language dependent):
/^.{20}(\w+\s+[\w\.-]+\s+-\s+\w+)/
A:
I think you want to break this into fields, sort by the "severity level" field and the next field (looks like "class"). I'd use Haskell:
module Main where
import Data.List (nubBy, sortBy)
sortAndNub s = nubBy fields2and3
$ sortBy fields2and3comp
$ map words $ lines s
fields2and3 a b = fieldEq 2 a b
&& fieldEq 3 a b
fieldEq f a b = a!!f == (b!!f)
fields2and3comp a b = case compare (a!!2) (b!!2) of
LT -> LT
GT -> GT
EQ -> compare (a!!3) (b!!3)
main = interact $ unlines.(map unwords).sortAndNub
|
The lines that stand out in a file, but aren't exact duplicates
|
I'm combing a webapp's log file for statements that stand out.
Most of the lines are similar and uninteresting. I'd pass them through Unix uniq; however, that filters nothing, as all the lines are slightly different: they all have a different timestamp, similar statements might print a different user ID, etc.
What's a way and/or tool to get just the lines that are notably different from any other? (But, again, not precise duplicates)
I was thinking about playing with Python's difflib but that seems geared toward diffing two files, rather than all pairs of lines in the same file.
[EDIT]
I assumed the solution would give a uniqueness score for each line. So by "notably different" I meant, I choose a threshold that the uniqueness score must exceed for any line to be included in the output.
Based on that, if there are other viable ways to define it, please discuss. Also, the method doesn't have to have 100% accuracy and recall.
[/EDIT]
Examples:
I'd prefer answers that are as general purpose as possible. I know I can strip away the timestamp at the beginning. Stripping the end is more challenging, as its language may be absolutely unlike anything else in the file. These sorts of details are why I shied away from concrete examples before, but because some people asked...
Similar 1:
2009-04-20 00:03:57 INFO com.foo.Bar - URL:/graph?id=1234
2009-04-20 00:04:02 INFO com.foo.Bar - URL:/graph?id=asdfghjk
Similar 2:
2009-04-20 00:05:59 INFO com.baz.abc.Accessor - Cache /path/to/some/dir hits: 3466 / 16534, 0.102818% misses
2009-04-20 00:06:00 INFO com.baz.abc.Accessor - Cache /path/to/some/different/dir hits: 4352685 / 271315, 0.004423% misses
Different 1:
2009-04-20 00:03:57 INFO com.foo.Bar - URL:/graph?id=1234
2009-04-20 00:05:59 INFO com.baz.abc.Accessor - Cache /path/to/some/dir hits: 3466 / 16534, 0.102818% misses
In the Different 1 case, I'd like both lines returned but not other lines like them. In other words, those 2 lines are distinct types (then I can later ask for only statistically rare line types). The edit distance is much bigger between those two, for one thing.
|
[
"Define \"notably different\". Then have a look at \"edit distance\" measures.\n",
"You could try a bit of code that counts words, and then sorts lines by those having the least common words. \nIf that doesn't do the trick, you can add in some smarts to filter out time stamps and numbers. \nYour problem is similar to an earlier question on generating summaries of news stories.\n",
"I don't know a tool for you but if I were going to roll my own, I'd approach it like this:\nPresumably the log lines have a well defined structure, no? So\n\nparse the lines on that structure\nwrite a number of very basic relevance filters (functions that just return a simple number from the parsed structure)\nrun the parsed lines through a set of filters, and cut on the basis of the total score\npossibly sort the remaining lines into various bins by the results of more filters\ngenerate reports, dump bins to files, or other output\n\nIf you are familiar with the unix tool procmail, I'm suggesting a similar treatment customized for your data.\n\nAs zacherates notes in the comments, your filters will typically ignore time stamps (and possibly IP address), and just concentrate on the content: for example really long http requests might represent an attack...or whatever applies to your domain.\nYour binning filters might be as simple as a hash on a few selected fields, or you might try to do something with Charlie Martin's suggestion and used edit distance measures.\n",
"Perhaps you could do a basic calculation of \"words the same\"/\"all words\"?\ne.g. (including an offset to allow you to ignore the timestamp and the word 'INFO', if that's always the same):\ndef score(s1, s2, offset=26):\n words1 = re.findall('\\w+', s1[offset:])\n words2 = re.findall('\\w+', s2[offset:])\n return float(len(set(words1) & set(words2)))/max(len(set(words1)), len(set(words2)))\n\nGiven:\n>>> s1\n'2009-04-20 00:03:57 INFO com.foo.Bar - URL:/graph?id=1234'\n>>> s2\n'2009-04-20 00:04:02 INFO com.foo.Bar - URL:/graph?id=asdfghjk'\n>>> s3\n'2009-04-20 00:05:59 INFO com.baz.abc.Accessor - Cache /path/to/some/dir hits: 3466 / 16534, 0.102818% misses'\n>>> s4\n'2009-04-20 00:06:00 INFO com.baz.abc.Accessor - Cache /path/to/some/different/dir hits: 4352685 / 271315, 0.004423% misses'\n\nThis yields:\n>>> score(s1,s2)\n0.8571428571428571\n>>> score(s3,s4)\n0.75\n>>> score(s1,s3)\n0.066666666666666666\n\nYou've still got to decide which lines to compare. Also the use of set() may distort the scores slightly – the price of a simple algorithm :-)\n",
"I wonder if you could just focus on the part that defines uniqueness for you. In this case, it seems that the part defining uniqueness is just the middle part:\n\n2009-04-20 00:03:57 INFO com.foo.Bar - URL:/graph?id=1234\n ^---------------------^ \n\n2009-04-20 00:05:59 INFO com.baz.abc.Accessor - Cache /path/to/some/dir hits: 3466 / 16534, 0.102818% misses\n ^--------------------------------^\n\nI would then compare exactly this part, perhaps using a regular expression (just the parenthesized group; how to access sub-matches like this is language dependent):\n/^.{20}(\\w+\\s+[\\w\\.-]+\\s+-\\s+\\w+)/\n\n",
"I think you want to break this into fields, sort by the \"severity level\" field and the next field (looks like \"class\"). I'd use Haskell:\n\nmodule Main where \nimport Data.List (nubBy, sortBy)\n\nsortAndNub s = nubBy fields2and3 \n $ sortBy fields2and3comp\n $ map words $ lines s\n\nfields2and3 a b = fieldEq 2 a b \n && fieldEq 3 a b\nfieldEq f a b = a!!f == (b!!f)\nfields2and3comp a b = case compare (a!!2) (b!!2) of\n LT -> LT\n GT -> GT\n EQ -> compare (a!!3) (b!!3)\nmain = interact $ unlines.(map unwords).sortAndNub\n\n"
] |
[
3,
2,
2,
1,
0,
0
] |
[] |
[] |
[
"algorithm",
"grep",
"nlp",
"python",
"unix"
] |
stackoverflow_0000769775_algorithm_grep_nlp_python_unix.txt
|
Q:
Installing Django on Shared Server: No module named MySQLdb?
I'm getting this error
Traceback (most recent call last):
File "/home/<username>/flup/server/fcgi_base.py", line 558, in run
File "/home/<username>/flup/server/fcgi_base.py", line 1116, in handler
File "/home/<username>/python/django/django/core/handlers/wsgi.py", line 241, in __call__
response = self.get_response(request)
File "/home/<username>/python/django/django/core/handlers/base.py", line 73, in get_response
response = middleware_method(request)
File "/home/<username>/python/django/django/contrib/sessions/middleware.py", line 10, in process_request
engine = import_module(settings.SESSION_ENGINE)
File "/home/<username>/python/django/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/<username>/python/django/django/contrib/sessions/backends/db.py", line 2, in ?
from django.contrib.sessions.models import Session
File "/home/<username>/python/django/django/contrib/sessions/models.py", line 4, in ?
from django.db import models
File "/home/<username>/python/django/django/db/__init__.py", line 41, in ?
backend = load_backend(settings.DATABASE_ENGINE)
File "/home/<username>/python/django/django/db/__init__.py", line 17, in load_backend
return import_module('.base', 'django.db.backends.%s' % backend_name)
File "/home/<username>/python/django/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/<username>/python/django/django/db/backends/mysql/base.py", line 13, in ?
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
when I try to run this script on my shared server
#!/usr/bin/python
import sys, os
sys.path.insert(0, "/home/<username>/python/django")
sys.path.insert(0, "/home/<username>/python/django/www") # projects directory
os.chdir("/home/<username>/python/django/www/<project>")
os.environ['DJANGO_SETTINGS_MODULE'] = "<project>.settings"
from django.core.servers.fastcgi import runfastcgi
runfastcgi(method="threaded", daemonize="false")
But, my web host just installed MySQLdb for me a few hours ago. When I run python from the shell I can import MySQLdb just fine. Why would this script report that it can't find it?
A:
You are missing the python-mysql db driver on your python path.
See if you can figure out the python path WSGI is seeing; it can be different from what you experience in the shell.
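One throwaway way to check that from the FCGI script itself (a debugging sketch; the log path is arbitrary):
import sys

# Temporary: record which interpreter and sys.path the FCGI process
# actually uses, then compare against your interactive shell.
debug = open('/tmp/fcgi_env.log', 'w')
debug.write('executable: %s\n' % sys.executable)
debug.write('\n'.join(sys.path))
debug.close()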
A:
Is it possible that you have the wrong DATABASE_ENGINE setting in your settings.py? It should be mysql and not mysqldb there.
|
Installing Django on Shared Server: No module named MySQLdb?
|
I'm getting this error
Traceback (most recent call last):
File "/home/<username>/flup/server/fcgi_base.py", line 558, in run
File "/home/<username>/flup/server/fcgi_base.py", line 1116, in handler
File "/home/<username>/python/django/django/core/handlers/wsgi.py", line 241, in __call__
response = self.get_response(request)
File "/home/<username>/python/django/django/core/handlers/base.py", line 73, in get_response
response = middleware_method(request)
File "/home/<username>/python/django/django/contrib/sessions/middleware.py", line 10, in process_request
engine = import_module(settings.SESSION_ENGINE)
File "/home/<username>/python/django/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/<username>/python/django/django/contrib/sessions/backends/db.py", line 2, in ?
from django.contrib.sessions.models import Session
File "/home/<username>/python/django/django/contrib/sessions/models.py", line 4, in ?
from django.db import models
File "/home/<username>/python/django/django/db/__init__.py", line 41, in ?
backend = load_backend(settings.DATABASE_ENGINE)
File "/home/<username>/python/django/django/db/__init__.py", line 17, in load_backend
return import_module('.base', 'django.db.backends.%s' % backend_name)
File "/home/<username>/python/django/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/<username>/python/django/django/db/backends/mysql/base.py", line 13, in ?
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
when I try to run this script on my shared server
#!/usr/bin/python
import sys, os
sys.path.insert(0, "/home/<username>/python/django")
sys.path.insert(0, "/home/<username>/python/django/www") # projects directory
os.chdir("/home/<username>/python/django/www/<project>")
os.environ['DJANGO_SETTINGS_MODULE'] = "<project>.settings"
from django.core.servers.fastcgi import runfastcgi
runfastcgi(method="threaded", daemonize="false")
But, my web host just installed MySQLdb for me a few hours ago. When I run python from the shell I can import MySQLdb just fine. Why would this script report that it can't find it?
|
[
"You are missing the python-mysql db driver on your python path.\nsee if you can figure out the pythonpath WSGI is seeing... which can be different from what you are experiencing in shell\n",
"Is it possible that you have the wrong DATABASE_ENGINE setting in your settings.py? It should be mysql and not mysqldb there.\n"
] |
[
2,
1
] |
[] |
[] |
[
"django",
"mysql",
"python"
] |
stackoverflow_0000770904_django_mysql_python.txt
|
Q:
What is the best secure way to allow a user to delete a model instance that they added to the db?
I would like to give users access to delete a model instance that they added to the db. In the django docs it says allowing someone to delete from the template is not a good practice. Is there a secure way to let a user click a "delete this" link from the template and remove that model instance? How should I go about doing that?
A:
Check out this question for discussion related to what you are asking about.
Essentially, when you normally click on a link on the page the browser makes a GET request to the server to get the next page's contents. Just like there is a lot of pushing towards semantically relevant CSS layouts, it is also important that your page requests are semantically relevant. The problem with using links to remove items is that it is making a GET request to DELETE something in the database. From this comes the problem that some search engines might index your links and accidentally erase content. There also comes the problem of cross-site request forgeries which can make an unsuspecting user make a command to a website without being aware. So the proper way to handle this is by following the rule that any request that modifies state in the server should be processed via POST. As such, instead of doing this:
<a href="{% url remove_item item.id %}">Delete Item</a>
It is better to do this:
<form action='{% url remove_item %}' method='POST' id='form'>
<input type='hidden' name='action' value='delete'>
<input type='hidden' name='id' value='{{ item.id }}'>
<input type="submit" value="Delete Item">
</form>
If you would like to keep your links while maintaining the POST, you'd have to resort to Javascript:
<a href="#" onclick="document.getElementById('form').submit(); return false;">Delete Item</a>
Unsightly, yes, but it's for the best. Your Django view would then do something like this:
def remove_item(request):
if request.method == 'POST':
## remove item
Furthermore, as Scott mentions, Django has some built in stuff to help you avoid the cross-site request forgeries I mentioned above, since it is still possible to do it even if you are doing a POST (just slightly harder). The way to avoid this is to have some kind of token tied to the form that needs to be validated server side before allowing the action to be taken. Check out the CsrfMiddleware class for more details on that. It will essentially automate some of that work out of it for you.
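Enabling it is just a settings change; in the Django versions current at the time of writing, the middleware lives at this path:
# settings.py
MIDDLEWARE_CLASSES = (
    'django.contrib.csrf.middleware.CsrfMiddleware',
    # ... the rest of your middleware ...
)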
Additional Reading
URIs, Addressability, and the use of HTTP GET and POST
9.1.1 Safe Methods, HTTP 1.1, RFC 2616
Architecture of the World Wide Web, Volume One
Using POST with a regular link
Cross-Site Request Forgeries and You
A:
Have the user submit a POST request to delete that model instance. These kinds of changes should never be possible via GET requests, so that people can't link each other to unwittingly performing changes on the site.
In your view, check that request.user is the same as the author of that particular model instance. You could also check that HTTP_REFERER is not set to another site if you were really worried.
Your security issue here is Cross Site Request Forgery. Django provides CsrfMiddleware which will actually add security to your forms to prevent this kind of attack. But it only works as long as you're not allowing permanent changes to take place via GET requests.
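Putting those checks together, a delete view might be sketched like this; Item and its author field are placeholders for your own model:
from django.http import HttpResponseForbidden, HttpResponseRedirect
from django.shortcuts import get_object_or_404

def delete_item(request, item_id):
    # Item is a hypothetical model with an "author" ForeignKey to User.
    item = get_object_or_404(Item, pk=item_id)
    if request.method != 'POST':
        return HttpResponseForbidden("Deletes must be POSTed.")
    if item.author != request.user:
        return HttpResponseForbidden("You don't own this item.")
    item.delete()
    return HttpResponseRedirect('/')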
|
What is the best secure way to allow a user to delete a model instance that they added to the db?
|
I would like to give users access to delete a model instance that they added to the db. In the django docs it says allowing someone to delete from the template is not a good practice. Is there a secure way to let a user click a "delete this" link from the template and remove that model instance? How should I go about doing that?
|
[
"Check out this question for discussion related to what you are asking about.\nEssentially, when you normally click on a link on the page the browser makes a GET request to the server to get the next page's contents. Just like there is a lot of pushing towards semantically relevant CSS layouts, it is also important that your page requests are semantically relevant. The problem with using links to remove items is that it is making a GET request to DELETE something in the database. From this comes the problem that some search engines might index your links and accidentally erase content. There also comes the problem of cross-site request forgeries which can make an unsuspecting user make a command to a website without being aware. So the proper way to handle this is by following the rule that any request that modifies state in the server should be processed via POST. As such, instead of doing this:\n<a href=\"{% url remove_item item.id %}\">Delete Item</a>\n\nIt is better to do this:\n<form action='{% url remove_item %}' method='POST' id='form'>\n <input type='hidden' name='action' value='delete'>\n <input type='hidden' name='id' value='{{ item.id }}'>\n <input type=\"submit\" value=\"Delete Item\">\n</form>\n\nIf you would like to keep your links while maintaining the POST, you'd have to resort to Javascript:\n<a href=\"#\" onclick=\"document.getElementById('form').submit(); return false;\">Delete Item</a>\n\nUnsightly, yes, but it's for the best. Your Django view would then do something like this:\ndef remove_item(request):\n if request.method == 'POST':\n ## remove item\n\nFurthermore, as Scott mentions, Django has some built in stuff to help you avoid the cross-site request forgeries I mentioned above, since it is still possible to do it even if you are doing a POST (just slightly harder). The way to avoid this is to have some kind of token tied to the form that needs to be validated server side before allowing the action to be taken. Check out the CsrfMiddleware class for more details on that. It will essentially automate some of that work out of it for you.\nAdditional Reading\n\nURIs, Addressability, and the use of HTTP GET and POST\n9.1.1 Safe Methods, HTTP 1.1, RFC 2616\nArchitecture of the World Wide Web, Volume One\nUsing POST with a regular link\nCross-Site Request Forgeries and You\n\n",
"Have the user submit a POST request to delete that model instance. These kinds of changes should never be possible via GET requests, so that people can't link each other to unwittingly performing changes on the site.\nIn your view, check that request.user is the same as the author of that particular model instance. You could also check that the HTTP_REFERRER is not set to another site if you were really worried.\nYour security issue here is Cross Site Request Forgery. Django provides CsrfMiddleware which will actually add security to your forms to prevent this kind of attack. But it only works as long as you're not allowing permanent changes to take place via GET requests.\n"
] |
[
7,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000770427_django_python.txt
|
Q:
importing gaeutilities or any other module by dev_appserver
I'm developing a gae application on a Windows machine. To have session handling, I downloaded gaeutilities and added its path (C:\Python25\Lib\site-packages\gaeutilities-1.2.1) to the registry (the "PythonPath" item under python25).
In my code, this is how I import the gaeutilities Session class:
from appengine_utilities.sessions import Session
When the gae engine (dev_appserver.py) tries to import it, an exception is raised, stating an ImportError and "no module named appengine_utilities.sessions".
On the other hand, pyscripter can find the module (autocomplete becomes available for the Session class), and I can import the module within the python interpreter (the same one that dev_appserver uses, python 2.5.4).
For a remedy, I created a PYTHONPATH environment variable and also added the path to it. Nothing changes.
I'm lost. What am I doing wrong?
Important edit: I have found myself totally unable to import any 3rd-party gae modules. PYTHONPATH is correct, sys.path is correct, the registry is correct, yet dev_appserver still complains of an ImportError.
A:
Strange.
I would start troubleshooting by making 100% sure that the sys.path that dev_appserver.py uses does include C:\Python25\Lib\site-packages\gaeutilities-1.2.1.
I suggest you display sys.path in an HTML view served by dev_appserver.py.
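Something like this throwaway handler would do; the URL you wire it to in app.yaml is up to you:
import sys
from google.appengine.ext import webapp

class PathDump(webapp.RequestHandler):
    def get(self):
        # Dump the path the dev_appserver process actually uses.
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('\n'.join(sys.path))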
Check permissions on gaeutilities-1.2.1 directory and subdirectories. Perhaps the python interpreter is unable to create *.pyc files or something like that.
Another suggestion:
Put the appengines_utilities folder in your application directory (the directory that contains your app.yaml file). I guess you need all third-party stuff there anyway if you want to upload the code to google's servers.
|
importing gaeutilities or any other module by dev_appserver
|
I'm developing a gae application on a Windows machine. To have session handling, I downloaded gaeutilities and added its path (C:\Python25\Lib\site-packages\gaeutilities-1.2.1) to the registry (the "PythonPath" item under python25).
In my code, this is how I import the gaeutilities Session class:
from appengine_utilities.sessions import Session
When the gae engine (dev_appserver.py) tries to import it, an exception is raised, stating an ImportError and "no module named appengine_utilities.sessions".
On the other hand, pyscripter can find the module (autocomplete becomes available for the Session class), and I can import the module within the python interpreter (the same one that dev_appserver uses, python 2.5.4).
For a remedy, I created a PYTHONPATH environment variable and also added the path to it. Nothing changes.
I'm lost. What am I doing wrong?
Important edit: I have found myself totally unable to import any 3rd-party gae modules. PYTHONPATH is correct, sys.path is correct, the registry is correct, yet dev_appserver still complains of an ImportError.
|
[
"Strange.\nI would start troubleshooting by making 100% sure that the sys.path that dev_appserver.py uses does include C:\\Python25\\Lib\\site-packages\\gaeutilities-1.2.1.\nI suggest you display sys.path in a HTML view served by dev_appserver.py.\nCheck permissions on gaeutilities-1.2.1 directory and subdirectories. Perhaps the python interpreter is unable to create *.pyc files or something like that.\nAnother suggestion:\nPut the appengines_utilities folder in your application directory (the directory that contains your app.yaml file). I guess you need all third-party stuff there anyway if you want to upload the code to google's servers.\n"
] |
[
1
] |
[] |
[] |
[
"google_app_engine",
"import",
"python"
] |
stackoverflow_0000770385_google_app_engine_import_python.txt
|
Q:
GStrings in Python
Groovy has a concept of GStrings. I can write code like this:
def greeting = 'Hello World'
println """This is my first program ${greeting}"""
I can access the value of a variable from within the String.
How can I do this in Python?
--
Thanks
A:
In Python, you have to explicitly pass a dictionary of possible variables; you cannot access arbitrary "outside" variables from within a string. But, you can use the locals() function that returns a dictionary with all variables of the local scope.
For the actual replacement, there are many ways to do it (how unpythonic!):
greeting = "Hello World"
# Use this in versions prior to 2.6:
print("My first programm; %(greeting)s" % locals())
# Since Python 2.6, the recommended example is:
print("My first program; {greeting}".format(**locals()))
# Works in 2.x and 3.x:
from string import Template
print(Template("My first programm; $greeting").substitute(locals()))
A:
d = {'greeting': 'Hello World'}
print "This is my first program %(greeting)s" % d
A:
You can't exactly...
I think the closest you can really get is using standard %-based substitution, e.g:
greeting = "Hello World"
print "This is my first program %s" % greeting
Having said that, there are some fancy new classes as of Python 2.6 which can do this in different ways: check out the string documentation for 2.6, specifically from section 8.1.2 onwards to find out more.
A:
If you're trying to do templating you might want to look into Cheetah. It lets you do exactly what you're talking about, same syntax and all.
http://www.cheetahtemplate.org/
A:
In Python 2.6+ you can do:
"My name is {0}".format('Fred')
Check out PEP 3101.
|
GStrings in Python
|
Groovy has a concept of GStrings. I can write code like this:
def greeting = 'Hello World'
println """This is my first program ${greeting}"""
I can access the value of a variable from within the String.
How can I do this in Python?
--
Thanks
|
[
"In Python, you have to explicitely pass a dictionary of possible variables, you cannot access arbitrary \"outside\" variables from within a string. But, you can use the locals() function that returns a dictionary with all variables of the local scope.\nFor the actual replacement, there are many ways to do it (how unpythonic!):\ngreeting = \"Hello World\"\n\n# Use this in versions prior to 2.6:\nprint(\"My first programm; %(greeting)s\" % locals())\n\n# Since Python 2.6, the recommended example is:\nprint(\"My first program; {greeting}\".format(**locals()))\n\n# Works in 2.x and 3.x:\nfrom string import Template\nprint(Template(\"My first programm; $greeting\").substitute(locals()))\n\n",
"d = {'greeting': 'Hello World'}\nprint \"This is my first program %(greeting)s\" % d\n\n",
"You can't exactly...\nI think the closest you can really get is using standard %-based substitution, e.g:\ngreeting = \"Hello World\"\nprint \"This is my first program %s\" % greeting\n\nHaving said that, there are some fancy new classes as of Python 2.6 which can do this in different ways: check out the string documentation for 2.6, specifically from section 8.1.2 onwards to find out more.\n",
"If your trying to do templating you might want to look into Cheetah. It lets you do exactly what your talking about, same syntax and all.\nhttp://www.cheetahtemplate.org/\n",
"In Python 2.6+ you can do:\n\"My name is {0}\".format('Fred')\n\nCheck out PEP 3101.\n"
] |
[
5,
3,
1,
1,
1
] |
[] |
[] |
[
"gstring",
"python"
] |
stackoverflow_0000771312_gstring_python.txt
|
Q:
Python + PHP + Lighttpd?
I've set up a few web servers in my day, but I'm not sure how they work internally. I'm setting up a new environment for myself and I'm interested in configuring my lighttpd server to support both PHP and Python. Is this possible?
A:
Yes, this is possible. Here you can find a sample configuration.
fastcgi.server = (
".php" => ((
"bin-path" => "/usr/bin/php5-cgi",
"socket" => "/tmp/php.socket"
)),
"django.fcgi" => (
"main" => (
"host" => "127.0.0.1",
"port" => 9090, #set the port numbers to what-eva you want
),
),
"admin.fcgi" => (
"admin" => (
"host" => "127.0.0.1",
"port" => 9091,
)
)
)
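To give that config something to talk to on the Python side, here is a minimal sketch of a FastCGI process using the third-party flup package (flup itself is an assumption here, not named in the answer; the port must match the fastcgi.server block above):
from flup.server.fcgi import WSGIServer

def app(environ, start_response):
    # Trivial WSGI application, just to prove the wiring works.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from Python behind lighttpd\n']

if __name__ == '__main__':
    # Listen where lighttpd's "main" backend expects us.
    WSGIServer(app, bindAddress=('127.0.0.1', 9090)).run()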
A:
You can also enable Lighty to use .pl, .py and .php as 'cgi' by enabling mod_cgi and setting it up. The default configs are on the Lighty website. However, this will have the benefits and problems of running an independent cgi process. If you are only experiencing light traffic, performance shouldn't be an issue.
|
Python + PHP + Lighttpd?
|
I've set up a few web servers in my day, but I'm not sure how they work internally. I'm setting up a new environment for myself and I'm interested in configuring my lighttpd server to support both PHP and Python. Is this possible?
|
[
"Yes, this is possible. Here you can find a sample configuration.\nfastcgi.server = (\n\".php\" => ((\n\"bin-path\" => \"/usr/bin/php5-cgi\",\n\"socket\" => \"/tmp/php.socket\"\n)),\n\"django.fcgi\" => (\n\"main\" => (\n\"host\" => \"127.0.0.1\",\n\"port\" => 9090, #set the port numbers to what-eva you want\n),\n),\n\"admin.fcgi\" => (\n\"admin\" => (\n\"host\" => \"127.0.0.1\",\n\"port\" => 9091,\n)\n)\n)\n\n",
"You can also enable Lighty to use .pl, .py and .php as 'cgi' by enabling mod_cgi and setting it up. The default configs are on the Lighty website. However, this will have the benefits and problems of running an independent cgi process. If you are only experiencing light traffic, performance shouldn't be an issue.\n"
] |
[
3,
1
] |
[] |
[] |
[
"configure",
"lighttpd",
"php",
"python"
] |
stackoverflow_0000771341_configure_lighttpd_php_python.txt
|
Q:
pyqt4 and pyserial
I want to build an app that constantly watches the serial port and changes the user interface according to the input received from the port. I've managed to read lines from the port with pyserial under Linux, but I'm not sure how to do this in a regular fashion: create a separate thread and check for input on a timer event? How do I make sure I don't miss anything? (Implementing some kind of handshake/protocol seems like overkill for this...) And most importantly: how do I do it with the facilities of Qt4?
Edit: This is what I'm doing now (I want to do this periodically with the rest of the app running and not waiting)
class MessageBox(QtGui.QWidget):
def __init__(self, parent=None):
QtGui.QWidget.__init__(self, parent)
ser = serial.Serial('/dev/ttyS0', 9600, bytesize=serial.EIGHTBITS,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
timeout=None,
xonxoff=0,
rtscts=0,
interCharTimeout=None)
self.label = QtGui.QLabel(ser.readline(), self)
self.label.move(15, 10)
ser.close()
self.setGeometry(300, 300, 250, 150)
self.setWindowTitle('Authentication')
self.color = QtGui.QColor(0, 0, 0)
self.square = QtGui.QWidget(self)
self.square.setGeometry(120, 20, 100, 100)
self.square.setStyleSheet("QWidget { background-color: %s }" % self.color.name())
A:
You won't miss any bytes, any pending input is buffered.
You have several options:
use a thread that polls the serial port with PySerial/inWaiting()
Use a timer in the main thread that polls the serial port with PySerial/inWaiting.
find the handle of the port and pass it to QSocketNotifier. This works only on linux but in that case, QSocketNotifier will watch the file associated with your serial port and send a signal when there's something available.
Method 2 and 3 are better because you don't need a thread.
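A minimal sketch of option 2, polling from the main thread with a QTimer (the port name and the 100 ms interval are assumptions):
import sys
import serial
from PyQt4 import QtCore, QtGui

class SerialWatcher(QtGui.QWidget):
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)
        # timeout=0 makes read() non-blocking.
        self.ser = serial.Serial('/dev/ttyS0', 9600, timeout=0)
        self.label = QtGui.QLabel('waiting...', self)
        # Poll the port every 100 ms without freezing the GUI.
        self.timer = QtCore.QTimer(self)
        self.connect(self.timer, QtCore.SIGNAL('timeout()'), self.poll)
        self.timer.start(100)

    def poll(self):
        n = self.ser.inWaiting()
        if n:
            self.label.setText(self.ser.read(n))

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    w = SerialWatcher()
    w.show()
    sys.exit(app.exec_())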
|
pyqt4 and pyserial
|
I want to build an app that constantly watches the serial port and changes the user interface according to the input received from the port. I've managed to read lines from the port with pyserial under Linux, but I'm not sure how to do this in a regular fashion: create a separate thread and check for input on a timer event? How do I make sure I don't miss anything? (Implementing some kind of handshake/protocol seems like overkill for this...) And most importantly: how do I do it with the facilities of Qt4?
Edit: This is what I'm doing now (I want to do this periodically with the rest of the app running and not waiting)
class MessageBox(QtGui.QWidget):
def __init__(self, parent=None):
QtGui.QWidget.__init__(self, parent)
ser = serial.Serial('/dev/ttyS0', 9600, bytesize=serial.EIGHTBITS,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
timeout=None,
xonxoff=0,
rtscts=0,
interCharTimeout=None)
self.label = QtGui.QLabel(ser.readline(), self)
self.label.move(15, 10)
ser.close()
self.setGeometry(300, 300, 250, 150)
self.setWindowTitle('Authentication')
self.color = QtGui.QColor(0, 0, 0)
self.square = QtGui.QWidget(self)
self.square.setGeometry(120, 20, 100, 100)
self.square.setStyleSheet("QWidget { background-color: %s }" % self.color.name())
|
[
"You won't miss any bytes, any pending input is buffered.\nYou have several options:\n\nuse a thread that polls the serial port with PySerial/inWaiting() \nUse a timer in the main thread that polls the serial port with PySerial/inWaiting.\nfind the handle of the port and pass it to QSocketNotifier. This works only on linux but in that case, QSocketNotifier will watch the file associated with your serial port and send a signal when there's something available.\n\nMethod 2 and 3 are better because you don't need a thread.\n"
] |
[
4
] |
[] |
[] |
[
"linux",
"pyserial",
"python",
"qt",
"serial_port"
] |
stackoverflow_0000771988_linux_pyserial_python_qt_serial_port.txt
|
Q:
How to add a second bouncing ball to the window?
I have coded an animation (in python) for a beach ball to bounce around a screen. I now wish to add a second ball to the window, and when the two collide for them to bounce off each other.
So far, my attempts at this have been unsuccessful. Any ideas how to do this? The code I have so far is below.
import pygame
import sys
if __name__ =='__main__':
ball_image = 'Beachball.jpg'
bounce_sound = 'Thump.wav'
width = 800
height = 600
background_colour = 0,0,0
caption= 'Bouncing Ball animation'
velocity = [1,1]
pygame.init ()
frame = pygame.display.set_mode ((width, height))
pygame.display.set_caption (caption)
ball= pygame.image.load (ball_image). convert()
ball_boundary = ball.get_rect (center=(300,300))
sound = pygame.mixer.Sound (bounce_sound)
while True:
for event in pygame.event.get():
print event
if event.type == pygame.QUIT: sys.exit(0)
if ball_boundary.left < 0 or ball_boundary.right > width:
sound.play()
velocity[0] = -1 * velocity[0]
if ball_boundary.top < 0 or ball_boundary.bottom > height:
sound.play()
velocity[1] = -1 * velocity[1]
ball_boundary = ball_boundary.move (velocity)
frame.fill (background_colour)
frame.blit (ball, ball_boundary)
pygame.display.flip()
A:
Here's a very basic restructure of your code. It could still be tidied up a lot, but it should show you how you can use instances of the class.
import pygame
import random
import sys
class Ball:
def __init__(self,X,Y):
self.velocity = [1,1]
self.ball_image = pygame.image.load ('Beachball.jpg'). convert()
self.ball_boundary = self.ball_image.get_rect (center=(X,Y))
self.sound = pygame.mixer.Sound ('Thump.wav')
if __name__ =='__main__':
width = 800
height = 600
background_colour = 0,0,0
pygame.init()
frame = pygame.display.set_mode((width, height))
pygame.display.set_caption("Bouncing Ball animation")
num_balls = 1000
ball_list = []
for i in range(num_balls):
ball_list.append( Ball(random.randint(0, width),random.randint(0, height)) )
while True:
for event in pygame.event.get():
print event
if event.type == pygame.QUIT:
sys.exit(0)
frame.fill (background_colour)
for ball in ball_list:
if ball.ball_boundary.left < 0 or ball.ball_boundary.right > width:
ball.sound.play()
ball.velocity[0] = -1 * ball.velocity[0]
if ball.ball_boundary.top < 0 or ball.ball_boundary.bottom > height:
ball.sound.play()
ball.velocity[1] = -1 * ball.velocity[1]
ball.ball_boundary = ball.ball_boundary.move (ball.velocity)
frame.blit (ball.ball_image, ball.ball_boundary)
pygame.display.flip()
A:
You should probably create a class to represent your beachball. Then you'd instance as many as you like, and put the instances in a Python list.
You'd then go through that list on each frame, updating and rendering each.
You would need to include a method to test for collision against another ball (this is simple for circles). If a collision is detected, the balls involved should simulate a bounce away from each other.
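For the circle test mentioned above, a minimal sketch that works with the Ball class from the other answer (it assumes the ball image is roughly circular, so the rect width approximates the diameter; swapping velocities is a crude equal-mass bounce):
import math

def balls_collide(a, b):
    ax, ay = a.ball_boundary.center
    bx, by = b.ball_boundary.center
    # Collide when the centres are closer than the sum of the radii.
    return math.hypot(ax - bx, ay - by) < (a.ball_boundary.width + b.ball_boundary.width) / 2

def bounce(a, b):
    a.velocity, b.velocity = b.velocity, a.velocity

Inside the main loop you would then check each pair once:
for i in range(len(ball_list)):
    for j in range(i + 1, len(ball_list)):
        if balls_collide(ball_list[i], ball_list[j]):
            bounce(ball_list[i], ball_list[j])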
|
How to add a second bouncing ball to the window?
|
I have coded an animation (in python) for a beach ball to bounce around a screen. I now wish to add a second ball to the window, and when the two collide for them to bounce off each other.
So far, my attempts at this have been unsuccessful. Any ideas how to do this? The code I have so far is below.
import pygame
import sys
if __name__ =='__main__':
ball_image = 'Beachball.jpg'
bounce_sound = 'Thump.wav'
width = 800
height = 600
background_colour = 0,0,0
caption= 'Bouncing Ball animation'
velocity = [1,1]
pygame.init ()
frame = pygame.display.set_mode ((width, height))
pygame.display.set_caption (caption)
ball= pygame.image.load (ball_image). convert()
ball_boundary = ball.get_rect (center=(300,300))
sound = pygame.mixer.Sound (bounce_sound)
while True:
for event in pygame.event.get():
print event
if event.type == pygame.QUIT: sys.exit(0)
if ball_boundary.left < 0 or ball_boundary.right > width:
sound.play()
velocity[0] = -1 * velocity[0]
if ball_boundary.top < 0 or ball_boundary.bottom > height:
sound.play()
velocity[1] = -1 * velocity[1]
ball_boundary = ball_boundary.move (velocity)
frame.fill (background_colour)
frame.blit (ball, ball_boundary)
pygame.display.flip()
|
[
"Here's a very basic restructure of your code. It could still be tidied up a lot, but it should show you how you can use instances of the class.\nimport pygame\nimport random\nimport sys\n\nclass Ball:\n def __init__(self,X,Y):\n self.velocity = [1,1]\n self.ball_image = pygame.image.load ('Beachball.jpg'). convert()\n self.ball_boundary = self.ball_image.get_rect (center=(X,Y))\n self.sound = pygame.mixer.Sound ('Thump.wav')\n\nif __name__ =='__main__':\n width = 800\n height = 600\n background_colour = 0,0,0\n pygame.init()\n frame = pygame.display.set_mode((width, height))\n pygame.display.set_caption(\"Bouncing Ball animation\")\n num_balls = 1000\n ball_list = []\n for i in range(num_balls):\n ball_list.append( Ball(random.randint(0, width),random.randint(0, height)) )\n while True:\n for event in pygame.event.get():\n print event \n if event.type == pygame.QUIT:\n sys.exit(0)\n frame.fill (background_colour)\n for ball in ball_list:\n if ball.ball_boundary.left < 0 or ball.ball_boundary.right > width:\n ball.sound.play()\n ball.velocity[0] = -1 * ball.velocity[0]\n if ball.ball_boundary.top < 0 or ball.ball_boundary.bottom > height:\n ball.sound.play()\n ball.velocity[1] = -1 * ball.velocity[1]\n\n ball.ball_boundary = ball.ball_boundary.move (ball.velocity)\n frame.blit (ball.ball_image, ball.ball_boundary)\n pygame.display.flip()\n\n",
"You should probably create a class to represent your beachball. Then you'd instance as many as you like, and put the instances in a Python list.\nYou'd then go through that list on each frame, updating and rendering each.\nYou would need to include a method to test for collision against another ball (this is simple for circles). If a collision is detected, the balls involved should simulate a bounce away from each other.\n"
] |
[
7,
3
] |
[] |
[] |
[
"animation",
"pygame",
"python"
] |
stackoverflow_0000771992_animation_pygame_python.txt
|
Q:
Running python code from standard Cocoa application
I have an XCode project built as a Cocoa single document application (it's not a Python-Cocoa application, that is not what I want).
All the documentation I found assumes I want to create a Cocoa application with code written in Python and this is not the case - I want a standard Cocoa application that calls a method out of a Python class.
Can I use PyObjC to run python code from a file without replacing the Cocoa standard main.c with a main.py file?
Later edit: I am aware of Python.framework but as far as I can tell I can only send parameters to PyRun_SimpleFile through the environment - I'd like to use the bridge directly to exchange data with the python code.
A:
A google search for embed python objective C, returns a few links that might be of interest, in particular:
http://blog.alienoverlord.com/?p=14
http://blog.tlensing.org/2008/11/04/embedding-python-in-a-cocoa-application/
|
Running python code from standard Cocoa application
|
I have an XCode project built as a Cocoa single document application (it's not a Python-Cocoa application, that is not what I want).
All the documentation I found assumes I want to create a Cocoa application with code written in Python and this is not the case - I want a standard Cocoa application that calls a method out of a Python class.
Can I use PyObjC to run python code from a file without replacing the Cocoa standard main.c with a main.py file?
Later edit: I am aware of Python.framework but as far as I can tell I can only send parameters to PyRun_SimpleFile through the environment - I'd like to use the bridge directly to exchange data with the python code.
|
[
"A google search for embed python objective C, returns a few links that might be of interest, in particular:\n\nhttp://blog.alienoverlord.com/?p=14\nhttp://blog.tlensing.org/2008/11/04/embedding-python-in-a-cocoa-application/\n\n"
] |
[
7
] |
[] |
[] |
[
"cocoa",
"macos",
"pyobjc",
"python"
] |
stackoverflow_0000772112_cocoa_macos_pyobjc_python.txt
|
Q:
Oracle / Python Converting to string -> HEX (for RAW column) -> varchar2
I have a table with a RAW column for holding an encrypted string.
I have the PL/SQL code for encrypting from plain text into this field.
I wish to create a trigger containing the encryption code.
I wish to 'misuse' the RAW field to pass the plain text into the trigger. (I can't modify the schema, for example to add another column for the plain text field)
The client inserting the data is Python (cx_Oracle).
My question is how to best convert from a python string into HEX, then back to VARCHAR2 in the trigger so that the encryption code can be used without modification (encryption code expects VARCHAR2).
Here's an example:
create table BOB (name_enc raw(16));
In python. This is my initial attempt at encoding, I suspect I'll need something more portable for international character sets.
name_enc = 'some text'.encode('hex')
The trigger
create or replace trigger enc_bob before insert on BOB
for each row
DECLARE
v_name varchar2(50);
BEGIN
v_name := :new.name_enc; <---- WHAT DO I NEED HERE TO CONVERT FROM HEX to VARCHAR?
--
-- encryption code that expects v_name to contain string
END;
UPDATE
The suggestion for using Base64 worked for me
Python:
name_enc = base64.b64encode('some text')
PL/SQL:
v_name := utl_raw.cast_to_varchar2(UTL_ENCODE.BASE64_DECODE(:new.name_enc));
A:
Do you have to encode to hex?
I think there is a package (utl_encode) available for PL/SQL to decode Base64 for instance, you could use that?
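On the Python side, a minimal sketch that is also safe for international characters (UTF-8 first, then Base64; note that Base64 output is about 4/3 the input length, so the RAW column must be sized accordingly):
import base64

def encode_for_raw(text):
    # Encode unicode to UTF-8 bytes before Base64, so accented
    # characters survive the round trip into the RAW column.
    if isinstance(text, unicode):
        text = text.encode('utf-8')
    return base64.b64encode(text)

name_enc = encode_for_raw(u'some text')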
|
Oracle / Python Converting to string -> HEX (for RAW column) -> varchar2
|
I have a table with a RAW column for holding an encrypted string.
I have the PL/SQL code for encrypting from plain text into this field.
I wish to create a trigger containing the encryption code.
I wish to 'misuse' the RAW field to pass the plain text into the trigger. (I can't modify the schema, for example to add another column for the plain text field)
The client inserting the data is Python (cx_Oracle).
My question is how to best convert from a python string into HEX, then back to VARCHAR2 in the trigger so that the encryption code can be used without modification (encryption code expects VARCHAR2).
Here's an example:
create table BOB (name_enc raw(16));
In python. This is my initial attempt at encoding, I suspect I'll need something more portable for international character sets.
name_enc = 'some text'.encode('hex')
The trigger
create or replace trigger enc_bob before insert on BOB
for each row
DECLARE
v_name varchar2(50);
BEGIN
v_name := :new.name_enc; <---- WHAT DO I NEED HERE TO CONVERT FROM HEX to VARCHAR?
--
-- encryption code that expects v_name to contain string
END;
UPDATE
The suggestion for using Base64 worked for me
Python:
name_enc = base64.b64encode('some text')
PL/SQL:
v_name := utl_raw.cast_to_varchar2(UTL_ENCODE.BASE64_DECODE(:new.name_enc));
|
[
"Do you have to encode to hex?\nI think there is a package (utl_encode) available for PL/SQL to decode Base64 for instance, you could use that? \n"
] |
[
2
] |
[] |
[] |
[
"oracle",
"python"
] |
stackoverflow_0000772518_oracle_python.txt
|
Q:
What does the 'shell' argument in subprocess mean on Windows?
The docs for the subprocess module state that 'If shell is True, the specified command will be executed through the shell'. What does this mean in practice, on a Windows OS?
A:
It means that the command will be executed using the program specified in the COMSPEC environment variable. Usually cmd.exe.
To be exact, subprocess calls the CreateProcess windows api function, passing "cmd.exe /c " + args as the lpCommandLine argument.
If shell==False, the lpCommandLine argument to CreateProcess is simply args.
A:
When you execute an external process, the command you want may look something like "foo arg1 arg2 arg3". If "foo" is an executable, that is what gets executed and given the arguments.
However, often it is the case that "foo" is actually a script of some sort, or maybe a command that is built-in to the shell and not an actual executable file on disk. In this case the system can't execute "foo" directly because, strictly speaking, these sorts of things aren't executable. They need some sort of "shell" to execute them. On *nix systems this shell is typically (but not necessarily) /bin/sh. On windows it will typically be cmd.exe (or whatever is stored in the COMSPEC environment variable).
This parameter lets you define what shell you wish to use to execute your command, for the relatively rare case when you don't want the default.
A:
In using-the-subprocess-module, there is an explicit paragraph:
The executable argument specifies the program to execute. It is very seldom needed: Usually, the program to execute is defined by the args argument. If shell=True, the executable argument specifies which shell to use. On Unix, the default shell is /bin/sh. On Windows, the default shell is specified by the COMSPEC environment variable.
Windows example - the shell (cmd.exe) command date -t will not be recognized without the shell:
>>> p=subprocess.Popen(["date", "/t"], stdout=subprocess.PIPE)
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "C:\Python26\lib\subprocess.py", line 595, in __init__
errread, errwrite)
File "C:\Python26\lib\subprocess.py", line 804, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
>>>
Using a shell, all is well:
>>> p=subprocess.Popen(["date", "/t"], shell=True, stdout=subprocess.PIPE)
>>> p.communicate()
('Wed 04/22/2009 \r\n', None)
>>>
A:
In addition to what was said in other answers, it is useful in practice if you want to open a file in the default viewer for that file type. For instance, if you want to open an HTML or PDF file, but will not know which browser or viewer is installed on the systems it will be run on, or have no guarantees as to the path to the executable, you can simply pass the file name as the only argument for the args field, then set shell=True. This will have Windows use whatever program is associated with that file type.
One caveat, if the path to your file has spaces, you need to surround it with two ".
eg.
path = "C:\\Documents and Settings\\Bob\\Desktop\\New Folder\\README.txt"
subprocess.call('""' + path + '""', shell = True)
|
What does the 'shell' argument in subprocess mean on Windows?
|
The docs for the subprocess module state that 'If shell is True, the specified command will be executed through the shell'. What does this mean in practice, on a Windows OS?
|
[
"It means that the command will be executed using the program specified in the COMSPEC environment variable. Usually cmd.exe.\nTo be exact, subprocess calls the CreateProcess windows api function, passing \"cmd.exe /c \" + args as the lpCommandLine argument. \nIf shell==False, the lpCommandLine argument to CreateProcess is simply args.\n",
"When you execute an external process, the command you want may look something like \"foo arg1 arg2 arg3\". If \"foo\" is an executable, that is what gets executed and given the arguments. \nHowever, often it is the case that \"foo\" is actually a script of some sort, or maybe a command that is built-in to the shell and not an actual executable file on disk. In this case the system can't execute \"foo\" directly because, strictly speaking, these sorts of things aren't executable. They need some sort of \"shell\" to execute them. On *nix systems this shell is typically (but not necessarily) /bin/sh. On windows it will typically be cmd.exe (or whatever is stored in the COMSPEC environment variable).\nThis parameter lets you define what shell you wish to use to execute your command, for the relatively rare case when you don't want the default.\n",
"In using-the-subprocess-module, there is an explicit paragraph:\n\nThe executable argument specifies the program to execute. It is very seldom needed: Usually, the program to execute is defined by the args argument. If shell=True, the executable argument specifies which shell to use. On Unix, the default shell is /bin/sh. On Windows, the default shell is specified by the COMSPEC environment variable.\n\nWindows example - the shell (cmd.exe) command date -t will not be recognized without the shell:\n>>> p=subprocess.Popen([\"date\", \"/t\"], stdout=subprocess.PIPE)\nTraceback (most recent call last):\n File \"<interactive input>\", line 1, in <module>\n File \"C:\\Python26\\lib\\subprocess.py\", line 595, in __init__\n errread, errwrite)\n File \"C:\\Python26\\lib\\subprocess.py\", line 804, in _execute_child\n startupinfo)\nWindowsError: [Error 2] The system cannot find the file specified\n>>> \n\nUsing a shell, all is well:\n>>> p=subprocess.Popen([\"date\", \"/t\"], shell=True, stdout=subprocess.PIPE)\n>>> p.communicate()\n('Wed 04/22/2009 \\r\\n', None)\n>>>\n\n",
"In addition to what was said in other answers, it is useful in practice if you want to open a file in the default viewer for that file type. For instance, if you want to open an HTML or PDF file, but will not know which browser or viewer is installed on the systems it will be run on, or have no guarantees as to the path to the executable, you can simply pass the file name as the only argument for the args field, then set shell=True. This will have Windows use whatever program is associated with that file type.\nOne caveat, if the path to your file has spaces, you need to surround it with two \".\neg.\npath = \"C:\\\\Documents and Settings\\\\Bob\\\\Desktop\\\\New Folder\\\\README.txt\"\nsubprocess.call('\"\"' + path + '\"\"', shell = True)\n\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"python",
"subprocess"
] |
stackoverflow_0000771816_python_subprocess.txt
|
Q:
How to create a password protected zipfile with python?
Since python2.6, it's now easier to extract data from a password protected zip. But how to create a password protected zipfile in pure python ?
A:
I've looked for this in the past and been unsuccessful. (I'd love to see a solution get posted!)
One option is a commercial package from chilkatsoft that will do this, but at $150. Makes sense if you are doing a commercial app, but tough to swallow otherwise.
I wound up calling out to the system for my solution, a while ago. Unfortunately, this locks it to a platform.
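For the record, a minimal sketch of that call-out-to-the-system approach (it assumes the Info-ZIP zip tool is installed and on the PATH, which is exactly what ties it to a platform; -P uses the classic, weak ZIP encryption and exposes the password to process listings):
import subprocess

def make_encrypted_zip(archive, files, password):
    # Shell out to the external 'zip' binary with a password.
    rc = subprocess.call(['zip', '-P', password, archive] + list(files))
    if rc != 0:
        raise RuntimeError('zip failed with exit code %d' % rc)

make_encrypted_zip('secret.zip', ['report.txt'], 's3cret')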
|
How to create a password protected zipfile with python?
|
Since python2.6, it's now easier to extract data from a password protected zip. But how to create a password protected zipfile in pure python ?
|
[
"I've looked for this in the past and been unsuccessful. (I'd love to see a solution get posted!)\nOne option is a commercial package from chilkatsoft that will do this, but at $150. Makes sense if you are doing a commercial app, but tough to swallow otherwise.\nI wound up calling out to the system for my solution, a while ago. Unfortunately, this locks it to a platform.\n"
] |
[
3
] |
[] |
[] |
[
"python",
"zip"
] |
stackoverflow_0000772814_python_zip.txt
|
Q:
Python +sockets
I have to create a server<=>client connection. I use this code:
Server:
import socket
HOST = 'localhost'
PORT = 50007
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
print 'Connected by', addr
while 1:
data = conn.recv(1024)
if not data: break
conn.send(data)
conn.close()
Client:
import socket
HOST = 'localhost'
PORT = 50007
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('Hello, world')
data = s.recv(1024)
s.close()
print 'Received', repr(data)
It works fine! But only if the server is created on a computer that isn't behind a router.
If you have a router, you have to open port 50007 on your modem before creating the server. How can I create the server on all computers without opening ports? Torrent clients do it somehow.
Thanks.
A:
The question is a little confusing, but I will try to help out. Basically, if the port (50007) is blocked on the server machine by a firewall, you will NOT be able to make a tcp connection to it from the client. That is the purpose of the firewall. A lot of protocols (SIP and bittorrent for example) do use firewall and NAT navigation strategies, but that is a complex subject that you can get more information on here. You will note that to use bittorrent effectively, you have to enable port forwarding for NAT and unblock port ranges for firewalls. Also, bittorrent uses tcp connections for most data transfer. Here is the takeaway:
First, note that there are two types of connections that the BitTorrent program must make:
Outbound HTTP connections to the tracker, usually on port 6969.
Inbound and outbound connections to the peer machines, usually on port 6881 and up.
A:
Very difficult to understand your question...
(...) Torrent-clients do it somehow.
The Torrent-clients can do this only when the router -- Internet gateway device (IGD) -- supports the uPNP protocol. The interesting part for your problem is the section about NAT traversal.
|
Python +sockets
|
i have to create connecting server<=>client. I use this code:
Server:
import socket
HOST = 'localhost'
PORT = 50007
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
print 'Connected by', addr
while 1:
data = conn.recv(1024)
if not data: break
conn.send(data)
conn.close()
Client:
import socket
HOST = 'localhost'
PORT = 50007
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('Hello, world')
data = s.recv(1024)
s.close()
print 'Received', repr(data)
It works fine! But only if the server is created on a computer that isn't behind a router.
If you have a router, you have to open port 50007 on your modem before creating the server. How can I create the server on all computers without opening ports? Torrent clients do it somehow.
Thanks.
|
[
"The question is a little confusing, but I will try to help out. Basically, if the port (50007) is blocked on the server machine by a firewall, you will NOT be able to make a tcp connection to it from the client. That is the purpose of the firewall. A lot of protocols (SIP and bittorrent for example) do use firewall and NAT navigation strategies, but that is a complex subject that you can get more information on here. You will note that to use bittorrent effectively, you have to enable port forwarding for NAT and unblock port ranges for firewalls. Also, bittorrent uses tcp connections for most data transfer. Here is the takeaway:\n\nFirst, note that there are two types of connections that the BitTorrent program must make:\n\nOutbound HTTP connections to the tracker, usually on port 6969.\nInbound and outbound connections to the peer machines, usually on port 6881 and up.\n\n\n",
"Very difficult to understand your question...\n\n(...) Torrent-clients do it somehow.\n\nThe Torrent-clients can do this only when the router -- Internet gateway device (IGD) -- supports the uPNP protocol. The interesting part for your problem is the section about NAT traversal.\n"
] |
[
7,
2
] |
[] |
[] |
[
"ports",
"python",
"sockets"
] |
stackoverflow_0000773869_ports_python_sockets.txt
|
Q:
classmethod for Tkinter-Monitor-Window
I would like to realise a monitor window that informs the user about ongoing computations. To do so I wrote a little class. But as I would like to use it across different modules in an easy fashion, I thought to implement it with classmethods. This allows it to be used in the following way, without instances:
from MonitorModule import Monitor
Monitor.write("xyz")
Also, if I use it in an other module, the output of Monitor.write() within other_module.py will be displayed in the same window.
This I can import in each module to redirect specific outputs to the same monitor. I got it to work except one little thing that I don't understand. I can't close the Monitor-window with the specific handler that I wrote. I could do it with the non-classmethod-way but not with the handler as a classmethod.
Look at the code:
import Tkinter
class Monitor_non_classmethod_way(object):
def __init__(self):
self.mw = Tkinter.Tk()
self.mw.title("Messages by NeuronSimulation")
self.text = Tkinter.Text(self.mw, width = 80, height = 30)
self.text.pack()
self.mw.protocol(name="WM_DELETE_WINDOW", func=self.handler)
self.is_mw = True
def write(self, s):
if self.is_mw:
self.text.insert(Tkinter.END, str(s) + "\n")
else:
print str(s)
def handler(self):
self.is_mw = False
self.mw.quit()
self.mw.destroy()
class Monitor(object):
@classmethod
def write(cls, s):
if cls.is_mw:
cls.text.insert(Tkinter.END, str(s) + "\n")
else:
print str(s)
@classmethod
def handler(cls):
cls.is_mw = False
cls.mw.quit()
cls.mw.destroy()
mw = Tkinter.Tk()
mw.title("Messages by NeuronSimulation")
text = Tkinter.Text(mw, width = 80, height = 30)
text.pack()
mw.protocol(name="WM_DELETE_WINDOW", func=handler)
close = handler
is_mw = True
a = Monitor_non_classmethod_way()
a.write("Hello Monitor one!")
# click the close button: it works
b = Monitor()
Monitor.write("Hello Monitor two!")
# click the close button: it DOESN'T work, BUT:
# >>> Monitor.close()
# works...
So, the classmethod seems to work and also seems to be accessible in the right way! Any idea, what went wrong, that it doesn't work with the button?
Cheers, Philipp
A:
You don't need lots of classmethods just to make it easy to use an object across multiple modules.
Instead consider making an instance at module import time as shown here:
import Tkinter
class Monitor(object):
def __init__(self):
self.mw = Tkinter.Tk()
self.mw.title("Messages by NeuronSimulation")
self.text = Tkinter.Text(self.mw, width = 80, height = 30)
self.text.pack()
self.mw.protocol(name="WM_DELETE_WINDOW", func=self.handler)
self.is_mw = True
def write(self, s):
if self.is_mw:
self.text.insert(Tkinter.END, str(s) + "\n")
else:
print str(s)
def handler(self):
self.is_mw = False
self.mw.quit()
self.mw.destroy()
monitor = Monitor()
other_module.py
from monitor import monitor
monitor.write("Foo")
|
classmethod for Tkinter-Monitor-Window
|
I would like to realise a monitor window that informs the user about ongoing computations. To do so I wrote a little class. But as I would like to use it across different modules in an easy fashion, I thought to implement it with classmethods. This allows it to be used in the following way, without instances:
from MonitorModule import Monitor
Monitor.write("xyz")
Also, if I use it in an other module, the output of Monitor.write() within other_module.py will be displayed in the same window.
This I can import in each module to redirect specific outputs to the same monitor. I got it to work except one little thing that I don't understand. I can't close the Monitor-window with the specific handler that I wrote. I could do it with the non-classmethod-way but not with the handler as a classmethod.
Look at the code:
import Tkinter
class Monitor_non_classmethod_way(object):
def __init__(self):
self.mw = Tkinter.Tk()
self.mw.title("Messages by NeuronSimulation")
self.text = Tkinter.Text(self.mw, width = 80, height = 30)
self.text.pack()
self.mw.protocol(name="WM_DELETE_WINDOW", func=self.handler)
self.is_mw = True
def write(self, s):
if self.is_mw:
self.text.insert(Tkinter.END, str(s) + "\n")
else:
print str(s)
def handler(self):
self.is_mw = False
self.mw.quit()
self.mw.destroy()
class Monitor(object):
@classmethod
def write(cls, s):
if cls.is_mw:
cls.text.insert(Tkinter.END, str(s) + "\n")
else:
print str(s)
@classmethod
def handler(cls):
cls.is_mw = False
cls.mw.quit()
cls.mw.destroy()
mw = Tkinter.Tk()
mw.title("Messages by NeuronSimulation")
text = Tkinter.Text(mw, width = 80, height = 30)
text.pack()
mw.protocol(name="WM_DELETE_WINDOW", func=handler)
close = handler
is_mw = True
a = Monitor_non_classmethod_way()
a.write("Hello Monitor one!")
# click the close button: it works
b = Monitor()
Monitor.write("Hello Monitor two!")
# click the close button: it DOESN'T work, BUT:
# >>> Monitor.close()
# works...
So, the classmethod seems to work and also seems to be accessible in the right way! Any idea, what went wrong, that it doesn't work with the button?
Cheers, Philipp
|
[
"You don't need lots of classmethods just to make it easy to use an object across multiple modules.\nInstead consider making an instance at module import time as shown here:\nimport Tkinter\n\nclass Monitor(object):\n\n def __init__(self):\n self.mw = Tkinter.Tk()\n self.mw.title(\"Messages by NeuronSimulation\")\n self.text = Tkinter.Text(self.mw, width = 80, height = 30)\n self.text.pack()\n self.mw.protocol(name=\"WM_DELETE_WINDOW\", func=self.handler)\n self.is_mw = True\n\n def write(self, s):\n if self.is_mw:\n self.text.insert(Tkinter.END, str(s) + \"\\n\")\n else:\n print str(s)\n\n def handler(self):\n self.is_mw = False\n self.mw.quit()\n self.mw.destroy()\n\nmonitor = Monitor()\n\nother_module.py\nfrom monitor import monitor\nmonitor.write(\"Foo\")\n\n"
] |
[
3
] |
[] |
[] |
[
"class_method",
"python",
"tkinter"
] |
stackoverflow_0000768474_class_method_python_tkinter.txt
|
Q:
Good python library for generating audio files?
Can anyone recommend a good library for generating an audio file, such as mp3, wav, or even midi, from python?
I've seen recommendations for working with the id tags (song name, artist, etc) in mp3 files, but this is not my goal.
A:
See http://wiki.python.org/moin/Audio/ and http://wiki.python.org/moin/PythonInMusic, maybe some of the projects listed there can be of help.
Also, Google is your friend.
A:
I've never used it, but check out ounk.
|
Good python library for generating audio files?
|
Can anyone recommend a good library for generating an audio file, such as mp3, wav, or even midi, from python?
I've seen recommendations for working with the id tags (song name, artist, etc) in mp3 files, but this is not my goal.
|
[
"See http://wiki.python.org/moin/Audio/ and http://wiki.python.org/moin/PythonInMusic, maybe some of the projects listed there can be of help.\nAlso, Google is your friend.\n",
"I've never used it, but check out ounk.\n"
] |
[
9,
0
] |
[] |
[] |
[
"audio",
"mp3",
"python"
] |
stackoverflow_0000045385_audio_mp3_python.txt
|
Q:
(Python) socket.gaierror: [Errno 11001] getaddrinfo failed
I'm not sure what's wrong with this code; I keep getting that socket.gaierror.
import sys
import socket
import random
filename = "whoiservers.txt"
server_name = random.choice(list(open(filename)))
print "connecting to %s..." % server_name
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((server_name, 43))
s.send(sys.argv[1] + "\r\n")
response = ''
while True:
d = s.recv(4096)
response += d
if d == '':
break
s.close()
print
print response
s.connect((server_name, 43))
File "<string>", line 1, in connect
socket.gaierror: [Errno 11001] getaddrinfo failed
Update:
After adding server_name = random.choice(list(open(filename)))[:-1] I don't get that socket.gaierror anymore, but I get:
socket.error: [Errno 10060] A connection attempt failed because the connected pa
rty did not properly respond after a period of time, or established connection f
ailed because connected host has failed to respond
A:
I think the problem is a newline at the end of server_name.
If the format of your file whoiservers.txt is one hostname on each line then you need to strip the newline at the end of the hostname before passing it to s.connect()
So, for example, change the open line to:
server_name = random.choice(list(open(filename)))[:-1]
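A slightly more robust variant is strip(), which copes with '\r\n' line endings and with a final line that has no trailing newline at all:
# strip() removes any surrounding whitespace, including \r\n
server_name = random.choice(list(open(filename))).strip()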
A:
Perhaps you have a firewall in between you and these servers that is blocking the request? The last error you posted leads one to believe that it cannot connect to the server at all...
|
(Python) socket.gaierror: [Errno 11001] getaddrinfo failed
|
I'm not sure what's wrong with this code; I keep getting that socket.gaierror.
import sys
import socket
import random
filename = "whoiservers.txt"
server_name = random.choice(list(open(filename)))
print "connecting to %s..." % server_name
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((server_name, 43))
s.send(sys.argv[1] + "\r\n")
response = ''
while True:
d = s.recv(4096)
response += d
if d == '':
break
s.close()
print
print response
s.connect((server_name, 43))
File "<string>", line 1, in connect
socket.gaierror: [Errno 11001] getaddrinfo failed
Update:
After adding server_name = random.choice(list(open(filename)))[:-1] I don't get that socket.gaierror anymore, but I get:
socket.error: [Errno 10060] A connection attempt failed because the connected pa
rty did not properly respond after a period of time, or established connection f
ailed because connected host has failed to respond
|
[
"I think the problem is a newline at the end of server_name.\nIf the format of your file whoiservers.txt is one hostname on each line then you need to strip the newline at the end of the hostname before passing it to s.connect()\nSo, for example, change the open line to:\nserver_name = random.choice(list(open(filename)))[:-1]\n\n",
"Perhaps you have a firewall in between you and these servers that is blocking the request? The last error you posted leads one to believe that it cannot connect to the server at all...\n"
] |
[
7,
0
] |
[] |
[] |
[
"python",
"sockets"
] |
stackoverflow_0000771247_python_sockets.txt
|
Q:
SFTP listing directory
I'm trying to make a connection to a secure SFTP site; however, I'm not able to list the directory. It's possible to connect using Python "expect" or PHP "ssh2_connect", but it gives me the following message: Received disconnect from xx.xx.xx.
If I use a GUI application like WinSCP I'm able to go to the SFTP server and retrieve files.
I need to script it, thus a CLI interface is needed.
PS: just in case someone ran into this. I'm trying to connect to Avisena.com sftp server
A:
You can do this easily with paramiko, checkout SFTP with Python
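A minimal sketch with paramiko (host, credentials and file names below are placeholders, not values from the question):
import paramiko

transport = paramiko.Transport(('sftp.example.com', 22))
transport.connect(username='user', password='secret')
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    # List the remote working directory.
    for name in sftp.listdir('.'):
        print name
    sftp.get('remote.txt', 'local.txt')  # retrieve a file
finally:
    sftp.close()
    transport.close()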
|
SFTP listing directory
|
I'm trying to make a connection to a secure SFTP site; however, I'm not able to list the directory. It's possible to connect using Python "expect" or PHP "ssh2_connect", but it gives me the following message: Received disconnect from xx.xx.xx.
If I use a GUI application like WinSCP I'm able to go to the SFTP server and retrieve files.
I need to script it, thus a CLI interface is needed.
PS: just in case someone ran into this. I'm trying to connect to Avisena.com sftp server
|
[
"You can do this easily with paramiko, checkout SFTP with Python\n"
] |
[
3
] |
[] |
[] |
[
"php",
"python",
"sftp"
] |
stackoverflow_0000774039_php_python_sftp.txt
|
Q:
Business rules for calculating prices
The business I work for is an on-line retailer, I'm currently working on a project that among other things involves calculating the customer prices for products. We will probably create a service that looks something like...
public interface IPriceService
{
decimal CalculateCustomerPrice(ISupplierPriceProvider product);
}
public interface ISupplierPriceProvider
{
decimal SupplierPrice { get; }
string Currency { get; }
}
Don't worry, it will not look exactly like that, but you get the general idea. In our implementation of this service there will be a number of rules for calculating this price; these rules can change quite often, and what we probably want to do sometime down the line is create some sort of DSL for these rules. At the moment, though, we're not quite sure what changes will actually be requested by the sales department and so forth, so I'm thinking about hosting the DLR and having an IronPython or IronRuby script file that contains a lot of the price calculation. This way we can rapidly update the price calculation rules and also get a feel for what type of DSL the business people need. Does this sound like a sane idea at all, and does anyone have any links, articles or tutorials on how to host the DLR and let the script files interact with CLR objects and return values?
A:
It definitely sounds like a sane idea to me. You can trivially access CLR internals (objects and return values) from IronPython, I don't know about IronRuby. Chapters 1 and 7 of IronPython in Action are available online and would probably be helpful. There is also a "hello world" style tutorial available at the learning python blog.
|
Business rules for calculating prices
|
The business I work for is an on-line retailer, I'm currently working on a project that among other things involves calculating the customer prices for products. We will probably create a service that looks something like...
public interface IPriceService
{
decimal CalculateCustomerPrice(ISupplierPriceProvider product);
}
public interface ISupplierPriceProvider
{
decimal SupplierPrice { get; }
string Currency { get; }
}
Don't worry, it will not look exactly like that, but you get the general idea. In our implementation of this service there will be a number of rules for calculating this price; these rules can change quite often, and what we probably want to do sometime down the line is create some sort of DSL for these rules. At the moment, though, we're not quite sure what changes will actually be requested by the sales department and so forth, so I'm thinking about hosting the DLR and having an IronPython or IronRuby script file that contains a lot of the price calculation. This way we can rapidly update the price calculation rules and also get a feel for what type of DSL the business people need. Does this sound like a sane idea at all, and does anyone have any links, articles or tutorials on how to host the DLR and let the script files interact with CLR objects and return values?
|
[
"It definitely sounds like a sane idea to me. You can trivially access CLR internals (objects and return values) from IronPython, I don't know about IronRuby. Chapters 1 and 7 of IronPython in Action are available online and would probably be helpful. There is also a \"hello world\" style tutorial available at the learning python blog.\n"
] |
[
1
] |
[] |
[] |
[
"dynamic_language_runtime",
"python",
"ruby",
"rules",
"scripting"
] |
stackoverflow_0000774245_dynamic_language_runtime_python_ruby_rules_scripting.txt
|
Q:
Python scripted mp3 database, with a php front end
So, here's the deal. I am attempting to write a quick python script that reads the basic id3 tags from an mp3 (artist, album, songname, genre, etc). The python script will use most likely the mutagen library (unless you know of a better one). I'm not sure how to recursively scan through a directory to get each mp3's tags, and then fill a database. Also, as far as the database end, I want to make it as solid as possible, so I was wondering if anyone had any ideas on how I should design the database itself. Should I just use one big table, should I use certain relationships, etc. I am not very good at relational databases so I would appreciate any help. Oh, this is running on a linux box.
A:
To get started with extracting ID3 tags in Python, there's a module for that.
from ID3 import ID3
mp3_filepath = r'/music/song.mp3'
id3_data = ID3(mp3_filepath)
print 'Artist:', id3_data['ARTIST']
print 'Title:', id3_data['TITLE']
More info on ID3 module.
If you want to recursively search a directory for mp3 files, the built-in os module can do that:
import os
def mp3_files(root):
# this is a generator that will return mp3 file paths within given dir
for f in os.listdir(root):
fullpath = os.path.join(root,f)
if os.path.isdir(fullpath) and not os.path.islink(fullpath):
for x in mp3_files(fullpath): # recurse into subdir
yield x
else:
if fullpath[len(fullpath)-3:] == 'mp3':
yield fullpath
for p in mp3_files(root_dir):
id3_data = ID3(p)
print 'Artist:', id3_data['ARTIST']
print 'Title:', id3_data['TITLE']
Reference.
In terms of creating the database, you don't need to reinvent the wheel (storing music data is a common database problem) -- a Google search will help you out. Here's one example.
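As a starting point, a minimal sketch of a normalised layout using the built-in sqlite3 module (this table design is one reasonable choice, not the only one):
import sqlite3

conn = sqlite3.connect('music.db')
conn.executescript('''
CREATE TABLE IF NOT EXISTS artist (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE
);
CREATE TABLE IF NOT EXISTS album (
    id        INTEGER PRIMARY KEY,
    artist_id INTEGER REFERENCES artist(id),
    title     TEXT
);
CREATE TABLE IF NOT EXISTS track (
    id       INTEGER PRIMARY KEY,
    album_id INTEGER REFERENCES album(id),
    title    TEXT,
    genre    TEXT,
    path     TEXT UNIQUE  -- file path, so rescans can update in place
);
''')
conn.commit()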
|
Python scripted mp3 database, with a php front end
|
So, here's the deal. I am attempting to write a quick python script that reads the basic id3 tags from an mp3 (artist, album, songname, genre, etc). The python script will use most likely the mutagen library (unless you know of a better one). I'm not sure how to recursively scan through a directory to get each mp3's tags, and then fill a database. Also, as far as the database end, I want to make it as solid as possible, so I was wondering if anyone had any ideas on how I should design the database itself. Should I just use one big table, should I use certain relationships, etc. I am not very good at relational databases so I would appreciate any help. Oh, this is running on a linux box.
|
[
"To get started with extracting ID3 tags in Python, there's a module for that.\nfrom ID3 import ID3\n\nmp3_filepath = r'/music/song.mp3'\nid3_data = ID3(mp3_filepath)\nprint 'Artist:', id3_data['ARTIST']\nprint 'Title:', id3_data['TITLE']\n\nMore info on ID3 module.\nIf you want to recursively search a directory for mp3 files, the built-in os module can do that:\nimport os\n\ndef mp3_files(root):\n # this is a generator that will return mp3 file paths within given dir\n for f in os.listdir(root):\n fullpath = os.path.join(root,f)\n if os.path.isdir(fullpath) and not os.path.islink(fullpath):\n for x in mp3_files(fullpath): # recurse into subdir\n yield x\n else:\n if fullpath[len(fullpath)-3:] == 'mp3':\n yield fullpath\n\nfor p in mp3_files(root_dir):\n id3_data = ID3(p)\n print 'Artist:', id3_data['ARTIST']\n print 'Title:', id3_data['TITLE']\n\nReference.\nIn terms of creating the database, you don't need to reinvent the wheel (storing music data is a common database problem) -- a Google search will help you out. Here's one example.\n"
] |
[
4
] |
[] |
[] |
[
"database",
"id3",
"mp3",
"php",
"python"
] |
stackoverflow_0000774502_database_id3_mp3_php_python.txt
|
Q:
Programmatically taking screenshots in windows without the application noticing
There are various ways to take screenshots of a running application in Windows. However, I hear that an application can be tailored such that it can notice when a screenshot is being taken of it, through some windows event handlers perhaps? Is there any way of taking a screenshot such that it is impossible for the application to notice? (Maybe even running the application inside a VM, and taking a screenshot from the host?) I'd prefer solutions in Python, but anything will do.
A:
There will certainly be no protection against a screenshot taken with a digital camera.
A:
> I hear that an application can be tailored such that it can notice when a screenshot is being taken of it
Complete nonsense.
Don't repeat what kids say...
Read MSDN about screenshots.
A:
Do you have a particular anti-screenshot program in mind? Ultimately, you're right, running the app in a VM will trump any 'protection' it has, but the method depends on which OS/VM you're using, and it's not worth the overhead until it's needed.
I'd just use this: Take a screenshot via a python script. [Linux] (Windows only)
A:
One could use Remote Desktop or the (low-level) VNC Mirror Driver and take the screenshot on an another computer.
|
Programmatically taking screenshots in windows without the application noticing
|
There are various ways to take screenshots of a running application in Windows. However, I hear that an application can be tailored such that it can notice when a screenshot is being taken of it, through some windows event handlers perhaps? Is there any way of taking a screenshot such that it is impossible for the application to notice? (Maybe even running the application inside a VM, and taking a screenshot from the host?) I'd prefer solutions in Python, but anything will do.
|
[
"There will certainly be no protection against a screenshot taken with a digital camera.\n",
"> I hear that an application can be tailored such that it can notice when a screenshot is being taken of it\nComplete nonsense.\nDon't repeat what kids say...\nRead MSDN about screenshots.\n",
"Do you have a particular anti-screenshot program in mind? Ultimately, you're right, running the app in a VM will trump any 'protection' it has, but the method depends on which OS/VM you're using, and it's not worth the overhead until it's needed.\nI'd just use this: Take a screenshot via a python script. [Linux] (Windows only)\n",
"One could use Remote Desktop or the (low-level) VNC Mirror Driver and take the screenshot on an another computer.\n"
] |
[
6,
3,
2,
0
] |
[] |
[] |
[
"events",
"operating_system",
"python",
"screenshot",
"windows"
] |
stackoverflow_0000767212_events_operating_system_python_screenshot_windows.txt
|
Q:
how can I decode the REG_BINARY value HKLM\Software\Microsoft\Ole\DefaultLaunchPermission to see which users have permission?
I am trying to find a way to decode the REG_BINARY value for "HKLM\Software\Microsoft\Ole\DefaultLaunchPermission" to see which users have permissions by default, and if possible, a method in which I can also append other users by their username.
At work we make use of DCOM and for the most part we always give the same users permission but in some cases we are forced to accommodate our clients and add custom users/groups to suit their needs. Unfortunately the custom users we need to add are random usernames so I am unable to just add all the users and copy the value from the key like I have done with the default users we use 95% of the time.
I am currently working on a command-line executable application where I have a command to set permissions for pre-defined users but I would also like to be able to add an option to append a custom user(s) to add to the default permission along with our predefined default users list.
Currently, to set the default users list with my application I would just type:
MyTool.exe Policies
But I would like to be able to make it a bit more verbose, closer to how NET command is used for windows users, something like:
MyTool.exe Policies /ADD:"MyCustomUsername"
The problem is that the data stored in the REG_BINARY value doesn't seem to be easily decoded. I was able to decode the hex portion in Python, but I am left with some sort of binary data that I don't know what to do with, as I don't even know what kind of encoding was used in the first place. :P
I have done quite a bit of googling, but I think my lack of understanding of the terminology around this topic has probably caused me to overlook the answer without recognizing it for what it is.
I guess my first real question would have to be what kind of encoding is used for the above key after it has been decoded from hex?
Or better yet, is it even possible to obtain/modify the key's value programmatically so that I can obtain a list of the users that are currently set, and if necessary, append additional users/groups?
I would prefer to keep this application written strictly in Python if possible (or WMI/WMIC), but if necessary I can attempt to implement other types of code into my python application if it means getting the job finally done! I guess it would also be useful to mention that this application is primarily used on Windows XP Professional and most Windows Server versions so I am not worried if any possible solution will not be compatible with earlier Windows OS version(s).
Any assistance, code or just some simple help with getting familiar with this topic, would be GREATLY appreciated!
Thanks in advance for any help you can provide!! :D
A:
We came across similar issues when installing a COM server that was hosted by our .NET service, i.e. we wanted to programmatically alter the COM ACLs in our install logic. I think you'll find that it's just a binary ACL format that you can manipulate in .NET using the class:
System.Security.AccessControl.CommonSecurityDescriptor
So sorry I can't help you in getting a Python solution, but if your back is to the wall and you can manage .NET, some sample code would look like:
int launchMask = (int) (COM_RIGHTS.EXECUTE | COM_RIGHTS.EXECUTE_LOCAL | COM_RIGHTS.ACTIVATE_LOCAL);
SecurityIdentifier sidAdmins = new SecurityIdentifier(WellKnownSidType.BuiltinAdministratorsSid, null);
SecurityIdentifier sidInteractive = new SecurityIdentifier(WellKnownSidType.InteractiveSid, null);
DiscretionaryAcl launchAcl = new DiscretionaryAcl(false, false, 3);
launchAcl.AddAccess(AccessControlType.Allow, sidAdmins, launchMask, InheritanceFlags.None, PropagationFlags.None);
launchAcl.AddAccess(AccessControlType.Allow, sidInteractive, launchMask, InheritanceFlags.None, PropagationFlags.None);
CommonSecurityDescriptor launchSD = new CommonSecurityDescriptor(false,
false,
ControlFlags.DiscretionaryAclPresent | ControlFlags.SelfRelative,
sidAdmins,
sidAdmins,
null,
launchAcl);
byte[] launchPermission = new byte[launchSD.BinaryLength];
launchSD.GetBinaryForm(launchPermission, 0);
You then take the launch permission byte array and write it to the registry. If .NET is a non-starter you can at least have a look at how the .NET classes work and see what win32 functions they use. You can either use the reflector tool to look at the relevant assembly, or MSFT actually publish the .NET source.
A:
Well REG_BINARY isn't any particular format, it's just a way to tell the registry the data is a custom binary format. So you're right about needing to find out what's in there.
Also, what do you mean by converting the data from hex? Are you unpacking it? I doubt you're interpreting it correctly until you know what has been saved in there in the first place.
Once you find out what's in that registry field, python's struct module will be your best friend.
http://docs.python.org/library/struct.html
Further reading (you've probably already seen these)
http://msdn.microsoft.com/en-us/library/ms687317(VS.85).aspx
<--windows info on perms
http://docs.python.org/library/_winreg.html
<-- python registry access
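For what it's worth, reading the raw bytes from Python is the easy part; interpreting them is the hard part (the blob is a self-relative binary security descriptor). A minimal Python 2 sketch using _winreg:
import _winreg, binascii

# Dump the raw value so you can see what you are dealing with
key = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Ole")
value, vtype = _winreg.QueryValueEx(key, "DefaultLaunchPermission")
_winreg.CloseKey(key)

print vtype == _winreg.REG_BINARY   # True
print binascii.hexlify(value)       # raw SECURITY_DESCRIPTOR bytes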
|
how can I decode the REG_BINARY value HKLM\Software\Microsoft\Ole\DefaultLaunchPermission to see which users have permission?
|
I am trying to find a way to decode the REG_BINARY value for "HKLM\Software\Microsoft\Ole\DefaultLaunchPermission" to see which users have permissions by default, and if possible, a method in which I can also append other users by their username.
At work we make use of DCOM and for the most part we always give the same users permission but in some cases we are forced to accommodate our clients and add custom users/groups to suit their needs. Unfortunately the custom users we need to add are random usernames so I am unable to just add all the users and copy the value from the key like I have done with the default users we use 95% of the time.
I am currently working on a command-line executable application where I have a command to set permissions for pre-defined users but I would also like to be able to add an option to append a custom user(s) to add to the default permission along with our predefined default users list.
Currently, to set the default users list with my application I would just type:
MyTool.exe Policies
But I would like to be able to make it a bit more verbose, closer to how NET command is used for windows users, something like:
MyTool.exe Policies /ADD:"MyCustomUsername"
The problem is that the data stored in the REG_BINARY value doesn't seem to be easily decoded. I was able to decode the hex portion in python but I am left with some sort of binary data which I don't have a clue what to do with as I don't even know what kind of encoding was used in the first place to know what to use to decode it. :P
I have done quite a bit of googling but I think my lack of understanding the terminology around this topic has probably caused me to overlook the answer without recognizing it for what it is.
I guess my first real question would have to be what kind of encoding is used for the above key after it has been decoded from hex?
Or better yet, is it even possible to obtain/modify the key's value programmatically so that I can obtain a list of the users that are currently set, and if necessary, append additional users/groups?
I would prefer to keep this application written strictly in Python if possible (or WMI/WMIC), but if necessary I can attempt to implement other types of code into my python application if it means getting the job finally done! I guess it would also be useful to mention that this application is primarily used on Windows XP Professional and most Windows Server versions so I am not worried if any possible solution will not be compatible with earlier Windows OS version(s).
Any assistance, code or just some simple help with getting familiar with this topic, would be GREATLY appreciated!
Thanks in advance for any help you can provide!! :D
|
[
"We came across similar issues when installing a COM server that was hosted by our .NET service, i.e. we wanted to programmatically alter the the COM ACLs in our install logic. I think you'll find that it's just a binary ACL format that you can manipulate in .NET using the class:\nSystem.Security.AccessControl.CommonSecurityDescriptor\nSo sorry I can't help you in getting a Python solution, but if your back is to the wall and you can manage .NET, some sample code would look like:\nint launchMask = (int) (COM_RIGHTS.EXECUTE | COM_RIGHTS.EXECUTE_LOCAL | COM_RIGHTS.ACTIVATE_LOCAL);\n\nSecurityIdentifier sidAdmins = new SecurityIdentifier(WellKnownSidType.BuiltinAdministratorsSid, null);\nSecurityIdentifier sidInteractive = new SecurityIdentifier(WellKnownSidType.InteractiveSid, null);\n\nDiscretionaryAcl launchAcl = new DiscretionaryAcl(false, false, 3);\nlaunchAcl.AddAccess(AccessControlType.Allow, sidAdmins, launchMask, InheritanceFlags.None, PropagationFlags.None);\nlaunchAcl.AddAccess(AccessControlType.Allow, sidInteractive, launchMask, InheritanceFlags.None, PropagationFlags.None);\n\nCommonSecurityDescriptor launchSD = new CommonSecurityDescriptor(false,\n false,\n ControlFlags.DiscretionaryAclPresent | ControlFlags.SelfRelative,\n sidAdmins,\n sidAdmins,\n null,\n launchAcl);\n\n\nbyte[] launchPermission = new byte[launchSD.BinaryLength];\nlaunchSD.GetBinaryForm(launchPermission, 0);\n\nYou then take the launch permission byte array and write it to the registry. If .NET is a non-starter you can at least have a look at how the .NET classes work and see what win32 functions they use. You can either use the reflector tool to look at the relevant assembly, or MSFT actually publish the .NET source.\n",
"Well REG_BINARY isn't any particular format, it's just a way to tell the registry the data is a custom binary format. So you're right about needing to find out what's in there.\nAlso, what do you mean by converting the data from hex? Are you unpacking it? I doubt you're interpreting it correctly until you know what has been saved in there in the first place.\nOnce you find out what's in that registry field, python's struct module will be your best friend.\nhttp://docs.python.org/library/struct.html\nFurther reading (you've probably already seen these)\n\nhttp://msdn.microsoft.com/en-us/library/ms687317(VS.85).aspx\n<--windows info on perms\nhttp://docs.python.org/library/_winreg.html\n<-- python registry access\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"binary",
"decode",
"python",
"registry",
"wmi"
] |
stackoverflow_0000775365_binary_decode_python_registry_wmi.txt
|
Q:
How do you transfer binary data with Python?
I'm working on a client-server program for the first time, and I'm feeling woefully inadequate on where to begin for what I'm doing.
I'm going to use Google Protocol Buffers to transfer binary data between my client and my server. I'm going to be using the Python variant. The basic idea, as I understand, is that the client will serialize the data, send it to the server, which will then deserialize the data.
The problem is, I'm really not sure where to begin for sending binary data to the server. I was hoping it'd be something "simple" like an HTTP request, but I've been searching around Google for ways to transfer binary data and getting lost in the vast multitude of tutorials, guides and documentation. I can't even tell if I'm barking up the wrong tree by investigating HTTP transfers (I was hoping to use it, so I could knock it up a notch to HTTPS if security is necessary). I really don't want to have to go to the level of socket programming, though - I'd like to use the libraries available before turning to that. (I'd also prefer standard Python libraries, though if there's the perfect 3rd party library I'll live.)
So, if anyone has a good starting point (or wants to explain a bit themselves) on how a good way to transfer binary data via Python, I'd be grateful. The server I'm running is currently running Apache with mod_python, by the way.
A:
Any time you're going to move binary data from one system to another there are a couple of things to keep in mind.
Different machines store the same information differently. This has implications both in memory and on the network. More info here (http://en.wikipedia.org/wiki/Endianness)
Because you're using Python you can cut yourself some slack here (assuming the client and server will both be in Python) and just use cPickle to serialize your data. If you really want binary, you're going to have to get comfortable with Python's struct module (http://docs.python.org/library/struct.html). And learn how to pack/unpack your data.
I would first start out with simple line-protocol servers until you get past the difficulty of network communication. If you've never done it before it can get confusing very fast. How to issue commands, how to pass data, how to re-sync on errors etc...
If you already know the basics of client/server protocol design, then practice packing and unpacking binary structures on your disk first. I also refer to the RFCs of HTTP and FTP for cases like this.
-------EDIT BASED ON COMMENT--------
Normally this sort of thing is done by sending the server a "header" that contains a checksum for the file as well as the size of the file in bytes. Note that I don't mean an HTTP header, you can customize it however you want. The chain of events needs to go something like this...
CLIENT: "UPLOAD acbd18db4cc2f85cedef654fccc4a4d8 253521"
SERVER: "OK"
(server splits the text line to get the command, checksum, and size)
CLIENT: "010101101010101100010101010etc..." (up to 253521 bytes)
(server reassembles all received data into a file, then checksums it to make sure it matches the original)
SERVER: "YEP GOT IT"
CLIENT: "COOL CYA"
This is overly simplified, but I hope you can see what I'm talking about here.
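As a rough sketch of the client side of such a protocol (the host, port, and command words here are invented for the example; socket.create_connection needs Python 2.6):
import socket, hashlib

def upload(path, host='localhost', port=9000):
    data = open(path, 'rb').read()
    checksum = hashlib.md5(data).hexdigest()
    sock = socket.create_connection((host, port))
    # Header line first: command, checksum, payload size in bytes
    sock.sendall("UPLOAD %s %d\n" % (checksum, len(data)))
    if sock.recv(16).strip() == "OK":
        sock.sendall(data)      # then exactly len(data) raw bytes
        print sock.recv(32)     # the server's verdict
    sock.close()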
A:
I'm not sure I got your question right, but maybe you can take a look at the twisted project.
As you can see in the FAQ, "Twisted is a networking engine written in Python, supporting numerous protocols. It contains a web server, numerous chat clients, chat servers, mail servers, and more. Twisted is made up of a number of sub-projects which can be accessed individually[...]".
The documentation is pretty good, and there are lots of examples on the internet. Hope it helps.
A:
I guess it depends on how tied you are to Google Protocol Buffers, but you might like to check out Thrift.
Thrift is a software framework for
scalable cross-language services
development. It combines a software
stack with a code generation engine to
build services that work efficiently
and seamlessly between C++, Java,
Python, PHP, Ruby, Erlang, Perl,
Haskell, C#, Cocoa, Smalltalk, and
OCaml.
There's a great example for getting started on their home page.
A:
One quick question: why binary? Is the payload itself binary, or do you just prefer a binary format?
If the former, it's possible to use base64 encoding with JSON or XML too; it does use more space (~34%), and a bit more processing overhead, but not necessarily enough to matter for many use cases.
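If that route appeals, the encode/decode step is only a couple of lines ('blob.bin' is a placeholder; the json module is in the standard library from Python 2.6, use simplejson before that):
import base64, json

payload = open('blob.bin', 'rb').read()                   # placeholder file
wire = json.dumps({'data': base64.b64encode(payload)})    # what you transmit
blob = base64.b64decode(json.loads(wire)['data'])         # what the peer recovers
assert blob == payload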
|
How do you transfer binary data with Python?
|
I'm working on a client-server program for the first time, and I'm feeling woefully inadequate on where to begin for what I'm doing.
I'm going to use Google Protocol Buffers to transfer binary data between my client and my server. I'm going to be using the Python variant. The basic idea, as I understand, is that the client will serialize the data, send it to the server, which will then deserialize the data.
The problem is, I'm really not sure where to begin for sending binary data to the server. I was hoping it'd be something "simple" like an HTTP request, but I've been searching around Google for ways to transfer binary data and getting lost in the vast multitude of tutorials, guides and documentation. I can't even tell if I'm barking up the wrong tree by investigating HTTP transfers (I was hoping to use it, so I could knock it up a notch to HTTPS if security is necessary). I really don't want to have to go to the level of socket programming, though - I'd like to use the libraries available before turning to that. (I'd also prefer standard Python libraries, though if there's the perfect 3rd party library I'll live.)
So, if anyone has a good starting point (or wants to explain a bit themselves) on how a good way to transfer binary data via Python, I'd be grateful. The server I'm running is currently running Apache with mod_python, by the way.
|
[
"Any time you're going to move binary data from one system to another there a couple of things to keep in mind.\nDifferent machines store the same information differently. This has implication both in memory and on the network. More info here (http://en.wikipedia.org/wiki/Endianness)\nBecause you're using python you can cut yourself some slack here (assuming the client and server will both by in python) and just use cPickle to serialize your data. If you really want binary, you're going to have to get comfortable with python's struct module (http://docs.python.org/library/struct.html). And learn how to pack/unpack your data.\nI would first start out with simple line-protocol servers until you get past the difficulty of network communication. If you've never done it before it can get confusing very fast. How to issue commands, how to pass data, how to re-sync on errors etc...\nIf you already know the basics of client/server protocol design, then practice packing and unpacking binary structures on your disk first. I also refer to the RFCs of HTTP and FTP for cases like this.\n-------EDIT BASED ON COMMENT--------\nNormally this sort of thing is done by sending the server a \"header\" that contains a checksum for the file as well as the size of the file in bytes. Note that I don't mean an HTTP header, you can customize it however you want. The chain of events needs to go something like this...\nCLIENT: \"UPLOAD acbd18db4cc2f85cedef654fccc4a4d8 253521\"\nSERVER: \"OK\"\n(server splits the text line to get the command, checksum, and size)\nCLIENT: \"010101101010101100010101010etc...\" (up to 253521 bytes)\n(server reasembles all received data into a file, then checksums it to make sure it matches the original)\nSERVER: \"YEP GOT IT\"\nCLIENT: \"COOL CYA\"\n\nThis is overly simplified, but I hope you can see what I'm talking about here.\n",
"I'm not sure I got your question right, but maybe you can take a look at the twisted project.\nAs you can see in the FAQ, \"Twisted is a networking engine written in Python, supporting numerous protocols. It contains a web server, numerous chat clients, chat servers, mail servers, and more. Twisted is made up of a number of sub-projects which can be accessed individually[...]\".\nThe documentation is pretty good, and there are lots of examples on the internet. Hope it helps.\n",
"I guess it depends on how tied you are to Google Protocol Buffers, but you might like to check out Thrift.\n\nThrift is a software framework for\n scalable cross-language services\n development. It combines a software\n stack with a code generation engine to\n build services that work efficiently\n and seamlessly between C++, Java,\n Python, PHP, Ruby, Erlang, Perl,\n Haskell, C#, Cocoa, Smalltalk, and\n OCaml.\n\nThere's a great example for getting started on their home page.\n",
"One quick question: why binary? Is the payload itself binary, or do you just prefer a binary format?\nIf former, it's possible to use base64 encoding with JSON or XML too; it does use more space (~34%), and bit more processing overhead, but not necessarily enough to matter for many use cases.\n"
] |
[
4,
3,
1,
0
] |
[] |
[] |
[
"client_server",
"file",
"http",
"protocol_buffers",
"python"
] |
stackoverflow_0000775482_client_server_file_http_protocol_buffers_python.txt
|
Q:
Sorted collections: How do i get (extended) slices right?
How can I resolve this?
>>> class unslice:
... def __getitem__(self, item): print type(item), ":", item
...
>>> u = unslice()
>>> u[1,2] # using an extended slice
<type 'tuple'> : (1, 2)
>>> t = (1, 2)
>>> u[t] # or passing a plain tuple
<type 'tuple'> : (1, 2)
Rationale:
I'm currently overengineering a sorted associative collection with the ability to return ranges of items. It is quite likely that I will want to store tuples of small integers (or even more pathologically weird values like Ellipsis) in the collection (as keys), and will need some kind of sane way of differentiating extended slices from plain keys.
In the one-dimensional case, it's sort of a non-issue. I can't think of any real reason I would want to collect values of type slice, especially since xrange values are functionally similar and more recognizable to pythonistas (in my judgement). All other extended slice constructs are tuples of slice, Ellipsis or plain-old Python values.
No other type of extended slice seems to be in common use for any kind of collection except multidimensional arrays as in NumPy.
I do need to support n-dimensional axes, similar to oct-trees or GiS indices.
A:
Since there is no way to differentiate between the calls u[x,y] and u[(x,y)], you should shift one of the two operations you are trying to define off to an actual method. You know, something named u.slice() or u.range() or u.getslice() or u.getrange() or something like that.
Actually, when writing my own programs, I generally find that when I'm trying to overload a Python operation with two quite distinct semantics, it means that both of the things I am doing need to be turned into named methods! Because if the two are so similar in meaning that neither one has an obviously superior claim to getting to use the braces [] getitem shortcut, then probably my code will be more readable if both operations get real, readable, explicit method names.
But, it's hard to say more since you haven't told us how on earth you've gotten into this mess. Why would you want to both store things under tuples and get ranges of things? One suspects you are doing something too complicated to begin with. :-)
Oh, and other languages with this problem make you say a[1][2] to do multi-dimensional access to easily distinguish from a[1,2]. Just so you know there's another option.
A:
From the docs:
There is ambiguity in the formal
syntax here: anything that looks like
an expression list also looks like a
slice list, so any subscription can be
interpreted as a slicing. Rather than
further complicating the syntax, this
is disambiguated by defining that in
this case the interpretation as a
subscription takes priority over the
interpretation as a slicing (this is
the case if the slice list contains no
proper slice nor ellipses). Similarly,
when the slice list has exactly one
short slice and no trailing comma, the
interpretation as a simple slicing
takes priority over that as an
extended slicing.
As such, I don't think it's possible to distinguish u[1,2]-as-extended-slice from u[1,2]-as-tuple-key.
A:
My current thinking about this is to simply let the types that normally associate with slices be uncollectable. I can't think of any sane reason why anyone would want to do anything with a slice value or Ellipsis except to use them in subscript expressions.
On the off chance a user of the collection wants to sort on a tuple (instead of numbers or strings or dates or any other obvious thing) It might make sense to just require some extra cruft. As the example...
>>> u[t]
<type 'tuple'> : (1, 2)
>>> u[t,]
<type 'tuple'> : ((1, 2),)
Haven't really tried it with actual code (don't actually have a working GiS index at this time) but I suspect that this might just automatically do the right thing, since the trailing comma makes the subscript a tuple of length one (one dimension) whose single element is the key tuple.
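A sketch of the dispatch this implies; _lookup and _range_query are hypothetical internals standing in for the real collection machinery:
class RangeDict(object):
    def __getitem__(self, item):
        # A bare slice or Ellipsis is a one-dimensional range query
        if isinstance(item, slice) or item is Ellipsis:
            return self._range_query((item,))
        # A tuple containing a slice/Ellipsis is an n-dimensional range query
        if isinstance(item, tuple) and any(isinstance(p, slice) or p is Ellipsis
                                           for p in item):
            return self._range_query(item)
        return self._lookup(item)   # everything else, plain tuples included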
|
Sorted collections: How do i get (extended) slices right?
|
How can I resolve this?
>>> class unslice:
... def __getitem__(self, item): print type(item), ":", item
...
>>> u = unslice()
>>> u[1,2] # using an extended slice
<type 'tuple'> : (1, 2)
>>> t = (1, 2)
>>> u[t] # or passing a plain tuple
<type 'tuple'> : (1, 2)
Rationale:
I'm currently overengineering a sorted associative collection with the ability to return ranges of items. It is quite likely that I will want to store tuples of small integers (or even more pathologically weird values like Ellipsis) in the collection (as keys), and will need some kind of sane way of differentiating extended slices from plain keys.
In the one-dimensional case, it's sort of a non-issue. I can't think of any real reason I would want to collect values of type slice, especially since xrange values are functionally similar and more recognizable to pythonistas (in my judgement). All other extended slice constructs are tuples of slice, Ellipsis or plain-old Python values.
No other type of extended slice seems to be in common use for any kind of collection except multidimensional arrays as in NumPy.
I do need to support n-dimensional axes, similar to oct-trees or GiS indices.
|
[
"Since there is no way to differentiate between the calls u[x,y] and u[(x,y)], you should shift one of the two operations you are trying to define off to an actual method. You know, something named u.slice() or u.range() or u.getslice() or u.getrange() or something like that.\nActually, when writing my own programs, I generally find that when I'm trying to overload a Python operation with two quite distinct semantics, it means that both of the things I am doing need to be turned into named methods! Because if the two are so similar in meaning that neither one has an obviously superior claim to getting to use the braces [] getitem shortcut, then probably my code will be more readable if both operations get real, readable, explicit method names.\nBut, it's hard to say more since you haven't told us how on earth you've gotten into this mess. Why would you want to both store things under tuples and get ranges of things? One suspects you are doing something to complicated to begin with. :-)\nOh, and other languages with this problem make you say a[1][2] to do multi-dimensional access to easily distinguish from a[1,2]. Just so you know there's another option.\n",
"From the docs:\n\nThere is ambiguity in the formal\n syntax here: anything that looks like\n an expression list also looks like a\n slice list, so any subscription can be\n interpreted as a slicing. Rather than\n further complicating the syntax, this\n is disambiguated by defining that in\n this case the interpretation as a\n subscription takes priority over the\n interpretation as a slicing (this is\n the case if the slice list contains no\n proper slice nor ellipses). Similarly,\n when the slice list has exactly one\n short slice and no trailing comma, the\n interpretation as a simple slicing\n takes priority over that as an\n extended slicing.\n\nAs such, I don't think it's possible to distinguish u[1,2]-as-extended-slice from u[1,2]-as-tuple-key.\n",
"My current thinking about this is to simply let the types that normally associate with slices be uncollectable. I can't think of any sane reason why anyone would want to do anything with a slice value or Ellipsis except to use them in subscript expressions. \nOn the off chance a user of the collection wants to sort on a tuple (instead of numbers or strings or dates or any other obvious thing) It might make sense to just require some extra cruft. As the example...\n>>> u[t]\n<type 'tuple'> : (1, 2)\n>>> u[t,]\n<type 'tuple'> : ((1, 2),)\n\nHaven't really tried it with actual code (don't actually have a working GiS index at this time) but i suspect that this might just automatically do the right thing, since the extended slice is a tuple of length one (one dimension), which happens to be a tuple.\n"
] |
[
5,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000775490_python.txt
|
Q:
How to catch POST using WSGIREF
I am trying to catch POST data from a simple form.
This is the first time I am playing around with WSGIREF and I can't seem to find the correct way to do this.
This is the form:
<form action="test" method="POST">
<input type="text" name="name">
<input type="submit"></form>
And the function that is obviously missing the right information to catch post:
def app(environ, start_response):
"""starts the response for the webserver"""
path = environ[ 'PATH_INFO']
method = environ['REQUEST_METHOD']
if method == 'POST':
if path.startswith('/test'):
start_response('200 OK',[('Content-type', 'text/html')])
return "POST info would go here %s" % post_info
else:
start_response('200 OK', [('Content-type', 'text/html')])
return form()
A:
You should be reading the request body from the WSGI environment.
From nosklo's answer to a similar problem: "PEP 333 says you must read environ['wsgi.input']."
Tested code (adapted from this answer):
Caveat: This code is for demonstrative purposes only.
Warning: Try to avoid hard-coding paths or filenames.
def app(environ, start_response):
path = environ['PATH_INFO']
method = environ['REQUEST_METHOD']
if method == 'POST':
if path.startswith('/test'):
try:
request_body_size = int(environ['CONTENT_LENGTH'])
request_body = environ['wsgi.input'].read(request_body_size)
except (TypeError, ValueError):
request_body = "0"
try:
response_body = str(request_body)
except:
response_body = "error"
status = '200 OK'
headers = [('Content-type', 'text/plain')]
start_response(status, headers)
return [response_body]
else:
response_body = open('test.html').read()
status = '200 OK'
headers = [('Content-type', 'text/html'),
('Content-Length', str(len(response_body)))]
start_response(status, headers)
return [response_body]
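To try this out locally, wsgiref's own development server is enough (port 8000 is an arbitrary choice):
from wsgiref.simple_server import make_server

httpd = make_server('', 8000, app)
print "Serving on http://localhost:8000/test ..."
httpd.serve_forever()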
|
How to catch POST using WSGIREF
|
I am trying to catch POST data from a simple form.
This is the first time I am playing around with WSGIREF and I can't seem to find the correct way to do this.
This is the form:
<form action="test" method="POST">
<input type="text" name="name">
<input type="submit"></form>
And the function that is obviously missing the right information to catch post:
def app(environ, start_response):
"""starts the response for the webserver"""
path = environ[ 'PATH_INFO']
method = environ['REQUEST_METHOD']
if method == 'POST':
if path.startswith('/test'):
start_response('200 OK',[('Content-type', 'text/html')])
return "POST info would go here %s" % post_info
else:
start_response('200 OK', [('Content-type', 'text/html')])
return form()
|
[
"You should be reading responses from the server.\nFrom nosklo's answer to a similar problem: \"PEP 333 says you must read environ['wsgi.input'].\"\nTested code (adapted from this answer):\n\n Caveat: This code is for demonstrative purposes only. \n\n Warning: Try to avoid hard-coding paths or filenames.\ndef app(environ, start_response):\n path = environ['PATH_INFO']\n method = environ['REQUEST_METHOD']\n if method == 'POST':\n if path.startswith('/test'):\n try:\n request_body_size = int(environ['CONTENT_LENGTH'])\n request_body = environ['wsgi.input'].read(request_body_size)\n except (TypeError, ValueError):\n request_body = \"0\"\n try:\n response_body = str(request_body)\n except:\n response_body = \"error\"\n status = '200 OK'\n headers = [('Content-type', 'text/plain')]\n start_response(status, headers)\n return [response_body]\n else:\n response_body = open('test.html').read()\n status = '200 OK'\n headers = [('Content-type', 'text/html'),\n ('Content-Length', str(len(response_body)))]\n start_response(status, headers)\n return [response_body]\n\n"
] |
[
5
] |
[] |
[] |
[
"python",
"wsgi",
"wsgiref"
] |
stackoverflow_0000775396_python_wsgi_wsgiref.txt
|
Q:
Filetype information
I'm in the process of writing a Python script, and I want to find out information about a file, such as, for example, its MIME type (or any useful depiction of what a file contains).
I've heard about python-magic, but I'm really looking for the solution that will allow me to find this information, without requiring the installation of additional packages.
Am I stuck with maintaining a list of file extensions, or does Python have something in the standard library? I was not able to find it in the docs.
A:
I am not sure if you want to infer something from the file content, but if you want to know the MIME type from the file extension, the mimetypes module will be sufficient:
>>> import mimetypes
>>> mimetypes.init()
>>> mimetypes.knownfiles
['/etc/mime.types', '/etc/httpd/mime.types', ... ]
>>> mimetypes.suffix_map['.tgz']
'.tar.gz'
>>> mimetypes.encodings_map['.gz']
'gzip'
>>> mimetypes.types_map['.tgz']
'application/x-tar-gz'
http://docs.python.org/library/mimetypes.html
A:
The standard library has support for mapping filenames to mimetypes.
Your question also sounds like you are interested in other information besides mimetype. The stat module will also give you information about size, owner, time of last modification, etc., but the most common filesystems (Windows NTFS/FAT, Linux Ext 2/3, Mac OS X) do not store a file's content type as metadata. That's why we need to use extensions to find the mimetype, for example.
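For completeness, a quick look at the stat-level metadata that is available ('example.txt' is a placeholder path):
import os, stat, time

st = os.stat('example.txt')
print 'size:', st.st_size, 'bytes'
print 'owner uid:', st.st_uid
print 'modified:', time.ctime(st.st_mtime)
print 'regular file?', stat.S_ISREG(st.st_mode)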
|
Filetype information
|
I'm in the process of writing a Python script, and I want to find out information about a file, such as, for example, its MIME type (or any useful depiction of what a file contains).
I've heard about python-magic, but I'm really looking for the solution that will allow me to find this information, without requiring the installation of additional packages.
Am I stuck with maintaining a list of file extensions, or does Python have something in the standard library? I was not able to find it in the docs.
|
[
"I am not sure if you want to infer something from file content but if you want to know mime type from file extension mimetypes module will be sufficient\n>>> import mimetypes\n>>> mimetypes.init()\n>>> mimetypes.knownfiles\n['/etc/mime.types', '/etc/httpd/mime.types', ... ]\n>>> mimetypes.suffix_map['.tgz']\n'.tar.gz'\n>>> mimetypes.encodings_map['.gz']\n'gzip'\n>>> mimetypes.types_map['.tgz']\n'application/x-tar-gz'\n\nhttp://docs.python.org/library/mimetypes.html\n",
"The standard library has support for mapping filenames to mimetypes.\nYour question also sounds like you are interested in other information besides mimetype. The stat module will also give you information about size, owner, time of last modification, etc. but otherwise the most common filesystems (Windows NTFS/FAT, Linux Ext 2/3, Mac OS X) do not store any metadata or \"other information\" about files. That's why we need to use extensions to find the mimetype for example.\n"
] |
[
5,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000775674_python.txt
|
Q:
What is "generator object" in django?
I am using the Django voting package and when I use the method get_top() in the shell, it returns something like "<generator object at 0x022f7AD0>". I've never seen anything like this before; how do you access it, and what is it?
my code:
v=Vote.objects.get_top(myModel, limit=10, reversed=False)
print v
<generator object at 0x022f7AD0>
NB: I thought get_top would just return a nice list of myModel instances, on which I could do something like v.name, etc.
A:
If you want a list, just call list() on your generator object.
A generator object in python is something like a lazy list. The elements are only evaluated as soon as you iterate over them. (Thus calling list on it evaluates all of them.)
For example you can do:
>>> def f(x):
... print "yay!"
... return 2 * x
>>> g = (f(i) for i in xrange(3)) # generator comprehension syntax
>>> g
<generator object <genexpr> at 0x37b6c0>
>>> for j in g: print j
...
yay!
0
yay!
2
yay!
4
See how f is evaluated only as you iterate over it. You can find excellent material on the topic here: http://www.dabeaz.com/generators/
A:
A generator is a kind of iterator. An iterator is a kind of iterable object, and like any other iterable,
You can iterate over every item using a for loop:
for vote in Vote.objects.get_top(myModel, limit=10, reversed=False):
    print vote.name
If you need to access items by index, you can convert it to a list:
top_votes = list(Vote.objects.get_top(myModel, limit=10, reversed=False))
print top_votes[0]
However, you can only iterate over a particular instance of an iterator once (unlike a more general iterable object, like a list):
>>> top_votes_generator = Vote.objects.get_top(myModel, limit=3)
>>> top_votes_generator
<generator object at 0x022f7AD0>
>>> list(top_votes_generator)
[<Vote: a>, <Vote: b>, <Vote: c>]
>>> list(top_votes_generator)
[]
For more on creating your own generators, see http://docs.python.org/tutorial/classes.html#generators
A:
Hmmmm
I've read this and this and things are quite clear now;
Actually, I can convert a generator to a list by just doing
mylist=list(myGenerator)
|
What is "generator object" in django?
|
I am using the Django voting package and when I use the method get_top() in the shell, it returns something like "<generator object at 0x022f7AD0>". I've never seen anything like this before; how do you access it, and what is it?
my code:
v=Vote.objects.get_top(myModel, limit=10, reversed=False)
print v
<generator object at 0x022f7AD0>
NB: I thought get_top would just return a nice list of myModel instances, on which I could do something like v.name, etc.
|
[
"If you want a list, just call list() on your generator object.\nA generator object in python is something like a lazy list. The elements are only evaluated as soon as you iterate over them. (Thus calling list on it evaluates all of them.)\nFor example you can do:\n>>> def f(x):\n... print \"yay!\"\n... return 2 * x\n>>> g = (f(i) for i in xrange(3)) # generator comprehension syntax\n>>> g\n<generator object <genexpr> at 0x37b6c0>\n\n>>> for j in g: print j\n... \nyay!\n0\nyay!\n2\nyay!\n4\n\nSee how f is evaluated only as you iterate over it. You can find excellent material on the topic here: http://www.dabeaz.com/generators/\n",
"A generator is a kind of iterator. An iterator is a kind of iterable object, and like any other iterable,\nYou can iterate over every item using a for loop:\nfor vote in Vote.objects.get_top(myModel, limit=10, reversed=False):\n print v.name, vote\n\nIf you need to access items by index, you can convert it to a list:\ntop_votes = list(Vote.objects.get_top(myModel, limit=10, reversed=False))\nprint top_votes[0]\n\nHowever, you can only iterate over a particular instance of an iterator once (unlike a more general iterable object, like a list):\n>>> top_votes_generator = Vote.objects.get_top(myModel, limit=3)\n>>> top_votes_generator\n<generator object at 0x022f7AD0>\n>>> list(top_votes_generator)\n[<Vote: a>, <Vote: b>, <Vote: c>]\n>>> list(top_votes_generator)\n[]\n\nFor more on creating your own generators, see http://docs.python.org/tutorial/classes.html#generators\n",
"Hmmmm \nI've read this and this and things are quiet clear now;\nActually i can convert generators to list by just doing \nmylist=list(myGenerator)\n\n"
] |
[
21,
7,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000776060_python.txt
|
Q:
access eggs in python?
Is there any way to call an installed Python egg from Python code? I need to call a Sphinx documentation
generator from within Python code, and currently I'm doing it like this:
os.system( "sphinx-build.exe -b html c:\\src c:\\dst" )
This works, but requires some additional configuration: the 'scripts' folder
inside the Python installation folder needs to be added to the system PATH
( I'm on Windows ). Is there any better, native way to call an installed Python
egg?
A:
So basically, you want to use Sphinx as a library?
Here is what sphinx-build does:
from pkg_resources import load_entry_point
load_entry_point('Sphinx==0.5.1', 'console_scripts', 'sphinx-build')()
Looking at entry-points.txt in the EGG-INFO directory, notice that the sphinx-build entry point is the sphinx.main function (located in __init__.py).
Have a look at that and duplicate what it does, and you can use sphinx as a library. I have not looked at the code in detail, but it seems that the bulk of the sphinx-build-command is done by the build method on a Sphinx object.
In your code, you would have to do something like:
from sphinx.application import Sphinx
s = Sphinx(...)
s.build(...)
You need to have a look at the Sphinx source code to figure out the parameters to Sphinx.__init__() and Sphinx.build()
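For orientation, a rough sketch of what that might look like (the argument names below follow the Sphinx 0.5-era constructor as far as I remember; verify them against your installed copy):
from sphinx.application import Sphinx

app = Sphinx(srcdir='c:\\src', confdir='c:\\src',
             outdir='c:\\dst', doctreedir='c:\\dst\\.doctrees',
             buildername='html')
app.build()   # check the exact build() signature in your version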
A:
Adding the egg to PYTHONPATH or to sys.path will allow you to access the modules and packages within.
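For example (the egg path here is a made-up illustration; point it at wherever your egg actually lives):
import sys
sys.path.append(r'C:\Python25\Lib\site-packages\Sphinx-0.5.1-py2.5.egg')
import sphinx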
|
access eggs in python?
|
Is there any way to call an installed Python egg from Python code? I need to call a Sphinx documentation
generator from within Python code, and currently I'm doing it like this:
os.system( "sphinx-build.exe -b html c:\\src c:\\dst" )
This works, but requires some additional configuration: the 'scripts' folder
inside the Python installation folder needs to be added to the system PATH
( I'm on Windows ). Is there any better, native way to call an installed Python
egg?
|
[
"So basically, you want to use Sphinx as a library?\nHere is what sphinx-build does:\nfrom pkg_resources import load_entry_point\n\nload_entry_point('Sphinx==0.5.1', 'console_scripts', 'sphinx-build')()\n\nLooking at entry-points.txt in the EGG-INFO directory, notice that the sphinx-build entry point is the sphinx.main function (located in __init__.py).\nHave a look at that and duplicate what it does, and you can use sphinx as a library. I have not looked at the code in detail, but it seems that the bulk of the sphinx-build-command is done by the build method on a Sphinx object.\nIn your code, you would have to do something like:\nfrom sphinx.application import Sphinx\ns = Sphinx(...)\ns.build(...)\n\nYou need to have a look at the Sphinx source code to figure out the parameters to Sphinx.__init__() and Sphinx.build()\n",
"Adding the egg to PYTHONPATH or to sys.path will allow you to access the modules and packages within.\n"
] |
[
3,
1
] |
[] |
[] |
[
"egg",
"python"
] |
stackoverflow_0000775880_egg_python.txt
|
Q:
fastest way to store comment data python
Hi, I have a small comment shoutbox-type CGI process running on a server, and currently when someone leaves a comment I simply format that comment into HTML, i.e.
<p class="title">$title</p>
<p class="comment">$comment</p>
and store in a flat file.
Would it be faster and acceptably low in LOC to reimplement the storage in XML or JSON, in a simple spec of my own, or should I stick with the simple HTML route?
I don't want to use relational database for this.
A:
If a flat file is fast enough, then go with that, since it's very simple and accessible. Storing as XML or JSON but still using a flat file is probably very comparable in performance.
You might want to consider (ignore this if you just left it out of your question) sanitizing/filtering the text, so that users can't break your HTML by e.g. entering "</p>" in the comment text.
A:
XML is nice, clean way to store this type of data. In Python, you could use lxml to create/update the file:
from lxml import etree
P_XML = 'xml_file_path.xml'
def save_comment(title_text, comment_text):
comment = etree.Element('comment')
title = etree.SubElement(comment, 'title')
title.text = title_text
comment.text = comment_text
f = open(P_XML, 'a')
f.write(etree.tostring(comment, pretty_print=True))
f.close()
save_comment("FIRST!", "FIRST COMMENT!!!")
save_comment("Awesome", "I love this site!")
That's a simple start, but you could do a lot more (i.e. set up an ID for each comment, read in the XML using lxml parser and add to it instead of just appending the file).
A:
A flat-file is the fastest form of persistence. Period. There's no formatting, encoding, indexing, locking, or anything.
JSON (and YAML) impose some overheads. They will be slower. There's some formatting that must be done.
XML imposes more overheads than JSON/YAML. It will be slower still. There's a fair amount of formatting that must be done.
The more overhead, the slower it will be.
None of these have anything to do with sanitizing the comment input so that it will display as valid HTML. You should use cgi.escape to escape any HTML-like character sequences in the comment before saving the text to a file.
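A minimal sketch combining the flat file with escaping ('comments.html' is a placeholder path):
import cgi
from string import Template

TMPL = Template('<p class="title">$title</p>\n<p class="comment">$comment</p>\n')

def append_comment(title, comment, path='comments.html'):
    html = TMPL.substitute(title=cgi.escape(title), comment=cgi.escape(comment))
    f = open(path, 'a')
    f.write(html)
    f.close()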
|
fastest way to store comment data python
|
Hi, I have a small comment shoutbox-type CGI process running on a server, and currently when someone leaves a comment I simply format that comment into HTML, i.e.
<p class="title">$title</p>
<p class="comment">$comment</p>
and store in a flat file.
Would it be faster and acceptably low in LOC to reimplement the storage in XML or JSON, in a simple spec of my own, or should I stick with the simple HTML route?
I don't want to use relational database for this.
|
[
"If a flat file is fast enough, then go with that, since it's very simple and accessible. Storing as XML and JSON but still using a flat file probably is very comparable in performance.\nYou might want to consider (ignore this if you just left it out of your question) sanitizing/filtering the text, so that users can't break your HTML by e.g. entering \"</p>\" in the comment text.\n",
"XML is nice, clean way to store this type of data. In Python, you could use lxml to create/update the file:\nfrom lxml import etree\n\nP_XML = 'xml_file_path.xml'\n\ndef save_comment(title_text, comment_text):\n comment = etree.Element('comment')\n title = etree.SubElement(comment, 'title')\n title.text = title_text\n comment.text = comment_text\n f = open(P_XML, 'a')\n f.write(etree.tostring(comment, pretty_print=True))\n f.close()\n\nsave_comment(\"FIRST!\", \"FIRST COMMENT!!!\")\nsave_comment(\"Awesome\", \"I love this site!\")\n\nThat's a simple start, but you could do a lot more (i.e. set up an ID for each comment, read in the XML using lxml parser and add to it instead of just appending the file).\n",
"A flat-file is the fastest form of persistence. Period. There's no formatting, encoding, indexing, locking, or anything.\nJSON (and YAML) impose some overheads. They will be slower. There's some formatting that must be done.\nXML imposes more overheads than JSON/YAML. It will be slower still. There's a fair amount of formatting that must be done.\nThe more overhead, the slower it will be.\nNone of these have anything to do with sanitizing the comment input so that it will display as valid HTML. You should use cgi.escape to escape any HTML-like character sequences in the comment before saving the text to a file.\n"
] |
[
3,
1,
1
] |
[] |
[] |
[
"json",
"python",
"xml"
] |
stackoverflow_0000777090_json_python_xml.txt
|
Q:
Efficiently determining if a business is open or not based on store hours
Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses.
I have the open and close times for every business for every day of the week
Let's assume a business can open/close only on 00, 15, 30, 45 minute marks of each hour
I'm assuming the same schedule each week.
I am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.
Mind you, some may open at 11pm one day and close at 1am the next day.
Holidays don't matter - I will handle these separately
What's the most efficient way to store these open/close times such that with a single time/day-of-week tuple I can speedily figure out which businesses are open?
I am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. But frankly, I'm open to any suggestions and alternatives.
A:
If you are willing to just look at single week at a time, you can canonicalize all opening/closing times to be set numbers of minutes since the start of the week, say Sunday 0 hrs. For each store, you create a number of tuples of the form [startTime, endTime, storeId]. (For hours that spanned Sunday midnight, you'd have to create two tuples, one going to the end of the week, one starting at the beginning of the week). This set of tuples would be indexed (say, with a tree you would pre-process) on both startTime and endTime. The tuples shouldn't be that large: there are only ~10k minutes in a week, which can fit in 2 bytes. This structure would be graceful inside a MySQL table with appropriate indexes, and would be very resilient to constant insertions & deletions of records as information changed. Your query would simply be "select storeId where startTime <= time and endtime >= time", where time was the canonicalized minutes since midnight on sunday.
If information doesn't change very often, and you want to have lookups be very fast, you could solve every possible query up front and cache the results. For instance, there are only 672 quarter-hour periods in a week. With a list of businesses, each of which had a list of opening & closing times like Brandon Rhodes's solution, you could simply iterate through every 15-minute period in a week, figure out who's open, then store the answer in a lookup table or in-memory list.
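The canonicalization itself is only a couple of lines in Python; note that datetime.weekday() returns Monday == 0, hence the shift:
def week_minutes(dt):
    # Minutes since Sunday 00:00 for a datetime object
    days_since_sunday = (dt.weekday() + 1) % 7
    return (days_since_sunday * 24 + dt.hour) * 60 + dt.minute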
A:
The bitmap field mentioned by another respondent would be incredibly efficient, but gets messy if you want to be able to handle half-hour or quarter-hour times, since you have to increase arithmetically the number of bits and the design of the field each time you encounter a new resolution that you have to match.
I would instead try storing the values as datetimes inside a list:
openclosings = [ open1, close1, open2, close2, ... ]
Then, I would use Python's "bisect_right()" function in its built-in "bisect" module to find, in fast O(log n) time, where in that list your query time "fits". Then, look at the index that is returned. If it is an even number (0, 2, 4...) then the time lies between one of the "closed" times and the next "open" time, so the shop is closed then. If, instead, the bisection index is an odd number (1, 3, 5...) then the time has landed between an opening and a closing time, and the shop is open.
Not as fast as bitmaps, but you don't have to worry about resolution, and I can't think of another O(log n) solution that's as elegant.
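In code, the whole membership test boils down to something like:
import bisect

def is_open(openclosings, when):
    # An odd insertion point means 'when' landed after an opening
    # time and before the matching closing time
    return bisect.bisect_right(openclosings, when) % 2 == 1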
A:
You say you're using SOLR, don't care about storage, and want the lookups to be fast. Then instead of storing open/close tuples, index an entry for every open block of time at the level of granularity you need (15 mins). For the encoding itself, you could use just cumulative hours:minutes.
For example, a store open from 4-5 pm on Monday would have indexed values added for [40:00, 40:15, 40:30, 40:45]. A query at 4:24 pm on Monday would be normalized to 40:15, and therefore match that store document.
This may seem inefficient at first glance, but it's a relatively small constant penalty for indexing speed and space. And makes the searches as fast as possible.
A:
Sorry I don't have an easy answer, but I can tell you that as the manager of a development team at a company in the late 90's we were tasked with solving this very problem and it was HARD.
It's not the weekly hours that's tough, that can be done with a relatively small bitmask (168 bits = 1 per hour of the week), the trick is the businesses which are closed every alternating Tuesday.
Starting with a bitmask then moving on to an exceptions field is the best solution I've ever seen.
A:
In your Solr index, instead of indexing each business as one document with hours, index every "retail session" for every business during the course of a week.
For example if Joe's coffee is open Mon-Sat 6am-9pm and closed on Sunday, you would index six distinct documents, each with two indexed fields, "open" and "close". If your units are 15 minute intervals, then the values can range from 0 to 7*24*4. Assuming you have a unique ID for each business, store this in each document so you can map the sessions to businesses.
Then you can simply do a range search in Solr:
open:[* TO N] AND close:[N+1 TO *]
where N is computed as the Nth 15-minute interval of the week that the current time falls into. For example, counting intervals from Sunday 00:00, 10:10 AM on Wednesday falls into interval 328, so your query would be:
open:[* TO 328] AND close:[329 TO *]
aka "find a session that starts at or before 10:00am Wed and ends at or after 10:15am Wed"
If you want to include other criteria in your search, such as location or products, you will need to index this with each session document as well. This is a bit redundant, but if your index is not huge, it shouldn't be a problem.
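Computing N from the current time is then just (using the same Sunday-00:00 convention as the numbers above):
import datetime

def current_interval(now=None):
    now = now or datetime.datetime.now()
    day = (now.weekday() + 1) % 7          # Sunday == 0
    return (day * 24 + now.hour) * 4 + now.minute // 15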
A:
If you can control your data well, I see a simple solution, similar to @Sebastian's. Follow the advice of creating the tuples, except create them of the form [time=startTime, storeId] and [time=endTime, storeId], then sort these in a list. To find out if a store is open, simply do a query like:
select storeId
from table
where time <= '@1'
group by storeId
having count(storeId) % 2 = 1
To optimize this, you could build a lookup table at each time t, storing the stores that are open at t and the store openings/closings between t and t+1 (for any grouping of t).
However, this has the drawback of being harder to maintain (overlapping openings/closings need to be merged into a longer open-close period).
A:
Have you looked at how many unique open/close time combinations there are? If there are not that many, make a reference table of the unique combinations and store the index of the appropriate entry against each business. Then you only have to search the reference table and then find the business with those indices.
|
Efficiently determining if a business is open or not based on store hours
|
Given a time (eg. currently 4:24pm on Tuesday), I'd like to be able to select all businesses that are currently open out of a set of businesses.
I have the open and close times for every business for every day of the week
Let's assume a business can open/close only on 00, 15, 30, 45 minute marks of each hour
I'm assuming the same schedule each week.
I am most interested in being able to quickly look up a set of businesses that is open at a certain time, not the space requirements of the data.
Mind you, some may open at 11pm one day and close at 1am the next day.
Holidays don't matter - I will handle these separately
What's the most efficient way to store these open/close times such that with a single time/day-of-week tuple I can speedily figure out which businesses are open?
I am using Python, SOLR and mysql. I'd like to be able to do the querying in SOLR. But frankly, I'm open to any suggestions and alternatives.
|
[
"If you are willing to just look at single week at a time, you can canonicalize all opening/closing times to be set numbers of minutes since the start of the week, say Sunday 0 hrs. For each store, you create a number of tuples of the form [startTime, endTime, storeId]. (For hours that spanned Sunday midnight, you'd have to create two tuples, one going to the end of the week, one starting at the beginning of the week). This set of tuples would be indexed (say, with a tree you would pre-process) on both startTime and endTime. The tuples shouldn't be that large: there are only ~10k minutes in a week, which can fit in 2 bytes. This structure would be graceful inside a MySQL table with appropriate indexes, and would be very resilient to constant insertions & deletions of records as information changed. Your query would simply be \"select storeId where startTime <= time and endtime >= time\", where time was the canonicalized minutes since midnight on sunday.\nIf information doesn't change very often, and you want to have lookups be very fast, you could solve every possible query up front and cache the results. For instance, there are only 672 quarter-hour periods in a week. With a list of businesses, each of which had a list of opening & closing times like Brandon Rhodes's solution, you could simply, iterate through every 15-minute period in a week, figure out who's open, then store the answer in a lookup table or in-memory list.\n",
"The bitmap field mentioned by another respondent would be incredibly efficient, but gets messy if you want to be able to handle half-hour or quarter-hour times, since you have to increase arithmetically the number of bits and the design of the field each time you encounter a new resolution that you have to match.\nI would instead try storing the values as datetimes inside a list:\nopenclosings = [ open1, close1, open2, close2, ... ]\n\nThen, I would use Python's \"bisect_right()\" function in its built-in \"bisect\" module to find, in fast O(log n) time, where in that list your query time \"fits\". Then, look at the index that is returned. If it is an even number (0, 2, 4...) then the time lies between one of the \"closed\" times and the next \"open\" time, so the shop is closed then. If, instead, the bisection index is an odd number (1, 3, 5...) then the time has landed between an opening and a closing time, and the shop is open.\nNot as fast as bitmaps, but you don't have to worry about resolution, and I can't think of another O(log n) solution that's as elegant.\n",
"You say you're using SOLR, don't care about storage, and want the lookups to be fast. Then instead of storing open/close tuples, index an entry for every open block of time at the level of granularity you need (15 mins). For the encoding itself, you could use just cumulative hours:minutes.\nFor example, a store open from 4-5 pm on Monday, would have indexed values added for [40:00, 40:15, 40:30, 40:45]. A query at 4:24 pm on Monday would be normalized to 40:15, and therefore match that store document.\nThis may seem inefficient at first glance, but it's a relatively small constant penalty for indexing speed and space. And makes the searches as fast as possible.\n",
"Sorry I don't have an easy answer, but I can tell you that as the manager of a development team at a company in the late 90's we were tasked with solving this very problem and it was HARD.\nIt's not the weekly hours that's tough, that can be done with a relatively small bitmask (168 bits = 1 per hour of the week), the trick is the businesses which are closed every alternating Tuesday.\nStarting with a bitmask then moving on to an exceptions field is the best solution I've ever seen.\n",
"In your Solr index, instead of indexing each business as one document with hours, index every \"retail session\" for every business during the course of a week. \nFor example if Joe's coffee is open Mon-Sat 6am-9pm and closed on Sunday, you would index six distinct documents, each with two indexed fields, \"open\" and \"close\". If your units are 15 minute intervals, then the values can range from 0 to 7*24*4. Assuming you have a unique ID for each business, store this in each document so you can map the sessions to businesses.\nThen you can simply do a range search in Solr:\nopen:[* TO N] AND close:[N+1 TO *]\nwhere N is computed to the Nth 15 minute interval that the current time falls into. For examples if it's 10:10AM on Wednesday, your query would be:\nopen:[* TO 112] AND close:[113 TO *]\naka \"find a session that starts at or before 10:00am Wed and ends at or after 10:15am Wed\"\nIf you want to include other criteria in your search, such as location or products, you will need to index this with each session document as well. This is a bit redundant, but if your index is not huge, it shouldn't be a problem.\n",
"If you can control your data well, I see a simple solution, similar to @Sebastian's. Follow the advice of creating the tuples, except create them of the form [time=startTime, storeId] and [time=endTime, storeId], then sort these in a list. To find out if a store is open, simply do a query like:\nselect storeId\nfrom table\nwhere time <= '@1'\ngroup by storeId\nhaving count(storeId) % 2 == 1\n\nTo optimize this, you could build a lookup table at each of time t, store the stores that are open at t, and the store openings/closings between t and t+1 (for any grouping of t).\nHowever, this has the drawback of being harder to maintain (overlapping openings/closings need to be merged into a longer open-close period).\n",
"Have you looked at how many unique open/close time combinations there are? If there are not that many, make a reference table of the unique combinations and store the index of the appropriate entry against each business. Then you only have to search the reference table and then find the business with those indices.\n"
] |
[
8,
5,
4,
3,
1,
0,
0
] |
[] |
[] |
[
"mysql",
"performance",
"python",
"solr"
] |
stackoverflow_0000775161_mysql_performance_python_solr.txt
|
Q:
Django Form values without HTML escape
I need to set the Django forms.ChoiceField to display the currency symbols. Since Django forms escape all HTML entities, I can't get &#36; (for $) or &pound; (for £) to display the actual currency symbol.
<select id="id_currency" name="currency">
<option value="&#36;">$</option>
<option value="&pound;">£</option>
<option value="&euro;">€</option>
</select>
Could you suggest any methods to display the actual HTML Currency character at least for the value part of the option?
<select name="currency" id="id_currency">
<option value="&#36;">$</option>
<option value="&pound;">£</option>
<option value="&euro;">€</option>
</select>
Update:
Please note I use Django 0.96 as my application is running on Google App Engine.
And the <SELECT> above is rendered using Django Forms.
currencies = (('&#36;', '$'),
              ('&pound;', '£'),
              ('&euro;', '€'))
currency = forms.ChoiceField(choices=currencies, required=False)
Thanks,
Arun.
A:
You can use "safe" in the template or "mark_safe" in the view,
turn off autoescaping in the template,
or use Unicode characters instead of HTML entities in your form.
Using mark_safe
from django.utils.safestring import mark_safe
currencies = ((mark_safe('&#36;'), mark_safe('&#36;')),
              (mark_safe('&pound;'), mark_safe('&pound;')),
              (mark_safe('&euro;'), mark_safe('&euro;')))
Using autoescape off
As an alternative in your template you can turn off escaping for a block of code.
Everything between tags {% autoescape off %} and {% endautoescape %}
will not be escaped.
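For example, a minimal sketch of such a template fragment (assuming the form is passed to the template as form, as in the question):
{% autoescape off %}
    {{ form.currency }}
{% endautoescape %}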
Using Unicode characters
When nothing else works try the following. In the file that contains your currencies tuple put the following line as the very first or second line:
# coding=utf-8
and then in your currencies tuple put the actual unicode characters:
currencies = (('$', '$'),
('£', '£'),
('€', '€'))
|
Django Form values without HTML escape
|
I need to set the Django forms.ChoiceField to display the currency symbols. Since Django forms escape all HTML entities, I can't get &#36; (for $) or &pound; (for £) to display the actual currency symbol.
<select id="id_currency" name="currency">
<option value="&#36;">$</option>
<option value="&pound;">£</option>
<option value="&euro;">€</option>
</select>
Could you suggest any methods to display the actual HTML Currency character at least for the value part of the option?
<select name="currency" id="id_currency">
<option value="&#36;">$</option>
<option value="&pound;">£</option>
<option value="&euro;">€</option>
</select>
Update:
Please note I use Django 0.96 as my application is running on Google App Engine.
And the <SELECT> above is rendered using Django Forms.
currencies = (('&#36;', '$'),
              ('&pound;', '£'),
              ('&euro;', '€'))
currency = forms.ChoiceField(choices=currencies, required=False)
Thanks,
Arun.
|
[
"You can use \"safe\" in the template or \"mark_safe\" in the view,\nturn off autoescaping in the template, \nor use Unicode characters instead of HTML entities in your form.\nUsing mark_safe\nfrom django.utils.safestring import mark_safe\n\ncurrencies = ((mark_safe('$'), mark_safe('$')), \n (mark_safe('£'), mark_safe('£')), \n (mark_safe('€'), mark_safe('€'))) \n\nUsing autoescape off\nAs an alternative in your template you can turn off escaping for a block of code.\nEverything between tags {% autoescape off %} and {% endautoescape %}\nwill not be escaped.\nUsing Unicode characters\nWhen nothing else works try the following. In the file that contains your currencies tuple put the following line as the very first or second line:\n# coding=utf-8\n\nand then in your currencies tuple put the actual unicode characters:\ncurrencies = (('$', '$'), \n ('£', '£'), \n ('€', '€')) \n\n"
] |
[
9
] |
[] |
[] |
[
"currency",
"django_forms",
"python",
"symbols"
] |
stackoverflow_0000777458_currency_django_forms_python_symbols.txt
|
Q:
Qt-style documentation using Doxygen?
How do I produce Qt-style documentation (Trolltech's C++ Qt or Riverbank's PyQt docs) with Doxygen? I am documenting Python, and I would like to be able to improve the default function brief that it produces.
In particular, I would like to be able to see the return type (which can be user specified) and the parameters in the function brief.
For example:
Functions:
int getNumber(self)
str getString(self)
tuple getTuple(self, int numberOfElements=2)
Function Documentation:
int getNumber(self)
gets the number of items within a list as specified...
Definition at line 63 of ....
etc...
If this isn't possible without modifying the source, maybe there is another tool other than Doxygen that handles Python documentation in this kind of way?
A:
Then just use Doxygen? This will get you started:
This is a guide for automatically
generating documentation off of Python
source code using Doxygen.
Obviously, since Python is not strongly typed, specifying the return type and the expected type of the parameters will be up to you, the documentation writer. That's just best practices anyway.
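For example, doxygen recognizes special ## comment blocks in Python source, and you can state the return type and parameter types yourself with @param and @return. A minimal sketch, with the name taken from the question and a made-up body:
## Gets the number of items within a list.
#  @param self the object pointer
#  @return int the number of items
def getNumber(self):
    return len(self.items)  # hypothetical attribute, for illustration only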
A:
If you're doing anything documentation related when it comes to Python I recommend Sphinx. It's what the developers of python use for their documentation.
A:
This page seems to detail the method of creating Qt-style docs.
Basically there's a tool that ships with Qt called qhelpgenerator which creates a .qch file, readable by Qt Assistant.
I haven't used it before, but it looks fairly simple.
A:
You do not have to put doxygen comments in the code. You can put the documentation in other places. Check out this page in the doxygen manual.
|
Qt-style documentation using Doxygen?
|
How do I produce Qt-style documentation (Trolltech's C++ Qt or Riverbank's PyQt docs) with Doxygen? I am documenting Python, and I would like to be able to improve the default function brief that it produces.
In particular, I would like to be able to see the return type (which can be user specified) and the parameters in the function brief.
For example:
Functions:
int getNumber(self)
str getString(self)
tuple getTuple(self, int numberOfElements=2)
Function Documentation:
int getNumber(self)
gets the number of items within a list as specified...
Definition at line 63 of ....
etc...
If this isn't possible without modifying the source, maybe there is another tool other than Doxygen that handles Python documentation in this kind of way?
|
[
"Then just use Doxygen? This will get you started:\n\nThis is a guide for automatically\n generating documentation off of Python\n source code using Doxygen.\n\nObviously, since Python is not strongly typed, specifying the return type and the expected type of the parameters will be up to you, the documentation writer. That's just best practices anyway.\n",
"If you're doing anything documentation related when it comes to Python I recommend Sphinx. It's what the developers of python use for their documentation.\n",
"This page seem to detail the method of creating a qt-style docs.\nbasically there's a tool you get with qt called qhelpgenerator which creates a .qch file, edible by the qt assistant.\nI haven't used it before but it looks fairly simple.\n",
"You do not have to put doxygen comments in the code. You can put the documentation in other places. Check out this page in the doxygen manual. \n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"documentation",
"doxygen",
"python"
] |
stackoverflow_0000776388_documentation_doxygen_python.txt
|
Q:
How do I set up a model to use an AutoField with a legacy database in Python?
I have a legacy database with an integer set as a primary key. It was initially managed manually, but since we want to move to Django, the admin tool seemed to be the right place to start. I created the model and am trying to set the primary key to be an AutoField. It doesn't seem to remember the old id on updates, and it doesn't create new ids on insert. What am I doing wrong?
A:
The DB is responsible for generating the value of the ID. If you want to use AutoField, you have to change the column in the DB to generate values itself (an identity/auto-increment column, or in Oracle a sequence plus a trigger). Django does not generate the ID; AutoField just reads back whatever the database assigned.
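As a rough sketch of the Django side (table and column names here are made up for illustration; managed is available in modern Django versions):
from django.db import models

class LegacyRecord(models.Model):
    # AutoField does not generate values; it reads back whatever the
    # auto-incrementing column in the DB assigned on insert.
    id = models.AutoField(primary_key=True, db_column='RECORD_ID')

    class Meta:
        db_table = 'LEGACY_RECORDS'
        managed = False  # Django should not try to create/alter this table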
|
How do I set up a model to use an AutoField with a legacy database in Python?
|
I have a legacy database with an integer set as a primary key. It was initially managed manually, but since we want to move to Django, the admin tool seemed to be the right place to start. I created the model and am trying to set the primary key to be an AutoField. It doesn't seem to remember the old id on updates, and it doesn't create new ids on insert. What am I doing wrong?
|
[
"The DB is responsible for managing the value of the ID. If you want to use AutoField, you have to change the column in the DB to use that. Django is not responsible for managing the generated ID\n"
] |
[
2
] |
[] |
[] |
[
"autofield",
"django",
"oracle",
"python"
] |
stackoverflow_0000777778_autofield_django_oracle_python.txt
|
Q:
Directory Walker for Python
I am currently using the directory walker from Here
import os
class DirectoryWalker:
# a forward iterator that traverses a directory tree
def __init__(self, directory):
self.stack = [directory]
self.files = []
self.index = 0
def __getitem__(self, index):
while 1:
try:
file = self.files[self.index]
self.index = self.index + 1
except IndexError:
# pop next directory from stack
self.directory = self.stack.pop()
self.files = os.listdir(self.directory)
self.index = 0
else:
# got a filename
fullname = os.path.join(self.directory, file)
if os.path.isdir(fullname) and not os.path.islink(fullname):
self.stack.append(fullname)
return fullname
for file in DirectoryWalker(os.path.abspath('.')):
print file
This minor change gives you the full path for each file.
Can anyone show me how to get just the filename as well using this? I need both the full path and just the filename.
A:
Why do you want to do such a boring thing yourself?
for path, directories, files in os.walk('.'):
print 'ls %r' % path
for directory in directories:
print ' d%r' % directory
for filename in files:
print ' -%r' % filename
Output:
'.'
d'finction'
d'.hg'
-'setup.py'
-'.hgignore'
'./finction'
-'finction'
-'cdg.pyc'
-'util.pyc'
-'cdg.py'
-'util.py'
-'__init__.pyc'
-'__init__.py'
'./.hg'
d'store'
-'hgrc'
-'requires'
-'00changelog.i'
-'undo.branch'
-'dirstate'
-'undo.dirstate'
-'branch'
'./.hg/store'
d'data'
-'undo'
-'00changelog.i'
-'00manifest.i'
'./.hg/store/data'
d'finction'
-'.hgignore.i'
-'setup.py.i'
'./.hg/store/data/finction'
-'util.py.i'
-'cdg.py.i'
-'finction.i'
-'____init____.py.i'
But if you insist, there are path-related tools in os.path; os.path.basename is what you are looking for.
>>> import os.path
>>> os.path.basename('/hello/world.h')
'world.h'
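Putting the two together, a minimal sketch that yields both the full path and the bare name for every file under a directory:
import os

for path, directories, files in os.walk(os.path.abspath('.')):
    for filename in files:
        fullname = os.path.join(path, filename)
        # fullname is the absolute path, filename is just the name
        print fullname, '->', filename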
A:
Rather than using '.' as your directory, refer to its absolute path:
for file in DirectoryWalker(os.path.abspath('.')):
print file
Also, I'd recommend using a word other than 'file', because file is the name of a built-in in Python. It's not a keyword, though, so the code still runs.
As an aside, when dealing with filenames, I find the os.path module to be incredibly useful - I'd recommend having a look through that, especially
os.path.normpath
Normalises paths (gets rid of redundant '.'s and 'theFolderYouWereJustIn/../'s)
os.path.join
Joins two paths
A:
os.path.dirname()? os.path.normpath()? os.path.abspath()?
This would also be a lovely place to think recursion.
A:
Just prepend the current directory path to the "./foo" path returned:
print os.path.join(os.getcwd(), file)
|
Directory Walker for Python
|
I am currently using the directory walker from Here
import os
class DirectoryWalker:
# a forward iterator that traverses a directory tree
def __init__(self, directory):
self.stack = [directory]
self.files = []
self.index = 0
def __getitem__(self, index):
while 1:
try:
file = self.files[self.index]
self.index = self.index + 1
except IndexError:
# pop next directory from stack
self.directory = self.stack.pop()
self.files = os.listdir(self.directory)
self.index = 0
else:
# got a filename
fullname = os.path.join(self.directory, file)
if os.path.isdir(fullname) and not os.path.islink(fullname):
self.stack.append(fullname)
return fullname
for file in DirectoryWalker(os.path.abspath('.')):
print file
This minor change gives you the full path for each file.
Can anyone show me how to get just the filename as well using this? I need both the full path and just the filename.
|
[
"Why do you want to do such boring thing yourself?\nfor path, directories, files in os.walk('.'):\n print 'ls %r' % path\n for directory in directories:\n print ' d%r' % directory\n for filename in files:\n print ' -%r' % filename\n\nOutput:\n'.'\n d'finction'\n d'.hg'\n -'setup.py'\n -'.hgignore'\n'./finction'\n -'finction'\n -'cdg.pyc'\n -'util.pyc'\n -'cdg.py'\n -'util.py'\n -'__init__.pyc'\n -'__init__.py'\n'./.hg'\n d'store'\n -'hgrc'\n -'requires'\n -'00changelog.i'\n -'undo.branch'\n -'dirstate'\n -'undo.dirstate'\n -'branch'\n'./.hg/store'\n d'data'\n -'undo'\n -'00changelog.i'\n -'00manifest.i'\n'./.hg/store/data'\n d'finction'\n -'.hgignore.i'\n -'setup.py.i'\n'./.hg/store/data/finction'\n -'util.py.i'\n -'cdg.py.i'\n -'finction.i'\n -'____init____.py.i'\n\nBut if you insist, there's path related tools in os.path, os.basename is what you are looking at.\n>>> import os.path\n>>> os.path.basename('/hello/world.h')\n'world.h'\n\n",
"Rather than using '.' as your directory, refer to its absolute path:\nfor file in DirectoryWalker(os.path.abspath('.')):\n print file\n\nAlso, I'd recommend using a word other than 'file', because it means something in the python language. Not a keyword, though so it still runs.\nAs an aside, when dealing with filenames, I find the os.path module to be incredibly useful - I'd recommend having a look through that, especially\nos.path.normpath\n\nNormalises paths (gets rid of redundant '.'s and 'theFolderYouWereJustIn/../'s)\nos.path.join\n\nJoins two paths\n",
"os.path.dirname()? os.path.normpath()? os.path.abspath()?\nThis would also be a lovely place to think recursion.\n",
"Just prepend the current directory path to the \"./foo\" path returned:\nprint os.path.join(os.getcwd(), file)\n\n"
] |
[
14,
6,
1,
0
] |
[] |
[] |
[
"directory_listing",
"python"
] |
stackoverflow_0000775231_directory_listing_python.txt
|
Q:
Python subprocess "object has no attribute 'fileno'" error
This code generates "AttributeError: 'Popen' object has no attribute 'fileno'" when run with Python 2.5.1
Code:
def get_blame(filename):
proc = []
proc.append(Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE))
proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1]), stdout=PIPE)
proc.append(Popen(['tr', r"'\040'", r"';'"], stdin=proc[-1]), stdout=PIPE)
proc.append(Popen(['cut', r"-d", r"\;", '-f', '3'], stdin=proc[-1]), stdout=PIPE)
return proc[-1].stdout.read()
Stack:
function walk_folder in blame.py at line 55
print_file(os.path.join(os.getcwd(), filename), path)
function print_file in blame.py at line 34
users = get_blame(filename)
function get_blame in blame.py at line 20
proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1]), stdout=PIPE)
function __init__ in subprocess.py at line 533
(p2cread, p2cwrite,
function _get_handles in subprocess.py at line 830
p2cread = stdin.fileno()
This code should be working; the Python docs describe this usage.
A:
Three things
First, your ()'s are wrong.
Second, the result of subprocess.Popen() is a process object, not a file.
proc = []
proc.append(Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE))
proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1]), stdout=PIPE)
The value of proc[-1] isn't the file, it's the process that contains the file.
proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1].stdout, stdout=PIPE))
Third, don't do all that tr and cut junk in the shell, few things could be slower. Write the tr and cut processing in Python -- it's faster and simpler.
A:
There are a few weird things in the script.
Why are you storing each process in a list? Wouldn't it be much more readable to simply use variables? Removing all the .append()s reveals a syntax error: several times you have passed stdout=PIPE to the append call instead of to Popen:
proc.append(Popen(...), stdout=PIPE)
So a straight-rewrite (still with errors I'll mention in a second) would become..
def get_blame(filename):
blame = Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE)
tr1 = Popen(['tr', '-s', r"'\040'"], stdin=blame, stdout=PIPE)
tr2 = Popen(['tr', r"'\040'", r"';'"], stdin=tr1), stdout=PIPE)
cut = Popen(['cut', r"-d", r"\;", '-f', '3'], stdin=tr2, stdout=PIPE)
return cut.stdout.read()
On each subsequent command, you have passed the Popen object, not that processes stdout. From the "Replacing shell pipeline" section of the subprocess docs, you do..
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
..whereas you were doing the equivalent of stdin=p1.
The tr1 = (in the above rewritten code) line would become..
tr1 = Popen(['tr', '-s', r"'\040'"], stdin=blame.stdout, stdout=PIPE)
You do not need to escape commands/arguments with subprocess, as subprocess does not run the command in any shell (unless you specify shell=True). See the Security section of the subprocess docs.
Instead of..
proc.append(Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE))
..you can safely do..
Popen(['svn', 'blame', filename], stdout=PIPE)
As S.Lott suggested, don't use subprocess for text manipulations that are easier done in Python (the tr/cut commands). For one thing, tr/cut etc. aren't hugely portable (different versions take different arguments), and they are quite hard to read (I have no idea what those tr and cut invocations are doing).
If I were to rewrite the command, I would probably do something like..
def get_blame(filename):
blame = Popen(['svn', 'blame', filename], stdout=PIPE)
output = blame.communicate()[0] # preferred to blame.stdout.read()
# process commands output:
ret = []
for line in output.split("\n"):
split_line = line.strip().split(" ")
if len(split_line) > 2:
rev = split_line[0]
author = split_line[1]
line = " ".join(split_line[2:])
ret.append({'rev':rev, 'author':author, 'line':line})
return ret
A:
You want the stdout of the process, so replace your stdin=proc[-1] with stdin=proc[-1].stdout
Also, you need to move your paren, it should come after the stdout argument.
proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1]), stdout=PIPE)
should be:
proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1].stdout, stdout=PIPE))
Fix your other append calls in the same way.
|
Python subprocess "object has no attribute 'fileno'" error
|
This code generates "AttributeError: 'Popen' object has no attribute 'fileno'" when run with Python 2.5.1
Code:
def get_blame(filename):
proc = []
proc.append(Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE))
proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1]), stdout=PIPE)
proc.append(Popen(['tr', r"'\040'", r"';'"], stdin=proc[-1]), stdout=PIPE)
proc.append(Popen(['cut', r"-d", r"\;", '-f', '3'], stdin=proc[-1]), stdout=PIPE)
return proc[-1].stdout.read()
Stack:
function walk_folder in blame.py at line 55
print_file(os.path.join(os.getcwd(), filename), path)
function print_file in blame.py at line 34
users = get_blame(filename)
function get_blame in blame.py at line 20
proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1]), stdout=PIPE)
function __init__ in subprocess.py at line 533
(p2cread, p2cwrite,
function _get_handles in subprocess.py at line 830
p2cread = stdin.fileno()
This code should be working; the Python docs describe this usage.
|
[
"Three things\nFirst, your ()'s are wrong.\nSecond, the result of subprocess.Popen() is a process object, not a file.\nproc = []\nproc.append(Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE))\nproc.append(Popen(['tr', '-s', r\"'\\040'\"], stdin=proc[-1]), stdout=PIPE)\n\nThe value of proc[-1] isn't the file, it's the process that contains the file.\nproc.append(Popen(['tr', '-s', r\"'\\040'\"], stdin=proc[-1].stdout, stdout=PIPE))\n\nThird, don't do all that tr and cut junk in the shell, few things could be slower. Write the tr and cut processing in Python -- it's faster and simpler.\n",
"There's a few weird things in the script,\n\nWhy are you storing each process in a list? Wouldn't it be much more readable to simply use variables? Removing all the .append()s reveals an syntax error, several times you have passed stdout=PIPE to the append arguments, instead of Popen:\nproc.append(Popen(...), stdout=PIPE)\n\nSo a straight-rewrite (still with errors I'll mention in a second) would become..\ndef get_blame(filename): \n blame = Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE)\n tr1 = Popen(['tr', '-s', r\"'\\040'\"], stdin=blame, stdout=PIPE)\n tr2 = Popen(['tr', r\"'\\040'\", r\"';'\"], stdin=tr1), stdout=PIPE)\n cut = Popen(['cut', r\"-d\", r\"\\;\", '-f', '3'], stdin=tr2, stdout=PIPE)\n return cut.stdout.read()\n\nOn each subsequent command, you have passed the Popen object, not that processes stdout. From the \"Replacing shell pipeline\" section of the subprocess docs, you do..\np1 = Popen([\"dmesg\"], stdout=PIPE)\np2 = Popen([\"grep\", \"hda\"], stdin=p1.stdout, stdout=PIPE)\n\n..whereas you were doing the equivalent of stdin=p1.\nThe tr1 = (in the above rewritten code) line would become..\ntr1 = Popen(['tr', '-s', r\"'\\040'\"], stdin=blame.stdout, stdout=PIPE)\n\nYou do not need to escape commands/arguments with subprocess, as subprocess does not run the command in any shell (unless you specify shell=True). See the Securitysection of the subprocess docs.\nInstead of..\nproc.append(Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE))\n\n..you can safely do..\nPopen(['svn', 'blame', filename], stdout=PIPE)\n\nAs S.Lott suggested, don't use subprocess to do text-manipulations easier done in Python (the tr/cut commands). For one, tr/cut etc aren't hugely portable (different versions have different arguments), also they are quite hard to read (I've no idea what the tr's and cut are doing)\nIf I were to rewrite the command, I would probably do something like..\ndef get_blame(filename): \n blame = Popen(['svn', 'blame', filename], stdout=PIPE)\n output = blame.communicate()[0] # preferred to blame.stdout.read()\n # process commands output:\n ret = []\n for line in output.split(\"\\n\"):\n split_line = line.strip().split(\" \")\n if len(split_line) > 2:\n rev = split_line[0]\n author = split_line[1]\n line = \" \".join(split_line[2:])\n\n ret.append({'rev':rev, 'author':author, 'line':line})\n\n return ret\n\n\n",
"You want the stdout of the process, so replace your stdin=proc[-1] with stdin=proc[-1].stdout\nAlso, you need to move your paren, it should come after the stdout argument.\n proc.append(Popen(['tr', '-s', r\"'\\040'\"], stdin=proc[-1]), stdout=PIPE)\n\nshould be:\n proc.append(Popen(['tr', '-s', r\"'\\040'\"], stdin=proc[-1].stdout, stdout=PIPE))\n\nFix your other append calls in the same way.\n"
] |
[
10,
3,
1
] |
[
"looks like syntax error. except first append the rest are erroneous (review brackets).\n",
"Like S.Lott said, processing the text in Python is better.\nBut if you want to use the cmdline utilities, you can keep it readable by using shell=True:\ncmdline = r\"svn blame %s | tr -s '\\040' | tr '\\040' ';' | cut -d \\; -f 3\" % shellquote(filename)\nreturn Popen(cmdline, shell=True, stdout=PIPE).communicate()[0]\n\n"
] |
[
-1,
-2
] |
[
"pipe",
"python",
"subprocess"
] |
stackoverflow_0000777996_pipe_python_subprocess.txt
|
Q:
What's the difference between scgi and wsgi?
What's the difference between these two?
Which is better/faster/reliable?
A:
SCGI is a language-neutral means of connecting a front-end web server and a web application. WSGI is a Python-specific interface standard for web applications.
Though they both have roots in CGI, they're rather different in scope and you could indeed quite reasonably use both at once, for example having a mod_scgi on the webserver talk to a WSGI app run as an SCGI server. There are multiple library implementations that will run WSGI applications as SCGI servers for you (eg. wsgitools, cherrypy).
They are both 'reliable', in as much as you can consider a specification reliable as opposed to a particular implementation. These days you would probably write your application as a WSGI callable, and consider the question of deployment separately.
Maybe an Apache+mod_wsgi (embedded) interface might be a bit faster than an Apache+mod_scgi+(SCGI wrapper lib), but in all likelihood it's not going to be hugely different. More valuable is the ability to run the application on a variety of servers, platforms and connection standards.
A:
SCGI (like FastCGI) is a (serialized) protocol suitable for inter-process communication between a web-server and a web-application.
WSGI is a Python API, connecting two (or more) Python WSGI-compatible modules inside the same process (Python interpreter). One module represents the web-server (being either a Python in-process web-server implementation or a gateway to a web-server in another process via e.g. SCGI). The other module is or represents the web application. Additionally, zero or more modules between these two modules may serve as WSGI "middleware" modules, doing things like session/cookie management, content caching, authentication, etc. The WSGI API uses Python language features like iteration/generators and passing of callable objects between the cooperating WSGI-compatible modules.
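To make that concrete, a minimal sketch of a WSGI application served by the standard library's wsgiref module (purely illustrative):
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # A WSGI app is just a callable: it receives the request environ
    # and a callback for sending the status line and headers.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from a WSGI app\n']

if __name__ == '__main__':
    make_server('localhost', 8000, application).serve_forever()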
A:
They are both specifications for plugging a web application into a web server. One glaring difference is that WSGI comes from the Python world, and I believe there are no non-python implementations.
Specifications are generally not comparable based on better/faster/reliable.
Only their implementations are comparable, and I am sure you will find good implementations of both specifications.
Perhaps read the SCGI protocol description and the WSGI spec (PEP 333).
|
What's the difference between scgi and wsgi?
|
What's the difference between these two?
Which is better/faster/reliable?
|
[
"SCGI is a language-neutral means of connecting a front-end web server and a web application. WSGI is a Python-specific interface standard for web applications.\nThough they both have roots in CGI, they're rather different in scope and you could indeed quite reasonably use both at once, for example having a mod_scgi on the webserver talk to a WSGI app run as an SCGI server. There are multiple library implementations that will run WSGI applications as SCGI servers for you (eg. wsgitools, cherrypy).\nThey are both 'reliable', in as much as you can consider a specification reliable as opposed to a particular implementation. These days you would probably write your application as a WSGI callable, and consider the question of deployment separately.\nMaybe an Apache+mod_wsgi (embedded) interface might be a bit faster than an Apache+mod_scgi+(SCGI wrapper lib), but in all likelihood it's not going to be hugely different. More valuable is the ability to run the application on a variety of servers, platforms and connection standards.\n",
"SCGI (like FastCGI) is a (serialized) protocol suitable for inter-process communication between a web-server and a web-application.\nWSGI is a Python API, connecting two (or more) Python WSGI-compatible modules inside the same process (Python interpreter). One module represents the web-server (being either a Python in-process web-server implementation or a gateway to a web-server in another process via e.g. SCGI). The other module is or represents the web application. Additionally, zero or more modules between theses two modules, may serve as WSGI \"middleware\" modules, doing things like session/cookie management, content caching, authentication, etc. The WSGI API uses Python language features like iteration/generators and passing of callable objects between the cooperating WSGI-compatible modules.\n",
"They are both specifications for plugging a web application into a web server. One glaring difference is that WSGI comes from the Python world, and I believe there are no non-python implementations.\nSpecifications are generally not comparable based on better/faster/reliable. \nOnly their implementations are comparable, and I am sure you will find good implementations of both specifications.\nPerhaps read and read.\n"
] |
[
27,
12,
11
] |
[] |
[] |
[
"python",
"scgi",
"wsgi"
] |
stackoverflow_0000257481_python_scgi_wsgi.txt
|
Q:
How to query any constraint's target list without knowing the constraint type?
In Maya, I have a list of constraints gathered by the following code. I want to iterate the constraints and query the targets for each of them:
cons = ls(type='constraint')
for con in cons:
targets = constraint(query=True, targetList=True)
The problem: there is no general constraint command for manipulating all constraints. Instead, each constraint type has its own unique MEL command associated with it.
Is there any way to query the targets on a constraint without having to type check each constraint and tediously run its respective MEL command?
A:
listConnections on the .target attr
the cleanup in mel:
string $cons[] = `ls -type "constraint"`;
for ( $con in $cons ){
string $targetAttrString = ( $con+ ".target" );
string $connections[] = `listConnections $targetAttrString`;
string $connectionsFlattened[] = stringArrayRemoveDuplicates($connections);
for ( $f in $connectionsFlattened )
if ( $f != $con )
print ( $f+ " is a target\n" );
}
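Since the question is in Python, here is a rough equivalent sketch using maya.cmds (untested; it assumes the same .target connection layout as the MEL above):
import maya.cmds as cmds

for con in cmds.ls(type='constraint'):
    # listConnections may return None when nothing is wired in
    connections = cmds.listConnections(con + '.target') or []
    for node in set(connections):
        if node != con:
            print node, 'is a target of', con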
|
How to query any constraint's target list without knowing the constraint type?
|
In Maya, I have a list of constraints gathered by the following code. I want to iterate the constraints and query the targets for each of them:
cons = ls(type='constraint')
for con in cons:
targets = constraint(query=True, targetList=True)
The problem: there is no general constraint command for manipulating all constraints. Instead, each constraint type has its own unique MEL command associated with it.
Is there any way to query the targets on a constraint without having to type check each constraint and tediously run its respective MEL command?
|
[
"listConnections on the .target attr\nthe cleanup in mel:\nstring $cons[] = `ls -type \"constraint\"`;\nfor ( $con in $cons ){\n string $targetAttrString = ( $con+ \".target\" );\n string $connections[] = `listConnections $targetAttrString`;\n string $connectionsFlattened[] = stringArrayRemoveDuplicates($connections);\n for ( $f in $connectionsFlattened )\n if ( $f != $con )\n print ( $f+ \" is a target\\n\" );\n}\n\n"
] |
[
2
] |
[] |
[] |
[
"3d",
"constraints",
"maya",
"mel",
"python"
] |
stackoverflow_0000778083_3d_constraints_maya_mel_python.txt
|
Q:
Other than basic python syntax, what other key areas should I learn to get a website live?
Other than basic python syntax, what other key areas should I learn to get a website live?
Is there a web.config in the python world?
Which libraries handle things like authentication? or is that all done manually via session cookies and database tables?
Are there any web specific libraries?
Edit: sorry!
I am well versed in asp.net, I want to branch out and learn Python, hence this question (sorry, terrible start to this question I know).
A:
Basic Python syntax isn't half of what you need to know.
All of the Python built-in data structures.
Object-oriented design.
What Python modules and packages are.
The Python libraries -- almost everything you could ever want has already been written.
To name a few things.
If you've done some web development, you probably have some background in HTTP protocol, HTML, .CSS and Javascript and SQL.
You should use a framework to handle the endless collection of mundane details, like authentication. Look at Django.
A:
Answer replaced to correspond with the updated question.
If you're already familiar with ASP.NET, the easiest way to jump into creating a website with Python is probably to look into one of the major web frameworks. Django is very popular, working through the installation guide and the tutorial will probably get you rolling pretty well.
Really though, I'd personally suggest at least learning the language itself to a basic competency level before trying to dive right into using it inside a web framework. I think you'll be trying to force yourself to learn too much at once. In terms of just learning Python, the free book Dive Into Python is always spoken of highly.
A:
Oh, golly.
Look, this is gonna be real hard to answer because, read as you wrote it, you're missing a lot of steps. Like, you need a web server, a design, some HTML, and so on.
Are you building from the ground up? Asking about Python makes me suspect you may be using something like Zope.
A:
Don't forget to give IronPython a try - your .NET experience can help making sense of newly learned Python idioms.
IronPython is an implementation of the Python programming language running under .NET and Silverlight. It supports an interactive console with fully dynamic compilation. It's well integrated with the rest of the .NET Framework and makes all .NET libraries easily available to Python programmers, while maintaining compatibility with the Python language.
A:
Of course the builtins. And become familiar with the standard library (until you start to remember what's in it, I'd suggest looking through it any time you're about to implement something... It might be there already!)
You'll want some kind of framework, I'd recommend Django or TurboGears
But you also need to learn the Pythonic way. For this, open up a Python interpreter and type:
import this
|
Other than basic python syntax, what other key areas should I learn to get a website live?
|
Other than basic python syntax, what other key areas should I learn to get a website live?
Is there a web.config in the python world?
Which libraries handle things like authentication? or is that all done manually via session cookies and database tables?
Are there any web specific libraries?
Edit: sorry!
I am well versed in asp.net, I want to branch out and learn Python, hence this question (sorry, terrible start to this question I know).
|
[
"Basic Python syntax isn't half of what you need to know.\n\nAll of the Python built-in data structures.\nObject-oriented design.\nWhat python module and packages are.\nThe Python libraries -- almost everything you could ever want has already been written.\n\nTo name a few things.\nIf you've done some web development, you probably have some background in HTTP protocol, HTML, .CSS and Javascript and SQL.\nYou should use a framework to handle the endless collection of mundane details, like authentication. Look at Django.\n",
"Answer replaced to correspond with the updated question.\nIf you're already familiar with ASP.NET, the easiest way to jump into creating a website with Python is probably to look into one of the major web frameworks. Django is very popular, working through the installation guide and the tutorial will probably get you rolling pretty well.\nReally though, I'd personally suggest at least learning the language itself to a basic competency level before trying to dive right into using it inside a web framework. I think you'll be trying to force yourself to learn too much at once. In terms of just learning Python, the free book Dive Into Python is always spoken of highly.\n",
"Oh, golly.\nLook, this is gonna be real hard to answer because, read as you wrote it, you're missing a lot of steps. Like, you need a web server, a design, some HTML, and so on.\nAre you building from the ground up? Asking about Python makes me suspect you may be using something like Zope.\n",
"Don't forget to give IronPython a try - your .NET experience can help making sense of newly learned Python idioms.\n\nIronPython is an implementation of the Python programming language running under .NET and Silverlight. It supports an interactive console with fully dynamic compilation. It's well integrated with the rest of the .NET Framework and makes all .NET libraries easily available to Python programmers, while maintaining compatibility with the Python language.\n\n",
"Of course the builtins. And become familiar with the standard library (until you start to remember what's in it, I'd suggest looking through it any time you're about to implement something... It might be there already!)\nYou'll want some kind of framework, I'd recommend Django or TurboGears\nBut you also need to learn the pythonic-way. For this, open up a Python interpreter and type:\nimport this\n\n"
] |
[
4,
1,
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000777924_python.txt
|
Q:
regex for parsing SQL statements
I've got an IronPython script that executes a bunch of SQL statements against a SQL Server database. The statements are large strings that actually contain multiple statements, separated by the "GO" keyword. That works when they're run from SQL Management Studio and some other tools, but not in ADO. So I split up the strings using the 2.5 "re" module like so:
splitter = re.compile(r'\bGO\b', re.IGNORECASE)
for script in splitter.split(scriptBlob):
if(script):
[... execute the query ...]
This breaks in the rare case that there's the word "go" in a comment or a string. How in the heck would I work around that? i.e. correctly parse this string into two scripts:
-- this is a great database script! go team go!
INSERT INTO myTable(stringColumn) VALUES ('go away!')
/*
here are some comments that go with this script.
*/
GO
INSERT INTO myTable(stringColumn) VALUES ('this is the next script')
EDIT:
I searched more and found this SQL documentation:
http://msdn.microsoft.com/en-us/library/ms188037(SQL.90).aspx
As it turns out, GO must be on its own line as some answers suggested. However, it can be followed by a "count" integer, which will actually execute the statement batch that many times (has anybody actually used that before??), and it can be followed by a single-line comment on the same line (but not a multi-line one; I tested this). So the magic regex would look something like:
"(?m)^\s*GO\s*\d*\s*$"
Except this doesn't account for:
a possible single-line comment ("--" followed by any character except a line break) at the end.
the whole line being inside a larger multi-line comment.
I'm not concerned about capturing the "count" argument and using it. Now that I have some technical documentation i'm tantalizingly close to writing this "to spec" and never having to worry about it again.
A:
Is "GO" always on a line by itself? You could just split on "^GO$".
A:
since you can have comments inside comments, nested comments, comments inside queries, etc, there is no sane way to do it with regexes.
Just imagine the following script:
INSERT INTO table (name) VALUES (
-- GO NOW GO
'GO to GO /* GO */ GO' +
/* some comment 'go go go'
-- */ 'GO GO' /*
GO */
)
That without mentioning:
INSERT INTO table (go) values ('xxx') GO
The only way would be to build a stateful parser instead. One that reads a char at a time, and has a flag that will be set when it is inside a comment/quote-delimited string/etc and reset when it ends, so the code can ignore "GO" instances when inside those.
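A rough, untested sketch of that stateful approach in Python; it tracks single-quoted strings, -- line comments and /* */ block comments, but ignores nested block comments (which T-SQL actually allows) and other corner cases:
def split_on_go(sql):
    # Split a script on GO lines, honouring them only outside
    # strings and comments. A sketch, not production code.
    statements, buf = [], []
    state = 'code'  # or 'line_comment', 'block_comment', 'string'
    for line in sql.splitlines(True):
        i = 0
        while i < len(line):
            pair = line[i:i+2]
            if state == 'code':
                if pair == '--':
                    state = 'line_comment'; i += 2; continue
                if pair == '/*':
                    state = 'block_comment'; i += 2; continue
                if line[i] == "'":
                    state = 'string'
            elif state == 'block_comment' and pair == '*/':
                state = 'code'; i += 2; continue
            elif state == 'string' and line[i] == "'":
                state = 'code'
            i += 1
        if state == 'line_comment':
            state = 'code'  # a line comment ends with its line
        tokens = line.split()
        if state == 'code' and tokens and tokens[0].upper() == 'GO':
            statements.append(''.join(buf)); buf = []
        else:
            buf.append(line)
    statements.append(''.join(buf))
    return statements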
A:
If GO is always on a line by itself you can use split like this:
#!/usr/bin/python
import re
sql = """-- this is a great database script! go team go!
INSERT INTO myTable(stringColumn) VALUES ('go away!')
/*
here are some comments that go with this script.
*/
GO 5 --this is a test
INSERT INTO myTable(stringColumn) VALUES ('this is the next script')"""
statements = re.split("(?m)^\s*GO\s*(?:[0-9]+)?\s*(?:--.*)?$", sql)
for statement in statements:
print "the statement is\n%s\n" % (statement)
(?m) turns on multiline matchings, that is ^ and $ will match start and end of line (instead of start and end of string).
^ matches at the start of a line
\s* matches zero or more whitespaces (space, tab, etc.)
GO matches a literal GO
\s* matches as before
(?:[0-9]+)? matches an optional integer number (with possible leading zeros)
\s* matches as before
(?:--.*)? matches an optional end-of-line comment
$ matches at the end of a line
The split will consume the GO line, so you won't have to worry about it. This will leave you with a list of statements.
This modified split has a problem: it will not give you back the number after the GO, if that is important I would say it is time to move to a parser of some form.
A:
This won't detect if GO ever is used as a variable name inside some statement, but should take care of those inside comments or strings.
EDIT: This now works if GO is part of the statement, as long as it is not on its own line.
import re
line_comment = r'(?:--|#).*$'
block_comment = r'/\*[\S\s]*?\*/'
singe_quote_string = r"'(?:\\.|[^'\\])*'"
double_quote_string = r'"(?:\\.|[^"\\])*"'
go_word = r'^[^\S\n]*(?P<GO>GO)[^\S\n]*\d*[^\S\n]*(?:(?:--|#).*)?$'
full_pattern = re.compile(r'|'.join((
line_comment,
block_comment,
singe_quote_string,
double_quote_string,
go_word,
)), re.IGNORECASE | re.MULTILINE)
def split_sql_statements(statement_string):
last_end = 0
for match in full_pattern.finditer(statement_string):
if match.group('GO'):
yield statement_string[last_end:match.start()]
last_end = match.end()
yield statement_string[last_end:]
Example usage:
statement_string = r"""
-- this is a great database script! go team go!
INSERT INTO go(go) VALUES ('go away!')
go 7 -- foo
INSERT INTO go(go) VALUES (
'I have to GO " with a /* comment to GO inside a /* GO string /*'
)
/*
here are some comments that go with this script.
*/
GO
INSERT INTO go(go) VALUES ('this is the next script')
"""
for statement in split_sql_statements(statement_string):
print '======='
print statement
Output:
=======
-- this is a great database script! go team go!
INSERT INTO go(go) VALUES ('go away!')
=======
INSERT INTO go(go) VALUES (
'I have to GO " with a /* comment to GO inside a /* GO string /*'
)
/*
here are some comments that go with this script.
*/
=======
INSERT INTO go(go) VALUES ('this is the next script')
|
regex for parsing SQL statements
|
I've got an IronPython script that executes a bunch of SQL statements against a SQL Server database. The statements are large strings that actually contain multiple statements, separated by the "GO" keyword. That works when they're run from SQL Management Studio and some other tools, but not in ADO. So I split up the strings using the 2.5 "re" module like so:
splitter = re.compile(r'\bGO\b', re.IGNORECASE)
for script in splitter.split(scriptBlob):
if(script):
[... execute the query ...]
This breaks in the rare case that there's the word "go" in a comment or a string. How in the heck would I work around that? i.e. correctly parse this string into two scripts:
-- this is a great database script! go team go!
INSERT INTO myTable(stringColumn) VALUES ('go away!')
/*
here are some comments that go with this script.
*/
GO
INSERT INTO myTable(stringColumn) VALUES ('this is the next script')
EDIT:
I searched more and found this SQL documentation:
http://msdn.microsoft.com/en-us/library/ms188037(SQL.90).aspx
As it turns out, GO must be on its own line as some answers suggested. However, it can be followed by a "count" integer, which will actually execute the statement batch that many times (has anybody actually used that before??), and it can be followed by a single-line comment on the same line (but not a multi-line one; I tested this). So the magic regex would look something like:
"(?m)^\s*GO\s*\d*\s*$"
Except this doesn't account for:
a possible single-line comment ("--" followed by any character except a line break) at the end.
the whole line being inside a larger multi-line comment.
I'm not concerned about capturing the "count" argument and using it. Now that I have some technical documentation i'm tantalizingly close to writing this "to spec" and never having to worry about it again.
|
[
"Is \"GO\" always on a line by itself? You could just split on \"^GO$\".\n",
"since you can have comments inside comments, nested comments, comments inside queries, etc, there is no sane way to do it with regexes.\nJust immagine the following script:\nINSERT INTO table (name) VALUES (\n-- GO NOW GO\n'GO to GO /* GO */ GO' +\n/* some comment 'go go go'\n-- */ 'GO GO' /*\nGO */\n)\n\nThat without mentioning:\nINSERT INTO table (go) values ('xxx') GO\n\nThe only way would be to build a stateful parser instead. One that reads a char at a time, and has a flag that will be set when it is inside a comment/quote-delimited string/etc and reset when it ends, so the code can ignore \"GO\" instances when inside those.\n",
"If GO is always on a line by itself you can use split like this:\n#!/usr/bin/python\n\nimport re\n\nsql = \"\"\"-- this is a great database script! go team go!\nINSERT INTO myTable(stringColumn) VALUES ('go away!')\n/*\n here are some comments that go with this script.\n*/\nGO 5 --this is a test\nINSERT INTO myTable(stringColumn) VALUES ('this is the next script')\"\"\"\n\nstatements = re.split(\"(?m)^\\s*GO\\s*(?:[0-9]+)?\\s*(?:--.*)?$\", sql)\n\nfor statement in statements:\n print \"the statement is\\n%s\\n\" % (statement)\n\n\n(?m) turns on multiline matchings, that is ^ and $ will match start and end of line (instead of start and end of string).\n^ matches at the start of a line\n\\s* matches zero or more whitespaces (space, tab, etc.)\nGO matches a literal GO\n\\s* matches as before\n(?:[0-9]+)? matches an optional integer number (with possible leading zeros)\n\\s* matches as before\n(?:--.*)? matches an optional end-of-line comment \n$ matches at the end of a line\n\nThe split will consume the GO line, so you won't have to worry about it. This will leave you with a list of statements.\nThis modified split has a problem: it will not give you back the number after the GO, if that is important I would say it is time to move to a parser of some form.\n",
"This won't detect if GO ever is used as a variable name inside some statement, but should take care of those inside comments or strings.\nEDIT: This now works if GO is part of the statement, as long as it is not in it's own line.\nimport re\n\nline_comment = r'(?:--|#).*$'\nblock_comment = r'/\\*[\\S\\s]*?\\*/'\nsinge_quote_string = r\"'(?:\\\\.|[^'\\\\])*'\"\ndouble_quote_string = r'\"(?:\\\\.|[^\"\\\\])*\"'\ngo_word = r'^[^\\S\\n]*(?P<GO>GO)[^\\S\\n]*\\d*[^\\S\\n]*(?:(?:--|#).*)?$'\n\nfull_pattern = re.compile(r'|'.join((\n line_comment,\n block_comment,\n singe_quote_string,\n double_quote_string,\n go_word,\n)), re.IGNORECASE | re.MULTILINE)\n\ndef split_sql_statements(statement_string):\n last_end = 0\n for match in full_pattern.finditer(statement_string):\n if match.group('GO'):\n yield statement_string[last_end:match.start()]\n last_end = match.end()\n yield statement_string[last_end:]\n\nExample usage:\nstatement_string = r\"\"\"\n-- this is a great database script! go team go!\nINSERT INTO go(go) VALUES ('go away!')\ngo 7 -- foo\nINSERT INTO go(go) VALUES (\n 'I have to GO \" with a /* comment to GO inside a /* GO string /*'\n)\n/*\n here are some comments that go with this script.\n */\n GO\n INSERT INTO go(go) VALUES ('this is the next script')\n\"\"\"\n\nfor statement in split_sql_statements(statement_string):\n print '======='\n print statement\n\nOutput:\n=======\n\n-- this is a great database script! go team go!\nINSERT INTO go(go) VALUES ('go away!')\n\n=======\n\nINSERT INTO go(go) VALUES (\n 'I have to GO \" with a /* comment to GO inside a /* GO string /*'\n)\n/*\n here are some comments that go with this script.\n */\n\n=======\n\n INSERT INTO go(go) VALUES ('this is the next script')\n\n"
] |
[
8,
5,
5,
2
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0000778969_python_regex.txt
|
Q:
Identifying the types of all variables in a C project
I am trying to write a program to check that some C source code conforms to a variable naming convention. In order to do this, I need to analyse the source code and identify the type of all the local and global variables.
The end result will almost certainly be a python program, but the tool to analyse the code could either be a python module or an application that produces an easy-to-parse report. Alternatively (more on this below) it could be a way of extracting information from the compiler (by way of a report or similar). In case that's helpful, in all likelihood, it will be the Keil ARM compiler.
I've been experimenting with ctags and this is very useful for finding all of the typedefs and macro definitions etc, but it doesn't provide a direct way to find the type of variables, especially when the definition is spread over multiple lines (which I hope it won't be!).
Examples might include:
static volatile u8 var1; // should be flagged as static and volatile and a u8 (typedef of unsigned 8-bit integer)
volatile /* comments */
static /* inserted just to make life */
u8 /* difficult! */ var2 =
(u8) 72
; // likewise (nasty syntax, but technically valid C)
const uint_16t *pointer1; // flagged as a pointer to a constant uint_16t
int * const pointer2; // flagged as a constant pointer to an int
const char * const pointer3; // flagged as a constant pointer to a constant char
static MyTypedefTYPE var3; // flagged as a MyTypedefTYPE variable
u8 var4, var5, var6 = 72;
int *array1[SOME_LENGTH]; // flagged as an array of pointers to integers
char array2[FIRST_DIM][72]; // flagged as an array of arrays of type char
etc etc etc
It will also need to identify whether they're local or global/file-scope variables (which ctags can do) and if they're local, I'd ideally like the name of the function that they're declared within.
Also, I'd like to do a similar thing with functions: identify the return type, whether they're static and the type and name of all of their arguments.
Unfortunately, this is rather difficult with the C syntax since there is a certain amount of flexibility in parameter order and lots of flexibility in the amount of white space that is allowed between the parameters. I've toyed with using some fancy regular expressions to do the work, but it's far from ideal as there are so many different situations that can be applied, so the regular expressions quickly become unmanageable. I can't help but think that compilers must be able to do this (in order to work!), so I was wondering whether it was possible to extract this information. The Keil compiler seems to produce a ".crf" file for each source file that's compiled and this appears to contain all of the variables declared in that file, but it's a binary format and I can't find any information on how to parse this file. Alternatively a way of getting the information out of ctags would be perfect.
Any help that anyone can offer with this would be gratefully appreciated.
Thanks,
Al
A:
There are a number of Python parser packages that can be used to describe a syntax and then it will generate Python code to parse that syntax.
Ned Batchelder wrote a very nice summary
Of those, Ply was used in a project called pycparser that parses C source code. I would recommend starting with this.
Some of those other parser projects might also have sample C parsers.
Edit: just noticed that pycparser even has a sample Python script to just parse C type declarations like the old cdecl program.
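For instance, a rough sketch with pycparser (note that it wants preprocessed code, so no comments or macros; the node attribute names below are pycparser's own):
from pycparser import c_parser, c_ast

source = """
typedef unsigned char u8;
static volatile u8 var1;
"""

class DeclVisitor(c_ast.NodeVisitor):
    def visit_Decl(self, node):
        # quals holds const/volatile, storage holds static/extern
        print node.name, node.storage, node.quals

DeclVisitor().visit(c_parser.CParser().parse(source))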
A:
How about approaching it from the other side completely. You already have a parser that fully understands all of the nuances of the C type system: the compiler itself. So, compile the project with full debug support, and go spelunking in the debug data.
For a system based on formats supported by binutils, most of the detail you need can be learned with the BFD library.
Microsoft's debug formats are (somewhat) supported by libraries and documents at MSDN, but my Google-fu is weak today and I'm not putting my hands on the articles I know exist to link here.
The Keil 8051 compiler (I haven't used their ARM compiler here) uses Intel OMF or OMF2 format, and documents that the debug symbols are for their debugger or "any Intel-compatible emulators". Specs for OMF as used by Keil C51 are available from Keil, so I would imagine that similar specs are available for their other compilers too.
A quick scan of Keil's web site seems to indicate that they abandoned their proprietary ARM compiler in favor of licensing ARM's RealView Compiler, which appears to use ELF objects with DWARF format debug info. Dwarf should be supported by BFD, and should give you everything you need to know to verify that the types and names match.
A:
Check out ANTLR. It's a parser generator, with bindings for python. The ANTLR site provides a whole bunch of grammars for common languages, C included. You could download the grammar for C and add actions in appropriate places to collect the information you're interested in. There's even a neat graphical tool for creating and debugging the grammars. (I know that seems hokey, but it's actually quite handy and not obnoxious)
ANTLR Main Page: http://www.antlr.org
Their Grammars Page: http://www.antlr.org/grammar/list
ANSI C Grammar: http://www.antlr.org/grammar/1153358328744/C.g
ANTLR Python Target Information: http://www.antlr.org/wiki/display/ANTLR3/Antlr3PythonTarget
I just did something sort of similar, except to get my symbol information I'm actually extracting it from GDB.
A:
What you're trying to do is a lightweight form of static analysis. You might have some luck looking at the tools pointed to by Wikipedia.
Parsing the C code yourself sounds like the wrong direction to me: therein lies madness. If you insist, then [f]lex and yacc (bison) are the tools likely used by your compiler-writers.
Or, if ctags or cscope gets you 80% of the way, the source code to both is widely available. The last 20% is a Simple Matter of Programming. :)
A:
I did something similar for a project I was working on a few years ago. I ended up writing the first half of a C compiler. Don't be alarmed by that prospect. It is actually much easier than it sounds, especially if you are only looking for certain tokens (variable definitions, in this case).
Look for documentation online about how to scan C source code, detect tokens of interest, and parse the results. A good place to start is Wikipedia's article on lexical analysis.
|
Identifying the types of all variables in a C project
|
I am trying to write a program to check that some C source code conforms to a variable naming convention. In order to do this, I need to analyse the source code and identify the type of all the local and global variables.
The end result will almost certainly be a python program, but the tool to analyse the code could either be a python module or an application that produces an easy-to-parse report. Alternatively (more on this below) it could be a way of extracting information from the compiler (by way of a report or similar). In case that's helpful, in all likelihood, it will be the Keil ARM compiler.
I've been experimenting with ctags and this is very useful for finding all of the typedefs and macro definitions etc, but it doesn't provide a direct way to find the type of variables, especially when the definition is spread over multiple lines (which I hope it won't be!).
Examples might include:
static volatile u8 var1; // should be flagged as static and volatile and a u8 (typedef of unsigned 8-bit integer)
volatile /* comments */
static /* inserted just to make life */
u8 /* difficult! */ var2 =
(u8) 72
; // likewise (nasty syntax, but technically valid C)
const uint_16t *pointer1; // flagged as a pointer to a constant uint_16t
int * const pointer2; // flagged as a constant pointer to an int
const char * const pointer3; // flagged as a constant pointer to a constant char
static MyTypedefTYPE var3; // flagged as a MyTypedefTYPE variable
u8 var4, var5, var6 = 72;
int *array1[SOME_LENGTH]; // flagged as an array of pointers to integers
char array2[FIRST_DIM][72]; // flagged as an array of arrays of type char
etc etc etc
It will also need to identify whether they're local or global/file-scope variables (which ctags can do) and if they're local, I'd ideally like the name of the function that they're declared within.
Also, I'd like to do a similar thing with functions: identify the return type, whether they're static and the type and name of all of their arguments.
Unfortunately, this is rather difficult with the C syntax since there is a certain amount of flexibility in parameter order and lots of flexibility in the amount of white space that is allowed between the parameters. I've toyed with using some fancy regular expressions to do the work, but it's far from ideal as there are so many different situations that can be applied, so the regular expressions quickly become unmanageable. I can't help but think that compilers must be able to do this (in order to work!), so I was wondering whether it was possible to extract this information. The Keil compiler seems to produce a ".crf" file for each source file that's compiled and this appears to contain all of the variables declared in that file, but it's a binary format and I can't find any information on how to parse this file. Alternatively a way of getting the information out of ctags would be perfect.
Any help that anyone can offer with this would be gratefully appreciated.
Thanks,
Al
|
[
"There are a number of Python parser packages that can be used to describe a syntax and then it will generate Python code to parse that syntax.\nNed Batchelder wrote a very nice summary\nOf those, Ply was used in a project called pycparser that parses C source code. I would recommend starting with this.\nSome of those other parser projects might also have sample C parsers.\nEdit: just noticed that pycparser even has a sample Python script to just parse C type declarations like the old cdecl program.\n",
"How about approaching it from the other side completely. You already have a parser that fully understands all of the nuances of the C type system: the compiler itself. So, compile the project with full debug support, and go spelunking in the debug data. \nFor a system based on formats supported by binutils, most of the detail you need can be learned with the BFD library.\nMicrosoft's debug formats are (somewhat) supported by libraries and documents at MSDN, but my Google-fu is weak today and I'm not putting my hands on the articles I know exist to link here.\nThe Keil 8051 compiler (I haven't used their ARM compiler here) uses Intel OMF or OMF2 format, and documents that the debug symbols are for their debugger or \"any Intel-compatible emulators\". Specs for OMF as used by Keil C51 are available from Keil, so I would imagine that similar specs are available for their other compilers too.\nA quick scan of Keil's web site seems to indicate that they abandoned their proprietary ARM compiler in favor of licensing ARM's RealView Compiler, which appears to use ELF objects with DWARF format debug info. Dwarf should be supported by BFD, and should give you everything you need to know to verify that the types and names match.\n",
"Check out ANTLR. It's a parser generator, with bindings for python. The ANTLR site provides a whole bunch of grammars for common languages, C included. You could download the grammar for C and add actions in appropriate places to collect the information you're interested in. There's even a neat graphical tool for creating and debugging the grammars. (I know that seems hokey, but it's actually quite handy and not obnoxious)\n\nANTLR Main Page: http://www.antlr.org\nTheir Grammars Page: http://www.antlr.org/grammar/list\nANSI C Grammar: http://www.antlr.org/grammar/1153358328744/C.g\nANTLR Python Target Information: http://www.antlr.org/wiki/display/ANTLR3/Antlr3PythonTarget\n\nI just did something sort of similar, except to get my symbol information I'm actually extracting it from GDB.\n",
"What you're trying to do is a lightweight form of static analysis. You might have some luck looking at the tools pointed to by Wikipedia.\nParsing the C code yourself sounds like the wrong direction to me: therein lies madness. If you insist, then [f]lex and yacc (bison) are the tools likely used by your compiler-writers.\nOr, if ctags or cscope gets you 80% of the way, the source code to both is widely available. The last 20% is a Simple Matter of Programming. :)\n",
"I did something similar for a project I was working on a few years ago. I ended up writing the first half of a C compiler. Don't be alarmed by that prospect. It is actually much easier than it sounds, especially if you are only looking for certain tokens (variable definitions, in this case).\nLook for documentation online about how to scan C source code, detect tokens of interest, and parse the results. A good place to start is Wikipedia's artricle on lexical analysis.\n"
] |
[
5,
3,
2,
2,
0
] |
[] |
[] |
[
"c",
"code_analysis",
"coding_style",
"python",
"variables"
] |
stackoverflow_0000778468_c_code_analysis_coding_style_python_variables.txt
|
Q:
How to center a GNOME pop-up notification?
To display a GNOME pop-up notification at (200,400) on the screen (using Python):
import pynotify
n = pynotify.Notification("This is my title", "This is my description")
n.set_hint('x', 200)
n.set_hint('y', 400)
n.show()
I'm a gtk noob. How can I make this Notification show up centered on the screen, or at the bottom-center of the screen?
Perhaps my question should be "what Python snippet gets me the Linux screen dimensions?", and I'll plug those into set_hint() as appropriate.
A:
Since you're using GNOME, here's the GTK way of getting the screen resolution
import gtk.gdk
import pynotify
n = pynotify.Notification("This is my title", "This is my description")
n.set_hint('x', gtk.gdk.screen_width()/2.)
n.set_hint('y', gtk.gdk.screen_height()/2.)
n.show()
A:
A bit of a hack, but this works:
from Tkinter import *
r = Tk()
r.withdraw()
width, height = r.winfo_screenwidth(), r.winfo_screenheight()
Another option is:
from commands import getstatusoutput
import re
status, output = getstatusoutput("xwininfo -root")
width = re.compile(r"Width: (\d+)").findall(output)[0]
height = re.compile(r"Height: (\d+)").findall(output)[0]
|
How to center a GNOME pop-up notification?
|
To display a GNOME pop-up notification at (200,400) on the screen (using Python):
import pynotify
n = pynotify.Notification("This is my title", "This is my description")
n.set_hint('x', 200)
n.set_hint('y', 400)
n.show()
I'm a gtk noob. How can I make this Notification show up centered on the screen, or at the bottom-center of the screen?
Perhaps my question should be "what Python snippet gets me the Linux screen dimensions?", and I'll plug those into set_hint() as appropriate.
|
[
"Since you're using GNOME, here's the GTK way of getting the screen resolution\nimport gtk.gdk\nimport pynotify\n\nn = pynotify.Notification(\"This is my title\", \"This is my description\")\nn.set_hint('x', gtk.gdk.screen_width()/2.)\nn.set_hint('y', gtk.gdk.screen_height()/2.)\nn.show()\n\n",
"A bit of a hack, but this works:\nfrom Tkinter import *\nr = Tk()\nr.withdraw()\nwidth, height = r.winfo_screenwidth(), r.winfo_screenheight()\n\nAnother option is:\nfrom commands import getstatusoutput\nstatus, output = getstatusoutput(\"xwininfo -root\")\nwidth = re.compile(r\"Width: (\\d+)\").findall(output)[0]\nheight = re.compile(r\"Height: (\\d+)\").findall(output)[0]\n\n"
] |
[
4,
0
] |
[
"on windows, \n from win32api import GetSystemMetrics\n width = GetSystemMetrics (0)\n height = GetSystemMetrics (1)\n print \"Screen resolution = %dx%d\" % (width, height)\n\nI cant seem to find the linux version for it tho. \n"
] |
[
-4
] |
[
"gtk",
"pynotify",
"python",
"user_interface"
] |
stackoverflow_0000778660_gtk_pynotify_python_user_interface.txt
|
Q:
How to upload a pristine Python package to PyPI?
What's the magic "python setup.py some_incantation_here" command to upload a package to PyPI, in a form that can be downloaded to get the original package in its original form?
I have a package with some source and a few image files (as package_data). If I do "setup.py sdist register upload", the .tar.gz has the image files excluded. If I do "setup.py bdist_egg register upload", the egg contains the images but excludes the setup.py file. I want to be able to get a file uploaded that is just the entirety of my project -- aka "setup.py the_whole_freaking_thing register upload".
Perhaps the best way to do this is to manually tar.gz my project directory and upload it using the PyPI web interface?
Caveat: I'm trying to avoid having to store a simple project I just created in my SVN repo as well as on PyPI -- it seems like a waste of work to keep track of its history and files in two places.
A:
When you perform an "sdist" command, then what controls the list of included files is your "MANIFEST.in" file sitting next to "setup.py", not whatever you have listed in "package_data". This has something to do with the schizophrenic nature of the Python packaging solutions today; "sdist" is powered by the distutils in the standard library, while "bdist_egg" is controlled by the setuptools module.
To solve the problem, try creating a MANIFEST.in next to your setup.py file, and give it contents like this:
include *.jpg
Of course, I'm imagining that your "image files" are actual pictures rather than disk images or ISO images or something; you might have to adjust the above line if I've guessed wrong! But check out the Specifying which files to distribute section of the distutils docs, and see whether you can't get those files appearing in your .tar.gz source distribution! Good luck.
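If the images live inside a package directory rather than at the top level, a recursive include may be closer to what you want (the package path below is hypothetical):
include setup.py
recursive-include mypackage/images *.jpg *.png
Then run "python setup.py sdist" again and inspect the generated .tar.gz to confirm the files made it in.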
|
How to upload a pristine Python package to PyPI?
|
What's the magic "python setup.py some_incantation_here" command to upload a package to PyPI, in a form that can be downloaded to get the original package in its original form?
I have a package with some source and a few image files (as package_data). If I do "setup.py sdist register upload", the .tar.gz has the image files excluded. If I do "setup.py bdist_egg register upload", the egg contains the images but excludes the setup.py file. I want to be able to get a file uploaded that is just the entirety of my project -- aka "setup.py the_whole_freaking_thing register upload".
Perhaps the best way to do this is to manually tar.gz my project directory and upload it using the PyPI web interface?
Caveat: I'm trying to avoid having to store a simple project I just created in my SVN repo as well as on PyPI -- it seems like a waste of work to keep track of its history and files in two places.
|
[
"When you perform an \"sdist\" command, then what controls the list of included files is your \"MANIFEST.in\" file sitting next to \"setup.py\", not whatever you have listed in \"package_data\". This has something to do with the schizophrenic nature of the Python packaging solutions today; \"sdist\" is powered by the distutils in the standard library, while \"bdist_egg\" is controlled by the setuptools module.\nTo solve the problem, try creating a MANIFEST.in next to your setup.py file, and give it contents like this:\ninclude *.jpg\n\nOf course, I'm imaging that your \"image files\" are actual pictures rather than disk images or ISO images or something; you might have to adjust the above line if I've guessed wrong! But check out the Specifying which files to distribute section of the distutils docs, and see whether you can't get those files appearing in your .tar.gz source distribution! Good luck.\n"
] |
[
16
] |
[] |
[] |
[
"packaging",
"pypi",
"python"
] |
stackoverflow_0000778980_packaging_pypi_python.txt
|
Q:
How do I get the last number from the range() function?
Is there a way to get the last number from the range() function?
I need to get the last number in a Fibonacci sequence for the first 20 terms, or should I use a list instead of range()?
A:
Not quite sure what you are after here but here goes:
rangeList = range(0,21)
lastNumber = rangeList[len(rangeList)-1:][0]
or:
lastNumber = rangeList[-1]
A:
By "in a range", do you mean the last value provided by a generator? If so, you can do something like this:
def fibonacci(iterations):
# generate your fibonacci numbers here...
[x for x in fibonacci(20)][-1]
That would get you the last generated value.
A:
I don't think anyone considered that you need Fibonacci numbers. Building the sequence recursively means storing each number, but there is a formula that gives the nth term of the Fibonacci sequence directly:
Binet's Formula
If you need the last number of a list, use myList[-1].
A:
Is this what you're after?
somerange = range(0,20)
print len(somerange) # if you want 20
print len(somerange)-1 # if you want 19
now if you want the number or item contained in a list...
x = [1,2,3,4]
print x[len(x)-1]
# OR
print x[-1] # negative indexes count from the end of the list
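Since the underlying goal is the 20th Fibonacci number, a minimal sketch that builds the sequence in a list and takes its last element:
def fib_sequence(n):
    # first n Fibonacci terms, starting 1, 1
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print fib_sequence(20)[-1]   # 6765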
|
How do I get the last number from the range() function?
|
Is there a way to get the last number from the range() function?
I need to get the last number in a Fibonacci sequence for first 20 terms or should I use a list instead of range()?
|
[
"Not quite sure what you are after here but here goes:\nrangeList = range(0,21)\nlastNumber = rangeList[len(rangeList)-1:][0]\n\nor:\nlastNumber = rangeList[-1]\n\n",
"by in a range, do you mean last value provided by a generator? If so, you can do something like this:\ndef fibonacci(iterations):\n # generate your fibonacci numbers here...\n\n\n[x for x in fibonacci(20)][-1]\n\nThat would get you the last generated value.\n",
"I don't think anyone considered that you need fibonacci numbers. No, you'll have to store each number to build the fibonacci sequence recursively, but there is a formula to get the nth term of the fibonacci sequence. \nBinet's Formula \nIf you need the last number of a list, use myList[-1].\n",
"Is this what you're after?\nsomerange = range(0,20)\nprint len(somerange) # if you want 20\nprint len(somerange)-1 # if you want 19\n\nnow if you want the number or item contained in a list...\nx = [1,2,3,4]\nprint x[len(x)-1]\n# OR\nprint x[-1] # go back 1 element from current index 0, takes you to list end\n\n"
] |
[
12,
3,
1,
0
] |
[] |
[] |
[
"fibonacci",
"python",
"range"
] |
stackoverflow_0000780057_fibonacci_python_range.txt
|
Q:
Generating unique and opaque user IDs in Google App Engine
I'm working on an application that lets registered users create or upload content, and allows anonymous users to view that content and browse registered users' pages to find that content - this is very similar to how a site like Flickr, for example, allows people to browse its users' pages.
To do this, I need a way to identify the user in the anonymous HTTP GET request. A user should be able to type http://myapplication.com/browse/<userid>/<contentid> and get to the right page - <userid> should be unique, but mustn't be something like the user's email address, for privacy reasons.
Through Google App Engine, I can get the email address associated with the user, but like I said, I don't want to use that. I can have users of my application pick a unique user name when they register, but I would like to make that optional if at all possible, so that the registration process is as short as possible.
Another option is to generate some random cookie (a GUID?) during the registration process and use that, but I don't see an obvious way of guaranteeing uniqueness of such a cookie without a trip to the database.
Is there a way, given an App Engine user object, of getting a unique identifier for that object that can be used in this way?
I'm looking for a Python solution - I forgot that GAE also supports Java now. Still, I expect the techniques to be similar, regardless of the language.
A:
Your timing is impeccable: Just yesterday, a new release of the SDK came out, with support for unique, permanent user IDs. They meet all the criteria you specified.
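In code, the new API looks roughly like this (a sketch using the users module; user_id() is the new opaque identifier):
from google.appengine.api import users

user = users.get_current_user()
if user:
    opaque_id = user.user_id()  # stable, unique, and not the e-mail address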
A:
I think you should distinguish between two types of users:
1) users that have logged in via Google Accounts or that have already registered on your site with a non-google e-mail address
2) users that opened your site for the first time and are not logged in in any way
For the second case, I can see no other way than to generate some random string (e.g. via uuid.uuid4() or from this user's session cookie key), as an anonymous user does not carry any unique information with himself.
For users that are logged in, however, you already have a unique identifier -- their e-mail address. I agree with your privacy concerns -- you shouldn't use it as an identifier. Instead, how about generating a string that seems random, but is in fact generated from the e-mail address? Hashing functions are perfect for this purpose. Example:
>>> import hashlib
>>> email = '[email protected]'
>>> salt = 'SomeLongStringThatWillBeAppendedToEachEmail'
>>> key = hashlib.sha1('%s$%s' % (email, salt)).hexdigest()
>>> print key
f6cd3459f9a39c97635c652884b3e328f05be0f7
As hashlib.sha1 is not a random function, but for given data returns always the same result, but it is proven to be practically irreversible, you can safely present the hashed key on the website without compromising user's e-mail address. Also, you can safely assume that no two hashes of distinct e-mails will be the same (they can be, but probability of it happening is very, very small). For more information on hashing functions, consult the Wikipedia entry.
A:
Do you mean session cookies?
Try http://code.google.com/p/gaeutilities/
What DzinX said. The only way to create an opaque key that can be authenticated without a database roundtrip is using encryption or a cryptographic hash.
Give the user a random number and hash it or encrypt it with a private key. You still run the (tiny) risk of collisions, but you can avoid this by touching the database on key creation, changing the random number in case of a collision. Make sure the random number is cryptographic, and add a long server-side random number to prevent chosen plaintext attacks.
You'll end up with a token like the Google Docs key, basically a signature proving the user is authenticated, which can be verified without touching the database.
However, given the pricing of GAE and the speed of bigtable, you're probably better off using a session ID if you really can't use Google's own authentication.
|
Generating unique and opaque user IDs in Google App Engine
|
I'm working on an application that lets registered users create or upload content, and allows anonymous users to view that content and browse registered users' pages to find that content - this is very similar to how a site like Flickr, for example, allows people to browse its users' pages.
To do this, I need a way to identify the user in the anonymous HTTP GET request. A user should be able to type http://myapplication.com/browse/<userid>/<contentid> and get to the right page - should be unique, but mustn't be something like the user's email address, for privacy reasons.
Through Google App Engine, I can get the email address associated with the user, but like I said, I don't want to use that. I can have users of my application pick a unique user name when they register, but I would like to make that optional if at all possible, so that the registration process is as short as possible.
Another option is to generate some random cookie (a GUID?) during the registration process, and use that, I don't see an obvious way of guaranteeing uniqueness of such a cookie without a trip to the database.
Is there a way, given an App Engine user object, of getting a unique identifier for that object that can be used in this way?
I'm looking for a Python solution - I forgot that GAE also supports Java now. Still, I expect the techniques to be similar, regardless of the language.
|
[
"Your timing is impeccable: Just yesterday, a new release of the SDK came out, with support for unique, permanent user IDs. They meet all the criteria you specified.\n",
"I think you should distinguish between two types of users:\n1) users that have logged in via Google Accounts or that have already registered on your site with a non-google e-mail address\n2) users that opened your site for the first time and are not logged in in any way\nFor the second case, I can see no other way than to generate some random string (e.g. via uuid.uuid4() or from this user's session cookie key), as an anonymous user does not carry any unique information with himself.\nFor users that are logged in, however, you already have a unique identifier -- their e-mail address. I agree with your privacy concerns -- you shouldn't use it as an identifier. Instead, how about generating a string that seems random, but is in fact generated from the e-mail address? Hashing functions are perfect for this purpose. Example:\n>>> import hashlib\n\n>>> email = '[email protected]'\n>>> salt = 'SomeLongStringThatWillBeAppendedToEachEmail'\n\n>>> key = hashlib.sha1('%s$%s' % (email, salt)).hexdigest()\n>>> print key\nf6cd3459f9a39c97635c652884b3e328f05be0f7\n\nAs hashlib.sha1 is not a random function, but for given data returns always the same result, but it is proven to be practically irreversible, you can safely present the hashed key on the website without compromising user's e-mail address. Also, you can safely assume that no two hashes of distinct e-mails will be the same (they can be, but probability of it happening is very, very small). For more information on hashing functions, consult the Wikipedia entry.\n",
"Do you mean session cookies?\nTry http://code.google.com/p/gaeutilities/\n\nWhat DzinX said. The only way to create an opaque key that can be authenticated without a database roundtrip is using encryption or a cryptographic hash.\nGive the user a random number and hash it or encrypt it with a private key. You still run the (tiny) risk of collisions, but you can avoid this by touching the database on key creation, changing the random number in case of a collision. Make sure the random number is cryptographic, and add a long server-side random number to prevent chosen plaintext attacks.\nYou'll end up with a token like the Google Docs key, basically a signature proving the user is authenticated, which can be verified without touching the database.\nHowever, given the pricing of GAE and the speed of bigtable, you're probably better off using a session ID if you really can't use Google's own authentication.\n"
] |
[
7,
3,
1
] |
[] |
[] |
[
"google_app_engine",
"guid",
"python",
"uniqueidentifier"
] |
stackoverflow_0000778965_google_app_engine_guid_python_uniqueidentifier.txt
|
Q:
How to automatically reload a python file when it is changed
If I make some changes to one of the files belonging to a running app, is there a way to tell the python runtime to automatically reload the module/file?
A:
Take a look at CherryPy's Autoreload feature. I think it looks quite simple and always worked well for me.
A:
Here is a very old module that I posted nearly ten years ago. It may no longer work with current Python versions (I have not checked) but it may give some ideas.
http://mail.python.org/pipermail/python-list/2000-April/031568.html
A:
Take a look at Django's autoreload module. It works very well.
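All of these work on the same idea: watch the module file's modification time and reload (or restart) when it changes. A minimal, hypothetical polling version:
import os
import time

def watch_and_reload(module, interval=1.0):
    # reload `module` whenever its source file changes on disk
    path = module.__file__.replace('.pyc', '.py')
    last = os.stat(path).st_mtime
    while True:
        time.sleep(interval)
        mtime = os.stat(path).st_mtime
        if mtime != last:
            last = mtime
            reload(module)  # importlib.reload() in Python 3
Real implementations, CherryPy's and Django's included, usually restart the whole process instead, because reload() does not update existing instances of the old classes.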
|
How to automatically reload a python file when it is changed
|
If I make some changes to one of the files belonging to a running app, is there a way to tell the python runtime to automatically reload the module/file?
|
[
"Take at look at CherryPy's Autoreload feature. I think it looks quite simple and always worked well for me.\n",
"Here is a very old module that I posted nearly ten years ago. I may no longer work with current Python versions (I have not checked) but it may give some ideas.\nhttp://mail.python.org/pipermail/python-list/2000-April/031568.html\n",
"Take look at Django's autoreload module. It works very well.\n"
] |
[
5,
4,
4
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000780526_python.txt
|
Q:
Add Quotes in url string from file
I need a script to add quotes around each URL string in url.txt:
from http://www.site.com/info.xx to "http://www.site.com/info.xx"
A:
url = '"%s"' % url
Example:
line = 'http://www.site.com/info.xx \n'
url = '"%s"' % line.strip()
print url # "http://www.site.com/info.xx"
Remember, adding a backslash before a quotation mark will escape it and therefore won't end the string.
A:
url = '"%s"' % url
Example:
>>> url = "http://www.site.com/info.xx"
>>> print url
http://www.site.com/info.xx
>>> url = '"%s"' % url
>>> print url
"http://www.site.com/info.xx"
>>>
Parsing it from a file:
from __future__ import with_statement
def parseUrlsTXT(filename):
with open(filename, 'r') as f:
for line in f.readlines():
url = '"%s"' % line[:-1]
print url
Usage:
parseUrlsTXT('/your/dir/path/URLs.txt')
A:
Write one...
Perl is my favourite scripting language... it appears you may prefer Python.
Just read in the file and add \" before and after each line; this is pretty easy in Perl.
This seems more like a request than a question... should this be on Stack Overflow?
A:
If 'url.txt' contains a list of urls, one url per line then:
quote the first non-whitespace character sequence:
$ perl -pe"s~\S+~\"$&\"~" url.txt
or trim whitespaces and quote the rest:
$ perl -nE"$_=trim; say qq(\"$_\")" url.txt
A:
A simple Perl script:
#!/usr/bin/perl
use strict;
use warnings;
while (my $line = <>) {
    $line =~ s/^\s+//;   # trim leading whitespace
    $line =~ s/\s+$//;   # trim trailing whitespace, including the newline
print qq/"$line"\n/;
}
|
Add Quotes in url string from file
|
I need script to add quotes in url string from url.txt
from http://www.site.com/info.xx to "http://www.site.com/info.xx"
|
[
"url = '\"%s\"' % url\n\nExample:\nline = 'http://www.site.com/info.xx \\n'\nurl = '\"%s\"' % line.strip()\nprint url # \"http://www.site.com/info.xx\"\n\nRemember, adding a backslash before a quotation mark will escape it and therefore won't end the string.\n",
"url = '\"%s\"' % url\n\nExample:\n>>> url = \"http://www.site.com/info.xx\"\n>>> print url\nhttp://www.site.com/info.xx\n>>> url = '\"%s\"' % url\n>>> print url\n\"http://www.site.com/info.xx\"\n>>> \n\nParsing it from a file:\nfrom __future__ import with_statement\n\ndef parseUrlsTXT(filename):\n with open(filename, 'r') as f:\n for line in f.readlines():\n url = '\"%s\"' % line[:-1]\n print url\n\nUsage:\nparseUrlsTXT('/your/dir/path/URLs.txt')\n\n",
"write one...\nperl is my favourite scripting language... it appears you may prefer Python.\njust read in the file and add \\\" before and after it..\nthis is pretty easy in perl.\nthis seems more like a request than a question... should this be on stackoverflow?\n",
"If 'url.txt' contains a list of urls, one url per line then:\n\nquote the first non-whitespace character sequence:\n$ perl -pe\"s~\\S+~\\\"$&\\\"~\" url.txt\n\nor trim whitespaces and quote the rest:\n$ perl -nE\"$_=trim; say qq(\\\"$_\\\")\" url.txt\n\n\n",
"A simple Perl script:\n#!/usr/bin/perl\n\nuse strict;\nuse warnings;\n\nwhile (my $line = <>) {\n $line =~ s/\\s*(.*)\\s*/$1/;\n print qq/\"$line\"\\n/;\n}\n\n"
] |
[
7,
4,
0,
0,
0
] |
[] |
[] |
[
"perl",
"python",
"ruby"
] |
stackoverflow_0000543199_perl_python_ruby.txt
|
Q:
How to set correct value for Django ROOT_URLCONF setting in different branches
I've put the site directory created by django-admin startproject under version control (Mercurial). Let's say the site is called frobnicator.
Now I want to make some serious refactoring, so I clone the site using the command
hg clone frobnicator frobnicator-refactoring
but ROOT_URLCONF in settings.py still says frobnicator.urls.
Is there a better way to overcome this problem than moving the site into a wrapper directory and storing that directory under version control (to maintain the same site name after branching), or using local branches?
A:
Simply remove the project name from the ROOT_URLCONF definition - it is optional. Then you can have project folders with different names.
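In settings.py that means changing, for example:
ROOT_URLCONF = 'frobnicator.urls'
to:
ROOT_URLCONF = 'urls'
This works because the project directory itself is on sys.path when Django starts up, so urls is importable as a top-level module.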
|
How to set correct value for Django ROOT_URLCONF setting in different branches
|
I've put site directory created by django-admin startproject under version control (Mercurial). Let's say, the site is called frobnicator.
Now I want to make some serious refactoring, so I clone the site using command
hg clone frobnicator frobnicator-refactoring`
but ROOT_URLCONF in settings.py still says frobnicator.urls.
Is there a better way to overcome this problem rather than moving the site in a wrapper directory and storing this directory under version control (to maintain the same site name after branching) or using local branches?
|
[
"Simply remove project name from the ROOT_URLCONF definition - it is optional. Then you can have project folders with different names.\n"
] |
[
13
] |
[] |
[] |
[
"django",
"mercurial",
"python"
] |
stackoverflow_0000781211_django_mercurial_python.txt
|
Q:
Python Multiprocessing: Sending data to a process
I have subclassed Process like so:
class EdgeRenderer(Process):
def __init__(self,starter,*args,**kwargs):
Process.__init__(self,*args,**kwargs)
self.starter=starter
Then I define a run method which uses self.starter.
That starter object is of a class State that I define.
Is it okay that I do this? What happens to the object? Does it get serialized? Does that mean that I always have to ensure the State object is serializable? Does the new process get a duplicate copy of this object?
A:
On Unix systems, multiprocessing uses os.fork() to create the children; on Windows, it uses some subprocess trickery and serialization to share the data. So to be cross-platform, yes - it must be serializable. The child will get a new copy.
That being said, here's an example:
from multiprocessing import Process
import time
class Starter(object):
def __init__(self):
self.state = False
x = Starter()
class EdgeRenderer(Process):
def __init__(self,starter,*args,**kwargs):
Process.__init__(self,*args,**kwargs)
self.starter=starter
def run(self):
self.starter.state = "HAM SANDWICH"
time.sleep(1)
print self.starter.state
a = EdgeRenderer(x)
a.start()
x.state = True
a.join()
print x.state
When run, you will see:
HAM SANDWICH
True
So the changes the parent makes don't get communicated after the fork() and the changes the child makes have the same issue. You have to adhere to fork limitations.
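If you do need to communicate changes between parent and child after the fork, a Queue is the usual tool; a minimal sketch:
from multiprocessing import Process, Queue

def worker(q):
    q.put("HAM SANDWICH")  # pickled and sent back to the parent

if __name__ == '__main__':
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    print q.get()  # prints: HAM SANDWICH
    p.join()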
|
Python Multiprocessing: Sending data to a process
|
I have subclassed Process like so:
class EdgeRenderer(Process):
def __init__(self,starter,*args,**kwargs):
Process.__init__(self,*args,**kwargs)
self.starter=starter
Then I define a run method which uses self.starter.
That starter object is of a class State that I define.
Is it okay that I do this? What happens to the object? Does it get serialized? Does that mean that I always have to ensure the State object is serializable? Does the new process get a duplicate copy of this object?
|
[
"On unix systems, multiprocessing uses os.fork() to create the children, on windows, it uses some subprocess trickery and serialization to share the data. So to be cross platform, yes - it must be serializable. The child will get a new copy.\nThat being said, here's an example:\nfrom multiprocessing import Process\nimport time\n\nclass Starter(object):\n def __init__(self):\n self.state = False\n\nx = Starter()\n\nclass EdgeRenderer(Process):\n def __init__(self,starter,*args,**kwargs):\n Process.__init__(self,*args,**kwargs)\n self.starter=starter\n def run(self):\n self.starter.state = \"HAM SANDWICH\"\n time.sleep(1)\n print self.starter.state\n\na = EdgeRenderer(x)\na.start()\nx.state = True\na.join()\nprint x.state\n\nWhen run, you will see:\nHAM SANDWICH\nTrue\n\nSo the changes the parent makes don't get communicated after the fork() and the changes the child makes have the same issue. You have to adhere to fork limitations. \n"
] |
[
8
] |
[] |
[] |
[
"multiprocessing",
"python"
] |
stackoverflow_0000779384_multiprocessing_python.txt
|
Q:
Communicating with a Python service
Problem:
I have a python script that I have running as a service. It's a subclass of the win32 class win32serviceutil.ServiceFramework. I want a simple straightforward way of sending arbitrary commands to it via the command line.
What I've looked at:
It looks like the standard way of controlling the service once it's started is by using another program and sending it command signals, but I need to be able to send it a short string as well as an argument. It looks like using a NamedPipe might be a good idea, but it's really too complex for what I want to do. Is there any simpler way?
A:
Not really.
You have many, many ways to do "Interprocess Communication" (IPC) in Python.
Sockets
Named Pipes (see http://developers.sun.com/solaris/articles/named_pipes.html) -- it involves a little bit of OS magic to create, but then it's just a file that you read and write.
Shared Memory (see http://en.wikipedia.org/wiki/Shared_memory) -- this also involves a fair amount of OS-level magic.
Semaphores and Locks; files with locks can work well for IPC.
Higher-level protocols built on sockets...
HTTP; this is what WSGI is all about.
FTP
etc.
A common solution is to use HTTP and define "RESTful" commands. Your service listens on port 80 for HTTP requests that contain arguments and parameters. Look at wsgiref for more information on this.
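As a flavor of that approach, here is a minimal wsgiref listener that takes a command and an argument from the query string (the port and parameter names are arbitrary):
from wsgiref.simple_server import make_server
from urlparse import parse_qs  # urllib.parse in Python 3

def app(environ, start_response):
    # e.g. GET /?cmd=reload&arg=config.ini
    params = parse_qs(environ.get('QUERY_STRING', ''))
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['got %r\n' % params]

make_server('localhost', 8000, app).serve_forever()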
|
Communicating with a Python service
|
Problem:
I have a python script that I have running as a service. It's a subclass of the win32 class win32serviceutil.ServiceFramework. I want a simple straightforward way of sending arbitrary commands to it via the command line.
What I've looked at:
It looks like the standard way of controlling the service once it's started is by using another program and sending it command signals, but I need to be able to send a short string to it as well as an argument. It looks like using a NamedPipe might be a good idea, but it's really too complex for what I wanted to do, is there any other simpler way?
|
[
"Not really.\nYou have many, many ways to do \"Interprocess Communication\" (IPC) in Python.\n\nSockets\nNamed Pipes (see http://developers.sun.com/solaris/articles/named_pipes.html) -- it involves a little bit of OS magic to create, but then it's just a file that you read and write.\nShared Memory (see http://en.wikipedia.org/wiki/Shared_memory) -- this also involves a fair amount of OS-level magic.\nSemphores and Locks; files with locks can work well for IPC.\nHigher-level protocols built on sockets...\n\nHTTP; this is what WSGI is all about.\nFTP\netc.\n\n\nA common solution is to use HTTP and define \"RESTful\" commands. Your service listens on port 80 for HTTP requests that contain arguments and parameters. Look at wsgiref for more information on this.\n"
] |
[
2
] |
[] |
[] |
[
"command_line",
"python",
"service",
"winapi"
] |
stackoverflow_0000781594_command_line_python_service_winapi.txt
|
Q:
Playing and controlling mp3 files in Python?
First things first, I am a Python beginner, with a typical C++/Java background for object oriented stuff.
I was convinced to try Python for this current endeavor I am working on, and so far I like it. One issue I am having though is finding a good mp3 module.
I have tried TkSnack, which installed and ran fine with no errors (as long as my audio device wasn't busy), but it could never actually produce a sound; it just did nothing... I went online for help, and was disappointed with the amount of documentation.
So I decided to switch. I tried PyMad because it is in the standard repositories for Ubuntu as well. There was even less documentation on this, but I could make it play a sound. The only problem is that it requires a loop to constantly write/play the audio buffer. This makes it particularly hairy to handle playback control (in my opinion) because I would have to run this in a separate thread or process, and somehow control the seek position for pause and such. This is a little too low level for why I am using Python. I liked the simplicity of TkSnack for its easy commands like "mysound.play()" or "mysound.pause()" rather than controlling a loop.
I also looked at pyMedia, which looks like it is the most up to date with documentation, but I can't get it to install on my machine. I get a "gcc exited with value 1" error or something like that when running the "python setup.py build" command.
So I am looking for any suggestions or help on one of these modules, or a completely different one, that is high level and easy to use for mp3s (and preferably other formats too). I am trying to have basic playback control (pause, stop, skip, seek) and I may also be streaming files eventually (if I ever get there).
EDIT: I like the Python bindings for GStreamer, but is this a cross-platform solution? I forgot to mention that as a requirement. But I always just associated GStreamer with Linux; would this work on other OSs?
EDIT: Wikipedia says yes.
A:
Sorry I can't help you with PyMad or pyMedia, but I have other suggestions.
Existing music players written in Python:
Exaile
FUPlayer
Listen
All of the above use the Python bindings for the GStreamer multimedia framework. Docs for the bindings are scarce, but check here, here, here, and examples from the source distribution here.
A:
I just had to deal with this, and from my research I think your best bets are pyglet and pygame. They're interface packages with built-in a/v support.
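For example, pygame's mixer exposes the kind of high-level controls you describe; a sketch (the file name is made up):
import pygame

pygame.mixer.init()
pygame.mixer.music.load("song.mp3")
pygame.mixer.music.play()       # start playback
pygame.mixer.music.pause()      # pause...
pygame.mixer.music.unpause()    # ...and resume
pygame.mixer.music.stop()
Note that mp3 support depends on how the underlying SDL_mixer library was built, so test it on each target platform.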
|
Playing and controlling mp3 files in Python?
|
First things first, I am a Python beginner, with a typical C++/Java background for object oriented stuff.
I was convinced to try Python for this current endeavor I am working on, and so far I like it. One issue I am having though is finding a good mp3 module.
I have tried TkSnack, which installed and ran fine with no errors(as long as my audio device wasn't busy) but it could never actually produce a sound, it just did nothing... I went online for help, and was disappointed with the amount of documentation.
So I decided to switch. I tried PyMad because it is in the standard repositories for Ubuntu as well. There was even less documentation on this, but I could make it play a sound. The only problem is that it requires a loop to constantly write/play the audio buffer. This makes it particularly hairy to handle playback control(in my opinion) cause I would have to run this in a separate thread or process, and somehow control the seek position for pause and such. This is a little too low level for why I am using Python. I liked the simplicity of TkSnack for its easy commands like "mysound.play()" or "mysound.pause()" rather than controlling a loop.
I also looked at pyMedia, which looks like it is the most up to date with documentation, but I can't get it to install on my machine. I get a "gcc exited with value 1" error or something like that when running the "python setup.py build" command.
So I am looking for any suggestions or help on one of these modules, or a completely different one, that is high level and easy to use for mp3s(and preferably other formats too) I am trying to have basic playback control(pause, stop, skip, seek) and I may also be streaming files too eventually(if I ever get there).
EDIT: I like the python bindings for Gstreamer, but is this a cross-platform solution?? I forgot to mention that as a requirement. But I always just associated GStreamer with Linux, would this work on other OSs?
EDIT: Wikipedia says yes.
|
[
"Sorry I can't help you with PyMad or pyMedia, but I have other suggestions.\nExisting music players written in Python:\n\nExaile\nFUPlayer\nListen\n\nAll of the above use the Python bindings for the GStreamer multimedia framework. Docs for the bindings are scarce, but check here, here, here, and examples from the source distribution here.\n",
"I just had to deal with this, and from my research I think your best bets are pyglet and pygame. They're interface packages with built-in a/v support.\n"
] |
[
2,
1
] |
[] |
[] |
[
"audio",
"audio_streaming",
"mp3",
"python"
] |
stackoverflow_0000780711_audio_audio_streaming_mp3_python.txt
|
Q:
Emacs function to message the python function I'm in
I'm editing some Python code with rather long functions and decided it would be useful to quickly get the function name without scrolling up. I put this bit of code together to do it. Is there something built in to emacs in general, or the standard python mode in particular, which I can use instead?
(defun python-show-function-name()
"Message the name of the function the point is in"
(interactive)
(save-excursion
(beginning-of-defun)
(message (format "%s" (thing-at-point 'line)))))
A:
You may find decent results with which-function-mode:
Which Function mode is a minor mode
that displays the current function
name in the mode line, updating it as
you move around in a buffer.
To either enable or disable Which
Function mode, use the command M-x
which-function-mode. This command is
global; it applies to all buffers,
both existing ones and those yet to be
created. However, it takes effect only
in certain major modes, those listed
in the value of which-func-modes. If
the value is t, then Which Function
mode applies to all major modes that
know how to support it—in other words,
all the major modes that support
Imenu.
Although I see it getting a little confused in one Python file that I have here...
A:
Did you try py-beginning-of-def-or-class?
(defun python-show-function-name()
"Message the name of the function the point is in"
(interactive)
(save-excursion
(py-beginning-of-def-or-class)
(message (format "%s" (thing-at-point 'line)))))
I find it gives me better results than your beginning-of-defun, but if that's not the problem you're having, then maybe I'm just seeing another symptom of the cause of the wonkiness in my other answer.
A:
C-c C-u (py-goto-block-up) might be what you want.
|
Emacs function to message the python function I'm in
|
I'm editing some Python code with rather long functions and decided it would be useful to quickly get the function name without scrolling up. I put this bit of code together to do it. Is there something built in to emacs in general, or the standard python mode in particular, which I can use instead?
(defun python-show-function-name()
"Message the name of the function the point is in"
(interactive)
(save-excursion
(beginning-of-defun)
(message (format "%s" (thing-at-point 'line)))))
|
[
"You may find decent results with which-function-mode:\n\nWhich Function mode is a minor mode\n that displays the current function\n name in the mode line, updating it as\n you move around in a buffer.\nTo either enable or disable Which\n Function mode, use the command M-x\n which-function-mode. This command is\n global; it applies to all buffers,\n both existing ones and those yet to be\n created. However, it takes effect only\n in certain major modes, those listed\n in the value of which-func-modes. If\n the value is t, then Which Function\n mode applies to all major modes that\n know how to support it—in other words,\n all the major modes that support\n Imenu.\n\nAlthough I see it getting a little confused in one Python file that I have here...\n",
"Did you try py-beginning-of-def-or-class?\n(defun python-show-function-name()\n \"Message the name of the function the point is in\"\n (interactive)\n (save-excursion\n (py-beginning-of-def-or-class)\n (message (format \"%s\" (thing-at-point 'line)))))\n\nI find it gives me better results than your beginning-of-defun, but if that's not the problem you're having, then maybe I'm just seeing another symptom of the cause of the wonkiness in my other answer.\n",
"C-c C-u (py-goto-block-up) might be what you want.\n"
] |
[
21,
2,
0
] |
[] |
[] |
[
"elisp",
"emacs",
"python"
] |
stackoverflow_0000782357_elisp_emacs_python.txt
|
Q:
Implementing a 'function-calling function'
I would like to write a bit of code that calls a function specified by a given argument. EG:
def caller(func):
return func()
However what I would also like to do is specify optional arguments to the 'caller' function so that 'caller' calls 'func' with the arguments specified (if any).
def caller(func, args):
# calls func with the arguments specified in args
Is there a simple, pythonic way to do this?
A:
You can do this by using arbitrary argument lists and unpacking argument lists.
>>> def caller(func, *args, **kwargs):
... return func(*args, **kwargs)
...
>>> def hello(a, b, c):
... print a, b, c
...
>>> caller(hello, 1, b=5, c=7)
1 5 7
Not sure why you feel the need to do it, though.
A:
This already exists as the apply function, though it is considered obsolete due to the new *args and **kwargs syntax.
>>> def foo(a,b,c): print a,b,c
>>> apply(foo, (1,2,3))
1 2 3
>>> apply(foo, (1,2), {'c':3}) # also accepts keyword args
However, the * and ** syntax is generally a better solution. The above is equivalent to:
>>> foo(*(1,2), **{'c':3})
|
Implementing a 'function-calling function'
|
I would like to write a bit of code that calls a function specified by a given argument. EG:
def caller(func):
return func()
However what I would also like to do is specify optional arguments to the 'caller' function so that 'caller' calls 'func' with the arguments specified (if any).
def caller(func, args):
# calls func with the arguments specified in args
Is there a simple, pythonic way to do this?
|
[
"You can do this by using arbitrary argument lists and unpacking argument lists.\n>>> def caller(func, *args, **kwargs):\n... return func(*args, **kwargs)\n...\n>>> def hello(a, b, c):\n... print a, b, c\n...\n>>> caller(hello, 1, b=5, c=7)\n1 5 7\n\nNot sure why you feel the need to do it, though.\n",
"This already exists as the apply function, though it is considered obsolete due to the new *args and **kwargs syntax.\n>>> def foo(a,b,c): print a,b,c\n>>> apply(foo, (1,2,3))\n1 2 3\n>>> apply(foo, (1,2), {'c':3}) # also accepts keyword args\n\nHowever, the * and ** syntax is generally a better solution. The above is equivalent to:\n>>> foo(*(1,2), **{'c':3})\n\n"
] |
[
12,
7
] |
[] |
[] |
[
"function",
"functional_programming",
"python"
] |
stackoverflow_0000782605_function_functional_programming_python.txt
|
Q:
How to hide __methods__ in python?
I just wondered how to hide the special
__.*__
methods in Python*. In particular, I am using an interactive Python interpreter with tab-completion, and I would like to display only the methods my modules expose...
thanks,
/ myyn /
*(at least from the user, who uses a python shell)
it looks like this now:
h[2] >>> Q.
Q.ALL( Q.__delattr__( Q.__getattribute__(
Q.__package__ Q.__sizeof__( Q.find_values(
Q.json
Q.DEFAULT_CONDITION( Q.__dict__ Q.__hash__(
Q.__reduce__( Q.__str__( Q.get_loops_total_platform(
Q.jsonlib
Q.SUCCESSFUL( Q.__doc__ Q.__init__(
Q.__reduce_ex__( Q.__subclasshook__( Q.get_platforms(
Q.memoize(
Q.__all__ Q.__file__ Q.__name__
Q.__repr__( Q.cached_open( Q.get_snippets(
Q.__class__( Q.__format__( Q.__new__(
Q.__setattr__( Q.find_results( Q.get_subjects(
h[2] >>> Q.
and I wish it looked like:
h[2] >>> Q.
Q.ALL( Q.find_values( Q.json
Q.DEFAULT_CONDITION( Q.get_loops_total_platform(
Q.jsonlib Q.SUCCESSFUL( Q.get_platforms(
Q.memoize( Q.cached_open( Q.get_snippets(
Q.find_results( Q.get_subjects(
h[2] >>> Q.
A:
I think you should look for a way to get that particular environment/interpreter to stop displaying the "private" methods when you press TAB. I don't think there is a way to "hide" methods from Python itself; that would be very weird.
A:
Well, you could create a subclass of rlcompleter.Completer, override
the methods in question, and install that into readline.
import rlcompleter
import readline
class MyCompleter(rlcompleter.Completer):
def global_matches(self, text):
....
def attr_matches(self, text):
....
readline.set_completer(MyCompleter().complete)
These code snippets allow case-insensitive tab completion:
http://www.nabble.com/Re%3A-Tab-completion-question-p22905952.html
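For the specific case in this question, dropping the __special__ names, the override can be as small as this sketch:
import readline
import rlcompleter

class NoDunderCompleter(rlcompleter.Completer):
    def attr_matches(self, text):
        # candidates look like "Q.__dict__"; drop the dunder ones
        matches = rlcompleter.Completer.attr_matches(self, text)
        return [m for m in matches if '.__' not in m]

readline.set_completer(NoDunderCompleter().complete)
readline.parse_and_bind('tab: complete')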
A:
I would take a look at IPython. Maybe you are able to hook IPython's interactive shell into your app without a subprocess and apply the private-method filtering from there.
|
How to hide __methods__ in python?
|
I just wondered, how to hide special
__.*__
methods in python*? Especially I am using an interactive python interpreter with tab-completion, and I would like to display only the methods my modules expose ...
thanks,
/ myyn /
*(at least from the user, who uses a python shell)
it looks like this now:
h[2] >>> Q.
Q.ALL( Q.__delattr__( Q.__getattribute__(
Q.__package__ Q.__sizeof__( Q.find_values(
Q.json
Q.DEFAULT_CONDITION( Q.__dict__ Q.__hash__(
Q.__reduce__( Q.__str__( Q.get_loops_total_platform(
Q.jsonlib
Q.SUCCESSFUL( Q.__doc__ Q.__init__(
Q.__reduce_ex__( Q.__subclasshook__( Q.get_platforms(
Q.memoize(
Q.__all__ Q.__file__ Q.__name__
Q.__repr__( Q.cached_open( Q.get_snippets(
Q.__class__( Q.__format__( Q.__new__(
Q.__setattr__( Q.find_results( Q.get_subjects(
h[2] >>> Q.
and I wish it looked like:
h[2] >>> Q.
Q.ALL( Q.find_values( Q.json
Q.DEFAULT_CONDITION( Q.get_loops_total_platform(
Q.jsonlib Q.SUCCESSFUL( Q.get_platforms(
Q.memoize( Q.cached_open( Q.get_snippets(
Q.find_results( Q.get_subjects(
h[2] >>> Q.
|
[
"I think you should look for a way to get that particular environment/interpreter to stop displaying the \"private\" methods when you press TAB. I don't think there is a way to \"hide\" methods from Python itself, that would be very weird.\n",
"Well, you could create a subclass of rlcompleter.Completer, override\nthe methods in question, and install that into readline.\nimport rlcompleter\nimport readline\nclass MyCompleter(rlcompleter.Completer):\n def global_matches(self, text):\n ....\n def attr_matches(self, text):\n ....\n\nimport readline\nreadline.set_completer(MyCompleter().complete) \n\nThese code snippets allow case-insensitive tab completion:\nhttp://www.nabble.com/Re%3A-Tab-completion-question-p22905952.html\n",
"I would take a look into ipython. Maybe your are able to hook ipythons interactive shell without an subprocess into your app and apply the private method filtering from there.\n"
] |
[
3,
3,
1
] |
[] |
[] |
[
"facade",
"hide",
"magic_methods",
"python"
] |
stackoverflow_0000781667_facade_hide_magic_methods_python.txt
|
Q:
English and/or Finnish text validation
Is there an easy-to-use Python module that'd do English or Finnish text validation?
It'd be OK if I could just check that the words exist in a user-defined dictionary, and possibly check that the grammar is somewhat okay.
I am planning to implement fancier validation for the contents of a directory I made a while back. That involves some simple stuff like checking that the config scripts won't crash and do what they should; it's all quite easy otherwise.
For the validator I should just be able to input whole files or strings of Unicode text.
A:
I'm not sure what you're trying to do, but if you're looking for something that can say 'this is valid English' or 'this is valid Finnish', then you're looking at a class of problems that is quite likely unsolvable.
If not, then use a dictionary and/or letter frequencies and Bayesian analysis to determine whether or not given text is English-like or Finnish-like. If you're trying to auto-detect a language, this is likely the best route, although you'll run into problems with mixed-language text.
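The dictionary-lookup half of that is straightforward; a sketch (the word-list path is just an example):
import re

def load_words(path):
    return set(line.strip().lower() for line in open(path))

def unknown_words(text, words):
    # report tokens that are not in the user-defined dictionary
    tokens = re.findall(r"[^\W\d_]+", text, re.UNICODE)
    return [t for t in tokens if t.lower() not in words]

words = load_words('/usr/share/dict/words')
print unknown_words(u"This sentnce has a typo", words)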
|
English and/or Finnish text validation
|
Is there an easy-to-use python module that'd do english or finnish text validation?
It'd be ok if I could just check the words exist in user-defined dictionary and possibly checking that the grammar is somewhat okay.
I am planning to implement a fancy validation for a directory contents I did while ago back. This involves some simple stuff like checking that the config scripts won't crash and does it all well. It's all quite easy otherwise.
For the validator I should just be able to input whole files or strings of unicode text.
|
[
"I'm not sure what you're trying to do, but if you're looking for something that can say 'this is valid English' or 'this is valid Finnish', then you're looking at a class of problems that is quite likely unsolvable.\nIf not, then use a dictionary and/or letter frequencies and Bayesian analysis to determine whether or not given text is English-like or Finnish-like. If you're trying to auto-detect a language, this is likely the best route, although you'll run into problems with mixed-language text.\n"
] |
[
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000783189_python.txt
|
Q:
Send data to the browser while waiting (Python)
I have the following code
print "Starting stage 1<br>"
# Something that takes about 5 seconds
time.sleep(5)
print "Stage 1 complete"
I view the script with my browser as it's part of a web-app, the problem is that it's displaying all of it together. I want it to display first the starting message before it starts and then add the completion message after it's complete.
What do I use to push information to the browser in this fashion?
A:
Try flushing the output after the first print using sys.stdout.flush().
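In context that looks like this (assuming a plain CGI-style script):
import sys
import time

print "Starting stage 1<br>"
sys.stdout.flush()   # push the buffered output to the client now
time.sleep(5)
print "Stage 1 complete"
Keep in mind that the web server or any middleware in between may buffer the response as well, so flushing on the Python side is necessary but not always sufficient.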
|
Send data to the browser while waiting (Python)
|
I have the following code
print "Starting stage 1<br>"
# Something that takes about 5 seconds
time.sleep(5)
print "Stage 1 complete"
I view the script with my browser as it's part of a web-app, the problem is that it's displaying all of it together. I want it to display first the starting message before it starts and then add the completion message after it's complete.
What do I use to push information to the browser in this fashion?
|
[
"Try flush the output after the first print using sys.stdout.flush()\n"
] |
[
5
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000783262_python.txt
|
Q:
wxPython: Using EVT_IDLE
I defined a handler for EVT_IDLE that does a certain background task for me. (That task is to take completed work from a few processes and integrate it into some object, making a visible change in the GUI.)
The problem is that when the user is not moving the mouse or doing anything, EVT_IDLE doesn't get called more than once. I would like this handler to be working all the time. So I tried calling event.RequestMore() at the end of the handler. Works, but now it takes a whole lot of CPU. (I'm guessing it's just looping excessively on that task.)
I'm willing to limit the number of times the task will be carried out per second; How do I do that?
Or do you have another solution in mind?
A:
Something like this (executes at most every second):
...
def On_Idle(self, event):
if not self.queued_batch:
wx.CallLater(1000, self.Do_Batch)
self.queued_batch = True
def Do_Batch(self):
# <- insert your stuff here
self.queued_batch = False
...
Oh, and don't forget to set self.queued_batch to False in the constructor and maybe call event.RequestMore() in some way in On_Idle.
A:
This sounds like a use case for a wxTimer instead of wxIdleEvent. When there is processing to do, call wxTimer.Start(); when there isn't any, call wxTimer.Stop() and do your processing in the EVT_TIMER handler.
(note: I use wxWidgets from C++ and am not familiar with wxPython, but I assume they have a similar API)
|
wxPython: Using EVT_IDLE
|
I defined an handler for EVT_IDLE that does a certain background task for me. (That task is to take completed work from a few processes and integrate it into some object, making a visible change in the GUI.)
The problem is that when the user is not moving the mouse or doing anything, EVT_IDLE doesn't get called more than once. I would like this handler to be working all the time. So I tried calling event.RequestMore() at the end of the handler. Works, but now it takes a whole lot of CPU. (I'm guessing it's just looping excessively on that task.)
I'm willing to limit the number of times the task will be carried out per second; How do I do that?
Or do you have another solution in mind?
|
[
"Something like this (executes at most every second):\n...\n\ndef On_Idle(self, event):\n if not self.queued_batch:\n wx.CallLater(1000, self.Do_Batch)\n self.queued_batch = True\n\ndef Do_Batch(self):\n # <- insert your stuff here\n self.queued_batch = False\n\n...\n\nOh, and don't forget to set self.queued_batch to False in the constructor and maybe call event.RequestMore() in some way in On_Idle.\n",
"This sounds like a use case for wxTimerEvent instead of wxIdleEvent. When there is processing to do call wxTimerEvent.Start(). When there isn't any to do, call wxTimerEvent.Stop() and call your methods to do processing from EVT_TIMER.\n(note: i use from wxWidghets for C++ and am not familiar with wxPython but I assume they have a similar API)\n"
] |
[
2,
0
] |
[] |
[] |
[
"python",
"wxpython"
] |
stackoverflow_0000783023_python_wxpython.txt
|
Q:
seek(), then read(), then write() in python
When running the following python code:
>>> f = open(r"myfile.txt", "a+")
>>> f.seek(-1,2)
>>> f.read()
'a'
>>> f.write('\n')
I get the following (helpful) exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 0] Error
The same thing happens when opening with "r+".
Is this supposed to fail? Why?
Edit:
Obviously, this is just an example, not what I am actually trying to execute. My actual goal was to verify that the file ends with "\n", or add one, before adding the new lines.
I am working under Windows XP, and the problem exists in both Python 2.5 and Python 2.6.
I managed to bypass the problem by calling seek() again:
f = open(r"myfile.txt", "a+")
f.seek(-1,2)
f.read()
'a'
f.seek(-10,2)
f.write('\n')
The actual parameters of the 2nd seek call don't seem to matter.
A:
This appears to be a Windows-specific problem - see http://bugs.python.org/issue1521491 for a similar issue.
Even better, a workaround given and explained at http://mail.python.org/pipermail/python-bugs-list/2005-August/029886.html, insert:
f.seek(f.tell())
between the read() and write() calls.
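Applied to the original goal of making sure the file ends with a newline, a sketch (assuming a non-empty file; opening in binary mode also sidesteps the text-mode line-ending issues mentioned below):
f = open("myfile.txt", "rb+")
f.seek(-1, 2)        # position on the last byte
last = f.read(1)
f.seek(f.tell())     # the Windows workaround, between read and write
if last != "\n":
    f.write("\n")
f.close()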
A:
The a+ mode is for appending; if you want to read & write, you are looking for r+.
try this:
>>> f = open("myfile.txt", "r+")
>>> f.write('\n')
Edit:
You should have specified your platform initially... there are known problems with seek on Windows. UNIX and Win32 use different line endings, LF and CRLF respectively, which interferes with seeking in text mode. There is also an issue with reading to the end of a file. I think you are looking for a seek with whence=2 (relative to the end of the file), then carrying on from there.
these articles may be of interest to you (the second one more specifically):
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-08/2512.html
http://mail.python.org/pipermail/python-list/2002-June/150556.html
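For reference, a sketch of seeking relative to the end of the file; binary mode sidesteps the CRLF translation mentioned above:
import os

f = open("myfile.txt", "rb+")   # binary mode avoids CRLF trouble on Win32
f.seek(0, os.SEEK_END)          # whence=2: offset is relative to end of file
f.write("\n")
f.close()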
A:
Works for me:
$ echo hello > myfile.txt
$ python
Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> f = open('myfile.txt', 'r+')
>>> f.seek(-1, 2)
>>> f.tell()
5L
>>> f.read()
'\n'
>>> f.write('\n')
>>> f.close()
Are you on windows? If so, try 'rb+' instead of 'r+' in the mode.
|
seek(), then read(), then write() in python
|
When running the following python code:
>>> f = open(r"myfile.txt", "a+")
>>> f.seek(-1,2)
>>> f.read()
'a'
>>> f.write('\n')
I get the following (helpful) exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 0] Error
The same thing happens when opening with "r+".
Is this supposed to fail? Why?
Edit:
Obviously, this is just an example, not what I am actually trying to execute. My actual goal was to verify that the file ends with "\n", or add one, before adding the new lines.
I am working under Windows XP, and the problem exists in both Python 2.5 and Python 2.6.
I managed to bypass the problem by calling seek() again:
f = open(r"myfile.txt", "a+")
f.seek(-1,2)
f.read()
'a'
f.seek(-10,2)
f.write('\n')
The actual parameters of the 2nd seek call don't seem to matter.
|
[
"This appears to be a Windows-specific problem - see http://bugs.python.org/issue1521491 for a similar issue.\nEven better, a workaround given and explained at http://mail.python.org/pipermail/python-bugs-list/2005-August/029886.html, insert:\nf.seek(f.tell())\n\nbetween the read() and write() calls.\n",
"the a+ mode is for appending, if you want to read & write, you are looking for r+.\ntry this:\n>>> f = open(\"myfile.txt\", \"r+\")\n>>> f.write('\\n')\n\nEdit:\nyou should have specified your platform initially... there are known problems with seek within windows. When trying to seek, UNIX and Win32 have different line endings, LF and CRLF respectively. There is also an issue with reading to the end of a file. I think you are looking for the seek(2) offset for the end of the file, then carry on from there.\nthese articles may be of interest to you (the second one more specifically):\nhttp://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-08/2512.html\nhttp://mail.python.org/pipermail/python-list/2002-June/150556.html\n",
"Works for me:\n$ echo hello > myfile.txt\n$ python\nPython 2.5.2 (r252:60911, Oct 5 2008, 19:24:49) \n[GCC 4.3.2] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> f = open('myfile.txt', 'r+')\n>>> f.seek(-1, 2)\n>>> f.tell()\n5L\n>>> f.read()\n'\\n'\n>>> f.write('\\n')\n>>> f.close()\n\nAre you on windows? If so, try 'rb+' instead of 'r+' in the mode.\n"
] |
[
5,
1,
0
] |
[] |
[] |
[
"file_io",
"python"
] |
stackoverflow_0000783792_file_io_python.txt
|
Q:
SQLAlchemy - Dictionary of tags
I have a question regarding SQLAlchemy. How can I add to my mapped class a dictionary-like attribute that maps string keys to string values and is stored in the database (in the same or another table as the original mapped object)? I want this to add support for arbitrary tags on my objects.
I found the following example in the SQLAlchemy documentation:
from sqlalchemy.orm.collections import column_mapped_collection, attribute_mapped_collection, mapped_collection
mapper(Item, items_table, properties={
    # key by column
    'notes': relation(Note, collection_class=column_mapped_collection(notes_table.c.keyword)),
    # or named attribute
    'notes2': relation(Note, collection_class=attribute_mapped_collection('keyword')),
    # or any callable
    'notes3': relation(Note, collection_class=mapped_collection(lambda entity: entity.a + entity.b))
})
item = Item()
item.notes['color'] = Note('color', 'blue')
But I want the following behavior:
mapper(Item, items_table, properties={
    # key by column
    'notes': relation(...),
})
item = Item()
item.notes['color'] = 'blue'
Is this possible in SQLAlchemy?
Thank you
A:
The simple answer is yes.
Just use an association proxy:
from sqlalchemy import Column, Integer, String, Table, create_engine
from sqlalchemy import orm, MetaData, Column, ForeignKey
from sqlalchemy.orm import relation, mapper, sessionmaker
from sqlalchemy.orm.collections import column_mapped_collection
from sqlalchemy.ext.associationproxy import association_proxy
Create a test environment:
engine = create_engine('sqlite:///:memory:', echo=True)
meta = MetaData(bind=engine)
Define the tables:
tb_items = Table('items', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String(20)),
    Column('description', String(100)),
)
tb_notes = Table('notes', meta,
    Column('id_item', Integer, ForeignKey('items.id'), primary_key=True),
    Column('name', String(20), primary_key=True),
    Column('value', String(100)),
)
meta.create_all()
Classes (note the association_proxy in the class):
class Note(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value

class Item(object):
    def __init__(self, name, description=''):
        self.name = name
        self.description = description
    notes = association_proxy('_notesdict', 'value', creator=Note)
Mapping:
mapper(Note, tb_notes)
mapper(Item, tb_items, properties={
    '_notesdict': relation(Note,
        collection_class=column_mapped_collection(tb_notes.c.name)),
})
Then just test it:
Session = sessionmaker(bind=engine)
s = Session()
i = Item('ball', 'A round full ball')
i.notes['color'] = 'orange'
i.notes['size'] = 'big'
i.notes['data'] = 'none'
s.add(i)
s.commit()
print i.notes
That prints:
{u'color': u'orange', u'data': u'none', u'size': u'big'}
But, are those in the notes table?
>>> print list(tb_notes.select().execute())
[(1, u'color', u'orange'), (1, u'data', u'none'), (1, u'size', u'big')]
It works!! :)
|
SQLAlchemy - Dictionary of tags
|
I have a question regarding SQLAlchemy. How can I add to my mapped class a dictionary-like attribute that maps string keys to string values and is stored in the database (in the same or another table as the original mapped object)? I want this to add support for arbitrary tags on my objects.
I found the following example in the SQLAlchemy documentation:
from sqlalchemy.orm.collections import column_mapped_collection, attribute_mapped_collection, mapped_collection
mapper(Item, items_table, properties={
    # key by column
    'notes': relation(Note, collection_class=column_mapped_collection(notes_table.c.keyword)),
    # or named attribute
    'notes2': relation(Note, collection_class=attribute_mapped_collection('keyword')),
    # or any callable
    'notes3': relation(Note, collection_class=mapped_collection(lambda entity: entity.a + entity.b))
})
item = Item()
item.notes['color'] = Note('color', 'blue')
But I want the following behavior:
mapper(Item, items_table, properties={
    # key by column
    'notes': relation(...),
})
item = Item()
item.notes['color'] = 'blue'
Is this possible in SQLAlchemy?
Thank you
|
[
"The simple answer is yes.\nJust use an association proxy:\nfrom sqlalchemy import Column, Integer, String, Table, create_engine\nfrom sqlalchemy import orm, MetaData, Column, ForeignKey\nfrom sqlalchemy.orm import relation, mapper, sessionmaker\nfrom sqlalchemy.orm.collections import column_mapped_collection\nfrom sqlalchemy.ext.associationproxy import association_proxy\n\nCreate a test environment:\nengine = create_engine('sqlite:///:memory:', echo=True)\nmeta = MetaData(bind=engine)\n\nDefine the tables:\ntb_items = Table('items', meta, \n Column('id', Integer, primary_key=True), \n Column('name', String(20)),\n Column('description', String(100)),\n )\ntb_notes = Table('notes', meta, \n Column('id_item', Integer, ForeignKey('items.id'), primary_key=True),\n Column('name', String(20), primary_key=True),\n Column('value', String(100)),\n )\nmeta.create_all()\n\nClasses (note the association_proxy in the class):\nclass Note(object):\n def __init__(self, name, value):\n self.name = name\n self.value = value\nclass Item(object):\n def __init__(self, name, description=''):\n self.name = name\n self.description = description\n notes = association_proxy('_notesdict', 'value', creator=Note)\n\nMapping:\nmapper(Note, tb_notes)\nmapper(Item, tb_items, properties={\n '_notesdict': relation(Note, \n collection_class=column_mapped_collection(tb_notes.c.name)),\n })\n\nThen just test it:\nSession = sessionmaker(bind=engine)\ns = Session()\n\ni = Item('ball', 'A round full ball')\ni.notes['color'] = 'orange'\ni.notes['size'] = 'big'\ni.notes['data'] = 'none'\n\ns.add(i)\ns.commit()\nprint i.notes\n\nThat prints:\n{u'color': u'orange', u'data': u'none', u'size': u'big'}\n\nBut, are those in the notes table?\n>>> print list(tb_notes.select().execute())\n[(1, u'color', u'orange'), (1, u'data', u'none'), (1, u'size', u'big')]\n\nIt works!! :)\n"
] |
[
22
] |
[
"The simple answer is 'no'.\nSQLAlchemy is wrapper on a SQL database.\nThe relation examples you quote translate a relationship between SQL tables into a Python map-like structure to make it slightly simpler to do the SQL SELECT statements and locate rows in another table.\nThe \nitem.notes['color'] = Note('color', 'blue')\n\nis essential because the Note is a separate table with two columns. You can't leave the Note part out.\nYou must define this other SQL table, and you must create objects which are mapped to that SQL table.\n"
] |
[
-6
] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0000780774_python_sqlalchemy.txt
|
Q:
Python Module/Class Variable Bleeding
Okay, it took me a little while to narrow down this problem, but it appears Python is doing this on purpose. Can someone explain why this is happening and what I can do to fix it?
File: library/testModule.py
class testClass:
    myvars = dict()
    def __getattr__(self, k):
        if self.myvars.has_key(k):
            return self.myvars[k]
    def __setattr__(self, k, v):
        self.myvars[k] = v
    def __str__(self):
        l = []
        for k, v in self.myvars.iteritems():
            l.append(str(k) + ":" + str(v))
        return " - ".join(l)
test.py
from library import testModule
#I get the same result if I instantiate both classes one after another
c1 = testModule.testClass()
c1.foo = "hello"
c2 = testModule.testClass()
print("c1: " + str(c1) + "\n")
print("c2: " + str(c2) + "\n")
Output:
c1: foo:hello
c2: foo:hello
My best guess is that because library has an "__init__.py" file, the whole module is loaded like a class object and it's now become part of a lasting object.. is this the case?
A:
myvars is a property of the class, not the instance. This means that when you insert an attribute into myvars from the instance c1, the attribute gets associated with the class testClass, not the instance c1 specifically. Since c2 is an instance of the same class, it also has the same attribute.
You could get the behavior you want by writing this:
class testClass:
    def __init__(self):
        # Assign through __dict__ to bypass our own __setattr__, which
        # would otherwise recurse forever before myvars exists.
        self.__dict__['myvars'] = dict()

    def __getattr__(self, k):
        if self.myvars.has_key(k):
            return self.myvars[k]

    def __setattr__(self, k, v):
        self.myvars[k] = v

    def __str__(self):
        l = []
        for k, v in self.myvars.iteritems():
            l.append(str(k) + ":" + str(v))
        return " - ".join(l)
A:
The other answers are correct and to the point. Let me address some of what I think your misconceptions are.
My best guess is that because library has an "__init__.py" file, the whole module is loaded like a class object and it's now become part of a lasting object.. is this the case?
All packages have an __init__.py file. It is necessary to make something a python package. That package may or may not have any code in it. If it does it is guaranteed to execute. In the general case, this doesn't have anything to do with how the other modules in the package are executed, although it certainly is possible to put a lot of really cool trickery in there that does affect it.
As for how modules and classes work, it is often a really good idea to think of a module as a class object that gets instantiated once. The loader executes the files once and all variables, class definitions, and function definitions that are available at the end of the file are then accessible as part of the module.
The same is true of classes, with the main exception that functions declared within classes are transformed into methods (and one special method lets you instantiate the class). So testModule has-a testClass has-a myvars. All three objects are unique: there will not be multiple instances of any of them. And the has-a relationship is really more-or-less the same whether we say "module has-a class object" or "class object has-a class variable". (The difference being implementation details that you ought not be concerned with.)
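A stripped-down illustration of the sharing described above:
class Shared(object):
    items = []                # one list, bound to the class itself

a, b = Shared(), Shared()
a.items.append("x")
print b.items                 # ['x'] -- both instances see the same list

class PerInstance(object):
    def __init__(self):
        self.items = []       # a fresh list for every instance

c, d = PerInstance(), PerInstance()
c.items.append("x")
print d.items                 # [] -- each instance has its own list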
A:
For a good reference on how to use getattr and other methods like it, refer to How-To Guide for Descriptors and there's nothing like practice!
|
Python Module/Class Variable Bleeding
|
Okay, it took me a little while to narrow down this problem, but it appears Python is doing this on purpose. Can someone explain why this is happening and what I can do to fix it?
File: library/testModule.py
class testClass:
    myvars = dict()
    def __getattr__(self, k):
        if self.myvars.has_key(k):
            return self.myvars[k]
    def __setattr__(self, k, v):
        self.myvars[k] = v
    def __str__(self):
        l = []
        for k, v in self.myvars.iteritems():
            l.append(str(k) + ":" + str(v))
        return " - ".join(l)
test.py
from library import testModule
#I get the same result if I instantiate both classes one after another
c1 = testModule.testClass()
c1.foo = "hello"
c2 = testModule.testClass()
print("c1: " + str(c1) + "\n")
print("c2: " + str(c2) + "\n")
Output:
c1: foo:hello
c2: foo:hello
My best guess is that because library has an "__init__.py" file, the whole module is loaded like a class object and it's now become part of a lasting object.. is this the case?
|
[
"myvars is a property of the class, not the instance. This means that when you insert an attribute into myvars from the instance c1, the attribute gets associated with the class testClass, not the instance c1 specifically. Since c2 is an instance of the same class, it also has the same attribute.\nYou could get the behavior you want by writing this:\nclass testClass:\n def __init__(self):\n self.myvars = dict()\n\n def __getattr__(self, k):\n if self.myvars.has_key(k):\n return self.myvars[k]\n\n def __setattr__(self, k, v):\n self.myvars[k] = v\n\n def __str__(self):\n l = []\n for k, v in self.myvars.iteritems():\n l.append(str(k) + \":\" + str(v))\n return \" - \".join(l)\n\n",
"The other answers are correct and to the point. Let me address some of what I think your misconceptions are.\n\nMy best guess is that because library has an \"__init__.py\" file, the whole module is loaded like a class object and it's now become part of a lasting object.. is this the case?\n\nAll packages have an __init__.py file. It is necessary to make something a python package. That package may or may not have any code in it. If it does it is guaranteed to execute. In the general case, this doesn't have anything to do with how the other modules in the package are executed, although it certainly is possible to put a lot of really cool trickery in there that does affect it.\nAs for how modules and classes work, it is often a really good idea to think of a module as a class object that gets instantiated once. The loader executes the files once and all variables, class definitions, and function definitions that are available at the end of the file are then accessible as part of the module.\nThe same is true of classes, with the main exception that functions declared within classes are transformed into methods (and one special method let's you instantiate the class). So testModule has-a testClass has-a myvars. All three objects are unique: there will not be multiple instances of any of them. And the has-a relathionship is really more-or-less the same whether we say \"module has-a class object\" or \"class object has-a class variable\". (The difference being implementation details that you ought not be concerned with.)\n",
"For a good reference on how to use getattr and other methods like it, refer to How-To Guide for Descriptors and there's nothing like practice!\n"
] |
[
7,
2,
0
] |
[] |
[] |
[
"class",
"oop",
"python"
] |
stackoverflow_0000784149_class_oop_python.txt
|
Q:
Penalties of a script constantly looping in the background
I know this topic has been discussed in the past, but I am a tiny bit paranoid about resource usage.
I am looking into writing a daemon for queuing jobs to archive files into zip files for a web app I am working on. It would behave something like this:
while True:
    while morejobs():
        zipfile()
    sleep(15)  # 15 seconds
What sort of resources would be consumed by a process constantly looping away in the background (provided there is nothing to zip)? Is there anything i should be aware of or careful of?
Edit: It looks like most of the answers are concerned about the duration of the sleep. I blindly set it to sleep (in the code example) for 15 milliseconds at a time. I actually intended it to be 15 seconds, and I have 'updated' the code to reflect that.
Edit Again: What would be the lowest reasonable time for the script to be sleeping? Is 5 seconds too low? I have no idea what the load of this app would be or how often new jobs would be added to the queue.
A:
Sleep involves no overhead. The Linux OS uses a very simple signal to wake a sleeping process.
What you're showing is the "busy-waiting" design pattern.
To eliminate overhead, you want to be woken ONLY when there's work to do.
Ways to do this.
Wait on read.
Wait on a select function call. See http://docs.python.org/library/select.html
Wait for a lock to be released. See http://docs.python.org/library/posixfile.html.
Of these, waiting on a read is perhaps easiest. Reading from a pipe or a socket is what you want to do.
I'm guessing that you have a "multiple-writers-single-reader" design pattern. In this case, there are two candidate solutions.
Multiple requests per socket. This is the FTP-like solution where you write a simple server that listens for connections on one socket and opens a dedicated connection for each client. Then you use select to determine which client is sending a file.
Single request per socket. This is the HTTP-like solution where you receive requests in some socket and the request is a big flood of data. When the request is all finished, the socket is closed so another client can get it.
In these two cases, you're not sleeping -- you're waiting for I/O's to complete.
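As a rough sketch of the single-reader waiting on a socket (the port number and handle_request are assumptions, not part of the question):
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 9000))    # hypothetical port for job requests
server.listen(5)

while True:
    # Blocks here, using no CPU, until a client connects with work to do.
    readable, _, _ = select.select([server], [], [])
    for sock in readable:
        conn, addr = sock.accept()
        handle_request(conn)        # hypothetical: read the job and zip the files
        conn.close()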
A:
Instead of sleeping for 15 seconds, it might be better to have a callback which restarts your job when new files arrive, along the lines of the sketch after this list:
Process available files
Check for new files every 60 seconds or whatever interval you choose
When a new file arrives, process it and any others which may have arrived since the last interval
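A minimal sketch of those steps; the drop directory name is an assumption, and zipfile() stands in for the question's processing step:
import os
import time

seen = set()
while True:
    for name in os.listdir("incoming"):   # hypothetical directory new files land in
        if name not in seen:
            zipfile()                     # process it, as in the question
            seen.add(name)
    time.sleep(60)                        # the 60-second interval from above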
A:
Why not just use a cron job to run a script every minute or so? At least you are not depending on your own loop to be continuously running in the background.
A:
If it takes (and these figures are examples) 20 seconds for a file to arrive and 5 seconds for you to process it, what is the harm in your process waiting for, on average, another 7.5 seconds before it even detects that the file is there?
A sleeping process should have as close to zero impact on the CPU as it is possible to get.
So no, I would not be concerned about this aspect at all.
The one thing you should be concerned about is how to restart the process automatically if it fails. I would run a cron job every 5 minutes (your choice of actual frequency) to kill off the old copy (politely, and only if it's running) and then start a new one. That way, there'll only be a 5-minute downtime at most if something goes wrong.
I say politely because the old one may be in the middle of processing files and you should not interrupt that unless it's recoverable.
A:
As an alternative you can lower the priority of your process.
(I'm only familiar with the windows method)
On Windows:
def setpriority(pid=None,priority=1):
    """ Set The Priority of a Windows Process. Priority is a value between 0-5 where
        2 is normal priority. Default sets the priority of the current
        python process but can take any valid process ID. """

    import win32api,win32process,win32con

    priorityclasses = [win32process.IDLE_PRIORITY_CLASS,
                       win32process.BELOW_NORMAL_PRIORITY_CLASS,
                       win32process.NORMAL_PRIORITY_CLASS,
                       win32process.ABOVE_NORMAL_PRIORITY_CLASS,
                       win32process.HIGH_PRIORITY_CLASS,
                       win32process.REALTIME_PRIORITY_CLASS]
    if pid == None:
        pid = win32api.GetCurrentProcessId()
    handle = win32api.OpenProcess(win32con.PROCESS_ALL_ACCESS, True, pid)
    win32process.SetPriorityClass(handle, priorityclasses[priority])
from:
http://code.activestate.com/recipes/496767/
A:
This has the potential to hammer your CPU, even when there is nothing to process.
Edit: Actually sleep() takes an argument as a number of seconds, not milliseconds, so I don't think the CPU is going to be a problem. Still, perhaps you could use a cron job to schedule something like this.
A:
Besides the cost of hammering your CPU, there is the cost of the morejobs() call. You can mitigate this by using a higher value for sleep(), or you can use some sort of mailbox that receives requests and then fires the zipfile() operation.
It is normal for some operations to have a background thread scheduled that periodically checks for something. In this case the best approach is to use sensible values for sleep().
A:
"A thousand reasoned opinions are worth one measurement".
Just try it.
|
Penalties of a script constantly looping in the background
|
I know this topic has been discussed in the past, but I am a tiny bit paranoid about resource usage.
I am looking into writing a daemon for queuing jobs to archive files into zip files for a web app I am working on. It would behave something like this:
while True:
    while morejobs():
        zipfile()
    sleep(15)  # 15 seconds
What sort of resources would be consumed by a process constantly looping away in the background (provided there is nothing to zip)? Is there anything i should be aware of or careful of?
Edit: It looks like most of the answers are concerned about the duration of the sleep. I blindly set it to sleep (in the code example) for 15 milliseconds at a time. I actually intended it to be 15 seconds, and I have 'updated' the code to reflect that.
Edit Again: What would be the lowest reasonable time for the script to be sleeping? Is 5 seconds too low? I have no idea what the load of this app would be or how often new jobs would be added to the queue.
|
[
"Sleep involves no overhead. The Linux OS uses a very simple signal to wake a sleeping process.\nWhat you're showing is the \"busy-waiting\" design pattern.\nTo eliminate overhead, you want to be woken ONLY when there's work to do.\nWays to do this.\n\nWait on read.\nWait on a select function call. See http://docs.python.org/library/select.html\nWait for a lock to be released. See http://docs.python.org/library/posixfile.html.\n\nOf these, waiting on a read is perhaps easiest. Reading from a pipe or a socket is what you want to do. \nI'm guessing that you have a \"multiple-writers-single-reader\" design pattern. In this case, there are two candidate solutions.\n\nMultiple requests per socket. This is the FTP-like solution where you write a simple server that listens for connections on one socket and opens a dedicated connection for each client. Then you use select to determine which client is sending a file.\nSingle request per socket. This is the HTTP-like solution where you receive requests in some socket and the request is a big flood of data. When the request is all finished, the socket is closed so another client can get it.\n\nIn these two cases, you're not sleeping -- you're waiting for I/O's to complete.\n",
"Instead of sleeping for 15 seconds, it might be better to have a callback which restarts your job when new files arrive.\n\nProcess available files\nCheck for new files every 60 seconds or whatever interval you choose\nWhen a new file arrives, process it and any others which may have arrived since the last interval\n\n",
"Why not just use a cron job to run a script every minute or so? At least you are not depending on your own loop to be continuously running in the background.\n",
"If it takes (and these figures are examples) 20 seconds for a file to arrive and 5 seconds for you to process it, what is the harm in your process waiting for, on average, another 7.5 seconds before it even detects that the file is there?\nA sleeping process should have as close to zero impact on the CPU as it is possible to get.\nSo no, I would not be concerned about this aspect at all.\nThe one thing you should be concerned about is how to restart the process automatically if it fails. I would run a cron job every 5 minutes (your choice of actual frequency) to kill off the old copy (politely, and only if it's running) and then start a new one. That way, there'll only be a 5-minute downtime at most if something goes wrong.\nI say politely because the old one may be in the middle of processing files and you should not interrupt that unless it's recoverable.\n",
"As an alternative you can lower the priority of your process.\n(I'm only familiar with the windows method)\nOn Windows:\ndef setpriority(pid=None,priority=1):\n \"\"\" Set The Priority of a Windows Process. Priority is a value between 0-5 where\n 2 is normal priority. Default sets the priority of the current\n python process but can take any valid process ID. \"\"\"\n\n import win32api,win32process,win32con\n\n priorityclasses = [win32process.IDLE_PRIORITY_CLASS,\n win32process.BELOW_NORMAL_PRIORITY_CLASS,\n win32process.NORMAL_PRIORITY_CLASS,\n win32process.ABOVE_NORMAL_PRIORITY_CLASS,\n win32process.HIGH_PRIORITY_CLASS,\n win32process.REALTIME_PRIORITY_CLASS]\n if pid == None:\n pid = win32api.GetCurrentProcessId()\n handle = win32api.OpenProcess(win32con.PROCESS_ALL_ACCESS, True, pid)\n win32process.SetPriorityClass(handle, priorityclasses[priority])\n\nfrom:\nhttp://code.activestate.com/recipes/496767/\n",
"This has the potential to hammer your CPU, even when there is nothing to process.\nEdit: Actually sleep() takes an argument as a number of seconds, not milliseconds so I don't think the CPU is going to be a problem. Still, perhaps you could use a cron job to schedule something like this.\n",
"Besides the cost of hammering your cpu, there is the cost of the morejobs() call. You can mitigate by using a higher value for sleep(), or you can use some sort of mailbox that receives requests and then fires the zipfile() operation.\nIt is normal for some operations to have a background thread scheduled that temporarily checks for something. In this case the best is to use sensible values for sleep().\n",
"\"A thousand reasoned opinions are worth one measurement\". \nJust try it.\n"
] |
[
4,
1,
1,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"performance",
"python",
"queue",
"resources"
] |
stackoverflow_0000781896_performance_python_queue_resources.txt
|