Q:
Fastest nested loops over a single list (with elements remove or not)
I am looking for advice about how to parse a single list, using two nested loops, in the fastest way, avoiding len(list)^2 comparisons and avoiding duplicate files in groups.
More precisely: I have a list of 'file' objects, each of which has a timestamp. I want to group the files by their timestamp and a time offset. E.g. starting from a file X, I want to create a group with all the files that have timestamp < (timestamp(X) + offset).
For this, I did:
for file_a in list:
    temp_group = group()
    temp_group.add(file_a)
    list.remove(file_a)
    for file_b in list:
        if (file_b.timestamp < (file_a.timestamp + offset)):
            temp_group.add(file_b)
            list.remove(file_b)
    groups.add(temp_group)
(ok, the code is more complicated, but this is the main idea)
This obviously doesn't work, because I am modifying the list during the loops, and strange things happen :)
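A minimal example of the skipping behaviour (removing while iterating makes the loop silently skip the element that slides into the freed slot):

lst = [1, 2, 3, 4]
for x in lst:
    lst.remove(x)
print lst   # leaves [2, 4], not the empty list you might expect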
I thought I have to use copy of 'list' for the loops, but, this doesn't work either:
for file_a in list[:]:
    temp_group = group()
    temp_group.add(file_a)
    list.remove(file_a)
    for file_b in list[:]:
        if (file_b.timestamp < (file_a.timestamp + offset)):
            temp_group.add(file_b)
            list.remove(file_b)
    groups.add(temp_group)
Well.. I know I can do this without removing elements from the list, but then I need to mark the ones that were already 'processed', and I need to check them each time - which is a speed penalty.
Can anyone give me some advice about how this can be done in the fastest/best way?
Thank you,
Alex
EDIT: I have found another solution, which doesn't exactly answer the question, but it is what I actually needed (my mistake for asking the question in that way). I am posting it here because it may help someone looking for related issues with loops over lists in Python.
It may not be the fastest (considering the number of 'passes' through the list), but it was quite simple to understand and implement, and it does not require the list to be sorted.
The reason I avoid sorting is that it may take some more time, and because after I make the first set of groups, some of them will be 'locked', and the unlocked groups will be 'dissolved' and regrouped using a different time offset. (And when dissolving groups, the file order may change, so they would require sorting again.)
Anyway, the solution was to control the loop index myself. If I delete a file from the list, I skip incrementing the index (e.g. when I delete index "3", the previous index "4" is now "3", and I don't want to increment the loop counter, because I would skip over it). If I don't delete any item at that iteration, the index is incremented normally. Here's the code (with some extras; ignore all that 'bucket' stuff):
def regroup(self, time_offset):
    #create list of files to be used for regrouping
    regroup_files_list = []
    if len(self.groups) == 0:
        #on first 'regroup', we start with a copy of jpeg_list, so that we do not change it further on
        regroup_files_list = copy.copy(self.jpeg_list)
    else:
        i = 0
        while True:
            try:
                group = self.groups[i]
            except IndexError:
                break
            if group.is_locked == False:
                regroup_files_list.extend(group)
                self.groups.remove(group)
                continue
            else:
                i += 1

    bucket_group = FilesGroup()
    bucket_group.name = c_bucket_group_name

    while len(regroup_files_list) > 0: #we create groups until there are no files left
        file_a = regroup_files_list[0]
        regroup_files_list.remove(file_a)
        temp_group = FilesGroup()
        temp_group.start_time = file_a._iso_time
        temp_group.add(file_a)

        #manually manage the list index when iterating for file_b, because we're removing files
        i = 0
        while True:
            try:
                file_b = regroup_files_list[i]
            except IndexError:
                break
            timediff = file_a._iso_time - file_b._iso_time
            if timediff.days < 0 or timediff.seconds < 0:
                timediff = file_b._iso_time - file_a._iso_time
            if timediff < time_offset:
                temp_group.add(file_b)
                regroup_files_list.remove(file_b)
                continue # :D we reuse the old position, because all elements were shifted to the left
            else:
                i += 1 #the index is increased normally

        self.groups.append(temp_group)

        #move files to the bucket group, if the temp group is too small
        if c_bucket_group_enabled == True:
            if len(temp_group) < c_bucket_group_min_count:
                for file in temp_group:
                    bucket_group.add(file)
                    temp_group.remove(file)
        else:
            self.groups.append(temp_group)

    if len(bucket_group) > 0:
        self.groups.append(bucket_group)
A:
A simple solution that works by sorting the list then using a generator to create groups:
def time_offsets(files, offset):

    files = sorted(files, key=lambda x: x.timestamp)

    group = []
    timestamp = 0

    for f in files:
        if f.timestamp < timestamp + offset:
            group.append(f)
        else:
            yield group
            timestamp = f.timestamp
            group = [f]   # start the new group with the current file (the original had [timestamp] here, which would put a bare int in the group)
    else:
        yield group

# Now you can do this...
for group in time_offsets(files, 86400):
    print group
And here's a complete script you can run to test:
class File:
    def __init__(self, timestamp):
        self.timestamp = timestamp

    def __repr__(self):
        return "File: <%d>" % self.timestamp

def gen_files(num=100):
    import random
    files = []
    for i in range(num):
        timestamp = random.randint(0, 1000000)
        files.append(File(timestamp))
    return files

def time_offsets(files, offset):

    files = sorted(files, key=lambda x: x.timestamp)

    group = []
    timestamp = 0

    for f in files:
        if f.timestamp < timestamp + offset:
            group.append(f)
        else:
            yield group
            timestamp = f.timestamp
            group = [f]   # start the new group with the current file

# Now you can do this to group files by day (assuming timestamp in seconds)
files = gen_files()
for group in time_offsets(files, 86400):
    print group
A:
The best solution I can think of is O(n log n).
listA = getListOfFiles()
listB = stableMergesort(listA, lambda el: el.timestamp)
listC = groupAdjacentElementsByTimestampRange(listB, offset)
Note that groupAdjacentElementsByTimestampRange is O(n).
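Neither helper is spelled out above; a minimal sketch of what they might look like in Python (the names follow the pseudocode, and Python's built-in sorted() is already a stable sort):

def stableMergesort(files, key):
    return sorted(files, key=key)   # sorted() is guaranteed stable

def groupAdjacentElementsByTimestampRange(files, offset):
    groups = []
    for f in files:                 # files must already be sorted by timestamp
        if groups and f.timestamp < groups[-1][0].timestamp + offset:
            groups[-1].append(f)    # still within range of the group's anchor
        else:
            groups.append([f])      # start a new group anchored at f
    return groups                   # a single pass, hence O(n)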
A:
I'm not exactly sure what you are trying to do - it seems to me that the order of the list will affect the groupings, but your existing code can be modified to work like this.
#This is O(n^2)
while lst:
    file_a = lst.pop()
    temp_group = group()
    temp_group.add(file_a)
    while lst:                      # note: the original was missing this colon
        file_b = lst[-1]
        if (file_b.timestamp < (file_a.timestamp + offset)):
            temp_group.add(lst.pop())
        else:
            break                   # no more files within the offset
    groups.add(temp_group)
Does the group have to start at file_a.timestamp?
# This is O(n)
from collections import defaultdict
groups = defaultdict(list)  # This is why you shouldn't use `list` as a variable name
for item in lst:
    groups[item.timestamp/offset].append(item)
This is a much simpler way to chop the list up into groups with similar timestamps.
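For illustration, a toy run of that bucketing idea (the Item class and the 10-second offset are made up for the example; note that item.timestamp/offset above relies on Python 2's integer division, which // makes explicit):

from collections import defaultdict

class Item(object):   # hypothetical stand-in for the question's file objects
    def __init__(self, timestamp):
        self.timestamp = timestamp

lst = [Item(3), Item(7), Item(12), Item(25)]
offset = 10

groups = defaultdict(list)
for item in lst:
    groups[item.timestamp // offset].append(item)

for bucket in sorted(groups):
    print [i.timestamp for i in groups[bucket]]   # [3, 7] then [12] then [25]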
Q:
python, lxml and xpath - html table parsing
I'm new to lxml, quite new to Python, and could not find a solution to the following:
I need to import a few tables with 3 columns and an undefined number of rows starting at row 3.
When the second column of any row is empty, this row is discarded and the processing of the table is aborted.
The following code prints the table's data fine (but I'm unable to reuse the data afterwards):
from lxml.html import parse

def process_row(row):
    for cell in row.xpath('./td'):
        print cell.text_content()
        yield cell.text_content()

def process_table(table):
    return [process_row(row) for row in table.xpath('./tr')]

doc = parse(url).getroot()
tbl = doc.xpath("/html//table[2]")[0]
data = process_table(tbl)
This only prints the first column :(
for i in data:
    print i.next()
The following only imports the third row, and not the subsequent ones:
tbl = doc.xpath("//body/table[2]//tr[position()>2]")[0]
Does anyone know a fancy solution to get all the data from row 3 onward into tbl and copy it into an array, so it can be processed in a module with no lxml dependency?
Thanks in advance for your help, Alex
A:
This is a generator:
def process_row(row):
    for cell in row.xpath('./td'):
        print cell.text_content()
        yield cell.text_content()
You're calling it as though you thought it returns a list. It doesn't. There are contexts in which it behaves like a list:
print [r for r in process_row(row)]
but that's only because a generator and a list both expose the same interface to for loops. Using it in a context where it gets evaluated just one time, e.g.:
return [process_row(row) for row in table.xpath('./tr')]
just creates a new generator instance for each new value of row; nothing inside process_row actually runs until something iterates each generator, which is why your loop that calls next() once per row only ever sees the first yielded cell.
So that's your first problem. Your second one is that you're expecting:
tbl = doc.xpath("//body/table[2]//tr[position()>2]")[0]
to give you the third and all subsequent rows, and it's only setting tbl to the third row. Well, the call to xpath is returning the third and all subsequent rows. It's the [0] at the end that's messing you up.
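In other words, drop the trailing subscript (a one-line sketch):

rows = doc.xpath("//body/table[2]//tr[position()>2]")   # a list: the 3rd row onward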
A:
You need to use a loop to access the row's data, like this:
for row in data:
    for col in row:
        print col
Calling next() once as you did will access only the first item, which is why you see one column.
Note that due to the nature of generators, you can only access them once. If you changed the call process_row(row) into list(process_row(row)), the generator would be converted to a list which can be reused.
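For example, a version of process_table along those lines (a sketch reusing the code from the question):

def process_table(table):
    # list() drains each generator immediately, so the result is a plain
    # list of lists that can be iterated any number of times
    return [list(process_row(row)) for row in table.xpath('./tr')]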
Update: If you need just the 3rd row and on, use data[2:]
Q:
How can I check that a column in a tab-delimited file has valid values?
I have a large file named CHECKME which is tab delimited. There are 8 columns in each row. Column 4 is integers.
By using Perl or Python, is it possible to verify that each row in CHECKME has 8 columns and that column 4 is an integer?
A:
In Perl
while(<>) {
    my @F=split/\t/;
    die "Invalid line: $_" if @F!=8 or $F[3]!~/^-?\d+$/;
}
A:
In Python:
def isfileok(filename):
    f = open(filename)
    for line in f:
        pieces = line.split('\t')
        if len(pieces) != 8:
            return False
        if not pieces[3].isdigit():   # note: isdigit() rejects negative numbers
            return False
    return True
I assume that by "column no. 4" you mean the 4th one, hence the [3], since Python (like most computer languages) indexes from 0.
Here I'm just returning a boolean result, but I split up the code so it's easy to give good diagnostics about what line is wrong, and how, if you so desire.
A:
It's very easy work for Perl:
perl -F\\t -ane'die"Invalid!"if@F!=8||$F[3]!~/^-?\d+$/' CHECKME
A:
In Perl:
while (<>) {
    if (! /^[^\t]+\t[^\t]+\t[^\t]+\t\d+\t[^\t]+\t[^\t]+\t[^\t]+\t[^\t]+$/) {
        die "Line $. is bad: $_";
    }
}
Checks to see that the line starts with one or more non-tabs, followed by a tab, followed by one or more non-tabs, followed by a tab, followed by one or more non-tabs, followed by a tab, followed by one or more digits, etc. until the eighth set of non-tab(s), which must be at the end of the line.
That's the quick and dirty solution, but in the long run, it'd probably be better to use a "split /\t/", count the number of fields it gets, and then check to make sure field 3 (zero origin) is just digits. That way when (not if) the requirements change and you now need 9 fields, with the 9th field being a prime number, it's easy to make the change.
A:
validate-input.py
Read files given on the command-line or stdin. Print invalid lines. Return code is zero if there are no errors or one otherwise.
import fileinput, sys

def error(msg, line):
    print >> sys.stderr, "%s:%d:%s\terror: %s" % (
        fileinput.filename(), fileinput.filelineno(), line, msg)
    error.count += 1
error.count = 0

ncol, icol = 8, 3
for row in (line.split('\t') for line in fileinput.input()):
    if len(row) == ncol:
        try: int(row[icol])
        except ValueError:
            error("%dth field '%s' is not integer" % (
                (icol + 1), row[icol]), '\t'.join(row))
    else:
        error('wrong number of columns (want: %d, got: %d)' % (
            ncol, len(row)), '\t'.join(row))

sys.exit(error.count != 0)
Example
$ echo 1 2 3 | python validate-input.py *.txt -
not_integer.txt:2:a b c 1.1 e f g h
    error: 4th field '1.1' is not integer
wrong_cols.txt:3:a b
    error: wrong number of columns (want: 8, got: 3)
<stdin>:1:1 2 3
    error: wrong number of columns (want: 8, got: 1)
A:
The following Python code should display the line as you have indicated:
for n, line in enumerate(open("filename")):
    fields = line.split('\t')            # split on tabs, since the file is tab-delimited
    if len(fields) != 8:
        print "line %d error" % (n + 1)  # parentheses needed: % binds tighter than +
    elif not fields[3].isdigit():
        print "line %d error" % (n + 1)
Q:
Mixing Python web platforms PHP, e.g. - Mediawiki, Wordpress, etc
Is anyone developing applications integrated with Mediawiki, using Django or other Python web development platforms running under mod_wsgi?
I would be very interested to find out what has been done in this direction, and maybe there is some code available for re-use. (I've started creating wiki extensions that work with the MW database in Python, whose output is injected via Apache's include virtual directive. It works OK, but it's a bit slow so far - maybe I can optimize it, though.)
Basically I would like to have certain parts of displayed wiki pages be prepared with python.
Has anyone reproduced common MW skins in python templates?
edit: found this nice video showing how PyCon site does just that (not with MW though) - using custom template loader
http://showmedo.com/videos/video?name=pythonNapleonePyConTech2&fromSeriesID=54
Thanks.
A:
There are so many different ways to do this.
You can make a Mediawiki skin that uses iframes and inserts things from a Python server.
You can write a Python app that accesses Mediawiki's data somehow and outputs it.
You can put a Python server in front that extracts the content from Mediawiki and puts it into a page that is otherwise generated from Python.
You can use Deliverance to skin Mediawiki, and use its pyref functionality to call Python scripts and insert that into the skin (I think, I haven't done that myself).
Which way is best for you completely depends.
A:
Can't you use Mediawiki's HTTP-based API? Loose coupling is great.
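For example, a minimal sketch of pulling rendered page HTML over that API (the wiki URL is hypothetical, and the exact response layout varies with the MediaWiki version):

import json
import urllib
import urllib2

params = urllib.urlencode({'action': 'parse', 'page': 'Main_Page', 'format': 'json'})
resp = urllib2.urlopen('http://wiki.example.org/w/api.php?' + params)
data = json.loads(resp.read())
# the rendered HTML typically lives under data['parse']['text']['*']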
Q:
Key compare using dictionary
I have a file with the following structure:
system.action.webMessage=An error has happened during web access.
system.action.okMessage=Everything is ok.
core.alert.inform=Error number 5512.
I need a script to compare the keys in 2 files with this structure. I was working on a script to convert each file into a dictionary, then use the dictionary structure to compare the keys (the strings before '=') in both files and tell me which keys are equal and what values they map to.
file = open('system.keys','r')
lines = []
for i in file:
    lines.append(i.split('='))

dic = {}
for k, v in lines:
    dic[k] = v
But I'm receiving the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: too many values to unpack
Anyone have a clue or some help? :(
I've tried lots of things that I found on Google, but no solution.
A:
file = open('system.keys','r')
lines = []
for i in file:
    lines.append(i.partition('='))

dic = {}
for k, _, v in lines:
    dic[k] = v

or using split
myfile = open('system.keys','r')
dic = dict(i.split("=", 1) for i in myfile)
since dict() knows how to make a dictionary from a sequence of (key,value) pairs
A:
If a line has more than one '=' in it, you'll get a list with more than two items, while your for-loop (for k, v in items) expects that each list will only have two items.
Try using i.split('=', 1).
For example:
>>> "a=b=c".split('=')
['a', 'b', 'c']
>>> "a=b=c".split('=', 1)
['a', 'b=c']
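Once each file is loaded into a dict (using either answer above), the key comparison the question asks for is just set arithmetic -- a sketch, with a hypothetical second filename:

def load_keys(filename):
    with open(filename) as f:
        return dict(line.rstrip('\n').split('=', 1) for line in f if '=' in line)

a = load_keys('system.keys')
b = load_keys('other.keys')

common = set(a) & set(b)    # keys present in both files
only_a = set(a) - set(b)    # keys missing from the second file
for k in sorted(common):
    if a[k] == b[k]:
        print k             # key exists in both files with the same value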
Q:
How do I know what data type to use in Python?
I'm working through some tutorials on Python and am at a position where I am trying to decide what data type/structure to use in a certain situation.
I'm not clear on the differences between arrays, lists, dictionaries and tuples.
How do you decide which one is appropriate - my current understanding doesn't let me distinguish between them at all - they seem to be the same thing.
What are the benefits/typical use cases for each one?
A:
How do you decide which data type to use? Easy:
You look at which are available and choose the one that does what you want. And if there isn't one, you make one.
In this case a dict is a pretty obvious solution.
A:
Best type for counting elements like this is usually defaultdict
from collections import defaultdict

s = 'asdhbaklfbdkabhvsdybvailybvdaklybdfklabhdvhba'
d = defaultdict(int)

for c in s:
    d[c] += 1

print d['a'] # prints 7
A:
Tuples first. These are list-like things that cannot be modified. Because the contents of a tuple cannot change, you can use a tuple as a key in a dictionary. That's the most useful place for them in my opinion. For instance if you have a list like item = ["Ford pickup", 1993, 9995] and you want to make a little in-memory database with the prices you might try something like:
ikey = (item[0], item[1])   # note: tuple() takes a single iterable, so build the pair with a literal
idata = item[2]
db[ikey] = idata
Lists seem to be like arrays or vectors in other programming languages and are usually used for the same types of things in Python. However, they are more flexible in that you can put different types of things into the same list. Generally, they are the most flexible data structure, since you can put a whole list into a single list element of another list, but for real data crunching they may not be efficient enough.
a = [1,"fred",7.3]
b = []
b.append(1)
b[0] = "fred"
b.append(a) # now the second element of b is the whole list a
Dictionaries are often used a lot like lists, but now you can use any immutable thing as the index to the dictionary. However, unlike lists, dictionaries don't have a natural order and can't be sorted in place. Of course you can create your own class that incorporates a sorted list and a dictionary in order to make a dict behave like an Ordered Dictionary. There are examples on the Python Cookbook site.
c = {}
d = ("ford pickup",1993)
c[d] = 9995
Arrays are getting closer to the bit level for when you are doing heavy duty data crunching and you don't want the frills of lists or dictionaries. They are not often used outside of scientific applications. Leave these until you know for sure that you need them.
Lists and Dicts are the real workhorses of Python data storage.
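To make the arrays point concrete, the standard library's array module is the closest built-in to a C-style typed array (a small sketch):

from array import array

a = array('i', [1, 2, 3])   # a compact array of C ints, not Python objects
a.append(4)
print a[0] + a[-1]          # 5 -- indexing and iteration work like a list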
A:
Do you really require speed/efficiency? Then go with a pure and simple dict.
A:
Personal:
I mostly work with lists and dictionaries.
It seems that this satisfies most cases.
Sometimes:
Tuples can be helpful--if you want to pair/match elements. Besides that, I don't really use it.
However:
I write high-level scripts that don't need to drill down into the core "efficiency" where every byte and every nanosecond of memory or time matters. I don't believe most people need to drill this deep.
Q:
Python - Twisted and Unit Tests
I'm writing unit tests for a portion of an application that runs as an HTTP server. The approach I have been trying to take is to import the module that contains the HTTP server and start it. Then, the unit tests use urllib2 to connect, send data, and check the response.
Our HTTP server is using Twisted. One problem here is that I'm just not that familiar with Twisted :)
Now, I instantiate our HTTP server and start it in the setUp() method and then I stop it in the tearDown() method.
Problem is, Twisted doesn't appear to like this, and it will only run one unit test. After the first one, the reactor won't start anymore.
I've searched and searched and searched, and I just can't seem to find an answer that makes sense.
Am I taking the wrong approach entirely, or just missing something obvious?
A:
Here's some info: Writing tests for Twisted code using Trial
You should also look at the -help of the trial command. There's a lot of good stuff in trial! But it's not always easy to do testing in an async application. Good luck!
A:
I believe that for unit testing within Twisted you're supposed to use TwistedTrial (it's a core component, i.e., comes with the Twisted tarball in the twisted/trial directory). However, as the URL I've pointed to says, the doc is mostly by having a look through the source (including sources of various Twisted projects, as they're tested with Trial too).
A:
As others mentioned, you should be using Trial for unit tests in Twisted.
You also should be unit testing from the bottom up - that's what the "unit" in unit testing implies. Test your data and logic before you test your interface. For an HTTP interface, you should be calling processGET, processPOST, etc. with a mock request, but you should only be doing this after you've tested what these methods are calling. Each test should assume that the units tested elsewhere are working as designed.
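A minimal sketch of that style using Trial (the Hello resource here is invented for the example; real tests would exercise your own resource classes):

from twisted.trial import unittest
from twisted.web.resource import Resource

class Hello(Resource):
    isLeaf = True
    def render_GET(self, request):
        return "hello"

class HelloTests(unittest.TestCase):
    def test_render_GET(self):
        # call the render method directly with a stub request --
        # no reactor, no socket, no restart problem in tearDown
        class StubRequest(object):
            pass
        self.assertEqual(Hello().render_GET(StubRequest()), "hello")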
If you're speaking HTTP, or you need a running server or other state, you're probably making higher level tests such as functional or integration tests. This isn't a bad thing, but you might want to rephrase your question.
A:
There is a known bug with Twisted (that probably won't get fixed) where re-starting the reactor causes a crash.
This is why your unit tests don't work.
As well as using Trial you might want to consider separate testing systems that talk to your HTTP server like a client will.
Webdriver - an API to drive a browser session around your site.
TestGen4Web - Firefox plugin that records interactions with site and can replay.
Q:
Replace SRC of all IMG elements using Parser
I am looking for a way to replace the SRC attribute in all IMG tags not using Regular expressions. (Would like to use any out-of-the box HTML parser included with default Python install) I need to reduce the source from what ever it may be to:
<img src="cid:imagename">
I am trying to replace all src tags to point to the cid of an attachment for an HTML email so I will also need to change whatever the source is so it's simply the file name without the path or extension.
A:
There is an HTML parser in the Python standard library, but it's not very useful and it's deprecated since Python 2.6. Doing this kind of thing with BeautifulSoup is really easy:
from BeautifulSoup import BeautifulSoup
from os.path import basename, splitext

soup = BeautifulSoup(my_html_string)
for img in soup.findAll('img'):
    img['src'] = 'cid:' + splitext(basename(img['src']))[0]
my_html_string = str(soup)
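For illustration, running that loop over a small snippet behaves roughly like this (exact attribute ordering and self-closing style depend on the BeautifulSoup 3 version):

from BeautifulSoup import BeautifulSoup
from os.path import basename, splitext

my_html_string = '<p><img src="/static/images/logo.png" alt="logo" /></p>'
soup = BeautifulSoup(my_html_string)
for img in soup.findAll('img'):
    img['src'] = 'cid:' + splitext(basename(img['src']))[0]
print str(soup)   # roughly: <p><img src="cid:logo" alt="logo" /></p>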
A:
Here is a pyparsing approach to your problem. You'll need to do your own code to transform the http src attribute.
from pyparsing import *
import urllib2

imgtag = makeHTMLTags("img")[0]

page = urllib2.urlopen("http://www.yahoo.com")
html = page.read()
page.close()

# print html

def modifySrcRef(tokens):
    ret = "<img"
    for k, i in tokens.items():
        if k in ("startImg", "empty"): continue
        if k.lower() == "src":
            # or do whatever with this
            i = i.upper()
        ret += ' %s="%s"' % (k, i)
    return ret + " />"

imgtag.setParseAction(modifySrcRef)

print imgtag.transformString(html)
The tags convert to:
<img src="HTTP://L.YIMG.COM/A/I/WW/BETA/Y3.GIF" title="Yahoo" height="44" width="232" alt="Yahoo!" />
<a href="r/xy"><img src="HTTP://L.YIMG.COM/A/I/WW/TBL/ALLYS.GIF" height="20" width="138" alt="All Yahoo! Services" border="0" /></a>
Q:
How to avoid excessive parameter passing?
I am developing a medium size program in Python spread across 5 modules. The program accepts command line arguments using OptionParser in the main module, e.g. main.py. These options are later used to determine how methods in other modules behave (e.g. a.py, b.py). As I extend the ability for the user to customise the behaviour of the program, I find that I end up requiring this user-defined parameter in a method in a.py that is not directly called by main.py, but is instead called by another method in a.py:
main.py:
import a
p = some_command_line_argument_value
a.meth1(p)
a.py:
def meth1(p):
    # some code
    res = meth2(p)
    # some more code w/ res

def meth2(p):
    # do something with p
    pass
This excessive parameter passing seems wasteful and wrong, but as hard as I try I cannot think of a design pattern that solves this problem. While I had some formal CS education (a minor in CS during my B.Sc.), I've only really come to appreciate good coding practices since I started using Python. Please help me become a better programmer!
A:
Create objects of types relevant to your program, and store the command line options relevant to each in them. Example:
import WidgetFrobnosticator
f = WidgetFrobnosticator()
f.allow_concave_widgets = option_allow_concave_widgets
f.respect_weasel_pins = option_respect_weasel_pins

# Now the methods of WidgetFrobnosticator have access to your command-line parameters,
# in a way that's not dependent on the input format.

import PlatypusFactory
p = PlatypusFactory()
p.allow_parthenogenesis = option_allow_parthenogenesis
p.max_population = option_max_population

# The platypus factory knows about its own options, but not those of the WidgetFrobnosticator
# or vice versa. This makes each class easier to read and implement.
A:
Maybe you should organize your code more into classes and objects? As I was writing this, Jimmy showed a class-instance based answer, so here is a pure class-based answer. This would be most useful if you only ever wanted a single behavior; if there is any chance at all you might want different defaults some of the time, you should use ordinary object-oriented programming in Python, i.e. pass around class instances with the property p set in the instance, not the class.
class Aclass(object):
    p = None

    @classmethod
    def init_p(cls, value):
        cls.p = value   # assign to the class attribute (a bare `p = value` would only bind a local)

    @classmethod
    def meth1(cls):
        # some code
        res = cls.meth2()
        # some more code w/ res

    @classmethod
    def meth2(cls):
        # do something with cls.p
        pass

from a import Aclass as ac

ac.init_p(some_command_line_argument_value)

ac.meth1()
ac.meth2()
A:
If "a" is a real object and not just a set of independent helper methods, you can create an "p" member variable in "a" and set it when you instantiate an "a" object. Then your main class will not need to pass "p" into meth1 and meth2 once "a" has been instantiated.
A:
[Caution: my answer isn't specific to python.]
I remember that Code Complete called this kind of parameter a "tramp parameter". Googling for "tramp parameter" doesn't return many results, however.
Some alternatives to tramp parameters might include:
Put the data in a global variable
Put the data in a static variable of a class (similar to global data)
Put the data in an instance variable of a class
Pseudo-global variable: hidden behind a singleton, or some dependency injection mechanism
Personally, I don't mind a tramp parameter as long as there's no more than one; i.e. your example is OK for me, but I wouldn't like ...
import a
p1 = some_command_line_argument_value
p2 = another_command_line_argument_value
p3 = a_further_command_line_argument_value
a.meth1(p1, p2, p3)
... instead I'd prefer ...
import a
p = several_command_line_argument_values
a.meth1(p)
... because if meth2 decides that it wants more data than before, I'd prefer if it could extract this extra data from the original parameter which it's already being passed, so that I don't need to edit meth1.
A:
With objects, parameter lists should normally be very small, since most appropriate information is a property of the object itself. The standard way to handle this is to configure the object properties and then call the appropriate methods of that object. In this case set p as an attribute of a. Your meth2 should also complain if p is not set.
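A sketch of that attribute-based style (the class and attribute names are illustrative):

class A(object):
    def __init__(self, p=None):
        self.p = p              # configured once, when the object is built

    def meth1(self):
        res = self.meth2()      # no parameter threading needed
        # some more code w/ res

    def meth2(self):
        if self.p is None:
            raise ValueError("p was never configured")
        # do something with self.p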
A:
Your example is reminiscent of the code smell Message Chains. You may find the corresponding refactoring, Hide Delegate, informative.
Q:
Can a WIN32 program authenticate into Django authentication system, using MYSQL?
I have a web service with Django Framework.
My friend's project is a Win32 program backed by an MS-SQL server.
The Win32 program currently has a login system that talks to a MS-sql for authentication.
However, we would like to INTEGRATE this login system as one.
Please answer the 2 things:
I want scrap the MS-SQL to use only the Django authentication system on the linux server. Can the WIN32 client talk to Django using a Django API (login)?
If not, what is the best way of combining the authentication?
A:
The Win32 client can act like a web client to pass the user's credentials to the server. You will want to store the session cookie you get once you are authenticated and use that cookie in all following requests
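A rough sketch of that flow from the client side, assuming it can drive an HTTP library (the URLs and form field names are hypothetical, and Django's CSRF protection, where enabled, also requires fetching and echoing a csrf token):

import urllib
import urllib2
import cookielib

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

# POST the credentials to the Django login view
body = urllib.urlencode({'username': 'alice', 'password': 'secret'})
opener.open('http://server.example.com/accounts/login/', body)

# the session cookie now stored in `jar` is attached automatically to later requests
opener.open('http://server.example.com/private/resource/')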
|
Can a WIN32 program authenticate into Django authentication system, using MYSQL?
|
I have a web service with Django Framework.
My friend's project is a WIN32 program and also a MS-sql server.
The Win32 program currently has a login system that talks to a MS-sql for authentication.
However, we would like to INTEGRATE this login system as one.
Please answer the 2 things:
I want to scrap the MS-SQL and use only the Django authentication system on the Linux server. Can the WIN32 client talk to Django using a Django API (login)?
If not, what is the best way of combining the authentication?
|
[
"The Win32 client can act like a web client to pass the user's credentials to the server. You will want to store the session cookie you get once you are authenticated and use that cookie in all following requests\n"
] |
[
1
] |
[] |
[] |
[
"django",
"mysql",
"python",
"windows"
] |
stackoverflow_0001533259_django_mysql_python_windows.txt
|
Q:
Django inheritance: how to have one method for all subclasses?
I have a model
BaseModel
and several subclasses of it
ChildModelA(BaseModel), ChildModelB(BaseModel), ...
using multi-table inheritance. In future I plan to have dozens of subclass models.
All subclasses have some implementation of method
do_something()
How can I call do_something from a BaseModel instance?
Almost identical problem (without solution) is posted here:
http://peterbraden.co.uk/article/django-inheritance
A simpler question: how do I resolve a BaseModel instance to one of its subclass instances without checking all possible subclasses?
A:
If you want to avoid checking all possible subclasses, the only way I can think of would be to store the class name associated with the subclass in a field defined on the base class. Your base class might have a method like this:
def resolve(self):
    module, cls_name = self.class_name.rsplit(".",1)
    module = import_module(module)
    cls = getattr(module, cls_name)
    return cls.objects.get(pk=self.pk)
This answer does not make me happy and I too would love to see a better solution, as I will be facing a similar problem soon.
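As a complement, here is a rough sketch (an assumption, not from the answer) of how that class_name field could be kept up to date automatically by overriding save() on the base model; the field name and max_length are placeholders:
from django.db import models

class BaseModel(models.Model):
    class_name = models.CharField(max_length=100, editable=False)

    def save(self, *args, **kwargs):
        # Record the concrete subclass the first time the row is saved.
        if not self.class_name:
            cls = type(self)
            self.class_name = "%s.%s" % (cls.__module__, cls.__name__)
        super(BaseModel, self).save(*args, **kwargs)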
A:
Will you ever be working with an instance of the base type or will you always be working with instances of the children? If the latter is the case then call the method, even if you have a reference to the base type since the object itself IS-A child type.
Since Python supports duck typing, this means that your method call will be bound appropriately, since the child instance will truly have this method.
A pythonic programming style which
determines an object’s type by
inspection of its method or attribute
signature rather than by explicit
relationship to some type object (“If
it looks like a duck and quacks like a
duck, it must be a duck.”) By
emphasizing interfaces rather than
specific types, well-designed code
improves its flexibility by allowing
polymorphic substitution. Duck-typing
avoids tests using type() or
isinstance(). (Note, however, that
duck-typing can be complemented with
abstract base classes.) Instead, it
typically employs hasattr() tests or
EAFP programming.
Note that EAFP stands for Easier to Ask Forgiveness than Permission:
Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements. The technique contrasts with the LBYL style common to many other languages such as C.
A:
I agree with Andrew. On a couple of sites we have a class that supports a whole bunch of methods (but not fields (this was pre-ORM refactor)) that are common to most-but-not-all of our content classes. They make use of hasattr to sidestep situations where the method doesn't make sense.
This means most of our classes are defined as:
class Foo(models.Model, OurKitchenSinkClass):
Basically it's sort of a MixIn type of thing. Works great, easy to maintain.
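For readers unfamiliar with the pattern, here is a hypothetical sketch of the mixin-plus-hasattr idea described above (all the names are made up for the example):
class KitchenSinkMixin(object):
    def summary(self):
        # Only call get_teaser() on classes that actually define it.
        if hasattr(self, "get_teaser"):
            return self.get_teaser()
        return ""

class Article(KitchenSinkMixin):
    def get_teaser(self):
        return "First paragraph..."

class Stub(KitchenSinkMixin):
    pass  # no get_teaser; summary() degrades gracefully

print(Article().summary())  # "First paragraph..."
print(Stub().summary())     # ""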
|
Django inheritance: how to have one method for all subclasses?
|
I have a model
BaseModel
and several subclasses of it
ChildModelA(BaseModel), ChildModelB(BaseModel), ...
using multi-table inheritance. In future I plan to have dozens of subclass models.
All subclasses have some implementation of method
do_something()
How can I call do_something from a BaseModel instance?
Almost identical problem (without solution) is posted here:
http://peterbraden.co.uk/article/django-inheritance
A simpler question: how do I resolve a BaseModel instance to one of its subclass instances without checking all possible subclasses?
|
[
"If you want to avoid checking all possible subclasses, the only way I can think of would be to store the class name associated with the subclass in a field defined on the base class. Your base class might have a method like this:\ndef resolve(self):\n module, cls_name = self.class_name.rsplit(\".\",1)\n module = import_module(module)\n cls = getattr(module, cls_name)\n return cls.objects.get(pk=self.pk)\n\nThis answer does not make me happy and I too would love to see a better solution, as I will be facing a similar problem soon.\n",
"Will you ever be working with an instance of the base type or will you always be working with instances of the children? If the latter is the case then call the method, even if you have a reference to the base type since the object itself IS-A child type.\nSince Python support duck typing this means that your method call will be bond appropriately since the child instance will truly have this method.\n\nA pythonic programming style which\n determines an object’s type by\n inspection of its method or attribute\n signature rather than by explicit\n relationship to some type object (“If\n it looks like a duck and quacks like a\n duck, it must be a duck.”) By\n emphasizing interfaces rather than\n specific types, well-designed code\n improves its flexibility by allowing\n polymorphic substitution. Duck-typing\n avoids tests using type() or\n isinstance(). (Note, however, that\n duck-typing can be complemented with\n abstract base classes.) Instead, it\n typically employs hasattr() tests or\n EAFP programming.\n\nNote that EAFP stands for Easier to Ask Forgiveness than Permission:\n\nEasier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements. The technique contrasts with the LBYL style common to many other languages such as C.\n\n",
"I agree with Andrew. On a couple of sites we have a class that supports a whole bunch of methods (but not fields (this was pre-ORM refactor)) that are common to most-but-not-all of our content classes. They make use of hasattr to sidestep situations where the method doesn't make sense.\nThis means most of our classes are defined as:\nclass Foo(models.Model, OurKitchenSinkClass):\n\nBasically it's sort of a MixIn type of thing. Works great, easy to maintain.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"django",
"django_models",
"inheritance",
"overloading",
"python"
] |
stackoverflow_0001581024_django_django_models_inheritance_overloading_python.txt
|
Q:
How to perform a "Group By" query in Django 1.1?
I have seen a lot of talk about 1.1's aggregation, but I am not sure how to use it to perform a simple group by.
I am trying to use Django's sitemap framework to create a sitemap.xml file that Google can crawl to find all the pages of my site. Currently I am passing it all the objects, as in Model.objects.all() - however all that really matters is that only 1 per name gets passed. There could be 5-10 Model instances with the same name but I only want to pass one to avoid having duplicates.
If I do something like this:
Model.objects.values('name').annotate(Count('name'))
It gives me what I want, but then I am not retrieving all the fields from the model - the code that creates the sitemap would then be forced to re-query for every single Model to create the link. So how do I get it to group by the name while retrieving all the fields of the model?
A:
Django models are lazy loaded. It will be the same amount of overhead if your code walks across your model relationships as if the sitemap did. The model's fields are essentially proxies until you request related models.
A:
Maybe Distinct will help you?
Model.objects.values('name').all().distinct()
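If full model instances are needed (one per name), another option -- a sketch, not from either answer -- is to deduplicate in Python in a single query, since values() only returns dictionaries:
# Keep the first full instance seen for each name.
seen = {}
for obj in Model.objects.all().order_by('name'):
    if obj.name not in seen:
        seen[obj.name] = obj
unique_objects = list(seen.values())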
|
How to perform a "Group By" query in Django 1.1?
|
I have seen a lot of talk about 1.1's aggregation, but I am not sure how to use it to perform a simple group by.
I am trying to use Django's sitemap framework to create a sitemap.xml file that Google can crawl to find all the pages of my site. Currently I am passing it all the objects, as in Model.objects.all() - however all that really matters is that only 1 per name gets passed. There could be 5-10 Model instances with the same name but I only want to pass one to avoid having duplicates.
If I do something like this:
Model.objects.values('name').annotate(Count('name'))
It gives me what I want, but then I am not retrieving all the fields from the model - the code that creates the sitemap would then be forced to re-query for every single Model to create the link. So how do I get it to group by the name while retrieving all the fields of the model?
|
[
"Django models are lazy loaded. It will be the same amount of overhead if your code walks across your model relationships as if the sitemap did. The models fields are essentially proxies until you request related models.\n",
"May be Distinct will help you?\n\nModel.objects.values('name').all().distinct()\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001581383_django_python.txt
|
Q:
How do I make these relative imports work in Python 3?
I have a directory structure that looks like this:
project/
__init__.py
foo/
__init__.py
first.py
second.py
third.py
plum.py
In project/foo/__init__.py I import classes from first.py, second.py and third.py and put them in __all__.
There's a class in first.py named WonderfulThing which I'd like to use in second.py, and want to import by importing * from foo. (It's outside of the scope of this question why I'd like to do so, assume I have a good reason.)
In second.py I've tried from .foo import *, from foo import * and from . import * and in none of these cases is WonderfulThing imported. I also tried from ..foo import *, which raises an error "Attempted relative import beyond toplevel package".
I've read the docs and the PEP, and I can't work out how to make this work. Any assistance would be appreciated.
Clarification/Edit: It seems like I may have been misunderstanding the way __all__ works in packages. I was using it the same as in modules,
from .first import WonderfulThing
__all__ = [ "WonderfulThing" ]
but looking at the docs again it seems to suggest that __all__ may only be used in packages to specify the names of modules to be imported by default; there doesn't seem to be any way to include anything that's not a module.
Is this correct?
A non-wildcard import failed (cannot import name WonderfulThing). Trying from . import foo failed, but import foo works. Unfortunately, dir(foo) shows nothing.
A:
Edit: I did misunderstand the question: No __all__ is not restricted to just modules.
One question is why you want to do a relative import. There is nothing wrong with doing from project.foo import *, here. Secondly, the __all__ restriction on foo won't prevent you from doing from project.foo.first import WonderfulThing, or just from .first import WonderfulThing, which still will be the best way.
And if you really want to import a lot of things, it's probably best to do from project import foo, and then use the things with foo.WonderfulThing instead of doing an import * and then using WonderfulThing directly.
However to answer your direct question, to import from the __init__ file in second.py you do this:
from . import WonderfulThing
or
from . import *
|
How do I make these relative imports work in Python 3?
|
I have a directory structure that looks like this:
project/
__init__.py
foo/
__init__.py
first.py
second.py
third.py
plum.py
In project/foo/__init__.py I import classes from first.py, second.py and third.py and put them in __all__.
There's a class in first.py named WonderfulThing which I'd like to use in second.py, and want to import by importing * from foo. (It's outside of the scope of this question why I'd like to do so, assume I have a good reason.)
In second.py I've tried from .foo import *, from foo import * and from . import * and in none of these cases is WonderfulThing imported. I also tried from ..foo import *, which raises an error "Attempted relative import beyond toplevel package".
I've read the docs and the PEP, and I can't work out how to make this work. Any assistance would be appreciated.
Clarification/Edit: It seems like I may have been misunderstanding the way __all__ works in packages. I was using it the same as in modules,
from .first import WonderfulThing
__all__ = [ "WonderfulThing" ]
but looking at the docs again it seems to suggest that __all__ may only be used in packages to specify the names of modules to be imported by default; there doesn't seem to be any way to include anything that's not a module.
Is this correct?
A non-wildcard import failed (cannot import name WonderfulThing). Trying from . import foo failed, but import foo works. Unfortunately, dir(foo) shows nothing.
|
[
"Edit: I did misunderstand the question: No __all__ is not restricted to just modules.\nOne question is why you want to do a relative import. There is nothing wrong with doing from project.foo import *, here. Secondly, the __all__ restriction on foo won't prevent you from doing from project.foo.first import WonderfulThing, or just from .first import WonderfulThing, which still will be the best way.\nAnd if you really want to import a a lot of things, it's probably best to do from project import foo, and then use the things with foo.WonderfulThing instead for doing an import * and then using WonderfulThing directly.\nHowever to answer your direct question, to import from the __init__ file in second.py you do this:\nfrom . import WonderfulThing\n\nor\nfrom . import *\n\n"
] |
[
3
] |
[] |
[] |
[
"import",
"python",
"python_3.x",
"relative_path"
] |
stackoverflow_0001581260_import_python_python_3.x_relative_path.txt
|
Q:
How do i use perspective projection in this library
I found a library called pyeuclid and it seems to do what I want with respect to 3D math.
It contains a 3D vector class and a 4x4 matrix class capable of transformations like rotate, translate and scale.
matrix creation is simple, simply pass along the arguments and the matrix is created.
>>> m = Matrix4()
>>> m.translate(50,50,50)
Matrix4([ 1.00 0.00 0.00 50.00
0.00 1.00 0.00 50.00
0.00 0.00 1.00 50.00
0.00 0.00 0.00 1.00])
The library comes with a 3D vector class; if I wanted to translate a 3D point with this matrix I would need to first create the vector class, like this.
>>> v = Vector3(100,200,300)
>>> v
Vector3(100.00, 200.00, 300.00)
>>> v.x
100
>>> v.y
200
>>> v.z
300
Now the Matrix4() class comes with a method to transform coordinates, and its used like so.
>>> m.transform(v)
Point3(150.00, 250.00, 350.00)
This is great; however, there is no perspective projection applied. The library comes with a method called Matrix4.new_perspective(fov_y, aspect, near, far).
That's where the problem lies: I have no idea how to use this function properly (m.transform(vector3) doesn't produce anything usable); the documentation doesn't show how it's used with vectors or other matrices. It says something about being 'equivalent to the OpenGL call gluPerspective.fov_y', but I've never used OpenGL so that won't help.
This is the module (from euclid import Vector3, Matrix4)
What I don't want is recommendations for some game engine/visualization library, or something that's bent on drawing something to the screen; I just want to know if I can use this library to turn 3D vector coordinates into 2D screen coordinates, and if not, what library can I use?
So get in there and help out a noob in distress! :-)
Thank you for your help.
A:
This tutorial explains the arguments to gluPerspective(), and should transfer over since your library is written with that as a model.
I would expect the new_perspective() method to work like a constructor, i.e. it returns a Matrix set up as a perspective transformation. You should then be able to transform world-space coordinates into perspective-projected coordinates by multiplying vectors with that matrix.
A:
I don't know if this is relevant as you may already know this but you'll need a view matrix too. The view matrix represents the inverse of the transform of your camera in the world. You may find the matrix class also has helper methods for creating view matrices, sometimes called lookAt or similar. Alternatively just create a matrix manually that positions and orientates your (imaginary) camera and then invert it.
If you apply the projection matrix to world co-ords you're making the implicit assumption that your view matrix is the identity and so your camera is at the origin of the world and pointing down an axis. Which axis that is will depend on the projection matrix but if it's like most other systems it'll be the z-axis since convention has camera depth measured along z. It could be in either the positive or negative direction so again check the documentation for details.
So, to summarise, you need to multiply your model matrix with the view matrix and then by the projection matrix. The projection matrix is designed to work in camera coordinates rather than world coordinates.
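Putting the two answers together, here is a rough sketch of the pipeline using only the calls shown in the question (new_perspective and transform); whether Matrix4 supports * for composition and whether transform() performs the divide by w are assumptions to verify against the pyeuclid source:
import math
from euclid import Vector3, Matrix4

projection = Matrix4.new_perspective(math.pi / 4, 4.0 / 3.0, 0.1, 100.0)
view = Matrix4().translate(0, 0, -10)   # crude "camera 10 units back" view matrix
model = Matrix4().translate(1, 2, 3)    # object placed in the world

mvp = projection * view * model         # assumes Matrix4 implements *
p = mvp.transform(Vector3(0, 0, 0))     # point in normalized device coordinates
# Map NDC (-1..1) to a 640x480 window:
sx = (p.x * 0.5 + 0.5) * 640
sy = (1.0 - (p.y * 0.5 + 0.5)) * 480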
|
How do i use perspective projection in this library
|
I found a library called pyeuclid and it seems to do what I want with respect to 3D math.
It contains a 3D vector class and a 4x4 matrix class capable of transformations like rotate, translate and scale.
matrix creation is simple, simply pass along the arguments and the matrix is created.
>>> m = Matrix4()
>>> m.translate(50,50,50)
Matrix4([ 1.00 0.00 0.00 50.00
0.00 1.00 0.00 50.00
0.00 0.00 1.00 50.00
0.00 0.00 0.00 1.00])
The library comes with a 3D vector class; if I wanted to translate a 3D point with this matrix I would need to first create the vector class, like this.
>>> v = Vector3(100,200,300)
>>> v
Vector3(100.00, 200.00, 300.00)
>>> v.x
100
>>> v.y
200
>>> v.z
300
Now the Matrix4() class comes with a method to transform coordinates, and its used like so.
>>> m.transform(v)
Point3(150.00, 250.00, 350.00)
This is great; however, there is no perspective projection applied. The library comes with a method called Matrix4.new_perspective(fov_y, aspect, near, far).
That's where the problem lies: I have no idea how to use this function properly (m.transform(vector3) doesn't produce anything usable); the documentation doesn't show how it's used with vectors or other matrices. It says something about being 'equivalent to the OpenGL call gluPerspective.fov_y', but I've never used OpenGL so that won't help.
This is the module (from euclid import Vector3, Matrix4)
What I don't want is recommendations for some game engine/visualization library, or something that's bent on drawing something to the screen; I just want to know if I can use this library to turn 3D vector coordinates into 2D screen coordinates, and if not, what library can I use?
So get in there and help out a noob in distress! :-)
Thank you for your help.
|
[
"This tutorial explains the arguments to gluPerspective(), and should transfer over since your library is written with that as a model.\nI would expect the new_perspective() method to work like a constructor, i.e. it returns a Matrix set up as a perspective transformation. You should then be able to transform world-space coordinates into perspective-projected coordinates by multiplying vectors with that matrix.\n",
"I don't know if this is relevant as you may already know this but you'll need a view matrix too. The view matrix represents the inverse of the transform of your camera in the world. You may find the matrix class also has helper methods for creating view matrices, sometimes called lookAt os similar. Alternatively just create a matrix manually that positions and orientates your (imaginary) camera and then invert it.\nIf you apply the projection matrix to world co-ords you're making the implicit assumption that your view matrix is the identity and so your camera is at the origin of the world and pointing down an axis. Which axis that is will depend on the projection matrix but if it's like most other systems it'll be the z-axis since convention has camera depth measured along z. It could be in either the positive or negative direction so again check the documentation for details.\nSo, to summarise, you need to multiply your model matrix with the view matrix and then by the projection matrix. The projection matrix is designed to work in camera coordinates rather than world coordinates.\n"
] |
[
0,
0
] |
[] |
[] |
[
"3d",
"matrix",
"projection",
"python",
"vector"
] |
stackoverflow_0001559083_3d_matrix_projection_python_vector.txt
|
Q:
phpDocumentor goes to PHP, as X goes to Python (Django)
phpDocumentor goes to PHP, as X goes to Python (Django)
What is the X?
A:
Or Sphinx, as seen on TV and at python.org.
A:
Epydoc, pydoctor or standard pydoc.
|
phpDocumentor goes to PHP, as X goes to Python (Django)
|
phpDocumentor goes to PHP, as X goes to Python (Django)
What is the X?
|
[
"Or Sphinx, as seen on TV and at python.org.\n",
"Epydoc, pydoctor or standard pydoc.\n"
] |
[
7,
5
] |
[] |
[] |
[
"django",
"php",
"phpdoc",
"python"
] |
stackoverflow_0001582145_django_php_phpdoc_python.txt
|
Q:
Python TCP stack implementation
Is there a python library which implements a standalone TCP stack?
I can't use the usual python socket library because I'm receiving a stream of packets over a socket (they are being tunneled to me over this socket). When I receive a TCP SYN packet addressed to a particular port, I'd like to accept the connection (send a syn-ack) and then get the data sent by the other end (ack'ing appropriately).
I was hoping there was some sort of TCP stack already written which I could utilize. Any ideas? I've used lwip in the past for a C project -- something along those lines in python would be perfect.
A:
You don't say which platform you are working on, but if you are working on linux, I'd open a tun/tap interface and get the IP packets back into the kernel as a real network interface so the kernel can do all that tricky TCP stuff.
This is how (for example) OpenVPN works - it receives the raw IP packets over UDP or TCP and tunnels them back into the kernel over a tun/tap interface.
I think that there is a tun/tap interface for windows too now which was developed for the OpenVPN port to windows.
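As a concrete (Linux-only) sketch of the tun approach -- the constants are the usual values from <linux/if_tun.h>, quoted here as an assumption, and the script needs root:
import fcntl, os, struct

TUNSETIFF = 0x400454ca
IFF_TUN   = 0x0001
IFF_NO_PI = 0x1000

tun = os.open("/dev/net/tun", os.O_RDWR)
fcntl.ioctl(tun, TUNSETIFF, struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI))

# write() the raw IP packets arriving over your socket into `tun`, and
# read() back whatever the kernel's real TCP/IP stack answers with.
reply = os.read(tun, 2048)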
A:
Glancing over Scapy, it looks like it might be able to handle these low-level situations. I haven't used it myself so I can't confirm that it does what you've explained; I've only glanced over the documentation.
A:
I know this isn't directly Python related but if you are looking to do heavy network processing, you should consider Erlang instead of Python. Just a suggestion really... you can always take a shot at doing this with Twisted... if you feel adventurous (and have lots of time on your side) ;-)
A:
You might be able to use the ctypes module to import lwip and use it again.
A:
If you are already committed to the software at the other end of the socket, that is forwarding TCP packets to you, then perhaps TCPWatch will show you how to get at the SYN packets. SCAPY is certainly great for sending exactly the packets that you want, but I'm not sure that it will work as a proxy.
http://hathawaymix.org/Software/TCPWatch
However, if you are not committed to what is on the sending end, then consider using Twisted Conch or Paramiko to do SSH forwarding. Even if you don't need encryption, you can still use these with blowfish which has a low impact on your CPU. This doesn't mean that you need Conch on the other end, since SSH is standardised so any SSH software should work. In the SSH world this is normally referred to as "port forwarding" and people use an SSH terminal client to log into an SSH server and set up the port forwarding tunnel. Conch and Paramiko allow you to build this into a Python application so that there is no need for the SSH terminal client.
|
Python TCP stack implementation
|
Is there a python library which implements a standalone TCP stack?
I can't use the usual python socket library because I'm receiving a stream of packets over a socket (they are being tunneled to me over this socket). When I receive a TCP SYN packet addressed to a particular port, I'd like to accept the connection (send a syn-ack) and then get the data sent by the other end (ack'ing appropriately).
I was hoping there was some sort of TCP stack already written which I could utilize. Any ideas? I've used lwip in the past for a C project -- something along those lines in python would be perfect.
|
[
"You don't say which platform you are working on, but if you are working on linux, I'd open a tun/tap interface and get the IP packets back into the kernel as a real network interface so the kernel can do all that tricky TCP stuff.\nThis is how (for example) OpenVPN works - it receives the raw IP packets over UDP or TCP and tunnels them back into the kernel over a tun/tap interface.\nI think that there is a tun/tap interface for windows too now which was developed for the OpenVPN port to windows.\n",
"Glancing over Scapy, it looks like it might be able to handle these low-level situations. I haven't used it myself so I can't confirm that it does what you've explained; I've only glanced over the documentation.\n",
"I know this isn't directly Python related but if you are looking to do heavy network processing, you should consider Erlang instead of Python. Just a suggestion really... you can always take a shot a doing this with Twisted... if you feel adventurous (and have lots of time on your side) ;-)\n",
"You might be able to use the ctypes module to import lwip and use it again.\n",
"If you are already committed to the software at the other end of the socket, that is forwarding TCP packets to you, then perhaps TCPWatch will show you how to get at the SYN packets. SCAPY is certainly great for sending exactly the packets that you want, but I'm not sure that it will work as a proxy.\nhttp://hathawaymix.org/Software/TCPWatch\nHowever, if you are not committed to what is on the sending end, then consider using Twisted Conch or Paramiko to do SSH forwarding. Even if you don't need encryption, you can still use these with blowfish which has a low impact on your CPU. This doesn't mean that you need Conch on the other end, since SSH is standardised so any SSH software should work. In the SSH world this is normally referred to as \"port forwarding\" and people use an SSH terminal client to log into an SSH server and set up the port forwarding tunnel. Conch and Paramiko allow you to build this into a Python application so that there is no need for the SSH terminal client.\n"
] |
[
7,
2,
0,
0,
0
] |
[] |
[] |
[
"network_programming",
"network_protocols",
"python",
"raw_sockets",
"tcp"
] |
stackoverflow_0001581087_network_programming_network_protocols_python_raw_sockets_tcp.txt
|
Q:
IronPython libraries for scientific plots
What are good python libraries which IronPython supports (current version wise) for drawing scientific plots on Win ?
By "scientific plots" I mean simple x-y plots, x-y-z surface plots and x-y-z shaded plots.
A:
According to this it's possible to use matplotlib with IronPython. Which will at least get you 2D plots. Another way of running matplotlib.
gnuplot can generate 3D charts - http://www.resolverhacks.net/gnuplot_plotting.html might be a starting point.
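For the simple x-y case, a minimal sketch with matplotlib's standard pyplot API (getting matplotlib itself running under IronPython is what the links above cover):
import matplotlib.pyplot as plt

xs = [0.1 * i for i in range(100)]
ys = [x ** 2 for x in xs]
plt.plot(xs, ys, label="y = x^2")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()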
A:
If you get Resolver, then this Resolver Spreadsheet Challenge winner shows that it is quite capable of doing 3d shaded scientific plots.
http://www.voidspace.org.uk/python/weblog/arch_d7_2009_01_17.shtml#e1049
Resolver One is a sophisticated spreadsheet built entirely in IronPython which is becoming popular with scientists and people in the financial services industry due to the power of the IronPython scripting engine within it. You can download a trial version of it here:
http://www.resolversystems.com/
Nevertheless, if you work with colleagues who do most of their work on unix-like systems, you might want to choose matplotlib anyhow because there is more possibility of sharing code etc. Resolver One does not yet run on Mono.
|
IronPython libraries for scientific plots
|
What are good python libraries which IronPython supports (current version wise) for drawing scientific plots on Win ?
By "scientific plots" I mean simple x-y plots, x-y-z surface plots and x-y-z shaded plots.
|
[
"According to this it's possible to use matplotlib with IronPython. Which will at least get you 2D plots. Another way of running matplotlib.\ngnuplot can generate 3D charts - http://www.resolverhacks.net/gnuplot_plotting.html might be a starting point.\n",
"If you get Resolver, then this Resolver Spreadsheet Challenge winner shows that it is quite capable of doing 3d shaded scientific plots. \nhttp://www.voidspace.org.uk/python/weblog/arch_d7_2009_01_17.shtml#e1049\nResolver One is a sophisticated spreadsheet built entirely in IronPython which is becoming popular with scientists and people in the financial services industry due to the power of the IronPython scripting engine within it. You can dowload a trial version of it here:\nhttp://www.resolversystems.com/\nNevertheless, if you work with colleagues who do most of their work on unix-like systems, you might want to choose matplotlib anyhow because there is more possibility of sharing code etc. Resolver One does not yet run on Mono.\n"
] |
[
8,
0
] |
[] |
[] |
[
"ironpython",
"python"
] |
stackoverflow_0001412412_ironpython_python.txt
|
Q:
Using/Creating Python objects with Jython
Hi,
Let's say I have a Java interface B, something like this. B.java:
public interface B { String FooBar(String s); }
and I want to use it with a Python class D which inherits B, like this. D.py:
class D(B):
    def FooBar(s):
        return s + 'e'
So now how do I get an instance of D in Java? I'm sorry I'm asking such a n00b question, but the Jython doc sucks / is partially offline.
A:
Code for your example above. You also need to change the FooBar implementation to take a self argument since it is not a static method.
You need to have jython.jar on the classpath for this example to compile and run.
import org.python.core.PyObject;
import org.python.core.PyString;
import org.python.util.PythonInterpreter;
public class Main {

    public static B create()
    {
        PythonInterpreter interpreter = new PythonInterpreter();
        interpreter.exec("from D import D");
        PyObject DClass = interpreter.get("D");

        PyObject DObject = DClass.__call__();
        return (B)DObject.__tojava__(B.class);
    }

    public static void main(String[] args)
    {
        B b = create();
        System.out.println(b.FooBar("Wall-"));
    }
}
For more info see the chapter on Jython and Java integration in the Jython Book
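For completeness, this is what D.py looks like with the self parameter the answer calls for (the package in the import is a placeholder; in Jython, Java classes on the classpath import like Python modules):
# D.py
from mypackage import B  # hypothetical package; adjust to wherever B lives

class D(B):
    def FooBar(self, s):
        return s + 'e'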
|
Using/Creating Python objects with Jython
|
Hi,
Let's say I have a Java interface B, something like this. B.java:
public interface B { String FooBar(String s); }
and I want to use it with a Python class D which inherits B, like this. D.py:
class D(B):
    def FooBar(s):
        return s + 'e'
So now how do I get an instance of D in Java? I'm sorry I'm asking such a n00b question, but the Jython doc sucks / is partially offline.
|
[
"Code for your example above. You also need to change the FooBar implementation to take a self argument since it is not a static method.\nYou need to have jython.jar on the classpath for this example to compile and run.\nimport org.python.core.PyObject;\nimport org.python.core.PyString;\nimport org.python.util.PythonInterpreter;\npublic class Main {\n\n public static B create() \n {\n PythonInterpreter interpreter = new PythonInterpreter();\n interpreter.exec(\"from D import D\");\n PyObject DClass = interpreter.get(\"D\");\n\n PyObject DObject = DClass.__call__();\n return (B)DObject.__tojava__(B.class);\n }\n\n public static void main(String[] args) \n {\n B b = create();\n System.out.println(b.FooBar(\"Wall-\"));\n }\n}\n\nFor more info see the chapter on Jython and Java integration in the Jython Book\n"
] |
[
4
] |
[] |
[] |
[
"java",
"jython",
"python"
] |
stackoverflow_0001582674_java_jython_python.txt
|
Q:
Selenium IDE - can't select "table" tab
Can someone tell me why I can't select the "table" tab?
Here is a pic:
alt text http://img110.imageshack.us/img110/935/imgzx.jpg
A:
What "Format" are you using? The table is available in HTML only. I think.
Select Options/Format/HTML from the menu and the Table tab should be activated.
|
Selenium IDE - can't select "table" tab
|
Can someone tell me why I can't select the "table" tab?
Here is a pic:
alt text http://img110.imageshack.us/img110/935/imgzx.jpg
|
[
"What \"Format\" are you using? The table is available in HTML only. I think. \nSelect Options/Format/HTML from the menu and the Table tab should be activated.\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001582679_python.txt
|
Q:
String matching in python with re
I have a file in this structure:
009/foo/bar/hi23123/foo/bar231123/foo/bar/yo232131
What I need is to find the exact match of a string; e.g. only /foo/bar among /foo/bar/hi and /foo/bar/yo
One solution that came up in my mind is to check for an ending "/" after the input string, because if there is an ending "/" in the possible results, that means it got something different than only /foo/bar.
For this solution, I must say that:
input = /foo/bar
and
match input without ending "/"
How can I do this by Regex in python?
Btw, if there is any other solution suggestion, you're welcome to share it here.
A:
So you want /foo/bar not followed by a /? If so, then you're looking for a "negative lookahead",
r = re.compile(r'/foo/bar(?!/)')
and then r.search to your heart's content.
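Applied to the sample line from the question, a quick check:
import re

line = "009/foo/bar/hi23123/foo/bar231123/foo/bar/yo232131"
r = re.compile(r'/foo/bar(?!/)')
print(r.findall(line))  # ['/foo/bar'] -- only the occurrence not followed by '/'
Note that this still matches the /foo/bar at the start of /foo/bar231123, since '2' is not '/'; if digits should be excluded as well, something like /foo/bar(?![/\w]) may be closer to what is wanted.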
|
String matching in python with re
|
I have a file in this structure:
009/foo/bar/hi23123/foo/bar231123/foo/bar/yo232131
What I need is to find the exact match of a string; e.g. only /foo/bar among /foo/bar/hi and /foo/bar/yo
One solution that came up in my mind is to check for an ending "/" after the input string, because if there is an ending "/" in the possible results, that means it got something different than only /foo/bar.
For this solution, I must say that:
input = /foo/bar
and
match input without ending "/"
How can I do this by Regex in python?
Btw, if there is any other solution suggestion, you're welcome to share it here.
|
[
"So you want /foo/bar not followed by a /? If so, then you're looking for a \"negative lookahead\", \nr = re.compile(r'/foo/bar(?!/)')\n\nand then r.search to your heart's content.\n"
] |
[
8
] |
[] |
[] |
[
"python",
"regex",
"string_matching"
] |
stackoverflow_0001582895_python_regex_string_matching.txt
|
Q:
Reposition a VLC window programmatically
I'm sure others have run into this problem too...
I often watch videos in a small VLC window while working on other tasks, but no matter where the window is placed, I eventually need to access something in the GUI behind it, and have to manually reposition the video window first.
This could be solved by having the VLC window snap to another corner whenever the mouse pointer is moved over it. I haven't found an app that does this, so would like to write one. What technologies could I use to do this? Cross platform might be harder... so what if just on Windows?
I'd prefer something in C# (or Python), but am willing to learn something new if need be.
A:
Here is a Windows-only solution. You don't need to actually put the mouse over the window. All you need to do is find the window using its name and send WM_MOVE. I don't know the name of the window which VLC uses. You could use Spy++ to find its name.
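A sketch of the same idea from Python via ctypes (the window title below is a placeholder -- use Spy++, as suggested, to find the real one):
import ctypes

user32 = ctypes.windll.user32
hwnd = user32.FindWindowW(None, u"VLC media player")
if hwnd:
    # MoveWindow(hwnd, x, y, width, height, repaint)
    user32.MoveWindow(hwnd, 0, 0, 320, 240, True)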
A:
This is a bit OOT, but in Windows 7, shaking the active window will hide others to reveal the desktop (and so will clicking/hovering the rightmost taskbar button). Instead of hiding/moving vlc, you could just temporarily reveal the whole desktop. Shaking the active window again brings everything back.
|
Reposition a VLC window programmatically
|
I'm sure others have run into this problem too...
I often watch videos in a small VLC window while working on other tasks, but no matter where the window is placed, I eventually need to access something in the GUI behind it, and have to manually reposition the video window first.
This could be solved by having the VLC window snap to another corner whenever the mouse pointer is moved over it. I haven't found an app that does this, so would like to write one. What technologies could I use to do this? Cross platform might be harder... so what if just on Windows?
I'd prefer something in C# (or Python), but am willing to learn something new if need be.
|
[
"Here is a windows only solution. You dont need to actually put the mouse over the window. All you need to do is Find the window using its name and send WM_MOVE. I dont know the name of the window which VLC uses. You could use Spy++ to find its name.\n",
"This is a bit OOT, but in Windows 7, shaking the active window will hide others to reveal the desktop (and so will clicking/hovering the rightmost taskbar button). Instead of hiding/moving vlc, you could just temporarily reveal the whole desktop. Shaking the active window again brings everything back.\n"
] |
[
1,
0
] |
[] |
[] |
[
"c#",
"cross_platform",
"python",
"vlc",
"windows"
] |
stackoverflow_0001581782_c#_cross_platform_python_vlc_windows.txt
|
Q:
XML-RPC C# and Python RPC Server
On my server, I'm using the standard example for Python (with an extra Hello World Method) and on the Client side I'm using the XML-RPC.NET Library in C#.
But every time I run my client I get an exception that the method is not found. Any ideas how to fix that?
Thanks!
Python:
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler
# Restrict to a particular path.
class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)
# Create server
server = SimpleXMLRPCServer(("", 8000),
                            requestHandler=RequestHandler)
server.register_introspection_functions()
# Register pow() function; this will use the value of
# pow.__name__ as the name, which is just 'pow'.
server.register_function(pow)
# Register a function under a different name
def adder_function(x,y):
    return x + y
server.register_function(adder_function, 'add')
def HelloWorld():
    return "Hello Henrik"
server.register_function(HelloWorld,'HelloWorld')
# Register an instance; all the methods of the instance are
# published as XML-RPC methods (in this case, just 'div').
class MyFuncs:
    def div(self, x, y):
        return x // y
server.register_instance(MyFuncs())
# Run the server's main loop
server.serve_forever()
C#
namespace XMLRPC_Test
{
    [XmlRpcUrl("http://188.40.xxx.xxx:8000")]
    public interface HelloWorld : IXmlRpcProxy
    {
        [XmlRpcMethod]
        String HelloWorld();
    }
    [XmlRpcUrl("http://188.40.xxx.xxx:8000")]
    public interface add : IXmlRpcProxy
    {
        [XmlRpcMethod]
        int add(int x, int y);
    }
    [XmlRpcUrl("http://188.40.xxx.xxx:8000")]
    public interface listMethods : IXmlRpcProxy
    {
        [XmlRpcMethod("system.listMethods")]
        String listMethods();
    }
    class Program
    {
        static void Main(string[] args)
        {
            listMethods proxy = XmlRpcProxyGen.Create<listMethods>();
            Console.WriteLine(proxy.listMethods());
            Console.ReadLine();
        }
    }
}
A:
Does it work if you change the declaration to this?
[XmlRpcUrl("http://188.40.xxx.xxx:8000/RPC2")]
From the Python docs:
SimpleXMLRPCRequestHandler.rpc_paths
An attribute value that must be a tuple listing valid path portions of the URL for receiving XML-RPC requests. Requests posted to other paths will result in a 404 “no such page” HTTP error. If this tuple is empty, all paths will be considered valid. The default value is ('/', '/RPC2').
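Before debugging the C# side further, the server can be sanity-checked with Python's own client (xmlrpclib is the Python 2 standard library module; note the /RPC2 path):
import xmlrpclib

proxy = xmlrpclib.ServerProxy("http://188.40.xxx.xxx:8000/RPC2")
print proxy.system.listMethods()
print proxy.HelloWorld()
print proxy.add(2, 3)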
|
XML-RPC C# and Python RPC Server
|
On my server, I'm using the standard example for Python (with an extra Hello World Method) and on the Client side I'm using the XML-RPC.NET Library in C#.
But every time I run my client I get an exception that the method is not found. Any ideas how to fix that?
Thanks!
Python:
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler
# Restrict to a particular path.
class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)
# Create server
server = SimpleXMLRPCServer(("", 8000),
                            requestHandler=RequestHandler)
server.register_introspection_functions()
# Register pow() function; this will use the value of
# pow.__name__ as the name, which is just 'pow'.
server.register_function(pow)
# Register a function under a different name
def adder_function(x,y):
    return x + y
server.register_function(adder_function, 'add')
def HelloWorld():
    return "Hello Henrik"
server.register_function(HelloWorld,'HelloWorld')
# Register an instance; all the methods of the instance are
# published as XML-RPC methods (in this case, just 'div').
class MyFuncs:
    def div(self, x, y):
        return x // y
server.register_instance(MyFuncs())
# Run the server's main loop
server.serve_forever()
C#
namespace XMLRPC_Test
{
    [XmlRpcUrl("http://188.40.xxx.xxx:8000")]
    public interface HelloWorld : IXmlRpcProxy
    {
        [XmlRpcMethod]
        String HelloWorld();
    }
    [XmlRpcUrl("http://188.40.xxx.xxx:8000")]
    public interface add : IXmlRpcProxy
    {
        [XmlRpcMethod]
        int add(int x, int y);
    }
    [XmlRpcUrl("http://188.40.xxx.xxx:8000")]
    public interface listMethods : IXmlRpcProxy
    {
        [XmlRpcMethod("system.listMethods")]
        String listMethods();
    }
    class Program
    {
        static void Main(string[] args)
        {
            listMethods proxy = XmlRpcProxyGen.Create<listMethods>();
            Console.WriteLine(proxy.listMethods());
            Console.ReadLine();
        }
    }
}
|
[
"Does it work if you change the declaration to this?\n[XmlRpcUrl(\"http://188.40.xxx.xxx:8000/RPC2\")]\n\nFrom the Python docs:\n\nSimpleXMLRPCRequestHandler.rpc_paths\nAn attribute value that must be a tuple listing valid path portions of the URL for receiving XML-RPC requests. Requests posted to other paths will result in a 404 “no such page” HTTP error. If this tuple is empty, all paths will be considered valid. The default value is ('/', '/RPC2').\n\n"
] |
[
5
] |
[] |
[] |
[
"c#",
"python",
"xml_rpc"
] |
stackoverflow_0001583017_c#_python_xml_rpc.txt
|
Q:
Naming convention for actually choosing the words in Python, PEP8 compliant
I’m looking for a better way to name everything in Python. Yes, I’ve read PEP8, Spolsky’s wonderful rant, and various other articles. But I’m looking for more guidance in choosing the actual words.
And yes I know
A Foolish Consistency is the Hobgoblin
of Little Minds.
But, you can keep consistent with PEP8 etc, and still not have consistent variable/method/class names which are easy to remember. By consistent, I mean that if you were presented with the same decision twice, you would produce the same name.
As an example, there are multiple PEP8 compliant ways to name the items below:
number of columns in the table
current column number
column object
sum of column
Yeah, sure, it is easy to make a decision to use something like num_col and count_col rather than col_num and col_count (or v.v.). But, I would like to see an example that has seen some testing/refining over time. I often start with a given convention, and then it starts to break down as I venture into a new area.
I guess what I am looking for is not only what the prefix/root/tag/suffix should do (which was partly covered for Apps Hungarian in the Spolsky article), but (many) examples for each, or a rule for generating each.
A:
I believe that the need for complex variable naming conventions goes away with good object-oriented design. In the Spolsky article, much focus is on how variable naming helps preventing errors. I believe that those errors will more often occur when you have many variables in the same scope; this can be avoided by grouping data into objects - then, a single naming context will have only few variables, which don't need combined names.
The other purpose of a naming convention is to better remember the names. Again, object-orientation helps (by hiding much data from users that look from the outside); what you then need is a convention for naming methods, not data. In addition, tools can help which provide you with a list of names available in a certain scope (again, those tools rely on object-orientation to do their job).
In your specific example, if column is an object, I would expect that len(table) gives me the number of columns in a table, sum(column) or column.sum() gives me its sum; and the current column is just the variable in the for loop (often c or column).
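To make that concrete, here is a hypothetical sketch (names made up) of how the object-oriented version removes the need for names like num_cols or col_sum:
class Column(object):
    def __init__(self, values):
        self.values = values
    def sum(self):
        return sum(self.values)

class Table(object):
    def __init__(self, columns):
        self.columns = columns
    def __len__(self):
        return len(self.columns)

table = Table([Column([1, 2, 3]), Column([4, 5])])
print(len(table))             # the number of columns: 2
for column in table.columns:  # the "current column" is just the loop variable
    print(column.sum())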
A:
Remember that in English, when two ambiguous words are next to each other, the first one becomes an adjective which describes the second one. Try to stick to this rule and always name things with two components where the first component describes the second.
For instance col_num is a number. What kind of number? A column number.
Next rule is the word of. It is a nice short word so please do not leave it out. Same goes with pluralization. And past tense ending in -ed or -d. Maybe even -ing.
For instance, num_col is a column. What kind? A number column. If you really wanted to refer to the number of columns, you should write num_of_cols. Received date is recd_date or rcvd_date, not rec_date or rcv_date.
English speaking readers are very sensitive to -s and -d at the end of words, as well as of in the middle of a phrase, so don't assume that such short bits of text would be missed. That is very unlikely because we are programmed from a very young age to notice a handful of word endings.
Try for consistency, and keep a glossary or data dictionary of any words, or word fragments that you use. Don't use the same fragment for two different things. If you are using recd to mean received, and you need a variable name for recorded, then either write it out in full or come up with a new abbreviation and put it in the glossary. If you use a relational database, try to be consistent with the naming convention used there. Let the dba know what you are doing and tell them where to find your glossary.
A:
The universe is multi-dimensional.
You have at least two dimensions to each variable name.
"Total", "Count", "Of Columns", "In a Table"
"Current", "Index", "", "Of a Column"
"Current", "Column", "", ""
"Sum", "Of Something", "", "In a Column"
Rats. It's irregular.
Worse, we can pick anything as the "Primary" dimension and pick any sequence of other features as "secondary" dimensions.
Even worse, we could have a truly complex thing. "Total", "Count", "of Non-Underscore", "Columns", "In Tables", "With Even-Length Names", "From a Dictionary", "Keyed by", "Mother's Maiden Name".
Frankly, there's no possible schema for variable names that encompasses "all" knowledge in a systematic, repeatable form.
Keep trying though. It's always fun and games until someone finds a counter-example.
You can keep trying or you can simply use simple, clear names. If your scope of names is small (a small method function, for example), there's nothing to "remember". It's all perfectly visible in the 20 lines of code that make up the method function.
|
Naming convention for actually choosing the words in Python, PEP8 compliant
|
I’m looking for a better way to name everything in Python. Yes, I’ve read PEP8, Spolsky’s wonderful rant, and various other articles. But I’m looking for more guidance in choosing the actual words.
And yes I know
A Foolish Consistency is the Hobgoblin
of Little Minds.
But, you can keep consistent with PEP8 etc, and still not have consistent variable/method/class names which are easy to remember. By consistent, I mean that if you were presented with the same decision twice, you would produce the same name.
As an example, there are multiple PEP8 compliant ways to name the items below:
number of columns in the table
current column number
column object
sum of column
Yeah, sure, it is easy to make a decision to use something like num_col and count_col rather than col_num and col_count (or v.v.). But, I would like to see an example that has seen some testing/refining over time. I often start with a given convention, and then it starts to break down as I venture into a new area.
I guess what I am looking for is not only what the prefix/root/tag/suffix should do (which was partly covered for Apps Hungarian in the Spolsky article), but (many) examples for each, or a rule for generating each.
|
[
"I believe that the need for complex variable naming conventions goes away with good object-oriented design. In the Spolsky article, much focus is on how variable naming helps preventing errors. I believe that those errors will more often occur when you have many variables in the same scope; this can be avoided by grouping data into objects - then, a single naming context will have only few variables, which don't need combined names.\nThe other purpose of a naming convention is to better remember the names. Again, object-orientation helps (by hiding much data from users that look from the outside); what you then need is a convention for naming methods, not data. In addition, tools can help which provide you with a list of names available in a certain scope (again, those tools rely on object-orientation to do their job).\nIn your specific example, if column is an object, I would expect that len(table) gives me the number of columns in a table, sum(column) or column.sum() gives me its sum; and the current column is just the variable in the for loop (often c or column).\n",
"Remember that in English, when two ambiguous words are next to each other, the first one becomes an adjective which describes the second one. Try to stick to this rule and always name things with two components where the first component decribes the second.\nFor instance col_num is a number. What kind of number? A column number.\nNext rule is the word of. It is a nice short word so please do not leave it out. Same goes with pluralization. And past tense ending in -ed or -d. Maybe even -ing.\nFor instance, num_col is a column. What kind? A number column. If you really wanted to refer to the number of columns, you should write num_of_cols. Received date is recd_date or rcvd_date, not rec_date or rcv_date.\nEnglish speaking readers are very sensitive to -s and -d at the end of words, as well as of in the middle of a phrase, so don't assume that such short bits of text would be missed. That is very unlikely because we are programmed from a very young age to notice a handful of word endings. \nTry for consistency, and keep a glossary or data dictionary of any words, or word fragments that you use. Don't use the same fragment for two different things. If you are using recd to mean received, and you need a variable name for recorded, then either write it out in full or come up with a new abbreviation and put it in the glossary. If you use a relational database, try to be consistent with the naming convention used there. Let the dba know what you are doing and tell them where to find your glossary.\n",
"The universe is multi-dimensional.\nYou have at least two dimensions to each variable name.\n\"Total\", \"Count\", \"Of Columns\", \"In a Table\"\n\"Current\", \"Index\", \"\", \"Of a Column\"\n\"Current\", \"Column\", \"\", \"\"\n\"Sum\", \"Of Something\", \"\", \"In a Column\"\nRats. It's irregular. \nWorse, we can pick anything as the \"Primary\" dimension and pick any sequence of other features as \"secondary\" dimensions.\nEven worse, we could have a truly complex thing. \"Total\", \"Count\", \"of Non-Underscore\", \"Columns\", \"In Tables\", \"With Even-Length Names\", \"From a Dictionary\", \"Keyed by\", \"Mother's Maiden Name\".\nFrankly, there's no possible schema for variable names that encompasses \"all\" knowledge in a systematic, repeatable form.\nKeep trying though. It's always fun and games until someone finds a counter-example.\nYou can keep trying or you can simply use simple, clear names. If your scope of names is small (a small method function, for example), there's nothing to \"remember\". It's all perfectly visible in the 20 lines of code that make up the method function. \n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"naming_conventions",
"python"
] |
stackoverflow_0001563673_naming_conventions_python.txt
|
Q:
Python: Int not iterable error
I'm attempting to get my feet wet with python on Project Euler, but I'm having an issue with the first problem (find the sum of the multiples of 3 or 5 up to 1,000). I can successfully print out multiples of three and five, but when I attempt to include the sum function I get the following error:
TypeError: 'int' object is not iterable
Any help would be appreciated.
n = 100
p = 0
while p<n:
    p = p + 1
x = range(0, p)
# check to see if numbers are divisible by 3 or 5
def numcheck(x):
    for numbers in x:
        if numbers%3==0 and numbers%5==0:
            sum(numbers)
numcheck(x)
A:
In the for-loop
for numbers in x:
"numbers" steps through the elements in x one at a time, for each pass through the loop.
It would be perhaps better to name the variable "number" because you are only getting
one number at a time. "numbers" equals an integer each time through the loop.
sum(numbers)
throws a TypeError because the function sum() expects an iterable object (like a list of numbers), not just one integer.
So perhaps try:
def numcheck(x):
    s=0
    for number in x:
        if number%3==0 and number%5==0:
            s+=number
    print(s)
numcheck(range(1000))
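The same computation also collapses to a single generator expression; note the or here matches the "multiples of 3 or 5" wording of Project Euler #1, whereas the question's original and keeps only multiples of 15:
print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # 233168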
A:
numbers needs to be a list or similar when it is passed to sum(). In the code example above, it is an integer - one of the integers from x.
Try something like:
numbers = [num for num in x if num%3==0 and num%5 ==0]
print sum(numbers)
A:
The sum function expects a list, not a single number.
When you do for numbers in, then the variable numbers has a single integer object. Add a print statement, you'll see that numbers is a single number.
You might want to accumulate all the multiples of 3 and 5 in a list. Once you have the list, you can then use the sum function on that list.
A:
I think you want something like what follows.
def numcheck(x):
    total = 0
    for number in x:
        if number % 3 == 0 or number % 5 == 0:
            total += number
    print total
Alternatively, you could append each of the divisible numbers to a list, and then call sum() on that list.
A:
help(sum)
Help on built-in function sum in module __builtin__:

sum(...)
    sum(sequence[, start]) -> value

    Returns the sum of a sequence of numbers (NOT strings) plus the value
    of parameter 'start' (which defaults to 0). When the sequence is
    empty, returns start.
You are passing numbers which is of type int to sum(), but sum takes a sequence.
A:
Here is how I would do this:
n = 100
# the next 4 lines are just to confirm that xrange is n numbers starting at 0
junk = xrange(n)
print junk[0] # print first number in sequence
print junk[-1] # print last number in sequence
print "================"
# check to see if numbers are divisable by 3 or 5
def numcheck(x):
    for numbers in x:
        if numbers%3==0 and numbers%5==0:
            print numbers

numcheck(xrange(n))
You may find it strange that I pass xrange(n) as a parameter. This is an iterator that
will eventually produce the list of n numbers as you go through the loop in numcheck. It's a bit like passing a pointer to a function in C. The key thing is that by using xrange, you do not need to allocate any memory for the list of numbers, so you can more easily run a check on the first billion integers, for instance.
|
Python: Int not iterable error
|
I'm attempting to get my feet wet with python on Project Euler, but I'm having an issue with the first problem (find the sum of the multiples of 3 or 5 up to 1,000). I can successfully print out multiples of three and five, but when I attempt to include the sum function I get the following error:
TypeError: 'int' object is not iterable
Any help would be appreciated.
n = 100
p = 0
while p<n:
    p = p + 1
x = range(0, p)
# check to see if numbers are divisible by 3 or 5
def numcheck(x):
    for numbers in x:
        if numbers%3==0 and numbers%5==0:
            sum(numbers)
numcheck(x)
|
[
"In the for-loop \nfor numbers in x:\n\n\"numbers\" steps through the elements in x one at a time, for each pass through the loop.\nIt would be perhaps better to name the variable \"number\" because you are only getting\none number at a time. \"numbers\" equals an integer each time through the loop.\nsum(numbers)\n\nthrows a TypeError because the function sum() expects an iterable object (like a list of numbers), not just one integer.\nSo perhaps try:\ndef numcheck(x):\n s=0\n for number in x:\n if number%3==0 and number%5==0:\n s+=number\n print(s)\nnumcheck(range(1000))\n\n",
"numbers needs to be a list or similar when it is passed to sum(). In the code example above, it is an integer - one of the integers from x.\nTry something like:\nnumbers = [num for num in x if num%3==0 and num%5 ==0]\nprint sum(numbers)\n\n",
"The sum function expects a list, not a single number.\nWhen you do for numbers in, then the variable numbers has a single integer object. Add a print statement, you'll see that numbers is a single number.\nYou might want to accumulate all the multiples of 3 and 5 in a list. Once you have the list, you can then use the sum function on that list.\n",
"I think you want something like what follows.\ndef numcheck(x):\n total = 0\n for number in x:\n if number % 3 == 0 or and number % 5 == 0:\n total += number\n print total\n\nAlternatively, you could append each of the divisible numbers to a list, and then call sum() on that list.\n",
"help(sum)\nHelp on built-in function sum in module builtin:\nsum(...)\n sum(sequence[, start]) -> value\nReturns the sum of a sequence of numbers (NOT strings) plus the value\nof parameter 'start' (which defaults to 0). When the sequence is\nempty, returns start.\n\n(END) \nYou are passing numbers which is of type int to sum(), but sum takes a sequence.\n",
"Here is how I would do this:\nn = 100\n\n# the next 4 lines are just to confirm that xrange is n numbers starting at 0\njunk = xrange(n)\nprint junk[0] # print first number in sequence\nprint junk[-1] # print last number in sequence\nprint \"================\"\n\n# check to see if numbers are divisable by 3 or 5\ndef numcheck(x): \n for numbers in x:\n if numbers%3==0 and numbers%5==0:\n print numbers\n\nnumcheck(xrange(n))\n\nYou may find it strange that I pass xrange(n) as a parameter. This is an iterator that\nwill eventually produce the list of n numbers as you go through the loop in numcheck. It's a bit like passing a pointer to a function in C. The key thing is that by using xrange, you do not need to allocate any memory for the list of numbers, so you can more easily run a check on the first billion integers, for instance.\n"
] |
[
7,
5,
1,
1,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001583148_python.txt
|
Q:
Long, slow operation in Django view causes timeout. Any way for Python to speak AJAX instead?
I've been programming Python a while, but DJango and web programming in general is new to me.
I have a very long operation performed in a Python view. Since the local() function in my view takes so long to return, there's an HTTP timeout. Fair enough, I understand that part.
What's the best way to give an HTTP response back to my users immediately, then dynamically show the results of some python code within the page? I suspect the answer may lie in AJAX, but I'm not sure how AJAX on the client can be fed from Python on the server, or even which modules one would commonly use to do such a thing.
A:
Ajax doesn't require any particular technology on the server side. All you need is to return a response in some form that some Javascript on the client side can understand. JSON is an excellent choice here, as it's easy to create in Python (there's a json library in 2.6, and Django has django.utils.simplejson for other versions).
So all you need to do is put your data in JSON form, then send it just as you would any other response - i.e. by wrapping it in an HttpResponse.
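A minimal sketch of such a view (the view name and data fields are illustrative; django.utils.simplejson and the mimetype argument are the era-appropriate spellings):
from django.http import HttpResponse
from django.utils import simplejson

def task_status(request):
    # assemble whatever data the client-side Javascript needs
    data = {'done': True, 'result': 42}
    return HttpResponse(simplejson.dumps(data),
                        mimetype='application/json')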
A:
One way is to submit the task using AJAX/JS or the normal way, start it in the background in your view, and return immediately. Then use AJAX/JS on the client side to periodically check whether the task is done. If it's done, reload the page or provide a link to the client.
CLIENT "Please start a task using this data."-> SERVER
CLIENT <- "Task started!" SERVER
CLIENT "Done?"-> SERVER
CLIENT <- "Nope." SERVER
CLIENT "Done?"-> SERVER
CLIENT <- "Yep, here's a link where you can view results" SERVER
While sending data from the server to the client without the client asking for it is possible, well, kind of (the technology is called Comet), it isn't really necessary in your case.
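A rough sketch of the server side of that exchange (the module-level dict and thread are stand-ins; a real app would track tasks in the database or a task queue, and do_long_operation is a placeholder for the slow view code):
import threading
from django.http import HttpResponse
from django.utils import simplejson

RESULTS = {}   # task id -> result; stand-in for persistent storage

def start_task(request):
    def work():
        RESULTS['mytask'] = do_long_operation()   # placeholder for your slow code
    threading.Thread(target=work).start()
    return HttpResponse('Task started!')

def check_task(request):
    return HttpResponse(simplejson.dumps({'done': 'mytask' in RESULTS}))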
A:
I'm not sure if this is what you are looking for, but maybe this question (How to implement a minimal server for AJAX in Python?) is helpful. In my answer I give a minimal example (which is not very well written, for example I would now use jquery...).
Edit: As requested by the OP, here is an example for the frontend with JQuery. Note that I'm no expert on this, so there might be issues. This example is supposed to work with a JSON-RPC backend, like this one.
<html>
<head>
<title>JSON-RPC test</title>
<script type="text/javascript" src="jquery-1.3.2.min.js"></script>
<script type="text/javascript" src="json2.js"></script>
<script type="text/javascript">
function test_button() {
var data = $("[name=test_text]").val();
var json_object = {"method": "power",
"params": [parseInt(data), 3],
"id": "test_button"};
var json_string = JSON.stringify(json_object);
$.post("frontend.html", json_string, test_callback, "json")
}
function test_callback(json_object) {
$("#test_result").text(json_object.result.toString());
}
</script>
</head>
<body>
<input type="text" name="test_text" value="2" size="4">
** 3 =
<span id="test_result">0</span>
<input type=button onClick="test_button();" value="calc" title="calculate value">
</body>
</html>
|
Long, slow operation in Django view causes timeout. Any way for Python to speak AJAX instead?
|
I've been programming Python a while, but DJango and web programming in general is new to me.
I have a very long operation performed in a Python view. Since the local() function in my view takes so long to return, there's an HTTP timeout. Fair enough, I understand that part.
What's the best way to give an HTTP response back to my users immediately, then dynamically show the results of some python code within the page? I suspect the answer may lie in AJAX, but I'm not sure how AJAX on the client can be fed from Python on the server, or even which modules one would commonly use to do such a thing.
|
[
"Ajax doesn't require any particular technology on the server side. All you need is to return a response in some form that some Javascript on the client side can understand. JSON is an excellent choice here, as it's easy to create in Python (there's a json library in 2.6, and Django has django.utils.simplejson for other versions).\nSo all you need to do is to put your data in JSON form then send it just as you would any other response - ie by wrapping it in an HTTPResponse.\n",
"One way is to submit the task using AJAX/JS or the normal way, start it in background in your view and return immediately. Then use AJAX/JS on client side to periodically check if task is done. If it's done reload the page or provide a link to the client.\n\nCLIENT \"Please start a task using this data.\"-> SERVER\nCLIENT <- \"Task started!\" SERVER\nCLIENT \"Done?\"-> SERVER\nCLIENT <- \"Nope.\" SERVER\nCLIENT \"Done?\"-> SERVER\nCLIENT <- \"Yep, here's a link where you can view results\" SERVER\n\nWhile sending data from server to client without client asking for it is possible, well kind a, (the technology is called Comet) it isn't really necessary in your case.\n",
"I'm not sure if this is what you are looking for, but maybe this question (How to implement a minimal server for AJAX in Python?) is helpful. In my answer I give a minimal example (which is not very well written, for example I would now use jquery...).\nEdit: As requested by the OP, here is an example for the frontend with JQuery. Note that I'm no expert on this, so there might be issues. This example is supposed to work with a JSON-RPC backend, like this one.\n<html>\n<head>\n\n<title>JSON-RPC test</title>\n\n<script type=\"text/javascript\" src=\"jquery-1.3.2.min.js\"></script>\n<script type=\"text/javascript\" src=\"json2.js\"></script>\n\n<script type=\"text/javascript\">\n\nfunction test_button() {\n var data = $(\"[name=test_text]\").val();\n var json_object = {\"method\": \"power\",\n \"params\": [parseInt(data), 3],\n \"id\": \"test_button\"};\n var json_string = JSON.stringify(json_object);\n $.post(\"frontend.html\", json_string, test_callback, \"json\")\n}\n\nfunction test_callback(json_object) {\n $(\"#test_result\").text(json_object.result.toString());\n}\n\n</script>\n\n</head>\n<body>\n\n<input type=\"text\" name=\"test_text\" value=\"2\" size=\"4\">\n** 3 =\n<span id=\"test_result\">0</span>\n<input type=button onClick=\"test_button();\" value=\"calc\" title=\"calculate value\">\n\n</body>\n</html>\n\n"
] |
[
7,
6,
1
] |
[] |
[] |
[
"ajax",
"django",
"python",
"timeout"
] |
stackoverflow_0001582708_ajax_django_python_timeout.txt
|
Q:
activestate pythonwin missing import modules?
I'm working my way through DiveIntoPython.com and I'm having trouble getting the import to work. I've installed ActiveState's Pythonwin on a windows xp prof environment.
In the website, there is an exercise which involves 'import odbchelper' and odbchelper.name
http://www.diveintopython.org/getting_to_know_python/testing_modules.html
When I run it interactive, i get:
>>> import odbchelper
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
ImportError: No module named odbchelper
>>> odbchelper.__name__
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
NameError: name 'odbchelper' is not defined
I'm guessing either I don't have the pathing set correctly or the module does not exist in one of the folders referenced when I run 'sys.path'. Any ideas?
thanks in advance
A:
This module doesn't come packaged with Python2.6 for sure (just tried on my machine). Have you tried googling where this module might be?
Consider this post.
A:
figured it out..
found this:
http://www.faqs.org/docs/diveintopython/odbchelper_divein.html
downloaded the file and then put it into a folder. c:\temp\python\ in my case with the commands:
>> import sys
>> sys.path.append('c:\\temp\\python\\')
|
activestate pythonwin missing import modules?
|
I'm working my way through DiveIntoPython.com and I'm having trouble getting the import to work. I've installed ActiveState's Pythonwin on a windows xp prof environment.
In the website, there is an exercise which involves 'import odbchelper' and odbchelper.name
http://www.diveintopython.org/getting_to_know_python/testing_modules.html
When I run it interactive, i get:
>>> import odbchelper
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
ImportError: No module named odbchelper
>>> odbchelper.__name__
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
NameError: name 'odbchelper' is not defined
I'm guessing either I don't have the pathing set correctly or the module does not exist in one of the folders referenced when I run 'sys.path'. Any ideas?
thanks in advance
|
[
"This module doesn't come packaged with Python2.6 for sure (just tried on my machine). Have you tried googling where this module might be?\nConsider this post.\n",
"figured it out.. \nfound this: \nhttp://www.faqs.org/docs/diveintopython/odbchelper_divein.html\n\ndownloaded the file and then put it into a folder. c:\\temp\\python\\ in my case with the commands: \n>> import sys \n>> sys.path.append('c:\\\\temp\\\\python\\\\')\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"import",
"python"
] |
stackoverflow_0001583947_import_python.txt
|
Q:
Apache2 + RewriteMap + Python -- when returning 'NULL', apache hangs
[SOLVED: See solution below.]
I'm having a problem writing a RewriteMap program (using Python). I have a RewriteMap directive pointing to a Python script which determines if the requested URL needs to be redirected elsewhere.
When the script outputs a string terminated by a linebreak, Apache redirects accordingly. However, when the script outputs NULL (with no linebreak), Apache hangs and subsequent HTTP requests are effectively ignored.
The error log shows no errors. The rewrite log only shows a pass through followed by a redirect when successful, then only pass through when NULL is returned by the script. Subsequent requests also only show pass through.
Additionally, replacing stdout with os.fdopen(sys.stdout.fileno(), 'w', 0) to set buffer length to zero did not help.
Any help would be greatly appreciated. Thank you in advance.
/etc/apache2/httpd.conf
[...]
RewriteLock /tmp/apache_rewrite.lock
/etc/apache2/sites-available/default
<VirtualHost *:80>
[...]
RewriteEngine on
RewriteLogLevel 1
RewriteLog /var/www/logs/rewrite.log
RewriteMap remap prg:/var/www/remap.py
[...]
</VirtualHost>
/var/www/webroot/.htaccess
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule (.*_.*) /${remap:$1} [R=301]
/var/www/remap.py
#!/usr/bin/python

import sys

def getRedirect(str):
    new_url = None
    # if url needs to be redirected, put this value in new_url
    # otherwise new_url remains None
    return new_url

while True:
    request = sys.stdin.readline().strip()
    response = getRedirect(request)
    if response:
        sys.stdout.write(response + '\n')
    else:
        sys.stdout.write('NULL')
    sys.stdout.flush()
A:
You have to return a single newline, not 'NULL'.
Apache waits for a newline to know where the URL to rewrite to ends. If your script sends no newline, Apache waits forever.
So just change the write of 'NULL' to 'NULL\n'; this will then redirect to /. If you don't want this to happen, have the program return the URL you want when there's no match in the map.
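Applied to the question's loop, the fix is just the terminating newline (sketch):
while True:
    request = sys.stdin.readline().strip()
    response = getRedirect(request)
    if response:
        sys.stdout.write(response + '\n')
    else:
        sys.stdout.write('NULL\n')   # the trailing newline is what unblocks Apache
    sys.stdout.flush()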
If you don't want to redirect when there's no match, I would:
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond (${remap:$1}) !NULL
RewriteRule (.*_.*) /%1 [R=301]
Use a match in the RewriteCond (this would work with NULL as well, of course). But given your problem, this looks like the proper solution.
A:
The best solution I've come up with thus far is to have the RewriteMap script return the new url, or '__NULL__\n' if no redirect is desired, and store this value in an ENV variable. Then, check the ENV variable for !__NULL__ and redirect. See the .htaccess file below.
Also, if anyone is planning on doing something similar to this, inside of the Python script I wrapped a fair amount of it in try/except blocks to prevent the script from dying (in my case, due to failed file/database reads) and subsequent queries being ignored.
/var/www/webroot/.htaccess
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+)$ - [E=REMAP_RESULT:${remap:$1},NS]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{ENV:REMAP_RESULT} !^__NULL__$
RewriteRule ^(.+)$ /%{ENV:REMAP_RESULT} [R=301,L]
/var/www/remap.py
#!/usr/bin/python

import sys

def getRedirect(str):
    try: # to prevent the script from dying on any errors
        new_url = str
        # if url needs to be redirected, put this value in new_url
        # otherwise new_url remains None
        if new_url == str: new_url = '__NULL__'
        return new_url
    except:
        return '__NULL__'

while True:
    request = sys.stdin.readline().strip()
    response = getRedirect(request)
    sys.stdout.write(response + '\n')
    sys.stdout.flush()
Vinko, you definitely helped me figure this one out. If I had more experience with stackoverflow, you would have received ^ from me. Thank you.
I hope this post helps someone dealing with a similar problem in the future.
Cheers,
Andrew
|
Apache2 + RewriteMap + Python -- when returning 'NULL', apache hangs
|
[SOLVED: See solution below.]
I'm having a problem writing a RewriteMap program (using Python). I have a RewriteMap directive pointing to a Python script which determines if the requested URL needs to be redirected elsewhere.
When the script outputs a string terminated by a linebreak, Apache redirects accordingly. However, when the script outputs NULL (with no linebreak), Apache hangs and subsequent HTTP requests are effectively ignored.
The error log shows no errors. The rewrite log only shows a pass through followed by a redirect when successful, then only pass through when NULL is returned by the script. Subsequent requests also only show pass through.
Additionally, replacing stdout with os.fdopen(sys.stdout.fileno(), 'w', 0) to set buffer length to zero did not help.
Any help would be greatly appreciated. Thank you in advance.
/etc/apache2/httpd.conf
[...]
RewriteLock /tmp/apache_rewrite.lock
/etc/apache2/sites-available/default
<VirtualHost *:80>
[...]
RewriteEngine on
RewriteLogLevel 1
RewriteLog /var/www/logs/rewrite.log
RewriteMap remap prg:/var/www/remap.py
[...]
</VirtualHost>
/var/www/webroot/.htaccess
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule (.*_.*) /${remap:$1} [R=301]
/var/www/remap.py
#!/usr/bin/python

import sys

def getRedirect(str):
    new_url = None
    # if url needs to be redirected, put this value in new_url
    # otherwise new_url remains None
    return new_url

while True:
    request = sys.stdin.readline().strip()
    response = getRedirect(request)
    if response:
        sys.stdout.write(response + '\n')
    else:
        sys.stdout.write('NULL')
    sys.stdout.flush()
|
[
"You have to return a single newline, not 'NULL'. \nApache waits for a newline to know when the URL to be rewrite to ends. If your script sends no newline, Apache waits forever.\nSo just change return ('NULL') to return ('NULL\\n'), this will then redirect to /. If you don't want this to happen, have the program to return the URL you want when there's no match in the map.\nIf you want not to redirect when there's no match I would:\nRewriteEngine on\nRewriteCond %{REQUEST_FILENAME} !-f\nRewriteCond (${remap:$1}) !NULL\nRewriteRule (.*_.*) /%1 [R=301]\n\nUse a match in the RewriteCond (this would work with NULL as well, of course). But given your problem, this looks like the proper solution.\n",
"The best solution I've come up with thus far is to have the RewriteMap script return the new url or '__NULL__\\n' if no redirect is desired and store this value in a ENV variable. Then, check the ENV variable for !__NULL__ and redirect. See .htaccess file below.\nAlso, if anyone is planning on doing something similar to this, inside of the Python script I wrapped a fair amount of it in try/except blocks to prevent the script from dying (in my case, due to failed file/database reads) and subsequent queries being ignored.\n/var/www/webroot/.htaccess\nRewriteCond %{REQUEST_FILENAME} !-f\nRewriteRule ^(.+)$ - [E=REMAP_RESULT:${remap:$1},NS]\nRewriteCond %{REQUEST_FILENAME} !-f\nRewriteCond %{ENV:REMAP_RESULT} !^__NULL__$\nRewriteRule ^(.+)$ /%{ENV:REMAP_RESULT} [R=301,L]\n\n/var/www/remap.py\n#!/usr/bin/python\n\nimport sys\n\ndef getRedirect(str):\n try: # to prevent the script from dying on any errors\n new_url = str\n # if url needs to be redirected, put this value in new_url\n # otherwise new_url remains None\n if new_url == str: new_url = '__NULL__'\n return new_url\n except:\n return '__NULL__'\n\nwhile True:\n request = sys.stdin.readline().strip()\n response = getRedirect(request)\n sys.stdout.write(response + '\\n')\n sys.stdout.flush()\n\nVinko, you definitely helped me figure this one out. If I had more experience with stackoverflow, you would have received ^ from me. Thank you.\nI hope this post helps someone dealing with a similar problem in the future.\nCheers,\nAndrew\n"
] |
[
6,
3
] |
[] |
[] |
[
"apache",
"apache2",
"python",
"rewrite"
] |
stackoverflow_0001580780_apache_apache2_python_rewrite.txt
|
Q:
What does "lambda" mean in Python, and what's the simplest way to use it?
Can you give an example and other examples that show when and when not to use Lambda?
My book gives me examples, but they're confusing.
A:
Lambda, which originated from the lambda calculus and (AFAIK) was first implemented in Lisp, is basically an anonymous function: a function which doesn't have a name and is used in-line. In other words, you can assign an identifier to a lambda function in a single expression, as such:
>>> addTwo = lambda x: x+2
>>> addTwo(2)
4
This assigns addTwo to the anonymous function, which accepts 1 argument x, and in the function body it adds 2 to x; it returns the value of the expression in the function body, so there's no return keyword.
The code above is roughly equivalent to:
>>> def addTwo(x):
... return x+2
...
>>> addTwo(2)
4
Except you're not using a function definition, you're assigning an identifier to the lambda.
The best place to use them is when you don't really want to define a function with a name, possibly because that function will only be used one time rather than numerous times (in which case you would be better off with a function definition).
Example of a hash tree using lambdas:
>>> mapTree = {
... 'number': lambda x: x**x,
... 'string': lambda x: x[1:]
... }
>>> otype = 'number'
>>> mapTree[otype](2)
4
>>> otype = 'string'
>>> mapTree[otype]('foo')
'oo'
In this example I don't really want to define a name for either of those functions because I'll only use them within the hash, therefore I'll use lambdas.
A:
I do not know which book you are using, but Dive into Python has a section which I think is informative.
A:
Use of lambda is sort of a style thing. When you can get away with a very simple function, and usually where you are just storing it somewhere (in a list of functions perhaps, or in a GUI toolkit data structure, etc.) people feel lambda reduces clutter in their code.
In Python it is only possible to make a lambda that returns a single expression, and the lambda cannot span multiple lines (unless you join the multiple lines by using the backslash-at-the-end-of-a-line trick). People have requested that Python add improvements for lambda, but it hasn't happened. As I understand it, the changes needed to make lambda able to express any function would significantly complicate the parsing code in Python. And, since we already have def to define a function, the gain is not considered worth the complication. So there are some cases where you might wish to use lambda but it is not possible. In that case, you can just use a def:
object1.register_callback_function(lambda x: x.foo() > 3)

def fn(x):
    if x.foo() > 3:
        x.recalibrate()
        return x.value() > 9
    elif x.bar() > 3:
        x.do_something_else()
        return x.other_value < 0
    else:
        x.whatever()
        return True

object2.register_callback_function(fn)
del(fn)
The first callback function was simple and a lambda sufficed. For the second one, it is simply not possible to use a lambda. We achieve the same effect by using def and making a function object that is bound to the name fn, and then passing fn to register_callback_function(). Then, just to show we can, we call del() on the name fn to unbind it. Now the name fn no longer is bound with any object, but register_callback_function() still has a reference to the function object so the function object lives on.
|
What does "lambda" mean in Python, and what's the simplest way to use it?
|
Can you give an example and other examples that show when and when not to use Lambda?
My book gives me examples, but they're confusing.
|
[
"Lambda, which originated from Lambda Calculus and (AFAIK) was first implemented in Lisp, is basically an anonymous function - a function which doesn't have a name, and is used in-line, in other words you can assign an identifier to a lambda function in a single expression as such:\n>>> addTwo = lambda x: x+2\n>>> addTwo(2)\n4\n\nThis assigns addTwo to the anonymous function, which accepts 1 argument x, and in the function body it adds 2 to x, it returns the last value of the last expression in the function body so there's no return keyword.\nThe code above is roughly equivalent to:\n>>> def addTwo(x):\n... return x+2\n... \n>>> addTwo(2)\n4\n\nExcept you're not using a function definition, you're assigning an identifier to the lambda.\nThe best place to use them is when you don't really want to define a function with a name, possibly because that function will only be used one time and not numerous times, in which case you would be better off with a function definition.\nExample of a hash tree using lambdas:\n>>> mapTree = {\n... 'number': lambda x: x**x,\n... 'string': lambda x: x[1:]\n... }\n>>> otype = 'number'\n>>> mapTree[otype](2)\n4\n>>> otype = 'string'\n>>> mapTree[otype]('foo')\n'oo'\n\nIn this example I don't really want to define a name to either of those functions because I'll only use them within the hash, therefore I'll use lambdas.\n",
"I do not know which book you are using, but Dive into Python has a section which I think is informative.\n",
"Use of lambda is sort of a style thing. When you can get away with a very simple function, and usually where you are just storing it somewhere (in a list of functions perhaps, or in a GUI toolkit data structure, etc.) people feel lambda reduces clutter in their code.\nIn Python it is only possible to make a lambda that returns a single expression, and the lambda cannot span multiple lines (unless you join the multiple lines by using the backslash-at-the-end-of-a-line trick). People have requested that Python add improvements for lambda, but it hasn't happened. As I understand it, the changes to make lambda able to write any function would significantly complicate the parsing code in Python. And, since we already have def to define a function, the gain is not considered worth the complication. So there are some cases where you might wish to use lambda where it is not possible. In that case, you can just use a def:\nobject1.register_callback_function(lambda x: x.foo() > 3)\n\ndef fn(x):\n if x.foo() > 3:\n x.recalibrate()\n return x.value() > 9\n elif x.bar() > 3:\n x.do_something_else()\n return x.other_value < 0\n else:\n x.whatever()\n return True\n\nobject2.register_callback_function(fn)\ndel(fn)\n\nThe first callback function was simple and a lambda sufficed. For the second one, it is simply not possible to use a lambda. We achieve the same effect by using def and making a function object that is bound to the name fn, and then passing fn to register_callback_function(). Then, just to show we can, we call del() on the name fn to unbind it. Now the name fn no longer is bound with any object, but register_callback_function() still has a reference to the function object so the function object lives on.\n"
] |
[
40,
4,
3
] |
[] |
[] |
[
"lambda",
"python"
] |
stackoverflow_0001583617_lambda_python.txt
|
Q:
Python xpath not working?
Okay, this is starting to drive me a little bit nuts. I've tried several xml/xpath libraries for Python, and can't figure out a simple way to get a stinkin' "title" element.
The latest attempt looks like this (using Amara):
def view(req, url):
    req.content_type = 'text/plain'
    doc = amara.parse(urlopen(url))
    for node in doc.xml_xpath('//title'):
        req.write(str(node)+'\n')
But that prints out nothing. My XML looks like this: http://programanddesign.com/feed/atom/
If I try //* instead of //title it returns everything as expected. I know that the XML has titles in there, so what's the problem? Is it the namespace or something? If so, how can I fix it?
Can't seem to get it working with no prefix, but this does work:
def view(req, url):
    req.content_type = 'text/plain'
    doc = amara.parse(url, prefixes={'atom': 'http://www.w3.org/2005/Atom'})
    req.write(str(doc.xml_xpath('//atom:title')))
A:
You probably just have to take into account the namespace of the document which you're dealing with.
I'd suggest looking up how to deal with namespaces in Amara:
http://www.xml3k.org/Amara/Manual#namespaces
Edit: Using your code snippet I made some edits. I don't know what version of Amara you're using but based on the docs I tried to accommodate it as much as possible:
def view(req, url):
    req.content_type = 'text/plain'
    ns = {u'f' : u'http://www.w3.org/2005/Atom',
          u't' : u'http://purl.org/syndication/thread/1.0'}
    doc = amara.parse(urlopen(url), prefixes=ns)
    req.write(str(doc.xml_xpath(u'f:title')))
A:
It is indeed the namespaces. It was a bit tricky to find in the lxml docs, but here's how you do it:
from lxml import etree
doc = etree.parse(open('index.html'))
doc.xpath('//default:title', namespaces={'default':'http://www.w3.org/2005/Atom'})
You can also do this:
title_finder = etree.ETXPath('//{http://www.w3.org/2005/Atom}title')
title_finder(doc)
And you'll get the titles back in both cases.
|
Python xpath not working?
|
Okay, this is starting to drive me a little bit nuts. I've tried several xml/xpath libraries for Python, and can't figure out a simple way to get a stinkin' "title" element.
The latest attempt looks like this (using Amara):
def view(req, url):
    req.content_type = 'text/plain'
    doc = amara.parse(urlopen(url))
    for node in doc.xml_xpath('//title'):
        req.write(str(node)+'\n')
But that prints out nothing. My XML looks like this: http://programanddesign.com/feed/atom/
If I try //* instead of //title it returns everything as expected. I know that the XML has titles in there, so what's the problem? Is it the namespace or something? If so, how can I fix it?
Can't seem to get it working with no prefix, but this does work:
def view(req, url):
    req.content_type = 'text/plain'
    doc = amara.parse(url, prefixes={'atom': 'http://www.w3.org/2005/Atom'})
    req.write(str(doc.xml_xpath('//atom:title')))
|
[
"You probably just have to take into account the namespace of the document which you're dealing with.\nI'd suggest looking up how to deal with namespaces in Amara:\nhttp://www.xml3k.org/Amara/Manual#namespaces\nEdit: Using your code snippet I made some edits. I don't know what version of Amara you're using but based on the docs I tried to accommodate it as much as possible:\ndef view(req, url):\n req.content_type = 'text/plain'\n ns = {u'f' : u'http://www.w3.org/2005/Atom',\n u't' : u'http://purl.org/syndication/thread/1.0'}\n doc = amara.parse(urlopen(url), prefixes=ns)\n req.write(str(doc.xml_xpath(u'f:title')))\n\n",
"It is indeed the namespaces. It was a bit tricky to find in the lxml docs, but here's how you do it:\nfrom lxml import etree\ndoc = etree.parse(open('index.html'))\ndoc.xpath('//default:title', namespaces={'default':'http://www.w3.org/2005/Atom'})\n\nYou can also do this:\ntitle_finder = etree.ETXPath('//{http://www.w3.org/2005/Atom}title')\ntitle_finder(doc)\n\nAnd you'll get the titles back in both cases.\n"
] |
[
1,
1
] |
[] |
[] |
[
"amara",
"python",
"xml",
"xml_namespaces",
"xpath"
] |
stackoverflow_0001584180_amara_python_xml_xml_namespaces_xpath.txt
|
Q:
Making a C extension to Python that requires another extension
I have a couple of Python functions that I use to make game development with Pygame easier. I have them in a file called helper.py in my Python-path, so I can import them from any game I make. I thought, as an exercise to learn about Python extensions, to convert this module to C. My first problem is that I need to use functions from Pygame, and I'm not sure if this is possible. Pygame installs some header files, but they don't seem to have C versions of the Python functions. Maybe I'm missing something.
How can I solve this? As a workaround, the function currently accepts a function parameter and calls that, but it's not the ideal solution.
Using Windows XP, Python 2.6 and Pygame 1.9.1, by the way.
A:
/* get the sys.modules dictionary */
PyObject* sysmodules = PyImport_GetModuleDict();
PyObject* pygame_module;
if(PyMapping_HasKeyString(sysmodules, "pygame")) {
    pygame_module = PyMapping_GetItemString(sysmodules, "pygame");
} else {
    PyObject* initresult;
    pygame_module = PyImport_ImportModule("pygame");
    if(!pygame_module) {
        /* insert error handling here! and exit this function */
    }
    initresult = PyObject_CallMethod(pygame_module, "init", NULL);
    if(!initresult) {
        /* more error handling &c */
    }
    Py_DECREF(initresult);
}
/* use PyObject_CallMethod(pygame_module, ...) to your heart's content */
/* and lastly, when done, don't forget, before you exit, to: */
Py_DECREF(pygame_module);
A:
You can import python modules from C code and call things defined in them just like you can in python code. It is a bit long-winded, but perfectly possible.
When I want to work out how to do something like this I look at the C API documentation. The section on importing modules will help. You'll also need to read how to read attributes, call functions etc which is all in the docs.
However I suspect what you really want to do is call the underlying library, SDL, from C. This is a C library and is really easy to use from C.
Here is some sample code to import a python module in C adapted from a bit of working code
PyObject *module = 0;
PyObject *result = 0;
PyObject *module_dict = 0;
PyObject *func = 0;

module = PyImport_ImportModule((char *)"pygame"); /* new ref */
if (module == 0)
{
    PyErr_Print();
    log("Couldn't find python module pygame");
    goto out;
}
module_dict = PyModule_GetDict(module); /* borrowed */
if (module_dict == 0)
{
    PyErr_Print();
    log("Couldn't read python module pygame");
    goto out;
}
func = PyDict_GetItemString(module_dict, "pygame_function"); /* borrowed */
if (func == 0)
{
    PyErr_Print();
    log("Couldn't find pygame.pygame_function");
    goto out;
}
result = PyEval_CallObject(func, NULL); /* new ref */
if (result == 0)
{
    PyErr_Print();
    log("Couldn't run pygame.pygame_function");
    goto out;
}
/* do stuff with result */
out:;
Py_XDECREF(result);
Py_XDECREF(module);
A:
Most functions in the pygame module are just wrappers around SDL functions; that is where you have to look for C versions of its functions. pygame.h defines a series of import_pygame_*() functions. Call import_pygame_base() and the others once at initialization of your extension module to get access to the needed parts of the C API of the pygame modules (it's defined in the header file for each). Google Code Search will bring you some examples.
|
Making a C extension to Python that requires another extension
|
I have a couple of Python functions that I use to make game development with Pygame easier. I have them in a file called helper.py in my Python-path, so I can import them from any game I make. I thought, as an exercise to learn about Python extensions, to convert this module to C. My first problem is that I need to use functions from Pygame, and I'm not sure if this is possible. Pygame installs some header files, but they don't seem to have C versions of the Python functions. Maybe I'm missing something.
How can I solve this? As a workaround, the function currently accepts a function parameter and calls that, but it's not the ideal solution.
Using Windows XP, Python 2.6 and Pygame 1.9.1, by the way.
|
[
"/* get the sys.modules dictionary */\nPyObject* sysmodules PyImport_GetModuleDict();\nPyObject* pygame_module;\nif(PyMapping_HasKeyString(sysmodules, \"pygame\")) {\n pygame_module = PyMapping_GetItemString(sysmodules, \"pygame\");\n} else {\n PyObject* initresult;\n pygame_module = PyImport_ImportModule(\"pygame\");\n if(!pygame_module) {\n /* insert error handling here! and exit this function */\n }\n initresult = PyObject_CallMethod(pygame_module, \"init\", NULL);\n if(!initresult) {\n /* more error handling &c */\n }\n Py_DECREF(initresult);\n}\n/* use PyObject_CallMethod(pygame_module, ...) to your heart's contents */\n/* and lastly, when done, don't forget, before you exit, to: */\nPy_DECREF(pygame_module);\n\n",
"You can import python modules from C code and call things defined in just like you can in python code. It is a bit long winded, but perfectly possible.\nWhen I want to work out how to do something like this I look at the C API documentation. The section on importing modules will help. You'll also need to read how to read attributes, call functions etc which is all in the docs.\nHowever I suspect what you really want to do is call the underlying library sdl from C. This is a C library and is really easy to use from C.\nHere is some sample code to import a python module in C adapted from a bit of working code\nPyObject *module = 0;\nPyObject *result = 0;\nPyObject *module_dict = 0;\nPyObject *func = 0;\n\nmodule = PyImport_ImportModule((char *)\"pygame\"); /* new ref */\nif (module == 0)\n{\n PyErr_Print();\n log(\"Couldn't find python module pygame\");\n goto out;\n}\nmodule_dict = PyModule_GetDict(module); /* borrowed */\nif (module_dict == 0)\n{\n PyErr_Print();\n log(\"Couldn't find read python module pygame\");\n goto out;\n}\nfunc = PyDict_GetItemString(module_dict, \"pygame_function\"); /* borrowed */\nif (func == 0)\n{\n PyErr_Print();\n log(\"Couldn't find pygame.pygame_function\");\n goto out;\n}\nresult = PyEval_CallObject(func, NULL); /* new ref */\nif (result == 0)\n{\n PyErr_Print();\n log(\"Couldn't run pygame.pygame_function\");\n goto out;\n}\n/* do stuff with result */\nout:;\nPy_XDECREF(result);\nPy_XDECREF(module);\n\n",
"Most functions in pygame module are just wrappers around SDL functions, that is where you have to look for C version of its functions. pygame.h defines a series of import_pygame_*() functions. Call import_pygame_base() and others once at initialization of extension module to get access to needed part of C API of pygame modules (it's defined in header file for each). Google code search will bring you some examples.\n"
] |
[
6,
3,
0
] |
[] |
[] |
[
"c",
"pygame",
"python"
] |
stackoverflow_0001583077_c_pygame_python.txt
|
Q:
Unit testing a method called during initialization?
I have a class like the following:
class Positive(object):
    def __init__(self, item):
        self._validate_item(item)
        self.item = item

    def _validate_item(self, item):
        if item <= 0:
            raise ValueError("item should be positive.")
I'd like to write a unit test for _validate_item(), like the following:
class PositiveTests(unittest.TestCase):
    def test_validate_item_error(self):
        self.assertRaises(
            ValueError,
            Positive._validate_item,
            0
        )
Unfortunately, this won't work because the unit test only passes 0 to the method, instead of a class instance (for the self parameter) and the 0. Is there any solution to this other than having to test this validation method indirectly via the __init__() of the class?
A:
If you're not using self in the method's body, it's a hint that it might not need to be a class member. You can either move the _validate_item function into module scope:
def _validate_item(item):
    if item <= 0:
        raise ValueError("item should be positive.")

Or if it really has to stay in the class, mark the method as static:
class Positive(object):
    def __init__(self, item):
        self._validate_item(item)
        self.item = item

    @staticmethod
    def _validate_item(item):
        if item <= 0:
            raise ValueError("item should be positive.")
Your test should then work as written.
A:
You're not creating an instance of Positive. How about
Positive()._validate_item, 0
A:
Well, _validate_item() is tested through the constructor. Invoking it with a zero or negative value will raise the ValueError exception.
Taking a step back, that's the goal, no? The "requirement" is that the object shall not be created with a zero or negative value.
Now this is a contrived example, so the above might not be applicable to the real class; another possibility, to really have a test dedicated to the _validate_item() method, could be to create an object with a positive value and then invoke _validate_item() on it.
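A sketch of what that constructor-based test could look like (test names are illustrative):
class PositiveTests(unittest.TestCase):
    def test_init_rejects_nonpositive_item(self):
        # Positive(0) and Positive(-1) should raise in __init__
        self.assertRaises(ValueError, Positive, 0)
        self.assertRaises(ValueError, Positive, -1)

    def test_init_accepts_positive_item(self):
        self.assertEqual(Positive(5).item, 5)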
|
Unit testing a method called during initialization?
|
I have a class like the following:
class Positive(object):
    def __init__(self, item):
        self._validate_item(item)
        self.item = item

    def _validate_item(self, item):
        if item <= 0:
            raise ValueError("item should be positive.")
I'd like to write a unit test for _validate_item(), like the following:
class PositiveTests(unittest.TestCase):
    def test_validate_item_error(self):
        self.assertRaises(
            ValueError,
            Positive._validate_item,
            0
        )
Unfortunately, this won't work because the unit test only passes 0 to the method, instead of a class instance (for the self parameter) and the 0. Is there any solution to this other than having to test this validation method indirectly via the __init__() of the class?
|
[
"If you're not using self in the method's body, it's a hint that it might not need to be a class member. You can either move the _validate_item function into module scope:\ndef _validate_item(item):\n if item <= 0:\n raise ValueError(\"item should be positive.\")\n\nOr if it really has to stay in the class, the mark the method static:\nclass Positive(object):\n def __init__(self, item):\n self._validate_item(item)\n self.item = item\n\n @staticmethod\n def _validate_item(item):\n if item <= 0:\n raise ValueError(\"item should be positive.\")\n\nYour test should then work as written.\n",
"You're not creating an instance of Positive. How about\nPositive()._validate_item, 0\n\n",
"Well, _validate_item() is tested through the constructor. Invoking it with a null or negative value will raise the ValueError exception.\nTaking a step back, that's the goal no ? The \"requirement\" is that object shall not be created with a zero or negative value. \nNow this is a contrived example, so the above could not be applicable to the real class ; another possibility, to really have a test dedicated to the _validate_item() method, could be to create an object with a positive value, and then to invoke the validate_item() on it. \n"
] |
[
7,
1,
1
] |
[] |
[] |
[
"oop",
"python",
"testing",
"unit_testing"
] |
stackoverflow_0001584220_oop_python_testing_unit_testing.txt
|
Q:
Apparently my app. runs but I don't see anything
I'm learning python and Qt to create graphical desktop apps. I designed the UI with Qt Designer and converted the .ui to .py using pyuic; according to the tutorial I'm following, I should be able to run my app, but when I do, a terminal window opens and it says:
cd '/Users/andresacevedo/' && '/opt/local/bin/python2.6' '/Users/andresacevedo/aj.pyw' && echo Exit status: $? && exit 1
Exit status: 0
logout
[Process completed]
Does it mean that the app exited without errors? Then why don't I see the UI that I designed?
P.S. I'm using OS X Snow leopard
Thanks,
Edit (This is the source code of my app.)
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'principal.ui'
#
# Created: Sat Oct 17 15:07:17 2009
# by: PyQt4 UI code generator 4.6
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(379, 330)
        self.centralwidget = QtGui.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        MainWindow.setCentralWidget(self.centralwidget)
        self.menubar = QtGui.QMenuBar(MainWindow)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 379, 22))
        self.menubar.setObjectName("menubar")
        self.menuMenu_1 = QtGui.QMenu(self.menubar)
        self.menuMenu_1.setObjectName("menuMenu_1")
        MainWindow.setMenuBar(self.menubar)
        self.statusbar = QtGui.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)
        self.actionOpcion_1 = QtGui.QAction(MainWindow)
        self.actionOpcion_1.setObjectName("actionOpcion_1")
        self.menuMenu_1.addAction(self.actionOpcion_1)
        self.menubar.addAction(self.menuMenu_1.menuAction())

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        MainWindow.setWindowTitle(QtGui.QApplication.translate("MainWindow", "MainWindow", None, QtGui.QApplication.UnicodeUTF8))
        self.menuMenu_1.setTitle(QtGui.QApplication.translate("MainWindow", "Menu 1", None, QtGui.QApplication.UnicodeUTF8))
        self.actionOpcion_1.setText(QtGui.QApplication.translate("MainWindow", "Opcion 1", None, QtGui.QApplication.UnicodeUTF8))
A:
The problem is that your python code is merely defining a class, but has no main program which invokes the class or causes QT to pop up a window.
It seems a little unusual that your Ui_MainWindow class isn't actually a subclass of QMainWindow; it isn't a widget itself, but it merely configures the MainWindow which gets passed to it. But I think that can still work, with something like the (untested) code below.
import sys
from PyQt4 import Qt

# (define class Ui_MainWindow here...)

if __name__ == "__main__":
    app = Qt.QApplication(sys.argv)
    mywin = Qt.QMainWindow()
    myui = Ui_MainWindow()
    myui.setupUi(mywin)

    app.connect(app, Qt.SIGNAL("lastWindowClosed()"),
                app, Qt.SLOT("quit()"))
    mywin.show()

    app.exec_()
A:
I'm asking very newbie questions, well, because I'm a PyQt newbie... What was happening was that I called pyuic without the -x option, so the generated code only creates the UI but not the code for running it. Anyway, your help was very valuable.
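For reference, the runnable block that pyuic4 -x appends to the generated module looks roughly like this (a sketch; the exact output varies by version):
if __name__ == "__main__":
    import sys
    app = QtGui.QApplication(sys.argv)
    MainWindow = QtGui.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())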
A:
The code you get from Qt Designer is only a class with a set of widgets. It is not an application. See the PyQt manual for info on how to use the Designer code in your applications. I'd suggest reading some Qt tutorial first, writing a "hello world" application, and only then writing applications that use Designer forms.
|
Apparently my app. runs but I don't see anything
|
I'm learning python and Qt to create graphical desktop apps. I designed the UI with Qt Designer and converted the .ui to .py using pyuic; according to the tutorial I'm following, I should be able to run my app, but when I do, a terminal window opens and it says:
cd '/Users/andresacevedo/' && '/opt/local/bin/python2.6' '/Users/andresacevedo/aj.pyw' && echo Exit status: $? && exit 1
Exit status: 0
logout
[Process completed]
Does it mean that the app exited without errors? Then why don't I see the UI that I designed?
P.S. I'm using OS X Snow leopard
Thanks,
Edit (This is the source code of my app.)
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'principal.ui'
#
# Created: Sat Oct 17 15:07:17 2009
# by: PyQt4 UI code generator 4.6
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(379, 330)
        self.centralwidget = QtGui.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        MainWindow.setCentralWidget(self.centralwidget)
        self.menubar = QtGui.QMenuBar(MainWindow)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 379, 22))
        self.menubar.setObjectName("menubar")
        self.menuMenu_1 = QtGui.QMenu(self.menubar)
        self.menuMenu_1.setObjectName("menuMenu_1")
        MainWindow.setMenuBar(self.menubar)
        self.statusbar = QtGui.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)
        self.actionOpcion_1 = QtGui.QAction(MainWindow)
        self.actionOpcion_1.setObjectName("actionOpcion_1")
        self.menuMenu_1.addAction(self.actionOpcion_1)
        self.menubar.addAction(self.menuMenu_1.menuAction())

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        MainWindow.setWindowTitle(QtGui.QApplication.translate("MainWindow", "MainWindow", None, QtGui.QApplication.UnicodeUTF8))
        self.menuMenu_1.setTitle(QtGui.QApplication.translate("MainWindow", "Menu 1", None, QtGui.QApplication.UnicodeUTF8))
        self.actionOpcion_1.setText(QtGui.QApplication.translate("MainWindow", "Opcion 1", None, QtGui.QApplication.UnicodeUTF8))
|
[
"The problem is that your python code is merely defining a class, but has no main program which invokes the class or causes QT to pop up a window. \nIt seems a little unusual that your Ui_MainWindow class isn't actually a subclass of QMainWindow; it isn't a widget itself, but it merely configures the MainWindow which gets passed to it. But I think that can still work, with something like the (untested) code below.\nimport sys\nfrom PyQt4 import Qt\n\n# (define class Ui_MainWindow here...)\n\nif __name__==\"__main__\":\n\n app=Qt.QApplication(sys.argv)\n mywin = Qt.QMainWindow()\n myui = Ui_MainWindow(mywin)\n myui.setupUI(mywin)\n\n app.connect(app, Qt.SIGNAL(\"lastWindowClosed()\"),\n app, Qt.SLOT(\"quit()\"))\n mywin.show()\n\n app.exec_()\n\n",
"I'm doing very newbie questions, well because I'm a pyqt newbie... what was happening was that I called pyuic without the -x attribute, so the code just creates the UI but not the code for running it, anyway your help was very valuable.\n",
"The code you get from Qt Designer is only a class with set of widgets. It is not an application. See the PyQt manual for info how to use the Designer code in your applications. I'd suggest to read some Qt tutorial first, write some \"hello world\" application first, and only then write applications that use Designer forms.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"macos",
"pyqt",
"python"
] |
stackoverflow_0001583437_macos_pyqt_python.txt
|
Q:
How to concatenate multiple Python source files into a single file?
(Assume that: application start-up time is absolutely critical; my application is started a lot; my application runs in an environment in which importing is slower than usual; many files need to be imported; and compilation to .pyc files is not available.)
I would like to concatenate all the Python source files that define a collection of modules into a single new Python source file.
I would like the result of importing the new file to be as if I imported one of the original files (which would then import some more of the original files, and so on).
Is this possible?
Here is a rough, manual simulation of what a tool might produce when fed the source files for modules 'bar' and 'baz'. You would run such a tool prior to deploying the code.
__file__ = 'foo.py'

def _module(_name):
    import sys, types
    mod = types.ModuleType(_name)
    mod.__file__ = __file__
    sys.modules[_name] = mod
    return mod

def _bar_module():
    def hello():
        print 'Hello World! BAR'
    mod = _module('foo.bar')
    mod.hello = hello
    return mod
bar = _bar_module()
del _bar_module

def _baz_module():
    def hello():
        print 'Hello World! BAZ'
    mod = _module('foo.bar.baz')
    mod.hello = hello
    return mod
baz = _baz_module()
del _baz_module
And now you can:
from foo.bar import hello
hello()
This code doesn't take account of things like import statements and dependencies. Is there any existing code that will assemble source files using this, or some other technique?
This is a very similar idea to the tools used to assemble and optimise JavaScript files before sending them to the browser, where the latency of multiple HTTP requests hurts performance. In this Python case, it's the latency of importing hundreds of Python source files at startup which hurts.
A:
If this is on google app engine as the tags indicate, make sure you are using this idiom
def main():
    #do stuff

if __name__ == '__main__':
    main()
Because GAE doesn't restart your app every request unless the .py has changed, it just runs main() again.
This trick lets you write CGI style apps without the startup performance hit
AppCaching
If a handler script provides a main()
routine, the runtime environment also
caches the script. Otherwise, the
handler script is loaded for every
request.
A:
I think that due to the precompilation of Python files and some system caching, the speed up that you'll eventually get won't be measurable.
A:
Doing this is unlikely to yield any performance benefits. You're still importing the same amount of Python code, just in fewer modules - and you're sacrificing all modularity for it.
A better approach would be to modify your code and/or libraries to only import things when needed, so that a minimum of required code is loaded for each request.
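For example, a heavyweight import can be deferred into the one handler that needs it (the module and function names here are hypothetical):
def report_handler(request):
    # loaded on first use instead of at application startup
    import heavy_reporting_lib
    return heavy_reporting_lib.render(request)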
A:
Without dealing with the question of whether or not this technique would speed things up in your environment (say you are right), here is what I would have done.
I would make a list of all my modules e.g.
my_files = ['foo', 'bar', 'baz']
I would then use os.path utilities to read all lines in all files under the source directory and write them all into a new file, filtering out all import foo|bar|baz lines since all the code is now within a single file.
Of course, finally add the main() from __init__.py (if there is one) at the tail of the file.
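A rough sketch of that approach, assuming each module lives in <name>.py under the source directory (file names are illustrative):
import re

my_files = ['foo', 'bar', 'baz']
# matches 'import foo', 'from bar import ...', etc. for our own modules only
internal = re.compile(r'^\s*(?:import|from)\s+(?:%s)\b' % '|'.join(my_files))

out = open('combined.py', 'w')
for name in my_files:
    for line in open(name + '.py'):
        if not internal.match(line):
            out.write(line)
out.close()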
|
How to concatenate multiple Python source files into a single file?
|
(Assume that: application start-up time is absolutely critical; my application is started a lot; my application runs in an environment in which importing is slower than usual; many files need to be imported; and compilation to .pyc files is not available.)
I would like to concatenate all the Python source files that define a collection of modules into a single new Python source file.
I would like the result of importing the new file to be as if I imported one of the original files (which would then import some more of the original files, and so on).
Is this possible?
Here is a rough, manual simulation of what a tool might produce when fed the source files for modules 'bar' and 'baz'. You would run such a tool prior to deploying the code.
__file__ = 'foo.py'

def _module(_name):
    import sys, types
    mod = types.ModuleType(_name)
    mod.__file__ = __file__
    sys.modules[_name] = mod
    return mod

def _bar_module():
    def hello():
        print 'Hello World! BAR'
    mod = _module('foo.bar')
    mod.hello = hello
    return mod
bar = _bar_module()
del _bar_module

def _baz_module():
    def hello():
        print 'Hello World! BAZ'
    mod = _module('foo.bar.baz')
    mod.hello = hello
    return mod
baz = _baz_module()
del _baz_module
And now you can:
from foo.bar import hello
hello()
This code doesn't take account of things like import statements and dependencies. Is there any existing code that will assemble source files using this, or some other technique?
This is a very similar idea to the tools used to assemble and optimise JavaScript files before sending them to the browser, where the latency of multiple HTTP requests hurts performance. In this Python case, it's the latency of importing hundreds of Python source files at startup which hurts.
|
[
"If this is on google app engine as the tags indicate, make sure you are using this idiom\ndef main(): \n #do stuff\nif __name__ == '__main__':\n main()\n\nBecause GAE doesn't restart your app every request unless the .py has changed, it just runs main() again.\nThis trick lets you write CGI style apps without the startup performance hit\nAppCaching\n\nIf a handler script provides a main()\n routine, the runtime environment also\n caches the script. Otherwise, the\n handler script is loaded for every\n request.\n\n",
"I think that due to the precompilation of Python files and some system caching, the speed up that you'll eventually get won't be measurable.\n",
"Doing this is unlikely to yield any performance benefits. You're still importing the same amount of Python code, just in fewer modules - and you're sacrificing all modularity for it.\nA better approach would be to modify your code and/or libraries to only import things when needed, so that a minimum of required code is loaded for each request.\n",
"Without dealing with the question, whether or not this technique would boost up things at your environment, say you are right, here is what I would have done.\nI would make a list of all my modules e.g.\nmy_files = ['foo', 'bar', 'baz']\nI would then use os.path utilities to read all lines in all files under the source directory and writes them all into a new file, filtering all import foo|bar|baz lines since all code is now within a single file.\nOf curse, at last adding the main() from __init__.py (if there is such) at the tail of the file.\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"concatenation",
"google_app_engine",
"import",
"module",
"python"
] |
stackoverflow_0001580746_concatenation_google_app_engine_import_module_python.txt
|
Q:
Is there a standalone Python type conversion library?
Are there any standalone type conversion libraries?
I have a data storage system that only understands bytes/strings, but I can tag metadata such as the type to be converted to.
I could hack up some naive system of type converters, as every other application has done before me, or I could hopefully use a standalone library, except I can't find one. Odd for such a common activity.
Just to clarify, I will have something like:
('123', 'integer') and I want to get out 123
A:
You've got two options: either use the struct or pickle modules.
With struct you specify a format and it compacts your data to a byte array. This is useful for working with C structures or writing networked apps that require a binary protocol.
pickle can automatically serialise and deserialise complex Python structures to a string. There are some caveats, so it's best to read the documentation. I think this is most likely the library you want.
>>> import pickle
>>> v = pickle.dumps(123)
>>> v
'I123\n.'
>>> pickle.loads(v)
123
>>> v = pickle.dumps({"abc": 123})
>>> v
"(dp0\nS'abc'\np1\nI123\ns."
>>> pickle.loads(v)
{'abc': 123}
A:
Consider this.
import datetime

def toDate( someString ):
    return datetime.datetime.strptime( someString, "%x" ).date()

typeConversionMapping = { 'integer': int, 'string': str, 'float': float, 'date': toDate }

def typeConversionFunction( typeConversionTuple ):
    theStringRepresentation, theTypeName = typeConversionTuple
    return typeConversionMapping[theTypeName](theStringRepresentation)
Is that a good enough standalone library for such a common activity? Would that be enough of a well-tested, error-resilient library? Or is there something more that's required?
If you need more or different date/time conversions, you simply add new toDate functions with different formats.
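Usage would then look like this (the date string depends on the locale's %x format, so that example is illustrative):
>>> typeConversionFunction(('123', 'integer'))
123
>>> typeConversionFunction(('1.5', 'float'))
1.5
>>> typeConversionFunction(('10/17/09', 'date'))   # with a US-style locale
datetime.date(2009, 10, 17)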
A:
Flatland does this well. http://discorporate.us/projects/flatland/
|
Is there a standalone Python type conversion library?
|
Are there any standalone type conversion libraries?
I have a data storage system that only understands bytes/strings, but I can tag metadata such as the type to be converted to.
I could hack up some naive system of type converters, as every other application has done before me, or I could hopefully use a standalone library, except I can't find one. Odd for such a common activity.
Just to clarify, I will have something like:
('123', 'integer') and I want to get out 123
|
[
"You've got two options, either use the struct or pickle modules.\nWith struct you specify a format and it compacts your data to byte array. This is useful for working with C structures or writing to networked apps that require are binary protocol.\npickle can automatically serialise and deserialise complex Python structures to a string. There are some caveats so it's best read the documentation. I think this is the most likely the library you want.\n\n>>> import pickle\n>>> v = pickle.dumps(123)\n>>> v\n'I123\\n.'\n>>> pickle.loads(v)\n123\n>>> v = pickle.dumps({\"abc\": 123})\n>>> v\n\"(dp0\\nS'abc'\\np1\\nI123\\ns.\"\n>>> pickle.loads(v)\n{'abc': 123}\n\n",
"Consider this.\nimport datetime\n\ndef toDate( someString ):\n return datetime.datetime.strptime( someString, \"%x\" ).date()\n\ntypeConversionMapping = { 'integer': int, 'string': str, 'float': float, 'date': toDate }\ndef typeConversionFunction( typeConversionTuple ):\n theStringRepresentation, theTypeName = typeConversionTuple\n return typeConversionMapping[theTypeName](theStringRepresentation)\n\nIs that a good enough standalone library for such a common activity? Would that be enough of a well-tested, error-resilient library? Or is there something more that's required? \nIf you need more or different date/time conversions, you simply add new toDate functions with different formats.\n",
"Flatland does this well. http://discorporate.us/projects/flatland/\n"
] |
[
3,
3,
1
] |
[] |
[] |
[
"python",
"type_conversion"
] |
stackoverflow_0000468639_python_type_conversion.txt
|
Q:
web2py - require selected dropdown values validate from db
I have a table member that includes SQLField("year", db.All_years)
and an All_years table defined as follows:
db.define_table("All_years",
SQLField("fromY","integer"),
SQLField("toY","integer")
)
and the constraints are:
db.member.year.requires = IS_IN_DB(db, 'All_years.id','All_years.fromY')
The problem is that when I select a year from the dropdown, the value stored in the year column is the id of the year, not the year value itself; e.g., if the year 2009 has db id=1, the stored value is 1, not 2009.
I don't understand why.
A:
I see your project is progressing well!
The validator is IS_IN_DB(dbset, field, label). So you should try:
db.member.year.requires = IS_IN_DB(db, 'All_years.id', '%(fromY)d')
to have a correct label in your drop-down list.
Now from your table it looks like you would rather choose an interval than just the beginning year; in that case you can use this:
db.member.year.requires = IS_IN_DB(db, 'All_years.id', '%(fromY)d to %(toY)d')
that will display, for example, "1980 to 1985", and so on.
|
web2py - require selected dropdown values validate from db
|
I have a table member that includes SQLField("year", db.All_years)
and an All_years table defined as follows:
db.define_table("All_years",
SQLField("fromY","integer"),
SQLField("toY","integer")
)
and the constraints are:
db.member.year.requires = IS_IN_DB(db, 'All_years.id','All_years.fromY')
The problem is that when I select a year from the dropdown, the value stored in the year column is the id of the year, not the year value itself; e.g., if the year 2009 has db id=1, the stored value is 1, not 2009.
I don't understand why.
|
[
"I see your project is progressing well!\nThe validator is IS_IN_DB(dbset, field, label). So you should try:\ndb.member.year.requires = IS_IN_DB(db, 'All_years.id', '%(fromY)d')\n\nto have a correct label in your drop-down list.\nNow from your table it looks like you would rather choose an interval rather than just the beginning year, in that case you can use this:\ndb.member.year.requires = IS_IN_DB(db, 'All_years.id', '%(fromY)d to %(toY)d')\n\nthat will display, for example, \"1980 to 1985\", and so on.\n"
] |
[
2
] |
[] |
[] |
[
"python",
"web2py"
] |
stackoverflow_0001584909_python_web2py.txt
|
Q:
Grep multi-layered iterable for strings that match (Python)
Say that we have a multilayered iterable with some strings at the "final" level (yes, strings are iterable themselves, but I think you get my meaning):
['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch')),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
...
]
Is there any convenient way that I could implement a search that would give me the indices of the matching strings? I would like something that would act something like this (where the above list is data):
>>> grep('javascript', data)
and it would return [ (2,1,1), (2,2,0), (2,2,1), (2,2,2) ] perhaps. Maybe I'm missing a comparable solution that returns nothing of the sort but can help me find some strings within a multi-layered list of iterables of iterables of .... strings.
I wrote a little bit, but it seemed juvenile and inelegant, so I thought I would ask here. I guess I could just keep nesting the exception handling the way I started here, up to the number of levels the function would then support, but I was hoping for something neat, abstract, Pythonic.
import re
def rgrep(s, data):
''' given an iterable of strings or an iterable of iterables of strings,
returns the index/indices of strings that contain the search string.
Args::
s - the string that you are searching for
data - the iterable of strings or iterable of iterables of strings
'''
results = []
expr = re.compile(s)
for item in data:
try:
match = expr.search(item)
if match != None:
results.append( data.index(item) )
except TypeError:
for t in item:
try:
m = expr.search(t)
if m != None:
results.append( (list.index(item), item.index(t)) )
except TypeError:
''' you can only go 2 deep! '''
pass
return results
A:
I'd split recursive enumeration from grepping:
def enumerate_recursive(iter, base=()):
for index, item in enumerate(iter):
if isinstance(item, basestring):
yield (base + (index,)), item
else:
for pair in enumerate_recursive(item, (base + (index,))):
yield pair
def grep_index(filt, iter):
return (index for index, text in iter if filt in text)
This way you can do both non-recursive and recursive grepping:
l = list(grep_index('opt1', enumerate(sys.argv))) # non-recursive
r = list(grep_index('diff', enumerate_recursive(your_data))) # recursive
Also note that we're using iterators here, saving RAM for longer sequences if necessary.
An even more generic solution would be to give a callable instead of a string to grep_index. But that might not be necessary for you.
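A rough sketch of that callable variant (grep_index_pred is a hypothetical name, reusing enumerate_recursive and the question's data list):
import re

def grep_index_pred(pred, pairs):
    # pred is any callable taking the text and returning truthiness
    return (index for index, text in pairs if pred(text))

# e.g. regex matching instead of plain substring containment
hits = list(grep_index_pred(re.compile('java').search,
                            enumerate_recursive(data)))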
A:
Here is a grep that uses recursion to search the data structure.
Note that good data structures lead the way to elegant solutions.
Bad data structures make you bend over backwards to accommodate.
This feels to me like one of those cases where a bad data structure is obstructing
rather than helping you.
Having a simple data structure with a more uniform structure
(instead of using this grep) might be worth investigating.
#!/usr/bin/env python
data=['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch',['find','java deep','down'])),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
]
def grep(astr,data,prefix=[]):
result=[]
for idx,elt in enumerate(data):
if isinstance(elt,basestring):
if astr in elt:
result.append(tuple(prefix+[idx]))
else:
result.extend(grep(astr,elt,prefix+[idx]))
return result
def pick(data,idx):
if idx:
return pick(data[idx[0]],idx[1:])
else:
return data
idxs=grep('java',data)
print(idxs)
for idx in idxs:
print('data[%s] = %s'%(idx,pick(data,idx)))
A:
To get the position use enumerate()
>>> data = [('foo', 'bar', 'frrr', 'baz'), ('foo/bar', 'baz/foo')]
>>>
>>> for l1, v1 in enumerate(data):
... for l2, v2 in enumerate(v1):
... if 'f' in v2:
... print l1, l2, v2
...
0 0 foo
1 0 foo/bar
1 1 baz/foo
In this example I am using a simple match ('foo' in bar), though you would probably use a regex for the job.
Obviously, enumerate() can provide support in more than 2 levels as in your edited post.
|
Grep multi-layered iterable for strings that match (Python)
|
Say that we have a multilayered iterable with some strings at the "final" level (yes, strings are iterable themselves, but I think you get my meaning):
['something',
('Diff',
('diff', 'udiff'),
('*.diff', '*.patch'),
('text/x-diff', 'text/x-patch')),
('Delphi',
('delphi', 'pas', 'pascal', 'objectpascal'),
('*.pas',),
('text/x-pascal',['lets', 'put one here'], )),
('JavaScript+Mako',
('js+mako', 'javascript+mako'),
('application/x-javascript+mako',
'text/x-javascript+mako',
'text/javascript+mako')),
...
]
Is there any convenient way that I could implement a search that would give me the indices of the matching strings? I would like something that would act something like this (where the above list is data):
>>> grep('javascript', data)
and it would return [ (2,1,1), (2,2,0), (2,2,1), (2,2,2) ] perhaps. Maybe I'm missing a comparable solution that returns nothing of the sort but can help me find some strings within a multi-layered list of iterables of iterables of .... strings.
I wrote a little bit, but it seemed juvenile and inelegant, so I thought I would ask here. I guess I could just keep nesting the exception handling the way I started here, up to the number of levels the function would then support, but I was hoping for something neat, abstract, Pythonic.
import re
def rgrep(s, data):
''' given an iterable of strings or an iterable of iterables of strings,
returns the index/indices of strings that contain the search string.
Args::
s - the string that you are searching for
data - the iterable of strings or iterable of iterables of strings
'''
results = []
expr = re.compile(s)
for item in data:
try:
match = expr.search(item)
if match != None:
results.append( data.index(item) )
except TypeError:
for t in item:
try:
m = expr.search(t)
if m != None:
results.append( (list.index(item), item.index(t)) )
except TypeError:
''' you can only go 2 deep! '''
pass
return results
|
[
"I'd split recursive enumeration from grepping:\ndef enumerate_recursive(iter, base=()):\n for index, item in enumerate(iter):\n if isinstance(item, basestring):\n yield (base + (index,)), item\n else:\n for pair in enumerate_recursive(item, (base + (index,))):\n yield pair\n\ndef grep_index(filt, iter):\n return (index for index, text in iter if filt in text)\n\nThis way you can do both non-recursive and recursive grepping:\nl = list(grep_index('opt1', enumerate(sys.argv))) # non-recursive\nr = list(grep_index('diff', enumerate_recursive(your_data))) # recursive\n\nAlso note that we're using iterators here, saving RAM for longer sequences if necessary.\nEven more generic solution would be to give a callable instead of string to grep_index. But that might not be necessary for you.\n",
"Here is a grep that uses recursion to search the data structure. \nNote that good data structures lead the way to elegant solutions. \nBad data structures make you bend over backwards to accomodate. \nThis feels to me like one of those cases where a bad data structure is obstructing \nrather than helping you.\nHaving a simple data structure with a more uniform structure \n(instead of using this grep) might be worth investigating.\n#!/usr/bin/env python\n\ndata=['something', \n('Diff',\n('diff', 'udiff'),\n('*.diff', '*.patch'),\n('text/x-diff', 'text/x-patch',['find','java deep','down'])),\n\n('Delphi',\n('delphi', 'pas', 'pascal', 'objectpascal'),\n('*.pas',),\n('text/x-pascal',['lets', 'put one here'], )),\n\n('JavaScript+Mako',\n('js+mako', 'javascript+mako'),\n('application/x-javascript+mako',\n'text/x-javascript+mako',\n'text/javascript+mako')),\n]\n\ndef grep(astr,data,prefix=[]):\n result=[]\n for idx,elt in enumerate(data):\n if isinstance(elt,basestring):\n if astr in elt:\n result.append(tuple(prefix+[idx]))\n else:\n result.extend(grep(astr,elt,prefix+[idx]))\n return result\n\ndef pick(data,idx):\n if idx:\n return pick(data[idx[0]],idx[1:])\n else:\n return data\nidxs=grep('java',data)\nprint(idxs)\nfor idx in idxs:\n print('data[%s] = %s'%(idx,pick(data,idx)))\n\n",
"To get the position use enumerate()\n>>> data = [('foo', 'bar', 'frrr', 'baz'), ('foo/bar', 'baz/foo')]\n>>> \n>>> for l1, v1 in enumerate(data):\n... for l2, v2 in enumerate(v1):\n... if 'f' in v2:\n... print l1, l2, v2\n... \n0 0 foo\n1 0 foo/bar\n1 1 baz/foo\n\nIn this example I am using a simple match 'foo' in bar yet you probably use regex for the job.\nObviously, enumerate() can provide support in more than 2 levels as in your edited post.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"data_structures",
"python",
"regex",
"search",
"string"
] |
stackoverflow_0001584864_data_structures_python_regex_search_string.txt
|
Q:
Python optional parameters
I just started Python recently and got confused by optional parameters. Say I have a program like this:
class B:
pass
class A:
def __init__(self, builds = B()):
self.builds = builds
If I create A twice
b = A()
c = A()
and print their builds
print b.builds
print c.builds
I found they are using exactly the same object:
<__main__.B instance at 0x68ee0>
<__main__.B instance at 0x68ee0>
But this is not what I want, since if b changes some internal state of builds, the one in the c object will also be changed.
Is it possible to recreate this optional parameter each time while still using the optional-parameter syntax?
A:
You need to understand how default values work in order to use them effectively.
Functions are objects. As such, they have attributes. So, if I create this function:
>>> def f(x, y=[]):
y.append(x)
return y
I've created an object. Here are its attributes:
>>> dir(f)
['__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__',
'__dict__', '__doc__', '__format__', '__get__', '__getattribute__', '__globals__',
'__hash__', '__init__', '__module__', '__name__', '__new__', '__reduce__',
'__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__',
'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals',
'func_name']
One of them is func_defaults. That sounds promising, what's in there?
>>> f.func_defaults
([],)
That's a tuple that contains the function's default values. If a default value is an object, the tuple contains an instance of that object.
This leads to some fairly counterintuitive behavior if you're thinking that f adds an item to a list, returning a list containing only that item if no list is provided:
>>> f(1)
[1]
>>> f(2)
[1, 2]
But if you know that the default value is an object instance that's stored in one of the function's attributes, it's much less counterintuitive:
>>> x = f(3)
>>> y = f(4)
>>> x == y
True
>>> x
[1, 2, 3, 4]
>>> x.append(5)
>>> f(6)
[1, 2, 3, 4, 5, 6]
Knowing this, it's clear that if you want a default value of a function's parameter to be a new list (or any new object), you can't simply stash an instance of the object in func_defaults. You have to create a new one every time the function is called:
>>>def g(x, y=None):
if y==None:
y = []
y.append(x)
return y
A:
you need to do the following:
class A:
def __init__(self, builds=None):
if builds is None:
builds = B()
self.builds = builds
It's a very widespread error, using mutable objects as default arguments; there are probably plenty of dupes on SO.
A:
Yes; default parameters are evaluated only at the time when the function is defined.
One possible solution would be to have the parameter be a class rather than an instance, a la
def foo(blah, klass = B):
b = klass()
# etc
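To see the difference concretely, a small sketch of the is-None idiom applied to the question's classes:
class B:
    pass

class A:
    def __init__(self, builds=None):
        # a fresh B is created per instance unless one is supplied
        self.builds = builds if builds is not None else B()

b = A()
c = A()
print b.builds is c.builds  # False - each A now gets its own B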
|
Python optional parameters
|
I just started Python recently and got confused by optional parameters. Say I have a program like this:
class B:
pass
class A:
def __init__(self, builds = B()):
self.builds = builds
If I create A twice
b = A()
c = A()
and print their builds
print b.builds
print c.builds
I found they are using exactly the same object:
<__main__.B instance at 0x68ee0>
<__main__.B instance at 0x68ee0>
But this is not what I want, since if b changes some internal state of builds, the one in the c object will also be changed.
Is it possible to recreate this optional parameter each time while still using the optional-parameter syntax?
|
[
"You need to understand how default values work in order to use them effectively.\nFunctions are objects. As such, they have attributes. So, if I create this function:\n>>> def f(x, y=[]):\n y.append(x)\n return y\n\nI've created an object. Here are its attributes:\n>>> dir(f)\n['__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', \n'__dict__', '__doc__', '__format__', '__get__', '__getattribute__', '__globals__', \n'__hash__', '__init__', '__module__', '__name__', '__new__', '__reduce__', \n'__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', \n'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals', \n'func_name']\n\nOne of them is func_defaults. That sounds promising, what's in there?\n>>> f.func_defaults\n([],)\n\nThat's a tuple that contains the function's default values. If a default value is an object, the tuple contains an instance of that object. \nThis leads to some fairly counterintuitive behavior if you're thinking that f adds an item to a list, returning a list containing only that item if no list is provided:\n>>> f(1)\n[1]\n>>> f(2)\n[1, 2]\n\nBut if you know that the default value is an object instance that's stored in one of the function's attributes, it's much less counterintuitive:\n>>> x = f(3)\n>>> y = f(4)\n>>> x == y\nTrue\n>>> x\n[1, 2, 3, 4]\n>>> x.append(5)\n>>> f(6)\n[1, 2, 3, 4, 5, 6]\n\nKnowing this, it's clear that if you want a default value of a function's parameter to be a new list (or any new object), you can't simply stash an instance of the object in func_defaults. You have to create a new one every time the function is called:\n>>>def g(x, y=None):\n if y==None:\n y = []\n y.append(x)\n return y\n\n",
"you need to do the following:\nclass A:\n def __init__(self, builds=None):\n if builds is None:\n builds = B()\n self.builds = builds\n\nit's a very wide-spread error, using mutable parameters as a default arguments. there are plenty of dupes probably on SO.\n",
"Yes; default parameters are evaluated only at the time when the function is defined.\nOne possible solution would be to have the parameter be a class rather than an instance, a la\ndef foo(blah, klass = B):\n b = klass()\n # etc\n\n"
] |
[
47,
15,
6
] |
[] |
[] |
[
"optional_arguments",
"python"
] |
stackoverflow_0001585247_optional_arguments_python.txt
|
Q:
django Unicode GET Parameter Values
I'm trying to get a GET parameter value that looks like this:
http://someurl/handler.json?&q=%E1%F8%E0%F1%F8%E9
The q parameter in this case is Hebrew.
I'm trying to read the value using the following code:
request.GET.get("q", None)
I'm getting gibberish instead of the correct text.
Any idea what's wrong here? Am I missing some setting?
A:
The query string is in ISO-8859-8, but Django's default encoding is UTF-8. You will have to change either DEFAULT_CHARSET or HttpRequest.encoding to ISO-8859-8 to get the correct Unicode data.
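A minimal sketch of the second option - setting the encoding on the request before request.GET is first accessed:
from django.http import HttpResponse

def handler(request):
    # tell Django the query string is ISO-8859-8 before it parses it
    request.encoding = 'iso-8859-8'
    q = request.GET.get('q', None)
    return HttpResponse(q)  # q is now proper Unicode text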
|
django Unicode GET Parameter Values
|
I'm trying to get a GET parameter value that looks like this:
http://someurl/handler.json?&q=%E1%F8%E0%F1%F8%E9
The q parameter in this case is Hebrew.
I'm trying to read the value using the following code:
request.GET.get("q", None)
I'm getting gibberish instead of the correct text.
Any idea what's wrong here? Am I missing some setting?
|
[
"The query string is in ISO-8859-8, but Django's default encoding is UTF-8. You will have to change either DEFAULT_CHARSET or HttpRequest.encoding to ISO-8859-8 to get the correct Unicode data.\n"
] |
[
3
] |
[] |
[] |
[
"django",
"python",
"unicode"
] |
stackoverflow_0001585439_django_python_unicode.txt
|
Q:
sending email from wave robot
Anyone know how to send an email using google wave python api?
Thanks
A:
Being a wave robot is nothing special here - you've got to determine at what point you want to send email, but you haven't told us anything about that, so it's hard to advise you.
When you've worked out what you want to send, just follow the normal instructions for sending email from Python in AppEngine.
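For reference, a minimal sketch of sending mail from App Engine once you have decided on the trigger (the sender address is hypothetical and must be one authorized for your app):
from google.appengine.api import mail

def notify(recipient, text):
    mail.send_mail(sender='robot@example.com',
                   to=recipient,
                   subject='Update from the wave robot',
                   body=text)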
|
sending email from wave robot
|
Anyone know how to send an email using google wave python api?
Thanks
|
[
"Being a wave robot is nothing special here - you've got to determine at what point you want to send email, but you haven't told us anything about that, so it's hard to advise you.\nWhen you've worked out what you want to send, just follow the normal instructions for sending email from Python in AppEngine.\n"
] |
[
5
] |
[] |
[] |
[
"api",
"google_app_engine",
"google_wave",
"python"
] |
stackoverflow_0001585487_api_google_app_engine_google_wave_python.txt
|
Q:
Is the Python GIL really per interpreter?
I often see people talking that the GIL is per Python Interpreter (even here on stackoverflow).
But from what I see in the source code, it seems that the GIL is a global variable, and therefore there is one GIL for all interpreters in each Python process. I know they did this because there is no interpreter object passed around like Lua or Tcl have; it was just not designed well in the beginning. And thread-local storage apparently was not portable enough for the Python developers to use.
Is this correct? I had a short look at the 2.4 version I'm using in a project here.
Has this changed in later versions, especially in 3.0?
A:
The GIL is indeed per-process, not per-interpreter. This is unchanged in 3.x.
A:
Perhaps the confusion comes about because most people assume Python has one interpreter per process. I recall reading that the support for multiple interpreters via the C API was largely untested and hardly ever used. (And when I gave it a go, it didn't work properly.)
A:
I believe it is true (at least as of Python 2.6) that each process may have at most one CPython interpreter embedded (other runtimes may have different constraints). I'm not sure if this is an issue with the GIL per se, but it is likely due to global state, or to protect from conflicting global state in third-party C modules. From the CPython API Docs:
[Py_Initialize()] is a no-op when called for a second time (without calling Py_Finalize() first). There is no return value; it is a fatal error if the initialization fails.
You might be interested in the Unladen Swallow project, which aims eventually to remove the GIL entirely from CPython. Other Python runtimes don't have the GIL at all, like (I believe) Stackless Python, and certainly Jython.
Also note that the GIL is still present in CPython 3.x.
|
Is the Python GIL really per interpreter?
|
I often see people talking that the GIL is per Python Interpreter (even here on stackoverflow).
But from what I see in the source code, it seems that the GIL is a global variable, and therefore there is one GIL for all interpreters in each Python process. I know they did this because there is no interpreter object passed around like Lua or Tcl have; it was just not designed well in the beginning. And thread-local storage apparently was not portable enough for the Python developers to use.
Is this correct? I had a short look at the 2.4 version I'm using in a project here.
Has this changed in later versions, especially in 3.0?
|
[
"The GIL is indeed per-process, not per-interpreter. This is unchanged in 3.x.\n",
"Perhaps the confusion comes about because most people assume Python has one interpreter per process. I recall reading that the support for multiple interpreters via the C API was largely untested and hardly ever used. (And when I gave it a go, didn't work properly.)\n",
"I believe it is true (at least as of Python 2.6) that each process may have at most one CPython interpreter embedded (other runtimes may have different constraints). I'm not sure if this is an issue with the GIL per se, but it is likely due to global state, or to protect from conflicting global state in third-party C modules. From the CPython API Docs:\n\n[Py___Initialize()] is a no-op when called for a second time (without calling Py_Finalize() first). There is no return value; it is a fatal error if the initialization fails.\n\nYou might be interested in the Unladen Swallow project, which aims eventually to remove the GIL entirely from CPython. Other Python runtimes don't have the GIL at all, like (I believe) Stackless Python, and certainly Jython.\nAlso note that the GIL is still present in CPython 3.x.\n"
] |
[
12,
3,
0
] |
[] |
[] |
[
"gil",
"multithreading",
"python"
] |
stackoverflow_0001585181_gil_multithreading_python.txt
|
Q:
Better way to do string filtering/manipulation
mystring = '14| "Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2"|minor'
So I have 3 elements (id, message and level) divided by pipe ("|"). I want to get each element so I have written these little functions:
def get_msg(i):
x = i.split("|")
return x[1].strip().replace('"','')
def get_level(i):
x = i.split("|")
return x[2].strip()
#testing
print get_msg(mystring )   # Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2
print get_level(mystring ) # minor
Right now it works well, but I feel like this is not the Pythonic way to solve it. How could these 2 functions be improved? A regular expression feels like a fit here, but I'm very naive at regexes so I couldn't apply one.
A:
I think the most pythonic way is to use the csv module.
From PyMotW with delimiter option:
import csv
import sys
f = open(sys.argv[1], 'rt')
try:
reader = csv.reader(f, delimiter='|')
for row in reader:
print row
finally:
f.close()
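Applied to the question's string rather than a file, a sketch (StringIO wraps the string as a file-like object; skipinitialspace lets csv treat the quoted message as one field):
import csv
from StringIO import StringIO

reader = csv.reader(StringIO(mystring), delimiter='|',
                    skipinitialspace=True)
m_id, msg, level = next(reader)
print msg    # quotes already stripped by the csv module
print level  # minor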
A:
lst = msg.split('|')
level = lst[2].strip()
message = lst[1].strip(' "')
You're splitting your string twice, which is a bit of a waste; other than that, the modification is minor.
A:
I think the best practice would be to actually have a better-formatted string, or not use a string for that at all. Why is it a string? Where are you parsing this from? A database? XML? Can the origin be altered?
{ 'id': 14, 'message': 'foo', 'type': 'minor' }
A datatype like this I think would be a best practice, if it's stored in a database then split it up in multiple columns.
Edit: I'm probably going to get stoned for this because it's probably overkill/inefficient but if you add lots of sections later on you could store these in a nice hash map:
>>> formatParts = {
... 'id': lambda x: x[0],
... 'message': lambda x: x[1].strip(' "'),
... 'level': lambda x: x[2].strip()
... }
>>> myList = mystring.split('|')
>>> formatParts['id'](myList)
'14'
>>> formatParts['message'](myList)
'Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2'
>>> formatParts['level'](myList)
'minor'
A:
class MyParser(object):
def __init__(self, value):
self.lst = value.split('|')
def id(self):
return self.lst[0]
def level(self):
return self.lst[2].strip()
def message(self):
return self.lst[1].strip(' "')
A:
If you don't need the getter functions, this should work nicely:
>>> m_id,msg,lvl = [s.strip(' "') for s in mystring.split('|')]
>>> m_id,msg,lvl
('14', 'Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2',
'minor')
Note: avoid shadowing built-in function 'id'
|
Better way to do string filtering/manipulation
|
mystring = '14| "Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2"|minor'
So I have 3 elements (id, message and level) divided by pipe ("|"). I want to get each element so I have written these little functions:
def get_msg(i):
x = i.split("|")
return x[1].strip().replace('"','')
def get_level(i):
x = i.split("|")
return x[2].strip()
#testing
print get_msg(mystring )   # Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2
print get_level(mystring ) # minor
Right now it works well, but I feel like this is not the Pythonic way to solve it. How could these 2 functions be improved? A regular expression feels like a fit here, but I'm very naive at regexes so I couldn't apply one.
|
[
"I think the most pythonic way is to use the csv module.\nFrom PyMotW with delimiter option:\nimport csv\nimport sys\n\nf = open(sys.argv[1], 'rt')\ntry:\n reader = csv.reader(f, delimiter='|')\n for row in reader:\n print row\nfinally:\n f.close()\n\n",
"lst = msg.split('|')\nlevel = lst[2].strip()\nmessage = lst[1].strip(' \"')\n\nyou're splitting your string twice which is a bit of a waste, other than that modification is minor.\n",
"I think the best practice would be to actually have a better formatted string, or not use a string for that. Why is it a string? Where are you parsing this from? A database? Xml? Can the origin be altered? \n{ 'id': 14, 'message': 'foo', 'type': 'minor' }\n\nA datatype like this I think would be a best practice, if it's stored in a database then split it up in multiple columns. \nEdit: I'm probably going to get stoned for this because it's probably overkill/inefficient but if you add lots of sections later on you could store these in a nice hash map:\n>>> formatParts = {\n... 'id': lambda x: x[0],\n... 'message': lambda x: x[1].strip(' \"'),\n... 'level': lambda x: x[2].strip()\n... }\n>>> myList = mystring.split('|')\n>>> formatParts['id'](myList)\n'14'\n>>> formatParts['message'](myList)\n'Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2'\n>>> formatParts['level'](myList)\n'minor'\n\n",
"class MyParser(object):\n def __init__(self, value):\n self.lst = value.split('|')\n def id(self):\n return self.lst[0]\n def level(self):\n return self.lst[2].strip()\n def message(self):\n return self.lst[1].strip(' \"')\n\n",
"If you don't need the getter functions, this should work nicely:\n>>> m_id,msg,lvl = [s.strip(' \"') for s in mystring.split('|')]\n>>> m_id,msg,lvl\n('14', 'Preprocessor Frame Count Not Incrementing; Card: Motherboard, Port: 2',\n'minor')\n\nNote: avoid shadowing built-in function 'id'\n"
] |
[
5,
2,
1,
1,
0
] |
[] |
[] |
[
"python",
"string"
] |
stackoverflow_0001584639_python_string.txt
|
Q:
Storing wiki revisions on Google App Engine/Django - Modifying This Existing Code
In the past, I created a Django wiki, and it was fairly straightforward to make a Page table for the current wiki entries, and then to store old revisions into a Revision table.
More recently, I decided to set up a website on Google App Engine, and I used some wiki code that another programmer wrote. Because he created his Page model in sort of a complicated way (complicated to me at least) using Entities, I am unsure about how to create the Revision table and integrate it with his Page model.
Here is the relevant code. Could someone help me write the Revision model, and integrate saving the revisions with the Save method of the Page model?
class Page(object):
def __init__(self, name, entity=None):
self.name = name
self.entity = entity
if entity:
self.content = entity['content']
if entity.has_key('user'):
self.user = entity['user']
else:
self.user = None
self.created = entity['created']
self.modified = entity['modified']
else:
# New pages should start out with a simple title to get the user going
now = datetime.datetime.now()
self.content = '<h1>' + cgi.escape(name) + '</h1>'
self.user = None
self.created = now
self.modified = now
def save(self):
"""Creates or edits this page in the datastore."""
now = datetime.datetime.now()
if self.entity:
entity = self.entity
else:
entity = datastore.Entity('Page')
entity['name'] = self.name
entity['created'] = now
entity['content'] = datastore_types.Text(self.content)
entity['modified'] = now
if users.GetCurrentUser():
entity['user'] = users.GetCurrentUser()
elif entity.has_key('user'):
del entity['user']
datastore.Put(entity)
By the way, this code comes from: http://code.google.com/p/google-app-engine-samples/downloads/list
I'm pretty inexperienced with GAE Django models, and mine tend to be very simple. For example, here's my model for a blog Article:
class Article(db.Model):
author = db.UserProperty()
title = db.StringProperty(required=True)
text = db.TextProperty(required=True)
tags = db.StringProperty(required=True)
date_created = db.DateProperty(auto_now_add=True)
A:
The code in your first snippet is not a model - it's a custom class that uses the low-level datastore module. If you want to extend it, I would recommend throwing it out and replacing it with actual models, along similar lines to the Article model you demonstrated in your second snippet.
Also, they're App Engine models, not Django models - Django models don't work on App Engine.
A:
I created this model (which mimics the Page class):
class Revision (db.Model):
name = db.StringProperty(required=True)
created = db.DateTimeProperty(required=True)
modified = db.DateTimeProperty(auto_now_add=True)
content = db.TextProperty(required=True)
user = db.UserProperty()
In the Save() method of the Page class, I added this code to save a Revision, before I updated the fields with the new data:
r = Revision(name = self.name,
content = self.content,
created = self.created,
modified = self.modified,
user = self.user)
r.put()
I have the wiki set up now to accept a GET parameter to specify which revision you want to see or edit. When the user wants a revision, I fetch the Page from the database, and replace the Page's Content with the Revision's Content:
page = models.Page.load(title)
if request.GET.get('rev'):
query = db.Query(models.Revision)
query.filter('name =', title).order('created')
rev = request.GET.get('rev')
# fetch() returns a list, so take the first (and only) result
rev_page = query.fetch(1, int(rev))
page.content = rev_page[0].content
|
Storing wiki revisions on Google App Engine/Django - Modifying This Existing Code
|
In the past, I created a Django wiki, and it was fairly straightforward to make a Page table for the current wiki entries, and then to store old revisions into a Revision table.
More recently, I decided to set up a website on Google App Engine, and I used some wiki code that another programmer wrote. Because he created his Page model in sort of a complicated way (complicated to me at least) using Entities, I am unsure about how to create the Revision table and integrate it with his Page model.
Here is the relevant code. Could someone help me write the Revision model, and integrate saving the revisions with the Save method of the Page model?
class Page(object):
def __init__(self, name, entity=None):
self.name = name
self.entity = entity
if entity:
self.content = entity['content']
if entity.has_key('user'):
self.user = entity['user']
else:
self.user = None
self.created = entity['created']
self.modified = entity['modified']
else:
# New pages should start out with a simple title to get the user going
now = datetime.datetime.now()
self.content = '<h1>' + cgi.escape(name) + '</h1>'
self.user = None
self.created = now
self.modified = now
def save(self):
"""Creates or edits this page in the datastore."""
now = datetime.datetime.now()
if self.entity:
entity = self.entity
else:
entity = datastore.Entity('Page')
entity['name'] = self.name
entity['created'] = now
entity['content'] = datastore_types.Text(self.content)
entity['modified'] = now
if users.GetCurrentUser():
entity['user'] = users.GetCurrentUser()
elif entity.has_key('user'):
del entity['user']
datastore.Put(entity)
By the way, this code comes from: http://code.google.com/p/google-app-engine-samples/downloads/list
I'm pretty inexperienced with GAE Django models, and mine tend to be very simple. For example, here's my model for a blog Article:
class Article(db.Model):
author = db.UserProperty()
title = db.StringProperty(required=True)
text = db.TextProperty(required=True)
tags = db.StringProperty(required=True)
date_created = db.DateProperty(auto_now_add=True)
|
[
"The code in your first snippet is not a model - it's a custom class that uses the low-level datastore module. If you want to extend it, I would recommend throwing it out and replacing it with actual models, along similar lines to the Article model you demonstrated in your second snippet.\nAlso, they're App Engine models, not Django models - Django models don't work on App Engine.\n",
"I created this model (which mimics the Page class):\nclass Revision (db.Model):\n name = db.StringProperty(required=True)\n created = db.DateTimeProperty(required=True)\n modified = db.DateTimeProperty(auto_now_add=True)\n content = db.TextProperty(required=True)\n user = db.UserProperty()\n\nIn the Save() method of the Page class, I added this code to save a Revision, before I updated the fields with the new data:\nr = Revision(name = self.name,\n content = self.content,\n created = self.created,\n modified = self.modified,\n user = self.user)\nr.put()\n\nI have the wiki set up now to accept a GET parameter to specify which revision you want to see or edit. When the user wants a revision, I fetch the Page from the database, and replace the Page's Content with the Revision's Content:\npage = models.Page.load(title)\n\nif request.GET.get('rev'):\n query = db.Query(models.Revision)\n query.filter('name =', title).order('created')\n rev = request.GET.get('rev')\n rev_page = query.fetch(1, int(rev))\n page.content = rev_page.content\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"django_models",
"google_app_engine",
"python"
] |
stackoverflow_0001583595_django_django_models_google_app_engine_python.txt
|
Q:
sqlite3 and cursor.description
When using the sqlite3 module in python, all elements of cursor.description except the column names are set to None, so this tuple cannot be used to find the column types for a query result (unlike other DB-API compliant modules). Is the only way to get the types of the columns to use pragma table_info(table_name).fetchall() to get a description of the table, store it in memory, and then match the column names from cursor.description to that overall table description?
A:
No, it's not the only way. Alternatively, you can also fetch one row, iterate over it, and inspect the individual column Python objects and types. Unless the value is None (in which case the SQL field is NULL), this should give you a fairly precise indication of what the database column type was.
sqlite3 only uses sqlite3_column_decltype and sqlite3_column_type in one place each, and neither is accessible to the Python application - so there is no "direct" way of the kind you may have been looking for.
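A quick sketch of that row-inspection approach:
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.execute("SELECT 1 AS n, 'x' AS s, NULL AS missing")
row = cur.fetchone()
names = [d[0] for d in cur.description]
for name, value in zip(names, row):
    # None means the SQL value was NULL; otherwise the Python type
    # (int, unicode, float, buffer...) reflects the stored type
    print name, type(value)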
A:
I haven't tried this in Python, but you could try something like
SELECT *
FROM sqlite_master
WHERE type = 'table';
which contains the DDL CREATE statement used to create the table. By parsing the DDL you can get the column type info, such as it is. Remember that SQLITE is rather vague and unrestrictive when it comes to column data types.
|
sqlite3 and cursor.description
|
When using the sqlite3 module in python, all elements of cursor.description except the column names are set to None, so this tuple cannot be used to find the column types for a query result (unlike other DB-API compliant modules). Is the only way to get the types of the columns to use pragma table_info(table_name).fetchall() to get a description of the table, store it in memory, and then match the column names from cursor.description to that overall table description?
|
[
"No, it's not the only way. Alternatively, you can also fetch one row, iterate over it, and inspect the individual column Python objects and types. Unless the value is None (in which case the SQL field is NULL), this should give you a fairly precise indication what the database column type was.\nsqlite3 only uses sqlite3_column_decltype and sqlite3_column_type in one place, each, and neither are accessible to the Python application - so their is no \"direct\" way that you may have been looking for.\n",
"I haven't tried this in Python, but you could try something like\nSELECT *\nFROM sqlite_master\nWHERE type = 'table';\n\nwhich contains the DDL CREATE statement used to create the table. By parsing the DDL you can get the column type info, such as it is. Remember that SQLITE is rather vague and unrestrictive when it comes to column data types.\n"
] |
[
5,
2
] |
[] |
[] |
[
"python",
"python_db_api",
"sqlite"
] |
stackoverflow_0001583350_python_python_db_api_sqlite.txt
|
Q:
Does anyone know a "working" Python library that can read .ARC files?
ARC is a lossless data-compression file format.
http://en.wikipedia.org/wiki/ARC_%28file_format%29
I've tried googling, but the Python ARC readers I find are behind 404 links or can't be found at all.
Anyone know of any library I can use?
A:
If you are able to use SWIG, then the ARC source code from FreeBSD could possibly be used. Or you could have a look at the source and perhaps reimplement it in Python. I remember ARC; it did not last very long as a popular tool, so I suspect it is not overly complex.
|
Does anyone know a "working" Python library that can read .ARC files?
|
ARC is a lossless data-compression file format.
http://en.wikipedia.org/wiki/ARC_%28file_format%29
I've tried googling, but the Python ARC readers I find are behind 404 links or can't be found at all.
Anyone know of any library I can use?
|
[
"If you are able to use SWIG then possibly the ARC source code from FreeBSD could be used. Or you could have a look at the source, and perhaps reimplement it in Python. I remember ARC and it did not last very long as a popular tool so I suspect that it is not overly complex. \n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001569836_python.txt
|
Q:
union with sort in Google-App-Engine
I have a class:
class Transaction(db.Model):
accountDebit = db.ReferenceProperty(reference_class=Account,
collection_name="kontoDuguje")
accountCredit = db.ReferenceProperty(reference_class=Account,
collection_name="kontoPotrazuje")
amount = db.FloatProperty()
Tran_date = db.DateProperty()
comment = db.StringProperty()
Here is the method of the Account class by which I would like to get all the transactions for a particular account (transactions with accountDebit or accountCredit), sorted by date:
def GetTransactions(self):
transactions = []
transactions_debit = db.GqlQuery('SELECT * FROM Transaction ' +
'WHERE accountDebit=:1',self)
transactions_credit = db.GqlQuery('SELECT * FROM Transaction ' +
'WHERE accountCredit=:1',self)
for x in transactions_debit:
x.amount = -x.amount
transactions.append(x)
for x in transactions_credit:
x.amount = x.amount
transactions.append(x)
return transactions
The aim is to make a sorted union of these two results, but with limit + offset. Consider the fact that you cannot fetch more than 1000 rows in a single query ...
Please help
A:
You can do an OR (Python laboriously synthesizes it for you at application level), which takes care of the "union with sort". However, if you need to worry about > 1000 transactions, that won't help (nor will offset and limit: the sum of offset + limit is what's limited to 1000!). You'll need to slice by something (presumably the same field you're sorting on, Tran_date I imagine?) with a couple of < conditions there, and that of course can't guarantee you the exact limit and offset you desire, so you'll have to exceed them a bit and slice off the excess at application level.
Edit: OR is not actually synthesized at application level (IN and != are the two operations that are), so you need to synthesize it yourself (also at application level of course), e.g.:
def GetTransactions(account):
transactions = list(db.GqlQuery(
'SELECT * FROM Transaction WHERE '
'accountDebit = :1 ORDER BY Tran_date', account))
transactions.extend(db.GqlQuery(
'SELECT * FROM Transaction WHERE '
'accountCredit = :1 ORDER BY Tran_date', account))
transactions.sort(key=operator.attrgetter('Tran_date'))
return transactions
But the big issues are still those outlined above.
So what are the numbers in play -- typical numbers of transactions for a user (say per week or per day), typical max total for a user, what order of magnitudes are you going to need in your offset and limit, etc, etc? Hard to suggest specific design choices without having any idea of the orders of magnitude of these numbers!-)
Edit: there is no solution that will be optimal, or even reasonable, for ANY order of magnitude of each of these parameters -- how you deal efficiently with many millions of transactions per user per day is just going to be deeply different from how you deal with a few transactions per user per day; I can't even imagine an architecture that would make sense in both cases (I might, perhaps, in a relational context, but not in a non-relational one such as we have here -- e.g., to decently deal with the case of millions of transactions per day, you really want a finer-grained timestamp on a transaction than just recording its date can provide!-).
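By way of illustration only, the Tran_date-slicing idea from the first paragraph might look roughly like this - a sketch, not a complete design (it still needs the credit-side query plus app-level merging and trimming):
def debit_page(account, after_date, page_size=20):
    # cursor-style paging: ask only for rows newer than the last
    # Tran_date already shown, instead of using a large offset
    q = db.GqlQuery('SELECT * FROM Transaction WHERE '
                    'accountDebit = :1 AND Tran_date > :2 '
                    'ORDER BY Tran_date', account, after_date)
    return q.fetch(page_size)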
|
union with sort in Google-App-Engine
|
I have a class:
class Transaction(db.Model):
accountDebit = db.ReferenceProperty(reference_class=Account,
collection_name="kontoDuguje")
accountCredit = db.ReferenceProperty(reference_class=Account,
collection_name="kontoPotrazuje")
amount = db.FloatProperty()
Tran_date = db.DateProperty()
comment = db.StringProperty()
Here is the method of the Account class by which I would like to get all the transactions for a particular account (transactions with accountDebit or accountCredit), sorted by date:
def GetTransactions(self):
transactions = []
transactions_debit = db.GqlQuery('SELECT * FROM Transaction ' +
'WHERE accountDebit=:1',self)
transactions_credit = db.GqlQuery('SELECT * FROM Transaction ' +
'WHERE accountCredit=:1',self)
for x in transactions_debit:
x.amount = -x.amount
transactions.append(x)
for x in transactions_credit:
x.amount = x.amount
transactions.append(x)
return transactions
The aim is to make a sorted union of these two results, but with limit + offset. Consider the fact that you cannot fetch more than 1000 rows in a single query ...
Please help
|
[
"You can do an OR (Python laboriously synthesizes it for you at application level), which takes care of the \"union with sorty\". However, if you need to worry about > 1000 transactions, that won't help (nor will offset and limit: the sum of offset + limit is what's limited to 1000!). You'll need to slice by something (presumably the same field you're sorting on, tran_date I imagine?) with a couple of < conditions there, and that of course can't guarantee you the exact limit and offset you desire, so you'll have to exceed them a bit and slice off the eccess at application level.\nEdit: OR is not actually synthesized at application level (IN and != are the two operations that are), so you need to synthesize it yourself (also at application level of course), e.g.:\ndef GetTransactions(account):\n transactions = list(db.GqlQuery(\n 'SELECT * FROM Transaction WHERE '\n 'accountDebit = :1 ORDER BY Tran_date', account))\n transactions.extend(db.GqlQuery(\n 'SELECT * FROM Transaction WHERE '\n 'accountCredit = :1 ORDER BY Tran_date', account))\n transactions.sort(key=operator.attrgetter('Tran_date'))\n return transactions\n\nBut the big issues are still those outlined above.\nSo what are the numbers in play -- typical numbers of transactions for a user (say per week or per day), typical max total for a user, what order of magnitudes are you going to need in your offset and limit, etc, etc? Hard to suggest specific design choices without having any idea of the orders of magnitude of these numbers!-)\nEdit: there is no solution that will be optimal, or even reasonable, for ANY order of magnitude of each of these parameters -- how you deal efficiently with many millions of transactions per user per day is just going to be deeply different from how you deal with a few transactions per user per day; I can't even imagine an architecture that would make sense in both cases (I might, perhaps, in a relational context, but not in a non-relational one such as we have here -- e.g., to decently deal with the case of millions of transactions per day, you really want a finer-grained timestamp on a transaction than just recording its date can provide!-).\n"
] |
[
2
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001585299_google_app_engine_python.txt
|
Q:
Using SWIG with pointer to function in C struct
I'm trying to write a SWIG wrapper for a C library that uses pointers to functions in its structs. I can't figure out how to handle structs that contain function pointers. A simplified example follows.
test.i:
/* test.i */
%module test
%{
typedef struct {
int (*my_func)(int);
} test_struct;
int add1(int n) { return n+1; }
test_struct *init_test()
{
test_struct *t = (test_struct*) malloc(sizeof(test_struct));
t->my_func = add1;
}
%}
typedef struct {
int (*my_func)(int);
} test_struct;
extern test_struct *init_test();
sample session:
Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import test
>>> t = test.init_test()
>>> t
<test.test_struct; proxy of <Swig Object of type 'test_struct *' at 0xa1cafd0> >
>>> t.my_func
<Swig Object of type 'int (*)(int)' at 0xb8009810>
>>> t.my_func(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'PySwigObject' object is not callable
Anyone know if it's possible to get t.my_func(1) to return 2?
Thanks!
A:
I found an answer. If I declare the function pointer as a SWIG "member function", it seems to work as expected:
%module test
%{
typedef struct {
int (*my_func)(int);
} test_struct;
int add1(int n) { return n+1; }
test_struct *init_test()
{
test_struct *t = (test_struct*) malloc(sizeof(test_struct));
t->my_func = add1;
return t;
}
%}
typedef struct {
int my_func(int);
} test_struct;
extern test_struct *init_test();
Session:
$ python
Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import test
>>> t = test.init_test()
>>> t.my_func(1)
2
I was hoping for something that wouldn't require writing any custom SWIG-specific code (I'd prefer to just "%include" my headers without modification), but this will do I guess.
A:
You forgot to "return t;" in init_test():
#include <stdlib.h>
#include <stdio.h>
typedef struct {
int (*my_func)(int);
} test_struct;
int add1(int n) { return n+1; }
test_struct *init_test(){
test_struct *t = (test_struct*) malloc(sizeof(test_struct));
t->my_func = add1;
return t;
}
int main(){
test_struct *s=init_test();
printf( "%i\n", s->my_func(1) );
}
|
Using SWIG with pointer to function in C struct
|
I'm trying to write a SWIG wrapper for a C library that uses pointers to functions in its structs. I can't figure out how to handle structs that contain function pointers. A simplified example follows.
test.i:
/* test.i */
%module test
%{
typedef struct {
int (*my_func)(int);
} test_struct;
int add1(int n) { return n+1; }
test_struct *init_test()
{
test_struct *t = (test_struct*) malloc(sizeof(test_struct));
t->my_func = add1;
}
%}
typedef struct {
int (*my_func)(int);
} test_struct;
extern test_struct *init_test();
sample session:
Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import test
>>> t = test.init_test()
>>> t
<test.test_struct; proxy of <Swig Object of type 'test_struct *' at 0xa1cafd0> >
>>> t.my_func
<Swig Object of type 'int (*)(int)' at 0xb8009810>
>>> t.my_func(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'PySwigObject' object is not callable
Anyone know if it's possible to get t.my_func(1) to return 2?
Thanks!
|
[
"I found an answer. If I declare the function pointer as a SWIG \"member function\", it seems to work as expected:\n%module test\n%{\n\ntypedef struct {\n int (*my_func)(int);\n} test_struct;\n\nint add1(int n) { return n+1; }\n\ntest_struct *init_test()\n{\n test_struct *t = (test_struct*) malloc(sizeof(test_struct));\n t->my_func = add1;\n return t;\n}\n\n%}\n\ntypedef struct {\n int my_func(int);\n} test_struct;\n\nextern test_struct *init_test();\n\nSession:\n$ python\nPython 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) \n[GCC 4.3.3] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import test\n>>> t = test.init_test()\n>>> t.my_func(1)\n2\n\nI was hoping for something that wouldn't require writing any custom SWIG-specific code (I'd prefer to just \"%include\" my headers without modification), but this will do I guess.\n",
"You forget to \"return t;\" in init_test():\n#include <stdlib.h> \n#include <stdio.h> \n\ntypedef struct {\n int (*my_func)(int);\n} test_struct;\n\nint add1(int n) { return n+1; }\n\ntest_struct *init_test(){\n test_struct *t = (test_struct*) malloc(sizeof(test_struct));\n t->my_func = add1;\n return t;\n}\n\nint main(){\n test_struct *s=init_test();\n\n printf( \"%i\\n\", s->my_func(1) );\n}\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"c",
"function",
"pointers",
"python",
"swig"
] |
stackoverflow_0001583293_c_function_pointers_python_swig.txt
|
Q:
django and executing a separate .py to manipulate a database
I want to execute an arbitrary .py file, say foo.py in the myproject/myapp folder, by running a cron job at regular intervals.
I have this basic model in my model.py for the app:
class Mymodel(models.Model):
content = models.TextField()
Say I have this in my foo.py: I want to check whether there is any Mymodel object whose content field is the same as mytext; if not, make a new Mymodel with mytext as content, and if one already exists, do nothing.
<do django importings>
mytext = "something here"
if Mymodel.filter(content=mytext) == null:
newitem = Mymodel(content=mytext)
newitem.save()
else:
pass
So here is my question: what Django imports shall I be doing? Also, how can I check if the query has no item (I don't know whether Mymodel.filter(content=mytext) == null would work)? And I don't know if this is an efficient way to achieve my goal, as the number of Mymodel objects will be high.
Thanks
A:
You have two separate questions here - it would have been better to split them out.
To run a separate script, you're best off creating a ./manage.py command. See the documentation on how to do this.
For your second question, the code you give is not valid Python, since there is no 'null' value - you mean None. However even then the code will not work, as that isn't how you write that query using the Django ORM. You want something like this:
if not MyModel.objects.filter(content=mytext).count():
which asks the database how many items there are with content=mytext, and is True if there are none.
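For the first part, a minimal management-command sketch (the file path and command name are hypothetical; see the linked docs for the required package layout):
# myapp/management/commands/ensure_mytext.py
from django.core.management.base import BaseCommand
from myapp.models import Mymodel

class Command(BaseCommand):
    help = 'Create the Mymodel row for mytext if it does not exist yet'

    def handle(self, *args, **options):
        mytext = 'something here'
        if not Mymodel.objects.filter(content=mytext).count():
            Mymodel.objects.create(content=mytext)

Cron can then invoke it as ./manage.py ensure_mytext.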
A:
You might also check out django-extensions, which has a built-in manage.py extension called "runscript" that executes any python script in your django project's context.
|
django and executing a separate .py to manipulate a database
|
I want to execute an arbitrary .py file, say foo.py in the myproject/myapp folder, by running a cron job at regular intervals.
I have this basic model in my model.py for the app:
class Mymodel(models.Model):
content = models.TextField()
Say I have this in my foo.py: I want to check whether there is any Mymodel object whose content field is the same as mytext; if not, make a new Mymodel with mytext as content, and if one already exists, do nothing.
<do django importings>
mytext = "something here"
if Mymodel.filter(content=mytext) == null:
newitem = Mymodel(content=mytext)
newitem.save()
else:
pass
So here is my question: what Django imports shall I be doing? Also, how can I check if the query has no item (I don't know whether Mymodel.filter(content=mytext) == null would work)? And I don't know if this is an efficient way to achieve my goal, as the number of Mymodel objects will be high.
Thanks
|
[
"You have two separate questions here - it would have been better to split them out.\nTo run a separate script, you're best off creating a ./manage.py command. See the documentation on how to do this.\nFor your second question, the code you give is not valid Python, since there is no 'null' value - you mean None. However even then the code will not work, as that isn't how you write that query using the Django ORM. You want something like this:\nif not MyModel.objects.filter(content=mytext).count():\n\nwhich asks the database how many items there are with content=mytext, and is True if there are none.\n",
"You might also check out django-extensions, which has a built-in manage.py extension called \"runscript\" that executes any python script in your django project's context.\n"
] |
[
5,
2
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0001586041_django_django_models_python.txt
|
Q:
Human-readable binary data using Python
My work requires that I perform a mathematical simulation whose parameters come from a binary file. The simulator can read such a binary file without a problem.
However, I need to peek inside the binary file to make sure the parameters are what I need them to be, and I cannot seem to manage it.
I would like to write a script in Python which would allow me to read in the binary file, search for the parameters that I care about, and display what their values are.
What I know about the binary file:
It represents simple text (as opposed to an image or sound file). There is a piece of code that can "dump" the file into a readable format: if I open that dump in Emacs I will find things like:
CENTRAL_BODY = 'SUN'
The whole file is just a series of similar instructions. I could use that dump code, but I would much rather have Python do that.
This seems to be a very trivial question, and I apologize for not knowing better. I thought I was a proficient programmer!
Many thanks.
A:
You can read the file's content into a string in memory:
thedata = open(thefilename, 'rb').read()
and then locate a string in it:
where = thedata.find('CENTRAL_BODY')
and finally slice off the part you care about:
thepart = thedata[where:where+50] # or whatever length
and display it as you prefer (e.g. find the string value by locating within thepart an = sign, then the first following quote, then the next quote after that).
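A short sketch of that last step, reusing the names above (the slice length and single-quote style are assumptions based on the CENTRAL_BODY = 'SUN' example):
where = thedata.find('CENTRAL_BODY')
thepart = thedata[where:where+50]
eq = thepart.find('=')
q1 = thepart.find("'", eq + 1)   # first quote after the = sign
q2 = thepart.find("'", q1 + 1)   # matching closing quote
value = thepart[q1 + 1:q2]       # e.g. SUN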
A:
If it's a binary file, you will need to use the struct module. You will need to know how the data is formatted in the file. If that is not documented, you will have to reverse engineer it.
Do you have source code of the other dumping program? You may be able to just port that to Python
We can probably help you better if we can see what the binary file and the corresponding dump look like
A:
It sounds like this 'dump' program already does what you need: interpreting the binary file. I guess my approach would be to write a python program that can take a dump'ed file, extract the parameters you want and display them.
Then parse it with something like this:
myparms.py:
import sys
d = {}
for line in sys.stdin:
parts = line.split("=",2)
if len(parts) < 2:
continue
k = parts[0].strip()
v = parts[1].strip()
d[k] = v
print d['CENTRAL_BODY']
Use this like:
dump parameters.bin | python myparms.py
You didn't mention a platform or provide details about the dump'ed format, but this should be a place to start.
A:
You have to know the format the data is stored in; there's simply no way around that.
If there's no written spec on it, try to open it in a hex editor and study the format, using the text dump as a reference. If you can get the source code for the tool that creates the text dumps, that would help you a lot.
Keep in mind that the data could be scrambled in some way or another, e.g. rot13.
|
Human-readable binary data using Python
|
My work requires that I perform a mathematical simulation whose parameters come from a binary file. The simulator can read such a binary file without a problem.
However, I need to peek inside the binary file to make sure the parameters are what I need them to be, and I cannot seem to manage it.
I would like to write a script in Python which would allow me to read in the binary file, search for the parameters that I care about, and display what their values are.
What I know about the binary file:
It represents simple text (as opposed to an image or sound file). There is a piece of code that can "dump" the file into a readable format: if I open that dump in Emacs I will find things like:
CENTRAL_BODY = 'SUN'
The whole file is just a series of similar instructions. I could use that dump code, but I would much rather have Python do that.
This seems to be a very trivial question, and I apologize for not knowing better. I thought I was a proficient programmer!
Many thanks.
|
[
"You can read the file's content into a string in memory:\nthedata = open(thefilename, 'rb').read()\n\nand then locate a string in it:\nwhere = thedata.find('CENTRAL_BODY')\n\nand finally slice off the part you care about:\nthepart = thedata[where:where+50] # or whatever length\n\nand display it as you prefer (e.g. find the string value by locating within thepart an = sign, then the first following quote, then the next quote after that).\n",
"If it's a binary file, you will need to use the struct module. You will need to know how the data is formatted in the file. If that is not documented, you will have to reverse engineer it. \nDo you have source code of the other dumping program? You may be able to just port that to Python\nWe can probably help you better if we can see what the binary file and the corresponding dump looks like\n",
"It sounds like this 'dump' program already does what you need: interpreting the binary file. I guess my approach would be to write a python program that can take a dump'ed file, extract the parameters you want and display them.\nThen parse it with something like this:\nmyparms.py:\nimport sys\n\nd = {}\nfor line in sys.stdin:\n parts = line.split(\"=\",2)\n if len(parts) < 2:\n continue\n k = parts[0].strip()\n v = parts[1].strip()\n d[k] = v\n\nprint d['CENTRAL_BODY']\n\nUse this like:\ndump parameters.bin | python myparms.py\nYou didn't mention a platform or provide details about the dump'ed format, but this should be a place to start.\n",
"You have to know the format the data is stored in; there's simply no way around that.\nIf there's no written spec on it, try to open it in a hex editor and study the format, using the text-dump as a reference. If you can get the source code for the tool that creates the text-dumps, that would help you alot.\nKeep in mind that the data could be scrambled in someway or another, e.g. rot13.\n"
] |
[
4,
1,
1,
0
] |
[] |
[] |
[
"ascii",
"binary_data",
"format",
"python"
] |
stackoverflow_0001585950_ascii_binary_data_format_python.txt
|
Q:
parsing string to a dict
I have a string output which is in the form of a dict, e.g.
{'key1':'value1','key2':'value2'}
how can I easily save it as a dict and not as a string?
A:
astr is a string which is "in the form of a dict".
ast.literal_eval converts it to a python dict object.
In [110]: import ast
In [111]: astr="{'key1':'value1','key2':'value2'}"
In [113]: ast.literal_eval(astr)
Out[113]: {'key1': 'value1', 'key2': 'value2'}
A:
This is best if you're on Python 2.6+, as it's not subject to the security holes in eval.
import ast
s = """{'key1':'value1','key2':'value2'}"""
d = ast.literal_eval(s)
A:
using json.loads - may be faster
A:
Where are you getting this string from? Is it in JSON format? or python dictionary format? or just some ad-hoc format that happens to be similar to python dictionaries?
If it's JSON, or if it's only a dict and only contains strings and/or numbers, you can use json.loads; it's the safest option, as it simply can't parse python code.
This approach has some shortcomings though, if, for instance, the strings are enclosed in single quotes ' instead of double quotes ", not to mention that it only parses json objects/arrays, which only coincidentally happen to share similar syntax with Python's dicts/lists.
Though I think it's likely the string you're getting is intended to be in JSON format. I make this assumption because it's a common format for data exchange.
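To make the single-quote caveat concrete, a quick sketch (Python 2 syntax, matching the era of the other answers):
import ast, json

s = "{'key1':'value1','key2':'value2'}"
print ast.literal_eval(s)        # works: {'key1': 'value1', 'key2': 'value2'}
try:
    json.loads(s)                # fails: JSON requires double-quoted strings
except ValueError, e:
    print 'json.loads choked:', e
print json.loads(s.replace("'", '"'))   # works, but only if no value contains a quote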
|
parsing string to a dict
|
I have a string output which is in the form of a dict, e.g.
{'key1':'value1','key2':'value2'}
how can I easily save it as a dict and not as a string?
|
[
"astr is a string which is \"in the form of a dict\".\nast.literal_eval converts it to a python dict object.\nIn [110]: import ast\n\nIn [111]: astr=\"{'key1':'value1','key2':'value2'}\"\n\nIn [113]: ast.literal_eval(astr)\nOut[113]: {'key1': 'value1', 'key2': 'value2'}\n\n",
"This is best if you're on Python 2.6+, as it's not subject to the security holes in eval.\nimport ast\n\ns = \"\"\"{'key1':'value1','key2':'value2'}\"\"\"\nd = ast.literal_eval(s)\n\n",
"using json.loads - may be faster\n",
"Where are you getting this string from? Is it in JSON format? or python dictionary format? or just some ad-hoc format that happens to be similar to python dictionaries?\nIf it's JSON, or if it's only a dict, and only contains strings and/or numbers, you can use json.loads, it's the most safe option as it simply can't parse python code. \nThis approach has some shortcomings though, if for instance, the strings are enclosed in single quotes ' instead of double quotes \", not to mention that it only parses json objects/arrays, which only coincidentally happen to share similar syntax with pythons dicts/arrays.\nThough I think it's likely the string you're getting is intended to be in JSON format. I make this assumption because it's a common format for data exchange.\n"
] |
[
6,
4,
1,
1
] |
[] |
[] |
[
"abstract_syntax_tree",
"eval",
"parsing",
"python",
"string"
] |
stackoverflow_0001585267_abstract_syntax_tree_eval_parsing_python_string.txt
|
Q:
Given a string, how do I know if it needs decoding
I'm using python's base64 module and I get a string that can be encoded or not encoded. I would like to do something like:
if isEncoded(s):
output = base64.decodestring(s)
else:
output = s
ideas?
A:
In general, it's impossible; if you receive string 'MjMj', for example, how could you possibly know whether it's already decoded and needs to be used as is, or decoded into '23#'?
A:
You could just try it, and see what happens:
import base64
def decode_if_necessary(s):
try:
return base64.decodestring(s)
except:
return s
But you have to ask yourself: what if the original message was in fact a syntactically valid base64 string, but not meant to be one? Then "decoding" it will succeed, but the result is not the required output. So I have to ask: is this really what you want?
Edit: Note that decodestring is deprecated.
A:
You could check to see if a string may be base64 encoded. In general, the function can predict with 75%+ accuracy whether the data is encoded.
import re

def isBase64(s):
    return (len(s) % 4 == 0) and re.match('^[A-Za-z0-9+/]+[=]{0,2}$', s)
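As a usage sketch combining this check with the question's original structure (keep in mind the check is a heuristic: ordinary text can happen to look like valid base64):
import base64

if isBase64(s):
    output = base64.decodestring(s)
else:
    output = s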
|
Given a string, how do I know if it needs decoding
|
I'm using python's base64 module and I get a string that can be encoded or not encoded. I would like to do something like:
if isEncoded(s):
output = base64.decodestring(s)
else:
output = s
ideas?
|
[
"In general, it's impossible; if you receive string 'MjMj', for example, how could you possibly know whether it's already decoded and needs to be used as is, or decoded into '23#'?\n",
"You could just try it, and see what happens:\nimport base64\n\ndef decode_if_necessary(s):\n try:\n return base64.decodestring(s)\n except:\n return s\n\nBut you have to ask yourself: what if the original message was in fact a syntactically valid base64 string, but not meant to be one? Then \"decoding\" it will succeed, but the result is not the required output. So I have to ask: is this really what you want?\nEdit: Note that decodestring is deprecated.\n",
"You could check to see if a string may be base64 encoded. In general, the function can predict with 75%+ accuracy is the data is encoded.\ndef isBase64(s):\n return (len(s) % 4 == 0) and re.match('^[A-Za-z0-9+/]+[=]{0,2}$', s)\n\n"
] |
[
11,
5,
5
] |
[] |
[] |
[
"base64",
"python"
] |
stackoverflow_0001532567_base64_python.txt
|
Q:
google wave OnBlipSubmitted
I'm trying to create a wave robot, and I have the basic stuff working. I'm trying to create a new blip with help text when someone types @help but for some reason it doesn't create it. I'm getting no errors in the log console, and I'm seeing the info log 'in @help'
def OnBlipSubmitted(properties, context):
# Get the blip that was just submitted.
blip = context.GetBlipById(properties['blipId'])
text = blip.GetDocument().GetText()
if text.startswith('@help') == True:
logging.info('in @help')
blip.CreateChild().GetDocument().SetText('help text')
A:
If it just started working, I have two suggestions...
-->Have you been updating the Robot Version in the constructor? You should bump the version value whenever you deploy changes so that the caches can be updated.
if __name__ == '__main__':
myRobot = robot.Robot('waverobotdev',
image_url = baseurl + 'assets/wave_robot_icon.png',
version = '61', # <-------------HERE
profile_url = baseurl)
-->The server connection between Wave and AppSpot has recently been extremely variable. Sometimes it takes 10+ minutes for the AppSpot server to receive my event, other times a few seconds. Verify you're receiving the events you expect.
Edit:
The code you provided looks good, so I wouldn't expect you're doing anything wrong in that respect.
A:
Have you tried using Append() instead of SetText()? That's what I'd do in my C# API - I haven't used the Python API, but I'd imagine it's similar. Here's a sample from my demo robot:
protected override void OnBlipSubmitted(IEvent e)
{
if (e.Blip.Document.Text.Contains("robot"))
{
IBlip blip = e.Blip.CreateChild();
ITextView textView = blip.Document;
textView.Append("Are you talking to me?");
}
}
That works fine.
EDIT: Here's the resulting JSON from the above code:
{
"javaClass": "com.google.wave.api.impl.OperationMessageBundle",
"version": "173784133",
"operations": {
"javaClass": "java.util.ArrayList",
"list": [
{
"javaClass": "com.google.wave.api.impl.OperationImpl",
"type": "BLIP_CREATE_CHILD",
"waveId": "googlewave.com!w+PHAstGbKC",
"waveletId": "googlewave.com!conv+root",
"blipId": "b+Iw_Xw7FCC",
"index": -1,
"property": {
"javaClass": "com.google.wave.api.impl.BlipData",
"annotations": {
"javaClass": "java.util.ArrayList",
"list": []
},
"lastModifiedTime": -1,
"contributors": {
"javaClass": "java.util.ArrayList",
"list": []
},
"waveId": "googlewave.com!w+PHAstGbKC",
"waveletId": "googlewave.com!conv+root",
"version": -1,
"parentBlipId": null,
"creator": null,
"content": "\nAre you talking to me?",
"blipId": "410621dc-d7a1-4be5-876c-0a9d313858bb",
"elements": {
"map": {},
"javaClass": "java.util.HashMap"
},
"childBlipIds": {
"javaClass": "java.util.ArrayList",
"list": []
}
}
},
{
"javaClass": "com.google.wave.api.impl.OperationImpl",
"type": "DOCUMENT_APPEND",
"waveId": "googlewave.com!w+PHAstGbKC",
"waveletId": "googlewave.com!conv+root",
"blipId": "410621dc-d7a1-4be5-876c-0a9d313858bb",
"index": 0,
"property": "Are you talking to me?"
}
]
}
}
How does that compare with the JSON which comes out of your robot?
A:
For some reason it just started working. I think Google Wave is spotty.
|
google wave OnBlipSubmitted
|
I'm trying to create a wave robot, and I have the basic stuff working. I'm trying to create a new blip with help text when someone types @help but for some reason it doesn't create it. I'm getting no errors in the log console, and I'm seeing the info log 'in @help'
def OnBlipSubmitted(properties, context):
# Get the blip that was just submitted.
blip = context.GetBlipById(properties['blipId'])
text = blip.GetDocument().GetText()
if text.startswith('@help') == True:
logging.info('in @help')
blip.CreateChild().GetDocument().SetText('help text')
|
[
"if it just started working, I have two suggestions...\n-->Have you been updating the Robot Version in the constructor? You should change the values as you update changes so that the caches can be updated.\nif __name__ == '__main__': \n myRobot = robot.Robot('waverobotdev',\n image_url = baseurl + 'assets/wave_robot_icon.png',\n version = '61', # <-------------HERE\n profile_url = baseurl)\n\n-->The server connection between Wave and AppSpot has recently been extremely variable. Sometimes it takes 10+ minutes for the AppSpot server to receive my event, othertimes a few seconds. Verify you're receiving the events you expect.\nEdit:\nThe code you provided looks good, so I wouldn't expect you're doing anything wrong in that respect.\n",
"Have you tried using Append() instead of SetText()? That's what I'd do in my C# API - I haven't used the Python API, but I'd imagine it's similar. Here's a sample from my demo robot:\nprotected override void OnBlipSubmitted(IEvent e)\n{\n if (e.Blip.Document.Text.Contains(\"robot\"))\n {\n IBlip blip = e.Blip.CreateChild();\n ITextView textView = blip.Document;\n textView.Append(\"Are you talking to me?\");\n }\n}\n\nThat works fine.\nEDIT: Here's the resulting JSON from the above code:\n{\n \"javaClass\": \"com.google.wave.api.impl.OperationMessageBundle\",\n \"version\": \"173784133\",\n \"operations\": {\n \"javaClass\": \"java.util.ArrayList\",\n \"list\": [\n {\n \"javaClass\": \"com.google.wave.api.impl.OperationImpl\",\n \"type\": \"BLIP_CREATE_CHILD\",\n \"waveId\": \"googlewave.com!w+PHAstGbKC\",\n \"waveletId\": \"googlewave.com!conv+root\",\n \"blipId\": \"b+Iw_Xw7FCC\",\n \"index\": -1,\n \"property\": {\n \"javaClass\": \"com.google.wave.api.impl.BlipData\",\n \"annotations\": {\n \"javaClass\": \"java.util.ArrayList\",\n \"list\": []\n },\n \"lastModifiedTime\": -1,\n \"contributors\": {\n \"javaClass\": \"java.util.ArrayList\",\n \"list\": []\n },\n \"waveId\": \"googlewave.com!w+PHAstGbKC\",\n \"waveletId\": \"googlewave.com!conv+root\",\n \"version\": -1,\n \"parentBlipId\": null,\n \"creator\": null,\n \"content\": \"\\nAre you talking to me?\",\n \"blipId\": \"410621dc-d7a1-4be5-876c-0a9d313858bb\",\n \"elements\": {\n \"map\": {},\n \"javaClass\": \"java.util.HashMap\"\n },\n \"childBlipIds\": {\n \"javaClass\": \"java.util.ArrayList\",\n \"list\": []\n }\n }\n },\n {\n \"javaClass\": \"com.google.wave.api.impl.OperationImpl\",\n \"type\": \"DOCUMENT_APPEND\",\n \"waveId\": \"googlewave.com!w+PHAstGbKC\",\n \"waveletId\": \"googlewave.com!conv+root\",\n \"blipId\": \"410621dc-d7a1-4be5-876c-0a9d313858bb\",\n \"index\": 0,\n \"property\": \"Are you talking to me?\"\n }\n ]\n }\n}\n\nHow does that compare with the JSON which comes out of your robot?\n",
"For some reason it just started working. I think the google wave is spotty.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"google_app_engine",
"google_wave",
"python"
] |
stackoverflow_0001584406_google_app_engine_google_wave_python.txt
|
Q:
Custom Django-admin command issue
Trying to understand how custom admin commands work: I have my project named "mailing" and an app inside named "msystem". I have written this retrieve.py in the mailing/msystem/management/commands/ folder, and I have placed an empty __init__.py in both the management and commands folders.
from django.core.management.base import BaseCommand
from mailing.msystem.models import Alarm
class Command(BaseCommand):
help = "Displays data"
def handle(self, *args, **options):
x = Alarm.objects.all()
for i in x:
print i.name
I am weirdly getting an "indentation" error for the handle function when I try "python manage.py retrieve"; however, it looks fine to me. Can you suggest what to do or point me to the problem?
Thanks
A:
Your indentation needs to be consistent through the entire file, which it isn't in the snippet you posted above.
The "help = " line is indented four spaces after "class" but then the "x =" line is indented many more than four.
Maybe you are mixing spaces and tabs and thus have two tabs before "x ="?
Your code should look like this:
from django.core.management.base import BaseCommand
from mailing.msystem.models import Alarm
class Command(BaseCommand):
help = "Displays data"
def handle(self, *args, **options):
x = Alarm.objects.all()
for i in x:
print i.name
A:
If you're getting an "indentation error" and everything looks aligned, this usually suggests that you're mixing tabs and spaces.
I suggest ensuring that your module is only using spaces.
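One quick way to catch a tab/space mixture is Python 2's -tt switch, which turns inconsistent indentation into a hard error instead of letting it slide:
python -tt retrieve.py
The whole file is compiled before anything runs, so a mixed-indentation TabError will surface even though the standalone run would fail at the django import afterwards.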
|
Custom Django-admin command issue
|
Trying to understand how custom admin commands work: I have my project named "mailing" and an app inside named "msystem". I have written this retrieve.py in the mailing/msystem/management/commands/ folder, and I have placed an empty __init__.py in both the management and commands folders.
from django.core.management.base import BaseCommand
from mailing.msystem.models import Alarm
class Command(BaseCommand):
help = "Displays data"
def handle(self, *args, **options):
x = Alarm.objects.all()
for i in x:
print i.name
I am weirdly getting an "indentation" error for the handle function when I try "python manage.py retrieve"; however, it looks fine to me. Can you suggest what to do or point me to the problem?
Thanks
|
[
"Your indentation needs to be consistent through the entire file, which it isn't in the snippet you posted above.\nThe \"help = \" line is indented four spaces after \"class\" but then the \"x =\" line is indented many more than four.\nMaybe you are mixing spaces and tabs and thus have two tabs before \"x =\"?\nYour code should look like this:\nfrom django.core.management.base import BaseCommand\nfrom mailing.msystem.models import Alarm\n\nclass Command(BaseCommand):\n help = \"Displays data\"\n def handle(self, *args, **options):\n x = Alarm.objects.all()\n for i in x:\n print i.name\n\n",
"If you're getting an \"indentation error\" and everything looks aligned, this usually suggests that you're mixing tabs and spaces.\nI suggest ensuring that your module is only using spaces.\n"
] |
[
4,
2
] |
[] |
[] |
[
"django",
"django_admin",
"python"
] |
stackoverflow_0001587282_django_django_admin_python.txt
|
Q:
How to write a program that will automatically generate sample exam questions from a file?
How can I write a program that will automatically generate a sample examination?
For example, the user will be prompted to supply four categories of questions to be included in a 6-question exam from the following list:
Loops
Functions
Decisions
Data Types
Built-in functions
Recursion
Algorithms
Top-down design
Objects
I also need to prompt the user to supply the total marks of the exam, and also prompt the user for how many multiple-choice questions there are in the exam.
The sample questions, their category, their value (number of marks) and
whether they are multiple choice questions are stored in a Questions file
that I need to open to read all of the questions. Then the program should read the Question file and randomly select questions according to what the user has entered.
The file format is a text file in notepad, and looks like the following:
Multiple Choice Questions
Loops Questions
1. Which of the following is not a part of the IPO pattern?
a)Input b)Program c)Process d)Output
2. In Python, getting user input is done with a special expression called.
a)for b)read c)simultaneous assignment d)input
Function Questions
3. A Python function definition begins with
a)def b)define c)function d)defun
4.A function with no return statement returns
a)nothing b)its parameters c)its variables d)None
Decision Questions
5. An expression that evaluates to either true or false is called
a)operational b)Boolean c)simple d)compound
6.The literals for type bool are
a)T,F b)True,False c)true,false d)procrastination
DataTypes Questions
7. Which of the following is not a Python type-conversion function?
a)float b)round c)int d)long
8.The number of distinct values that can be represented using 5 bits is
a)5 b)10 c)32 d)50
Built-in Functions
9.The part of a program that uses a function is called the
a)user b)caller c)callee d)statement
10.A function can send output back to the program with a(n)
a)return b)print c)assignment d)SASE
Recursion
11.Recursions on sequence often use this as a base case:
a)0 b)1 c)an empty sequence d)None
12.The recursive Fibonacci function is inefficient because
a)it does many repeated computations b)recursion is inherently inefficient compared to iteration
c)calculating Fibonacci numbers is intractable d)fibbing is morally wrong
Algorithms
13.An algorithm is like a
a)newspaper b)venus flytrap c)drum d)recipe
14.Which algorithm requires time directly proportional to the size of the input?
a)linear search b)binary search c)merge sort d)selection sort
Top-down design
15.Which of the following is not one of the fundamental characteristics of object-oriented design/programming?
a)inheritance b)polymorphism c)generally d)encapsulation
Objects
16.What graphics class would be best for drawing a square?
a)Square b)Polygon c)Line d)Rectangle
17.A user interface organized around visual elements and users actions is called a (n)
a)GUI b)application c)windower d)API
This is the code I have so far. How can I improve it?
def main():
infile = open("30075165.txt","r")
categories = raw_input("Please enter the four categories that are in the exam: ")
totalmarks = input("Please enter the total marks in the exam: ")
mc = input("Please enter the amount of multiple choice questions in the exam: ")
main()
A:
In the absence of additional information required to answer this specific problem, I will outline the general approach I would use to solve this problem. My solution would involve using LaTeX to typeset the exam and the probsoln package to define problems.
The probsoln package provides a format for defining and labeling problems and storing them in files. It also provides the command \loadrandomproblems[dataset]{n}{filename} to load n randomly-selected problems from filename into dataset. This suggests storing problems by topic in several external files, e.g. loops.tex, functions.tex, etc. Then you could write a Python script to programmatically create the LaTeX source for the exam (exam.tex) based on user input.
loops.tex
\newproblem{IPOpattern}{Which of the following is not a part of the IPO pattern?
\\ a) Input \quad b) Program \quad c) Process \quad d) Output}{The correct
answer goes here.}
\newproblem{input}{In Python, getting user input is done with a special expression
called: \\ a) for \quad b) read \quad c) simultaneous assignment \quad
d) input}{The correct answer goes here.}
exam.tex
\documentclass{report}
\usepackage{probsoln}
\begin{document}
\hideanswers
\chapter{Loops}
% randomly select 2 problems from loops.tex and add to
% the data set called 'loops'
\loadrandomproblems[loops]{2}{loops}
% Display the problems
\renewcommand{\theenumi}{\thechapter.\arabic{enumi}}
\begin{enumerate}
\foreachproblem[loops]{\item\label{prob:\thisproblemlabel}\thisproblem}
\end{enumerate}
% You may need to change \theenumi back here
\chapter{Functions}
% randomly select 2 problems from functions.tex and add to
% the data set called 'functions'
\loadrandomproblems[functions]{2}{functions}
% Display the problems
\renewcommand{\theenumi}{\thechapter.\arabic{enumi}}
\begin{enumerate}
\foreachproblem[functions]{\item\label{prob:\thisproblemlabel}\thisproblem}
\end{enumerate}
% You may need to change \theenumi back here
\appendix
\chapter{Solutions}
\showanswers
\begin{itemize}
\foreachdataset{\thisdataset}{%
\foreachproblem[\thisdataset]{\item[\ref{prob:\thisproblemlabel}]\thisproblem}
}
\end{itemize}
\end{document}
A:
las3rjock has a good solution.
You could also move your input file to a SQLite database, using a normalised structure: e.g. Question table, Answer table (with FK to QuestionID), and generate a random answer based on the Question ID. You'll need a third table to keep track of the correct answer per question too.
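A sketch of that schema using Python's built-in sqlite3 module (table and column names are illustrative, not prescribed by the answer):
import sqlite3

conn = sqlite3.connect('exam.db')
conn.executescript('''
    CREATE TABLE question (id INTEGER PRIMARY KEY,
                           category TEXT, text TEXT, marks INTEGER);
    CREATE TABLE answer   (id INTEGER PRIMARY KEY,
                           question_id INTEGER REFERENCES question(id),
                           text TEXT);
    CREATE TABLE correct  (question_id INTEGER REFERENCES question(id),
                           answer_id INTEGER REFERENCES answer(id));
''')
# e.g. pick two random questions from one category
rows = conn.execute("SELECT text FROM question "
                    "WHERE category = ? ORDER BY RANDOM() LIMIT 2",
                    ('Loops',)).fetchall()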
|
How to write a program that will automatically generate sample exam questions from a file?
|
How can I write a program that will automatically generate a sample examination?
For example, the user will be prompted to supply four categories of questions to be included in a 6-question exam from the following list:
Loops
Functions
Decisions
Data Types
Built-in functions
Recursion
Algorithms
Top-down design
Objects
I also need to prompt the user to supply the total marks of the exam, and also prompt the user for how many multiple-choice questions there are in the exam.
The sample questions, their category, their value (number of marks) and
whether they are multiple choice questions are stored in a Questions file
that I need to open to read all of the questions. Then the program should read the Question file and randomly select questions according to what the user has entered.
The file format is a text file in notepad, and looks like the following:
Multiple Choice Questions
Loops Questions
1. Which of the following is not a part of the IPO pattern?
a)Input b)Program c)Process d)Output
2. In Python, getting user input is done with a special expression called.
a)for b)read c)simultaneous assignment d)input
Function Questions
3. A Python function definition begins with
a)def b)define c)function d)defun
4.A function with no return statement returns
a)nothing b)its parameters c)its variables d)None
Decision Questions
5. An expression that evaluates to either true or false is called
a)operational b)Boolean c)simple d)compound
6.The literals for type bool are
a)T,F b)True,False c)true,false d)procrastination
DataTypes Questions
7. Which of the following is not a Python type-conversion function?
a)float b)round c)int d)long
8.The number of distinct values that can be represented using 5 bits is
a)5 b)10 c)32 d)50
Built-in Functions
9.The part of a program that uses a function is called the
a)user b)caller c)callee d)statement
10.A function can send output back to the program with a(n)
a)return b)print c)assignment d)SASE
Recursion
11.Recursions on sequence often use this as a base case:
a)0 b)1 c)an empty sequence d)None
12.The recursive Fibonacci function is inefficient because
a)it does many repeated computations b)recursion is inherently inefficient compared to iteration
c)calculating Fibonacci numbers is intractable d)fibbing is morally wrong
Algorithms
13.An algorithm is like a
a)newspaper b)venus flytrap c)drum d)recipe
14.Which algorithm requires time directly proportional to the size of the input?
a)linear search b)binary search c)merge sort d)selection sort
Top-down design
15.Which of the following is not one of the fundamental characteristics of object-oriented design/programming?
a)inheritance b)polymorphism c)generally d)encapsulation
Objects
16.What graphics class would be best for drawing a square?
a)Square b)Polygon c)Line d)Rectangle
17.A user interface organized around visual elements and users actions is called a (n)
a)GUI b)application c)windower d)API
This is the code I have so far. How can I improve it?
def main():
infile = open("30075165.txt","r")
categories = raw_input("Please enter the four categories that are in the exam: ")
totalmarks = input("Please enter the total marks in the exam: ")
mc = input("Please enter the amount of multiple choice questions in the exam: ")
main()
|
[
"In the absence of additional information required to answer this specific problem, I will outline the general approach I would use to solve this problem. My solution would involve using LaTeX to typeset the exam and the probsoln package to define problems.\nThe probsoln package provides a format for defining and labeling problems and storing them in files. It also provides the command \\loadrandomproblems[dataset]{n}{filename} to load n randomly-selected problems from filename into dataset. This suggests storing problems by topic in several external files, e.g. loops.tex, functions.tex, etc. Then you could write a Python script to programmatically create the LaTeX source for the exam (exam.tex) based on user input.\nloops.tex\n\\newproblem{IPOpattern}{Which of the following is not a part of the IPO pattern?\n \\\\ a) Input \\quad b) Program \\quad c) Process \\quad d) Output}{The correct\n answer goes here.}\n\n\\newproblem{input}{In Python, getting user input is done with a special expression\n called: \\\\ a) for \\quad b) read \\quad c) simultaneous assignment \\quad\n d) input}{The correct answer goes here.}\n\nexam.tex\n\\documentclass{report}\n\\usepackage{probsoln}\n\\begin{document}\n\\hideanswers\n\\chapter{Loops}\n% randomly select 2 problems from loops.tex and add to\n% the data set called 'loops'\n\\loadrandomproblems[loops]{2}{loops}\n\n% Display the problems\n\\renewcommand{\\theenumi}{\\thechapter.\\arabic{enumi}}\n\\begin{enumerate}\n\\foreachproblem[loops]{\\item\\label{prob:\\thisproblemlabel}\\thisproblem}\n\\end{enumerate}\n% You may need to change \\theenumi back here\n\n\\chapter{Functions}\n% randomly select 2 problems from functions.tex and add to\n% the data set called 'functions'\n\\loadrandomproblems[functions]{2}{functions}\n\n% Display the problems\n\\renewcommand{\\theenumi}{\\thechapter.\\arabic{enumi}}\n\\begin{enumerate}\n\\foreachproblem[functions]{\\item\\label{prob:\\thisproblemlabel}\\thisproblem}\n\\end{enumerate}\n% You may need to change \\theenumi back here\n\n\\appendix\n\n\\chapter{Solutions}\n\\showanswers\n\\begin{itemize}\n\\foreachdataset{\\thisdataset}{%\n\\foreachproblem[\\thisdataset]{\\item[\\ref{prob:\\thisproblemlabel}]\\thisproblem}\n}\n\\end{itemize}\n\n\\end{document}\n\n",
"las3rjock has a good solution.\nYou could also move your input file to a SQLite database, using a normalised structure: e.g. Question table, Answer table (with FK to QuestionID), and generate a random answer based on the Question ID. You'll need a third table to keep track of the correct answer per question too.\n"
] |
[
3,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001587248_python.txt
|
Q:
what is the correct way to process 4 bits inside an octet in python
I'm writing an application to parse certain network packets. A packet field contains the protocol version number in an octet, so that 4 high bits are the 'major' and low 4 are the 'minor' version. Currently I am parsing them as follows, but am wondering if there is a prettier or more 'pythonic' way of doing it:
v = ord(data[17])
major = (v & int('11110000',2) ) >> 4
minor = v & int('00001111',2)
A:
You can write binary literals like this: 0b11110000
For your example I would probably use hex though:
v = ord(data[17])
major = (v & 0xF0) >> 4
minor = (v & 0x0F)
You might also want to use the struct module to break the packet into its components
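For example, a minimal struct sketch for that single byte (the offset 17 comes from the question):
import struct

(v,) = struct.unpack_from('!B', data, 17)   # one unsigned byte at offset 17
major = (v >> 4) & 0x0F
minor = v & 0x0F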
A:
Well-named functions are always a good way to hide ugliness and irrelevant complexity. This way the bit fiddling is confined to small and easily proven correct functions, while the higher-level code can refer to the purpose of the fiddling.
def high_nibble(byte):
"""Get 4 high order bits from a byte."""
return (byte >> 4) & 0xF
def low_nibble(byte):
"""Get 4 low order bits from a byte."""
return byte & 0xF
def parse_version(version_byte):
"""Get the major-minor version tuple from the version byte."""
return high_nibble(version_byte), low_nibble(version_byte)
major, minor = parse_version(version_byte)
A:
It would be neater to use literals instead of calling int. You can use binary literals or hex, for example:
major = (v & 0xf0) >> 4
minor = (v & 0x0f)
Binary literals only work for Python 2.6 or later and are of the form 0b11110000. If you are using Python 2.6 or later then you might want to look at the bytearray type as this will let you treat the data as binary and so not have to use the call to ord.
If you are parsing binary data and finding that you have to do lots of bit-wise manipulations, then you might like to try a more general solution, as there are some third-party modules that specialise in this. One is hachoir, and a lower-level alternative is bitstring (which I wrote). With it, your parsing would become something like:
major, minor = data.readlist('uint:4, uint:4')
which can be easier to manage if you're doing a lot of such reads.
A:
Just one hint, you'd better apply the mask for the major after shifting, in case it's a negative number and the sign is kept:
major = (v >> 4) & 0x0f
minor = (v & 0x0f)
|
what is the correct way to process 4 bits inside an octet in python
|
I'm writing an application to parse certain network packets. A packet field contains the protocol version number in an octet, so that 4 high bits are the 'major' and low 4 are the 'minor' version. Currently I am parsing them as follows, but am wondering if there is a prettier or more 'pythonic' way of doing it:
v = ord(data[17])
major = (v & int('11110000',2) ) >> 4
minor = v & int('00001111',2)
|
[
"You can write binary literals like this0b1111000\nFor your example I would proabbly use hex though\nv = ord(data[17])\nmajor = (v & 0xF0) >> 4\nminor = (v & 0x0F)\n\nYou might also want to use the struct module to break the packet into its components\n",
"Well named functions are always a good way to hide ugliness and irrelevant complexity. This way the bit-fiddling is confined to small and easily proven correct functions while the higher level code can refer to the purpose of the fiddling.\ndef high_nibble(byte):\n \"\"\"Get 4 high order bits from a byte.\"\"\"\n return (byte >> 4) & 0xF\n\ndef low_nibble(byte):\n \"\"\"Get 4 low order bits from a byte.\"\"\"\n return byte & 0xF\n\ndef parse_version(version_byte):\n \"\"\"Get the major-minor version tuple from the version byte.\"\"\"\n return high_nibble(version_byte), low_nibble(version_byte)\n\nmajor, minor = parse_version(version_byte)\n\n",
"It would be neater to use literals instead of calling int. You can use binary literals or hex, for example:\nmajor = (v & 0xf0) >> 4\nminor = (v & 0x0f)\n\nBinary literals only work for Python 2.6 or later and are of the form 0b11110000. If you are using Python 2.6 or later then you might want to look at the bytearray type as this will let you treat the data as binary and so not have to use the call to ord.\nIf you are parsing binary data and finding that you are having to do lots of bit wise manipulations then you might like to try a more general solution as there are some third-party module that specialise in this. One is hachoir, and a lower level alternative is bitstring (which I wrote). In this your parsing would become something like:\nmajor, minor = data.readlist('uint:4, uint:4')\n\nwhich can be easier to manage if you're doing a lot of such reads.\n",
"Just one hint, you'd better apply the mask for the major after shifting, in case it's a negative number and the sign is kept:\nmajor = (v >> 4) & 0x0f\nminor = (v & 0x0f)\n\n"
] |
[
3,
2,
1,
0
] |
[] |
[] |
[
"bit_manipulation",
"python"
] |
stackoverflow_0001587496_bit_manipulation_python.txt
|
Q:
How to clear cookies using python 2.6.x cookielib
It seems my previous description was not clear, so I am rewriting it.
Using python urllib2, I am automating a file upload task in my webapp, and I am using cookielib to store session information; I was able to successfully automate the file upload task. The problem is, when I change the login credentials and do not supply them, or supply wrong login credentials to the automated python script, it still processes the file upload successfully. In this case, it should actually fail.
All I want is, how to clear the cookielib generated cookies.
Below is the code snippet....
cookies = cookielib.CookieJar()
cookies.clear_session_cookies()
#cookies.clear() tried this as well
opener = urllib2.build_opener(SmartRedirectHandler,HTTPCookieProcessor(cookies),MultipartPostHandler)
urllib2.install_opener(opener)
login_req = urllib2.Request(login_url, login_params)
res = urllib2.urlopen(login_req)
#after login, do fileupload
fileupload_req = urllib2.Request(fileupload_url, params)
response = urllib2.urlopen(fileupload_req)
I tried using clear() and clear_session_cookies() but still cookies are not cleared.
A:
you need to install the opener that you have built, otherwise it will just keep using the default
A:
Instead of relying on cookies, I am restricting page access based on response headers. Now I am able to stop the file upload process when wrong credentials are supplied. Thanks guys.
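A sketch of that kind of guard with urllib2 (what signals a failed login is app-specific; the redirect-back-to-login check below is just one common pattern, so treat it as an assumption):
res = urllib2.urlopen(login_req)
if 'login' in res.geturl():   # many apps redirect to the login page on bad credentials
    raise SystemExit('login failed, aborting upload')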
|
How to clear cookies using python 2.6.x cookielib
|
It seems my previous description was not clear, so I am rewriting it.
Using python urllib2, I am automating a file upload task in my webapp, and I am using cookielib to store session information; I was able to successfully automate the file upload task. The problem is, when I change the login credentials and do not supply them, or supply wrong login credentials to the automated python script, it still processes the file upload successfully. In this case, it should actually fail.
All I want is, how to clear the cookielib generated cookies.
Below is the code snippet....
cookies = cookielib.CookieJar()
cookies.clear_session_cookies()
#cookies.clear() tried this as well
opener = urllib2.build_opener(SmartRedirectHandler,HTTPCookieProcessor(cookies),MultipartPostHandler)
urllib2.install_opener(opener)
login_req = urllib2.Request(login_url, login_params)
res = urllib2.urlopen(login_req)
#after login, do fileupload
fileupload_req = urllib2.Request(fileupload_url, params)
response = urllib2.urlopen(fileupload_req)
I tried using clear() and clear_session_cookies() but still cookies are not cleared.
|
[
"you need to install the opener that you have built, otherwise it will just keep using the default\n",
"Instead of relying on cookies, I am restricting page access based response headers. Now, I could able to stop the file upload process when wrong credentials supplied. Thanks guys.\n"
] |
[
0,
0
] |
[] |
[] |
[
"cookies",
"python"
] |
stackoverflow_0001530464_cookies_python.txt
|
Q:
PHP, Python, Ruby application with multiple RDBMS
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.
Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples, I could mention putting lots of Stored Procedures/Packages in the database where it makes sense. And those are different for every RDBMS.
By using only a limited set of features commonly available to many RDBMSes, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.
So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?
I am especially interested in hearing how you separate/use the queries that are specially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMSes: Oracle, DB2 and SQL Server, and for each of these you write a separate SQL statement in order to make use of all the features this RDBMS has to offer. How do you do it?
Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not?
A:
If you want to leverage the bells and whistles of various RDBMSes, you can certainly do it. Just apply standard OO Principles. Figure out what kind of API your persistence layer will need to provide.
You'll end up writing a set of isomorphic persistence adapter classes. From the perspective of your model code (which will be calling adapter methods to load and store data), these classes are identical. Writing good test coverage should be easy, and good tests will make life a lot easier. Deciding how much abstraction is provided by the persistence adapters is the trickiest part, and is largely application-specific.
As for whether this is worth the trouble: it depends. It's a good exercise if you've never done it before. It may be premature if you don't actually know for sure what your target databases are.
A good strategy might be to implement two persistence adapters to start. Let's say you expect the most common back end will be MySQL. Implement one adapter tuned for MySQL. Implement a second that uses your database abstraction library of choice, and uses only standard and widely available SQL features. Now you've got support for a ton of back ends (everything supported by your abstraction library of choice), plus tuned support for mySQL. If you decide you then want to provide an optimized adapter from Oracle, you can implement it at your leisure, and you'll know that your application can support swappable database back-ends.
A:
You cannot eat a cake and have it; choose one of the following options.
Use your database abstraction layer whenever you can, and in the rare cases when you have a need for a hand-made query (e.g. performance reasons) stick to the lowest common denominator and don't use stored procedures or any proprietary extensions that your database has to offer. In this case deploying the application on a different RDBMS should be trivial.
Use the full power of your expensive RDBMS, but take into account that your application won't be easily portable. When the need arises you will have to spend considerable effort on porting and maintenance. Of course a decent layered design encapsulating all the differences in a single module or class will help in this endeavor.
In other words you should consider how probable it is that your application will be deployed to multiple RDBMSes and make an informed choice.
A:
It would be great if code written for one platform would work on every other without any modification whatsoever, but this is usually not the case and probably never will be. What the current frameworks do is about all anyone can.
A:
It's even more "old fashioned" than modern ORMs, but doesn't ODBC address this issue?
|
PHP, Python, Ruby application with multiple RDBMS
|
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.
Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to advanced examples, I could mention putting lots of Stored Procedures/Packages in the database where it makes sense. And those are different for every RDBMS.
By using only a limited set of features commonly available to many RDBMSes, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.
So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?
I am especially interested in hearing how you separate/use the queries that are specially written for running on a single RDBMS. Say you've got a statement that should run on 3 RDBMSes: Oracle, DB2 and SQL Server, and for each of these you write a separate SQL statement in order to make use of all the features this RDBMS has to offer. How do you do it?
Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not?
|
[
"If you want to leverage the bells and whistles of various RDBMSes, you can certainly do it. Just apply standard OO Principles. Figure out what kind of API your persistence layer will need to provide. \nYou'll end up writing a set of isomorphic persistence adapter classes. From the perspective of your model code (which will be calling adapter methods to load and store data), these classes are identical. Writing good test coverage should be easy, and good tests will make life a lot easier. Deciding how much abstraction is provided by the persistence adapters is the trickiest part, and is largely application-specific.\nAs for whether this is worth the trouble: it depends. It's a good exercise if you've never done it before. It may be premature if you don't actually know for sure what your target databases are. \nA good strategy might be to implement two persistence adapters to start. Let's say you expect the most common back end will be MySQL. Implement one adapter tuned for MySQL. Implement a second that uses your database abstraction library of choice, and uses only standard and widely available SQL features. Now you've got support for a ton of back ends (everything supported by your abstraction library of choice), plus tuned support for mySQL. If you decide you then want to provide an optimized adapter from Oracle, you can implement it at your leisure, and you'll know that your application can support swappable database back-ends.\n",
"You cannot eat a cake and have it, choose on of the following options.\n\nUse your database abstraction layer whenever you can and in the rare cases when you have a need for a hand-made query (eg. performance reasons) stick to the lowest common denominator and don't use stored procedures or any proprietary extensions that you database has to offer. In this case deploying the application on a different RDBMS should be trivial.\nUse the full power of your expensive RDBMS, but take into account that your application won't be easily portable. When the need arises you will have to spend considerable effort on porting and maintenance. Of course a decent layered design encapsulating all the differences in a single module or class will help in this endeavor.\n\nIn other words you should consider how probable is it that your application will be deployed to multiple RDBMSes and make an informed choice.\n",
"It would be great if code written for one platform would work on every other without any modification whatsoever, but this is usually not the case and probably never will be. What the current frameworks do is about all anyone can. \n",
"It's even more \"old fashioned\" than modern ORMs, but doesn't ODBC address this issue?\n"
] |
[
2,
2,
0,
0
] |
[] |
[] |
[
"database",
"php",
"python",
"ruby_on_rails"
] |
stackoverflow_0001586008_database_php_python_ruby_on_rails.txt
|
Q:
Caching system for dynamically created files?
I have a web server that is dynamically creating various reports in several formats (pdf and doc files). The files require a fair amount of CPU to generate, and it is fairly common to have situations where two people are creating the same report with the same input.
Inputs:
raw data input as a string (equations, numbers, and
lists of words), arbitrary length, almost 99% will be less than about 200 words
the version of the report creation tool
When a user attempts to generate a report, I would like to check to see if a file already exists with the given input, and if so return a link to the file. If the file doesn't already exist, then I would like to generate it as needed.
What solutions are already out there? I've cached simple http requests before, but the keys were extremely simple (usually database id's)
If I have to do this myself, what is the best way? The input can be several hundred words, and I was wondering how I should go about transforming the strings into keys sent to the cache.
//entire input, uses too much memory, one to one mapping
cache['one two three four five six seven eight nine ten eleven...']
//short keys
cache['one two'] => 5 results, then I must narrow these down even more
Is this something that should be done in a database, or is it better done within the web app code (python in my case)?
Thank you everyone.
A:
This is what Apache is for.
Create a directory that will have the reports.
Configure Apache to serve files from that directory.
If the report exists, redirect to a URL that Apache will serve.
Otherwise, the report doesn't exist, so create it. Then redirect to a URL that Apache will serve.
There's no "hashing". You have a key ("a string (equations, numbers, and lists of words), arbitrary length, almost 99% will be less than about 200 words") and a value, which is a file. Don't waste time on a hash. You just have a long key.
You can compress this key somewhat by making a "slug" out of it: remove punctuation, replace spaces with _, that kind of thing.
You should create an internal surrogate key which is a simple integer.
You're simply translating a long key to a "report" which either exists as a file or will be created as a file.
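A sketch of the slug-plus-surrogate-key bookkeeping (the table layout and key format are illustrative assumptions):
import re, sqlite3

def slugify(key):
    # lowercase, collapse punctuation and whitespace runs into single underscores
    return re.sub(r'[^a-z0-9]+', '_', key.lower()).strip('_')

conn = sqlite3.connect('reports.db')
conn.execute('CREATE TABLE IF NOT EXISTS report '
             '(id INTEGER PRIMARY KEY, key TEXT UNIQUE, filename TEXT)')

def existing_report(raw_input, tool_version):
    key = '%s|v%s' % (slugify(raw_input), tool_version)
    row = conn.execute('SELECT filename FROM report WHERE key = ?',
                       (key,)).fetchone()
    return row[0] if row else None   # None means: generate the file, INSERT, redirect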
A:
The usual thing is to use a reverse proxy like Squid or Varnish
|
Caching system for dynamically created files?
|
I have a web server that is dynamically creating various reports in several formats (pdf and doc files). The files require a fair amount of CPU to generate, and it is fairly common to have situations where two people are creating the same report with the same input.
Inputs:
raw data input as a string (equations, numbers, and
lists of words), arbitrary length, almost 99% will be less than about 200 words
the version of the report creation tool
When a user attempts to generate a report, I would like to check to see if a file already exists with the given input, and if so return a link to the file. If the file doesn't already exist, then I would like to generate it as needed.
What solutions are already out there? I've cached simple http requests before, but the keys were extremely simple (usually database id's)
If I have to do this myself, what is the best way? The input can be several hundred words, and I was wondering how I should go about transforming the strings into keys sent to the cache.
//entire input, uses too much memory, one to one mapping
cache['one two three four five six seven eight nine ten eleven...']
//short keys
cache['one two'] => 5 results, then I must narrow these down even more
Is this something that should be done in a database, or is it better done within the web app code (python in my case)?
Thank you everyone.
|
[
"This is what Apache is for.\nCreate a directory that will have the reports.\nConfigure Apache to serve files from that directory.\nIf the report exists, redirect to a URL that Apache will serve.\nOtherwise, the report doesn't exist, so create it. Then redirect to a URL that Apache will serve.\n\nThere's no \"hashing\". You have a key (\"a string (equations, numbers, and lists of words), arbitrary length, almost 99% will be less than about 200 words\") and a value, which is a file. Don't waste time on a hash. You just have a long key.\nYou can compress this key somewhat by making a \"slug\" out of it: remove punctuation, replace spaces with _, that kind of thing.\nYou should create an internal surrogate key which is a simple integer.\nYou're simply translating a long key to a \"report\" which either exists as a file or will be created as a file. \n",
"The usual thing is to use a reverse proxy like Squid or Varnish\n"
] |
[
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001587991_python.txt
|
Q:
Using different versions of a python library in the same process
We've got a python library that we're developing. During development, I'd like to use some parts of that library in testing the newer versions of it. That is, use the stable code in order to test the development code. Is there any way of doing this in python?
Edit: To be more specific, we've got a library (LibA) that has many useful things. Also, we've got a testing library that uses LibA in order to provide some testing facilities (LibT). We want to test LibA using LibT, but because LibT depends on LibA, we'd rather it use a stable version of LibA while testing LibT (because we will change LibT to work with the newer LibA only once tests pass, etc.). So, when running unit tests, LibA-dev tests will use LibT code that depends on LibA-stable.
One idea we've come up with is calling the stable code using RPyC on a different process, but it's tricky to implement in an air-tight way (making sure it dies properly etc, and allowing multiple instances to execute at the same time on the same computer etc.).
Thanks
A:
If you "test" libA-dev using libT which depends on libA (stable), then you are not really testing libA-dev as it would behave in a production environment. The only way to really test libA-dev is to take the full plunge and make libT depend on libA-dev. If this breaks your unit tests then that is a good thing -- it is showing you what needs to be fixed.
If you don't have unit tests, then this is the time to start making them (using stable libA and libT first!).
I recommend using a "version control system" (e.g. bzr,hg,svn,git). Then you could make branches of your project, "stable" and "devA".
To work on branch devA, you would first run
export PYTHONPATH=/path/to/devA
By making sure the PYTHONPATH environment variable excludes the other branches, you're assured Python is using just the modules you desire.
When the time comes to merge code from dev --> stable, the version control software will provide an easy way to do that too.
Version control also allows you to be bolder -- trying major changes is not as scary. If things do not work out, reverting is super easy. Between that and the PYTHONPATH trick, you are always able to return to known, working code.
If you feel the above just simply is not going to work for you, and you must use libT-which-depends-on-libA to test libA-dev, then you'll need to rename all the modules and modify all the import statements to make a clear separation between libA-dev and libA. For example, if libA has a module called moduleA.py, then rename it moduleA_dev.py.
The command
rename -n 's/^(.*)\.py/$1_dev.py/' *.py
will add "_dev" to all the *.py files. (With the "-n" flag the rename command will only show you the contemplated renaming. Remove the "-n" to actually go through with it.)
To revert the renaming, run
rename -n 's/^(.*)_dev\.py/$1.py/' *.py
Next you'll need to change all references to moduleA to moduleA_dev within the code. The command
find /path/to/LibA-dev/ -type f -name '*.py' -exec sed -i 's/moduleA/moduleA_dev/g' {} \;
will alter every *.py file in LibA-dev, changing "moduleA" --> "moduleA_dev".
Be careful with this command. It is dangerous, because if you have a variable called moduleAB then it will get renamed moduleA_devB, while what you really wanted might be moduleAB_dev.
To revert this change (subject to the above caveat),
find /path/to/LibA-dev/ -type f -name '*.py' -exec sed -i 's/moduleA_dev/moduleA/g' {} \;
Once you separate the namespaces, you've broken the circular dependency. Once you are satisfied your libA-dev is good, you could change moduleA_dev.py --> moduleA.py and
change all references to moduleA_dev --> moduleA in your code.
A:
"We want to test LibA using LibT, but because LibT depends on LibA, we'd rather it to use a stable version of LibA, while testing LibT "
It doesn't make sense to use T + A to test A. What does make sense is the following.
LibA is really two things mashed together: A1 and A2.
T depends on A1.
What's really happening is that you're upgrading and testing A2, using T and A1.
If you decompose LibA into the parts that T requires and the other parts, you may be able to break this circular dependency.
A:
I am unsure of exactly how you need to set up your tests, but you may be able to use VirtualEnv to get both instances running alongside each other.
|
Using different versions of a python library in the same process
|
We've got a python library that we're developing. During development, I'd like to use some parts of that library in testing the newer versions of it. That is, use the stable code in order to test the development code. Is there any way of doing this in python?
Edit: To be more specific, we've got a library (LibA) that has many useful things. Also, we've got a testing library that uses LibA in order to provide some testing facilities (LibT). We want to test LibA using LibT, but because LibT depends on LibA, we'd rather it to use a stable version of LibA, while testing LibT (because we will change LibT to work with newer LibA only once tests pass etc.). So, when running unit-tests, LibA-dev tests will use LibT code that depends on LibA-stable.
One idea we've come up with is calling the stable code using RPyC on a different process, but it's tricky to implement in an air-tight way (making sure it dies properly etc, and allowing multiple instances to execute at the same time on the same computer etc.).
Thanks
|
[
"If you \"test\" libA-dev using libT which depends on libA (stable), then you are not really testing libA-dev as it would behave in a production environment. The only way to really test libA-dev is to take the full plunge and make libT depend on libA-dev. If this breaks your unit tests then that is a good thing -- it is showing you what needs to be fixed.\nIf you don't have unit tests, then this is the time to start making them (using stable libA and libT first!).\nI recommend using a \"version control system\" (e.g. bzr,hg,svn,git). Then you could make branches of your project, \"stable\" and \"devA\".\nTo work on branch devA, you would first run\nexport PYTHONPATH=/path/to/devA\n\nBy making sure the PYTHONPATH environment variable excludes the other branches, you're assured Python is using just the modules you desire.\nWhen the time comes to merge code from dev --> stable, the version control software will provide an easy way to do that too. \nVersion control also allows you to be bolder -- trying major changes is not as scary. If things do not work out, reverting is super easy. Between that and the PYTHONPATH trick, you are always able to return to known, working code.\nIf you feel the above just simply is not going to work for you, and you must use libT-which-depends-on-libA to test libA-dev, then you'll need to rename all the modules and modify all the import statements to make a clear separation between libA-dev and libA. For example, if libA has a module called moduleA.py, then rename it moduleA_dev.py. \nThe command\nrename -n 's/^(.*)\\.py/$1_dev.py/' *.py\n\nwill add \"_dev\" to all the *.py files. (With the \"-n\" flag the rename command will only show you the contemplated renaming. Remove the \"-n\" to actually go through with it.)\nTo revert the renaming, run\nrename -n 's/^(.*)_dev\\.py/$1.py/' *.py\n\nNext you'll need to change all references to moduleA to moduleA_dev within the code. The command\nfind /path/to/LibA-dev/ -type f -name '*.py' -exec sed -i 's/moduleA/moduleA_dev/g' {} \\;\n\nwill alter every *.py file in LibA-dev, changing \"moduleA\" --> \"moduleA_dev\".\nBe careful with this command. It is dangerous, because if you have a variable called moduleAB then it will get renamed moduleA_devB, while what you really wanted might be moduleAB_dev.\nTo revert this change (subject to the above caveat),\nfind /path/to/LibA-dev/ -type f -name '*.py' -exec sed -i 's/moduleA_dev/moduleA/g' {} \\;\n\nOnce you separate the namespaces, you've broken the circular dependency. Once you are satisfied your libA-dev is good, you could change moduleA_dev.py --> moduleA.py and\nchanges all references to moduleA_dev --> moduleA in your code.\n",
"\"We want to test LibA using LibT, but because LibT depends on LibA, we'd rather it to use a stable version of LibA, while testing LibT \"\nIt doesn't make sense to use T + A to test A. What does make sense is the following.\nLibA is really two things mashed together: A1 and A2.\nT depends on A1.\nWhat's really happening is that you're upgrading and testing A2, using T and A1. \nIf you decompose LibA into the parts that T requires and the other parts, you may be able to break this circular dependency.\n",
"I am unsure of exactly how you need to set up your tests, but you may be able to use VirtualEnv to get both instances running alongside eachother.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"circular_dependency",
"dependencies",
"python",
"testing"
] |
stackoverflow_0001587776_circular_dependency_dependencies_python_testing.txt
|
Q:
how to determine if webpage has been modified
I have snapshots of multiple webpages taken at 2 times. What is a reliable method to determine which webpages have been modified?
I can't rely on something like an RSS feed, and I need to ignore minor noise like date text.
Ideally I am looking for a Python solution, but an intuitive algorithm would also be great.
Thanks!
A:
Well, first you need to decide what is noise and what isn't. You can use an HTML parser like BeautifulSoup to remove the noise, pretty-print the result, and compare it as a string.
If you are looking for an automatic solution, you can use difflib.SequenceMatcher to calculate the differences between the pages, calculate the similarity and compare it to a threshold.
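For example, a minimal sketch of the difflib approach; snapshot_1 and snapshot_2 are placeholders for the two page snapshots (after noise removal), and the 0.98 threshold is just an assumed starting point to tune:
import difflib

def similarity(old_text, new_text):
    # ratio() is 1.0 for identical input and drops as the pages diverge
    return difflib.SequenceMatcher(None, old_text, new_text).ratio()

# snapshot_1 / snapshot_2: the two saved page sources (placeholders)
if similarity(snapshot_1, snapshot_2) < 0.98:
    print "page was modified"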
A:
The solution really depends on whether you are scraping a specific site or trying to create a program which will work for any site.
You can see which areas change frequently doing something like this:
diff <(curl http://stackoverflow.com/questions/) <(sleep 15; curl http://stackoverflow.com/questions/)
If you're only worried about a single site, you can create some sed expressions to filter out stuff like time stamps. You can repeat until no difference is shown for small fields.
The general problem is much harder, and I would suggest comparing the total word count on a page for starters.
A:
Something like Levenshtein Distance could come in handy if you set the threshold of the changes to a distance that ignored the right amount of noise for you.
|
how to determine if webpage has been modified
|
I have snapshots of multiple webpages taken at 2 times. What is a reliable method to determine which webpages have been modified?
I can't rely on something like an RSS feed, and I need to ignore minor noise like date text.
Ideally I am looking for a Python solution, but an intuitive algorithm would also be great.
Thanks!
|
[
"Well, first you need to decide what is noise and what isn't. You can use a HTML parser like BeautifulSoup to remove the noise, pretty-print the result, and compare it as a string.\nIf you are looking for an automatic solution, you can use difflib.SequenceMatcher to calculate the differences between the pages, calculate the similarity and compare it to a threshold.\n",
"The solution really depends if you are scraping a specific site, or are trying to create a program which will work for any site. \nYou can see which areas change frequently doing something like this:\n diff <(curl http://stackoverflow.com/questions/) <(sleep 15; curl http://stackoverflow.com/questions/)\n\nIf your only worried about a single site, you can create some sed expressions to filter out stuff like time stamps. You can repeat until no difference is shown for small fields. \nThe general problem is much harder, and I would suggest comparing the total word count on a page for starters.\n",
"Something like Levenshtein Distance could come in handy if you set the threshold of the changes to a distance that ignored the right amount of noise for you.\n"
] |
[
8,
3,
0
] |
[
"just take snapshots of the files with MD5 or SHA1...if the values differ the next time you check, then they are modified.\n"
] |
[
-1
] |
[
"diff",
"python",
"snapshot",
"webpage"
] |
stackoverflow_0001587902_diff_python_snapshot_webpage.txt
|
Q:
Light-weight renderer HTML with CSS in Python
Sorry, perhaps I haven't described the problem well the first time. All your answers are interesting, but most of them are almost full-featured web browsers; my task is much simpler.
I'm planning to write a GUI application using one of the GUI frameworks available on Linux (I haven't yet chosen one). I shall use HTML in my application to render text with some attributes — different fonts etc., which are stored in CSS — into one of my application frames.
The HTML shall be generated by my application, so the only task is to render an HTML/CSS string. Is there any widget which can do only that rendering and nothing more — no history, no bookmarks, no URL-loading etc.? If there isn't I shall use one of those you advised — it's ok — but I'm just interested whether there is just an HTML renderer without any extra features.
A:
You should use a UI framework:
Qt: The simplest class to use would be QWebView (a minimal sketch follows this list)
Gtk: pywebkitgtk would be the best answer, but you can find others on the PyGTK page.
In Tk there is the TkHtml widget from here
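Here is a minimal sketch of the Qt option, assuming PyQt4 built with WebKit support; the HTML/CSS string is just an example:
import sys
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebView

app = QApplication(sys.argv)
view = QWebView()
# Render an in-memory HTML/CSS string -- no history, bookmarks or URL bar
view.setHtml("<html><head><style>p { color: green; font-family: serif; }"
             "</style></head><body><p>Hello, styled world</p></body></html>")
view.show()
sys.exit(app.exec_())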
Another option is to open the OS default web browser through something like this:
import webbrowser
url = 'http://www.python.org'
# Open URL in a new tab, if a browser window is already open.
webbrowser.open_new_tab(url + '/doc')
# Open URL in new window, raising the window if possible.
webbrowser.open_new(url)
You can find more info about the webbrowser module here. I think that the simplest way would be to use the OS browser if you are looking for something very light-weight, since it does not depend on a framework and it would work on all platforms. Using Tk may be another option that is light and does not require installing a 3rd-party framework.
A:
Flying Saucer Project -- an XHTML renderer.
No, it's not Python. It's -- however -- trivially called from Python.
A:
Maybe HulaHop could be interesting for you (can also be combined with Pyjamas). The Mozilla Prism Project might also be relevant.
|
Light-weight renderer HTML with CSS in Python
|
Sorry, perhaps I haven't described the problem well the first time. All your answers are interesting, but most of them are almost full-featured web browsers; my task is much simpler.
I'm planning to write a GUI application using one of the GUI frameworks available on Linux (I haven't yet chosen one). I shall use HTML in my application to render text with some attributes — different fonts etc., which are stored in CSS — into one of my application frames.
The HTML shall be generated by my application, so the only task is to render an HTML/CSS string. Is there any widget which can do only that rendering and nothing more — no history, no bookmarks, no URL-loading etc.? If there isn't I shall use one of those you advised — it's ok — but I'm just interested whether there is just an HTML renderer without any extra features.
|
[
"You should use a UI framework:\n\nQt: The simplest class to use would be QWebView\nGtk: pywebkitgtk would be the best answer, but you can find others in the PyGTK page.\nIn Tk is the TkHtml widget from here\n\nAn other option is to open the OS default web browser through something like this:\nimport webbrowser\nurl = 'http://www.python.org'\n\n# Open URL in a new tab, if a browser window is already open.\nwebbrowser.open_new_tab(url + '/doc')\n\n# Open URL in new window, raising the window if possible.\nwebbrowser.open_new(url)\n\nYou can find more info about the webbrowser module here. I think that the simplest way would be to use the os browser if you are looking for something very light-weight since it does not depend on a framework and it would work in all platforms. Using Tk may be an other option that is light and will not require to install a 3rd party framework.\n",
"Flying Saucer Project -- an XHTML renderer.\nNo, it's not Python. It's -- however -- trivially called from Python.\n",
"Maybe HulaHop could be interesting for you (can also be combined with Pyjamas). The Mozilla Prism Project might also be relevant.\n"
] |
[
15,
0,
0
] |
[] |
[] |
[
"browser",
"html",
"python"
] |
stackoverflow_0001587637_browser_html_python.txt
|
Q:
What are the use cases for non relational datastores?
I'm looking at using CouchDB for one project and the GAE app engine datastore in the other. For relational stuff I tend to use postgres, although I much prefer an ORM.
Anyway, what use cases suit non relational datastores best?
A:
Here is a nice little article (spread over three pages) that covers the use-case for non-relational databases.
http://www.readwriteweb.com/enterprise/2009/02/is-the-relational-database-doomed.php
In a nutshell, when you need massive scalability then you probably need a non-relational db. Of course, you may well end up writing a lot more code to do what a relational db does for you, but if you really need that scalability, then the relational db option is usually more expensive, and very tricky to architect properly.
A:
Consider the situation where you have many entity types but few instances of each entity. In this case you will have many tables each with a few records so a relational approach is not suitable.
A:
In some cases they are simply nice. ZODB is a Python-only object database that is so well integrated with Python that you can simply forget it's there. You don't have to bother about it most of the time.
|
What are the use cases for non relational datastores?
|
I'm looking at using CouchDB for one project and the GAE app engine datastore in the other. For relational stuff I tend to use postgres, although I much prefer an ORM.
Anyway, what use cases suit non relational datastores best?
|
[
"Here is a nice little article (spread over three pages) that covers the use-case for non-relational databases.\nhttp://www.readwriteweb.com/enterprise/2009/02/is-the-relational-database-doomed.php\nIn a nutshell, when you need massive scalability then you probably need a non-realtional db. Of course, you may well end up writing a lot more code to do what a relational db does for you, but if you really need that scalability, then the relational db option is usually more expensive, and very tricky to architect properly.\n",
"Consider the situation where you have many entity types but few instances of each entity. In this case you will have many tables each with a few records so a relational approach is not suitable. \n",
"In some cases that are simply nice. ZODB is a Python-only object database, that is so well-integrated with Python that you can simply forget that it's there. You don't have to bother about it, most of the time.\n"
] |
[
7,
2,
0
] |
[] |
[] |
[
"couchdb",
"google_app_engine",
"python"
] |
stackoverflow_0001588708_couchdb_google_app_engine_python.txt
|
Q:
A good data model for finding a user's favorite stories
Original Design
Here's how I originally had my Models set up:
class UserData(db.Model):
user = db.UserProperty()
favorites = db.ListProperty(db.Key) # list of story keys
# ...
class Story(db.Model):
title = db.StringProperty()
# ...
On every page that displayed a story I would query UserData for the current user:
user_data = UserData.all().filter('user =', users.get_current_user()).get()
story_is_favorited = (story in user_data.favorites)
New Design
After watching this talk: Google I/O 2009 - Scalable, Complex Apps on App Engine, I wondered if I could set things up more efficiently.
class FavoriteIndex(db.Model):
favorited_by = db.StringListProperty()
The Story Model is the same, but I got rid of the UserData Model. Each instance of the new FavoriteIndex Model has a Story instance as a parent. And each FavoriteIndex stores a list of user ids in its favorited_by property.
If I want to find all of the stories that have been favorited by a certain user:
index_keys = FavoriteIndex.all(keys_only=True).filter('favorited_by =', users.get_current_user().user_id())
story_keys = [k.parent() for k in index_keys]
stories = db.get(story_keys)
This approach avoids the serialization/deserialization that's otherwise associated with the ListProperty.
Efficiency vs Simplicity
I'm not sure how efficient the new design is, especially after a user decides to favorite 300 stories, but here's why I like it:
A favorited story is associated with a user, not with her user data
On a page where I display a story, it's pretty easy to ask the story if it's been favorited (without calling up a separate entity filled with user data).
fav_index = FavoriteIndex.all().ancestor(story).get()
fav_of_current_user = users.get_current_user().user_id() in fav_index.favorited_by
It's also easy to get a list of all the users who have favorited a story (using the method in #2)
Is there an easier way?
Please help. How is this kind of thing normally done?
A:
What you've described is a good solution. You can optimise it further, however: For each favorite, create a 'UserFavorite' entity as a child entity of the relevant Story entry (or equivalently, as a child entity of a UserInfo entry), with the key name set to the user's unique ID. This way, you can determine if a user has favorited a story with a simple get:
UserFavorite.get_by_key_name(user_id, parent=a_story)
get operations are 3 to 5 times faster than queries, so this is a substantial improvement.
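A sketch of what that could look like with google.appengine.ext.db; UserFavorite is the hypothetical model named above, and a_story/user_id are assumed to be a Story entity and the current user's id:
from google.appengine.ext import db

class UserFavorite(db.Model):
    pass  # the entity's existence is the flag; no properties are needed

# mark a story as favorited by the current user
UserFavorite(parent=a_story, key_name=user_id).put()

# check with a fast get instead of a query
is_fav = UserFavorite.get_by_key_name(user_id, parent=a_story) is not None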
A:
I don't want to tackle your actual question, but here's a very small tip: you can replace this code:
if story in user_data.favorites:
story_is_favorited = True
else:
story_is_favorited = False
with this single line:
story_is_favorited = (story in user_data.favorites)
You don't even need to put the parentheses around the story in user_data.favorites if you don't want to; I just think that's more readable.
A:
You can make the favorite index like a join on the two models
class FavoriteIndex(db.Model):
user = db.UserProperty()
story = db.ReferenceProperty()
or
class FavoriteIndex(db.Model):
user = db.UserProperty()
story = db.StringListProperty()
Then your query by user returns one FavoriteIndex object for each story the user has favorited
You can also query by story to see how many users have Favorited it.
You don't want to be scanning through anything unless you know it is limited to a small size
A:
With your new design you can look up whether a user has favorited a certain story with a query.
You don't need the UserFavorite class entities.
It is a keys_only query, so not as fast as a get(key), but faster than a normal query.
The FavoriteIndex classes all have the same key_name='favs'.
You can filter based on __key__.
a_story = ......
a_user_id = users.get_current_user().user_id()
favIndexKey = db.Key.from_path('Story', a_story.key().id_or_name(), 'FavoriteIndex', 'favs')
doesFavStory = FavoriteIndex.all(keys_only=True).filter('__key__ =', favIndexKey).filter('favorited_by =', a_user_id).get()
If you use multiple FavoriteIndex entities as children of a Story you can use the ancestor filter
doesFavStory = FavoriteIndex.all(keys_only=True).ancestor(a_story).filter('favorited_by =', a_user_id).get()
|
A good data model for finding a user's favorite stories
|
Original Design
Here's how I originally had my Models set up:
class UserData(db.Model):
user = db.UserProperty()
favorites = db.ListProperty(db.Key) # list of story keys
# ...
class Story(db.Model):
title = db.StringProperty()
# ...
On every page that displayed a story I would query UserData for the current user:
user_data = UserData.all().filter('user =', users.get_current_user()).get()
story_is_favorited = (story in user_data.favorites)
New Design
After watching this talk: Google I/O 2009 - Scalable, Complex Apps on App Engine, I wondered if I could set things up more efficiently.
class FavoriteIndex(db.Model):
favorited_by = db.StringListProperty()
The Story Model is the same, but I got rid of the UserData Model. Each instance of the new FavoriteIndex Model has a Story instance as a parent. And each FavoriteIndex stores a list of user ids in its favorited_by property.
If I want to find all of the stories that have been favorited by a certain user:
index_keys = FavoriteIndex.all(keys_only=True).filter('favorited_by =', users.get_current_user().user_id())
story_keys = [k.parent() for k in index_keys]
stories = db.get(story_keys)
This approach avoids the serialization/deserialization that's otherwise associated with the ListProperty.
Efficiency vs Simplicity
I'm not sure how efficient the new design is, especially after a user decides to favorite 300 stories, but here's why I like it:
A favorited story is associated with a user, not with her user data
On a page where I display a story, it's pretty easy to ask the story if it's been favorited (without calling up a separate entity filled with user data).
fav_index = FavoriteIndex.all().ancestor(story).get()
fav_of_current_user = users.get_current_user().user_id() in fav_index.favorited_by
It's also easy to get a list of all the users who have favorited a story (using the method in #2)
Is there an easier way?
Please help. How is this kind of thing normally done?
|
[
"What you've described is a good solution. You can optimise it further, however: For each favorite, create a 'UserFavorite' entity as a child entity of the relevant Story entry (or equivalently, as a child entity of a UserInfo entry), with the key name set to the user's unique ID. This way, you can determine if a user has favorited a story with a simple get:\nUserFavorite.get_by_name(user_id, parent=a_story)\n\nget operations are 3 to 5 times faster than queries, so this is a substantial improvement.\n",
"I don't want to tackle your actual question, but here's a very small tip: you can replace this code:\nif story in user_data.favorites:\n story_is_favorited = True\nelse:\n story_is_favorited = False\n\nwith this single line:\nstory_is_favorited = (story in user_data.favorites)\n\nYou don't even need to put the parentheses around the story in user_data.favorites if you don't want to; I just think that's more readable.\n",
"You can make the favorite index like a join on the two models\nclass FavoriteIndex(db.Model):\n user = db.UserProperty()\n story = db.ReferenceProperty()\n\nor\nclass FavoriteIndex(db.Model):\n user = db.UserProperty()\n story = db.StringListProperty()\n\nThen your query on by user returns one FavoriteIndex object for each story the user has favorited\nYou can also query by story to see how many users have Favorited it.\nYou don't want to be scanning through anything unless you know it is limited to a small size\n",
"With your new Design you can lookup if a user has favorited a certain story with a query.\nYou don't need the UserFavorite class entities.\nIt is a keys_only query so not as fast as a get(key) but faster then a normal query.\nThe FavoriteIndex classes all have the same key_name='favs'.\nYou can filter based on __key__. \na_story = ......\na_user_id = users.get_current_user().user_id()\nfavIndexKey = db.Key.from_path('Story', a_story.key.id_or_name(), 'FavoriteIndex', 'favs')\ndoesFavStory = FavoriteIndex.all(keys_only=True).filter('__key__ =', favIndexKey).filter('favorited_by =', a_user_id).get()\n\nIf you use multiple FavoriteIndex as childs of a Story you can use the ancestor filter\ndoesFavStory = FavoriteIndex.all(keys_only=True).ancestor(a_story).filter('favorited_by =', a_user_id).get()\n\n"
] |
[
2,
1,
1,
1
] |
[] |
[] |
[
"google_app_engine",
"google_cloud_datastore",
"python"
] |
stackoverflow_0001562131_google_app_engine_google_cloud_datastore_python.txt
|
Q:
Python XMLRPC with concurrent requests
I'm looking for a way to prevent multiple hosts from issuing simultaneous commands to a Python XMLRPC listener. The listener is responsible for running scripts to perform tasks on that system that would fail if multiple users tried to issue these commands at the same time. Is there a way I can block all incoming requests until the single instance has completed?
A:
I think the Python SimpleXMLRPCServer module is what you want. I believe the default behavior of that module is to block new requests while the current request is being processed. The default behavior gave me lots of trouble, and I changed it by mixing in the ThreadingMixIn class so that my XML-RPC server could respond to multiple requests at the same time.
class RPCThreading(SocketServer.ThreadingMixIn, SimpleXMLRPCServer.SimpleXMLRPCServer):
pass
If I understand your question correctly, SimpleXMLRPCServer is the solution. Just use it directly.
A:
Can you have another communication channel? If yes, then have a "call me back when it is my turn" protocol running between the server and the clients.
In other words, each client would register its intention to issue requests to the server and the said server would "callback" the next-up client when it is ready.
A:
There are several choices:
Use a single-process, single-thread server like SimpleXMLRPCServer to process requests sequentially.
Use threading.Lock() in a threaded server (a sketch follows this list).
Use some external locking mechanism (like the lockfile module or the GET_LOCK() function in MySQL) in a multiprocess server.
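For example, here is a minimal sketch of option 2, combining the ThreadingMixIn server from the first answer with a lock so only one task runs at a time; the host, port and run_task function are placeholders:
import threading
import SocketServer
import SimpleXMLRPCServer

lock = threading.Lock()

class RPCThreading(SocketServer.ThreadingMixIn,
                   SimpleXMLRPCServer.SimpleXMLRPCServer):
    pass

def run_task(name):
    with lock:  # concurrent callers block here until the task finishes
        # ... run the script that must not execute concurrently ...
        return "finished %s" % name

server = RPCThreading(("localhost", 8000))
server.register_function(run_task)
server.serve_forever()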
|
Python XMLRPC with concurrent requests
|
I'm looking for a way to prevent multiple hosts from issuing simultaneous commands to a Python XMLRPC listener. The listener is responsible for running scripts to perform tasks on that system that would fail if multiple users tried to issue these commands at the same time. Is there a way I can block all incoming requests until the single instance has completed?
|
[
"I think python SimpleXMLRPCServer module is what you want. I believe the default behavior of that model is blocking new requests when current request is processing. The default behavior gave me lots of trouble and I changed that behavior by mix in ThreadingMixIn class so that my xmlrpc server could respond multiple requests in the same time.\nclass RPCThreading(SocketServer.ThreadingMixIn, SimpleXMLRPCServer.SimpleXMLRPCServer):\n pass\n\nIf I understand your question correctly, SimpleXMLRPCServer is the solution. Just use it directly.\n",
"Can you have another communication channel? If yes, then have a \"call me back when it is my turn\" protocol running between the server and the clients.\nIn other words, each client would register its intention to issue requests to the server and the said server would \"callback\" the next-up client when it is ready.\n",
"There are several choices:\n\nUse single-process-single-thread server like SimpleXMLRPCServer to process requests subsequently.\nUse threading.Lock() in threaded server.\nYou some external locking mechanism (like lockfile module or GET_LOCK() function in mysql) in multiprocess server.\n\n"
] |
[
17,
0,
0
] |
[] |
[] |
[
"python",
"xml_rpc"
] |
stackoverflow_0001589150_python_xml_rpc.txt
|
Q:
Python: Difference between 'global' & globals().update(var)
What is the difference between initializing a variable as global var or calling globals().update(var).
Thanks
A:
When you say
global var
you are telling Python that var is the same var that was defined in a global context.
You would use it in the following way:
var=0
def f():
global var
var=1
f()
print(var)
# 1 <---- the var outside the "def f" block is affected by calling f()
Without the global statement, the var inside the "def f" block would be a local variable,
and setting its value would have no effect on the var outside the "def f" block.
var=0
def f():
var=1
f()
print(var)
# 0 <---- the var outside the "def f" block is unaffected
When you say globals.update(var) I am guessing you actually mean globals().update(var).
Let's break it apart.
globals() returns a dict object. The dict's keys are the names of objects, and the
dict's values are the associated object's values.
Every dict has a method called "update". So globals().update() is a call to this method.
The update method expects at least one argument, and that argument is expected to be a dict. If you tell Python
globals().update(var)
then var had better be a dict, and you are telling Python to update the globals() dict with the contents of the var dict.
For example:
#!/usr/bin/env python
# Here is the original globals() dict
print(globals())
# {'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__file__': '/home/unutbu/pybin/test.py', '__doc__': None}
var={'x':'Howdy'}
globals().update(var)
# Now the globals() dict contains both var and 'x'
print(globals())
# {'var': {'x': 'Howdy'}, 'x': 'Howdy', '__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__file__': '/home/unutbu/pybin/test.py', '__doc__': None}
# Lo and behold, you've defined x without saying x='Howdy' !
print(x)
Howdy
|
Python: Difference between 'global' & globals().update(var)
|
What is the difference between initializing a variable as global var or calling globals().update(var).
Thanks
|
[
"When you say\nglobal var\n\nyou are telling Python that var is the same var that was defined in a global context.\nYou would use it in the following way:\nvar=0\ndef f():\n global var\n var=1\nf()\nprint(var)\n# 1 <---- the var outside the \"def f\" block is affected by calling f()\n\nWithout the global statement, the var inside the \"def f\" block would be a local variable,\nand setting its value would have no effect on the var outside the \"def f\" block.\nvar=0\ndef f():\n var=1\nf()\nprint(var)\n# 0 <---- the var outside the \"def f\" block is unaffected\n\nWhen you say globals.update(var) I am guessing you actually mean globals().update(var).\nLet's break it apart.\nglobals() returns a dict object. The dict's keys are the names of objects, and the\ndict's values are the associated object's values.\nEvery dict has a method called \"update\". So globals().update() is a call to this method.\nThe update method expects at least one argument, and that argument is expected to be a dict. If you tell Python\nglobals().update(var)\n\nthen var had better be a dict, and you are telling Python to update the globals() dict with the contents of the var dict.\nFor example:\n#!/usr/bin/env python\n\n# Here is the original globals() dict\nprint(globals())\n# {'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__file__': '/home/unutbu/pybin/test.py', '__doc__': None}\n\nvar={'x':'Howdy'}\nglobals().update(var)\n\n# Now the globals() dict contains both var and 'x'\nprint(globals())\n# {'var': {'x': 'Howdy'}, 'x': 'Howdy', '__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__file__': '/home/unutbu/pybin/test.py', '__doc__': None}\n\n# Lo and behold, you've defined x without saying x='Howdy' !\nprint(x)\nHowdy\n\n"
] |
[
29
] |
[] |
[] |
[
"global",
"python",
"variables"
] |
stackoverflow_0001589968_global_python_variables.txt
|
Q:
Must two SQLAlchemy declarative models share the same declarative_base()?
Is it necessary for two SQLAlchemy models to inherit from the same instance of declarative_base() if they must participate in the same Session? This is likely to be the case when importing two or more modules that define SQLAlchemy models.
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class SomeClass(Base):
__tablename__ = 'some_table'
id = Column(Integer, primary_key=True)
name = Column(String(50))
Base2 = declarative_base()
class AnotherClass(Base2):
__tablename__ = 'another_table'
id = Column(Integer, primary_key=True)
name = Column(String(50))
A:
I successfully use different declarative bases in a single session. This can be useful when using several databases: each base is created with its own metadata, and each metadata is bound to a separate database. Some of your declarative bases could define additional methods, or they could use another metaclass to install extensions.
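A sketch of that setup; the engine URLs are placeholders, and the binds argument is what routes each class to its own database within one session:
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

engine_a = create_engine('sqlite:///a.db')
engine_b = create_engine('sqlite:///b.db')

Base = declarative_base()
Base2 = declarative_base()

class SomeClass(Base):
    __tablename__ = 'some_table'
    id = Column(Integer, primary_key=True)

class AnotherClass(Base2):
    __tablename__ = 'another_table'
    id = Column(Integer, primary_key=True)

Base.metadata.create_all(engine_a)   # each base's metadata gets its own db
Base2.metadata.create_all(engine_b)

Session = sessionmaker(binds={SomeClass: engine_a, AnotherClass: engine_b})
session = Session()
session.add_all([SomeClass(), AnotherClass()])
session.commit()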
A:
Separate Base classes will work just fine.
You'll have to be careful when they are using different database connections, in that case you can't use joins across the two databases - every query needs to go to one database.
|
Must two SQLAlchemy declarative models share the same declarative_base()?
|
Is it necessary for two SQLAlchemy models to inherit from the same instance of declarative_base() if they must participate in the same Session? This is likely to be the case when importing two or more modules that define SQLAlchemy models.
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class SomeClass(Base):
__tablename__ = 'some_table'
id = Column(Integer, primary_key=True)
name = Column(String(50))
Base2 = declarative_base()
class AnotherClass(Base2):
__tablename__ = 'another_table'
id = Column(Integer, primary_key=True)
name = Column(String(50))
|
[
"I successfully use different declarative bases in single session. This can be useful when using several databases: each base is created with own metadata and each metadata is bound to separate database. Some of your declarative bases could define additional methods or they could use another metaclass to install extensions.\n",
"Separate Base classes will work just fine.\nYou'll have to be careful when they are using different database connections, in that case you can't use joins across the two databases - every query needs to go to one database.\n"
] |
[
5,
4
] |
[] |
[] |
[
"orm",
"python",
"sqlalchemy"
] |
stackoverflow_0001589748_orm_python_sqlalchemy.txt
|
Q:
How do I write a unix filter in python?
I want to write a program that reads stdin (unbuffered) and writes stdout (unbuffered) doing some trivial char-by-char transformation. For the sake of the example let's say I want to remove all chars x from stdin.
A:
Read from sys.stdin and write to sys.stdout (or use print). Your example program:
import sys
for line in sys.stdin:
print line.replace("x", ""),
There isn't a standard way to make stdin unbuffered, and you don't want that. Let the OS buffer it.
A:
You can use the fileinput module, which lets you process inputs like the Perl diamond operator would. From the docs:
import fileinput
for line in fileinput.input():
process(line)
where process does something like print line.replace('x','').
You can follow this StackOverflow question for how to unbuffer stdout. Or you can just call sys.stdout.flush() after each print.
A:
I don't know exactly what you mean by buffered in this context, but it is pretty simple to do what you are asking...
so_gen.py (generating a constant stream that we can watch):
import time
import sys
while True:
for char in 'abcdefx':
sys.stdout.write(char)
sys.stdout.flush()
time.sleep(0.1)
so_filter.py (doing what you ask):
import sys
while True:
char = sys.stdin.read(1)
if not char:
break
if char != 'x':
sys.stdout.write(char)
sys.stdout.flush()
Try running python so_gen.py | python so_filter.py to see what it does.
A:
Use the -u switch for the Python interpreter to make all reads and writes unbuffered. Similar to setting $| = 1; in Perl. Then proceed as you would, reading a line, modifying it, and then printing it. sys.stdout.flush() is not required.
#!/path/to/python -u
import sys
for line in sys.stdin:
process_line(line)
|
How do I write a unix filter in python?
|
I want to write a program that reads stdin (unbuffered) and writes stdout (unbuffered) doing some trivial char-by-char transformation. For the sake of the example let's say I want to remove all chars x from stdin.
|
[
"Read from sys.stdin and write to sys.stdout (or use print). Your example program:\nimport sys\n\nfor line in sys.stdin:\n print line.replace(\"x\", \"\"),\n\nThere isn't a standard way to make stdin unbuffered, and you don't want that. Let the OS buffer it.\n",
"You can use the fileinput class, which lets you process inputs like the Perl diamond operator would. From the docs:\nimport fileinput\nfor line in fileinput.input():\n process(line)\n\nwhere process does something like print line.replace('x','').\nYou can follow this StackOverflow question for how to unbuffer stdout. Or you can just call sys.stdout.flush() after each print.\n",
"I don't know exactly what you mean by buffered in this context, but it is pretty simple to do what you are asking...\nso_gen.py (generating a constant stream that we can watch):\nimport time\nimport sys\nwhile True:\n for char in 'abcdefx':\n sys.stdout.write(char)\n sys.stdout.flush()\n time.sleep(0.1)\n\nso_filter.py (doing what you ask):\nimport sys\nwhile True:\n char = sys.stdin.read(1)\n if not char:\n break\n if char != 'x':\n sys.stdout.write(char)\n sys.stdout.flush()\n\nTry running python so_gen.py | python so_filter.py to see what it does.\n",
"Use the -u switch for the python interpreter to make all reads and writes unbuffered. Similar to setting $| = true; in Perl. Then proceed as you would, reading a line modifying it and then printing it. sys.stdout.flush() not required.\n#!/path/to/python -u\n\nimport sys\n\nfor line in sys.stdin:\n process_line(line)\n\n"
] |
[
15,
15,
6,
4
] |
[] |
[] |
[
"filter",
"python",
"unix"
] |
stackoverflow_0001589994_filter_python_unix.txt
|
Q:
Django admin won't let me delete a user in its auth admin application
This may be more of a serverfault question I'm not sure.
I have two practically identical servers - I cloned the DB from one to the other, and now when I try to delete a user in the Admin > Auth application Django gives the following error:
File "/usr/lib/python2.5/site-packages/django/db/models/sql/query.py", line 206, in results_iter
for rows in self.execute_sql(MULTI):
File "/usr/lib/python2.5/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql
cursor.execute(sql, params)
ProgrammingError: relation "django_openidauth_useropenid" does not exist
So the issue seems to be django_openidauth_useropenid but what is it referencing - something missing in the DB, or an application?
My site is based on the PINAX collection apps.
A:
This was resolved by doing a ./manage.py syncdb
It must have got out of date somehow.
|
Django admin won't let me delete a user in its auth admin application
|
This may be more of a serverfault question I'm not sure.
I have two practically identical servers - I cloned the DB from one to the other, and now when I try to delete a user in the Admin > Auth application Django gives the following error:
File "/usr/lib/python2.5/site-packages/django/db/models/sql/query.py", line 206, in results_iter
for rows in self.execute_sql(MULTI):
File "/usr/lib/python2.5/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql
cursor.execute(sql, params)
ProgrammingError: relation "django_openidauth_useropenid" does not exist
So the issue seems to be django_openidauth_useropenid but what is it referencing - something missing in the DB, or an application?
My site is based on the PINAX collection apps.
|
[
"This was resolved by doing a ./manage.py syncdb\nIt must have got out of date somehow.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"pinax",
"python"
] |
stackoverflow_0001578372_django_pinax_python.txt
|
Q:
Python Deprecation Warnings with Monostate __new__ -- Can someone explain why?
I have a basic Monostate with Python 2.6.
class Borg(object):
__shared_state = {}
def __new__(cls, *args, **kwargs):
self = object.__new__(cls, *args, **kwargs)
self.__dict__ = cls.__shared_state
return self
def __init__(self, *args, **kwargs):
noSend = kwargs.get("noSend", False)
reportLevel = kwargs.get("reportLevel", 30)
reportMethods = kwargs.get("reportMethods", "BaseReport")
contacts= kwargs.get("contacts", None)
a = Borg(contacts="Foo", noSend="Bar", )
Which happily gives me the following Deprecation warning..
untitled:4: DeprecationWarning: object.__new__() takes no parameters
self = object.__new__(cls, *args, **kwargs)
After a bit of googling I find this is attached to Bug #1683368. What I can't figure out is what this means. It's complaining about the following line
self = object.__new__(cls, *args, **kwargs)
This appears to be OK. Can someone explain in layman's terms why this is a problem? I understand that "this is inconsistent with other built-ins, like list", but I'm not sure I understand why. Would someone explain this and show me the right way to do it?
Thanks
A:
See python-singleton-object-instantiation, and note Alex Martelli's singleton example:
class Singleton(object):
__instance = None
def __new__(cls):
        if cls.__instance is None:
            cls.__instance = object.__new__(cls)
            cls.__instance.name = "The one"
        return cls.__instance
The __new__ deprecation question was answered by Guido:
The message means just what it says. :-) There's no point in calling
object.__new__() with more than a class parameter, and any code that
did so was just dumping those args into a black hole.
The only time when it makes sense for object.__new__() to ignore extra
arguments is when it's not being overridden, but __init__ is being
overridden -- then you have a completely default __new__ and the
checking of constructor arguments is relegated to __init__.
The purpose of all this is to catch the error in a call like
object(42) which (again) passes an argument that is not used. This is
often a symptom of a bug in your program.
--Guido
A:
The warning comes from the fact that __new__() can HAVE args, but since they're ignored everywhere, passing args (other than cls) to it causes the warning. It's not actually (currently) an error to pass the extra args, but they have no effect.
In py3k it will become an error to pass the args.
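In practice, the fix for the Borg in the question is simply to stop forwarding the extra arguments to object.__new__; a minimal sketch (__init__ still receives the keyword arguments as before):
class Borg(object):
    __shared_state = {}
    def __new__(cls, *args, **kwargs):
        # pass only cls; the extra args are consumed by __init__ instead
        self = object.__new__(cls)
        self.__dict__ = cls.__shared_state
        return self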
|
Python Deprecation Warnings with Monostate __new__ -- Can someone explain why?
|
I have a basic Monostate with Python 2.6.
class Borg(object):
__shared_state = {}
def __new__(cls, *args, **kwargs):
self = object.__new__(cls, *args, **kwargs)
self.__dict__ = cls.__shared_state
return self
def __init__(self, *args, **kwargs):
noSend = kwargs.get("noSend", False)
reportLevel = kwargs.get("reportLevel", 30)
reportMethods = kwargs.get("reportMethods", "BaseReport")
contacts= kwargs.get("contacts", None)
a = Borg(contacts="Foo", noSend="Bar", )
Which happily gives me the following Deprecation warning..
untitled:4: DeprecationWarning: object.__new__() takes no parameters
self = object.__new__(cls, *args, **kwargs)
After a bit of googling I find this is attached to Bug #1683368. What I can't figure out is what this means. It's complaining about the following line
self = object.__new__(cls, *args, **kwargs)
This appears to be OK. Can someone explain in layman's terms why this is a problem? I understand that "this is inconsistent with other built-ins, like list", but I'm not sure I understand why. Would someone explain this and show me the right way to do it?
Thanks
|
[
"See python-singleton-object-instantiation, and note Alex Martelli's singleton example:\nclass Singleton(object):\n\n __instance = None\n\n def __new__(cls):\n if cls.__instance == None:\n __instance = type.__new__(cls)\n __instance.name = \"The one\"\n return __instance\n\nThe __new__ deprecation question was answered by Guido:\n\nThe message means just what it says. :-) There's no point in calling\n object.__new__() with more than a class parameter, and any code that\n did so was just dumping those args into a black hole.\nThe only time when it makes sense for object.__new__() to ignore extra\n arguments is when it's not being overridden, but __init__ is being\n overridden -- then you have a completely default __new__ and the\n checking of constructor arguments is relegated to __init__.\nThe purpose of all this is to catch the error in a call like\n object(42) which (again) passes an argument that is not used. This is\n often a symptom of a bug in your program.\n--Guido\n\n",
"The warning comes from the fact that __new__() can HAVE args, but since they're ignored everywhere, passing args (other than cls) to it causes the warning. It's not actually (currently) an error to pass the extra args, but they have no effect. \nIn py3k it will become an error to pass the args.\n"
] |
[
6,
1
] |
[] |
[] |
[
"deprecated",
"monostate",
"python"
] |
stackoverflow_0001590477_deprecated_monostate_python.txt
|
Q:
How to generate random 'greenish' colors
Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:
color = (randint(100, 200), randint(120, 255), randint(100, 200))
That mostly works, but I get brownish colors a lot.
A:
Simple solution: Use the HSL or HSV color space instead of RGB (convert to RGB afterwards if you need to). The difference is the meaning of the tuple: where RGB means values for Red, Green and Blue, in HSL/HSV the H is the color (120 degrees or 0.33 meaning green, for example), the S is the saturation, and the L/V is the lightness/brightness. So keep the H at a fixed value (or, for even more random colors, randomize it by adding/subtracting a small random number) and randomize the S and the L/V. See the wikipedia article.
A:
As others have suggested, generating random colours is much easier in the HSV colour space (or HSL, the difference is pretty irrelevant for this)
So, code to generate random "green'ish" colours, and (for demonstration purposes) display them as a series of simple coloured HTML span tags:
#!/usr/bin/env python2.5
"""Random green colour generator, written by dbr, for
http://stackoverflow.com/questions/1586147/how-to-generate-random-greenish-colors
"""
def hsv_to_rgb(h, s, v):
"""Converts HSV value to RGB values
Hue is in range 0-359 (degrees), value/saturation are in range 0-1 (float)
Direct implementation of:
http://en.wikipedia.org/wiki/HSL_and_HSV#Conversion_from_HSV_to_RGB
"""
h, s, v = [float(x) for x in (h, s, v)]
    hi = int(h / 60) % 6
    f = (h / 60) - int(h / 60)
p = v * (1 - s)
q = v * (1 - f * s)
t = v * (1 - (1 - f) * s)
if hi == 0:
return v, t, p
elif hi == 1:
return q, v, p
elif hi == 2:
return p, v, t
elif hi == 3:
return p, q, v
elif hi == 4:
return t, p, v
elif hi == 5:
return v, p, q
def test():
"""Check examples on..
http://en.wikipedia.org/wiki/HSL_and_HSV#Examples
..work correctly
"""
def verify(got, expected):
if got != expected:
raise AssertionError("Got %s, expected %s" % (got, expected))
verify(hsv_to_rgb(0, 1, 1), (1, 0, 0))
verify(hsv_to_rgb(120, 0.5, 1.0), (0.5, 1, 0.5))
verify(hsv_to_rgb(240, 1, 0.5), (0, 0, 0.5))
def main():
"""Generate 50 random RGB colours, and create some simple coloured HTML
span tags to verify them.
"""
test() # Run simple test suite
from random import randint, uniform
for i in range(50):
# Tweak these values to change colours/variance
h = randint(90, 140) # Select random green'ish hue from hue wheel
s = uniform(0.2, 1)
v = uniform(0.3, 1)
r, g, b = hsv_to_rgb(h, s, v)
        # Convert from 0-1 floats to the 0-255 range for HTML output
r, g, b = [x*255 for x in (r, g, b)]
print "<span style='background:rgb(%i, %i, %i)'> </span>" % (r, g, b)
if __name__ == '__main__':
main()
The output (when viewed in a web-browser) should look something along the lines of:
Edit: I didn't know about the colorsys module. Instead of the above hsv_to_rgb function, you could use colorsys.hsv_to_rgb, which makes the code much shorter (it's not quite a drop-in replacement, as my hsv_to_rgb function expects the hue to be in degrees instead of 0-1):
#!/usr/bin/env python2.5
from colorsys import hsv_to_rgb
from random import randint, uniform
for x in range(50):
h = uniform(0.25, 0.38) # Select random green'ish hue from hue wheel
s = uniform(0.2, 1)
v = uniform(0.3, 1)
r, g, b = hsv_to_rgb(h, s, v)
    # Convert from 0-1 floats to the 0-255 range for HTML output
r, g, b = [x*255 for x in (r, g, b)]
print "<span style='background:rgb(%i, %i, %i)'> </span>" % (r, g, b)
A:
Check out the colorsys module:
http://docs.python.org/library/colorsys.html
Use the HSL or HSV color space. Randomize the hue to be close to green, then choose completely random stuff for the saturation and V (brightness).
A:
If you stick with RGB, you basically just need to make sure the G value is greater than the R and B, and try to keep the blue and red values similar so that the hue doesn't go too crazy. Extending from Slaks, maybe something like (I know next to nothing about Python):
greenval = randint(100, 255)
redval = randint(20,(greenval - 60))
blueval = randint((redval - 20), (redval + 20))
color = (redval, greenval, blueval)
A:
So in this case you are lucky enough to want variations on a primary color, but for artistic uses like this it is better to specify color wheel coordinates rather than primary color magnitudes.
You probably want something from the colorsys module like:
colorsys.hsv_to_rgb(h, s, v)
Convert the color from HSV coordinates to RGB coordinates.
A:
The solution with HSx color space is a very good one. However, if you need something extremely simplistic and have no specific requirements about the distribution of the colors (like uniformity), a simplistic RGB-based solution would be just to make sure that G value is greater than both R and B
rr = randint(100, 200)
rb = randint(100, 200)
rg = randint(max(rr, rb) + 1, 255)
This will give you "greenish" colors. Some of them will be ever so slightly greenish. You can increase the guaranteed degree of greenishness by increasing (absolutely or relatively) the lower bound in the last randint call.
A:
What you want is to work in terms of HSL instead of RGB. You could find a range of hue that satisfies "greenish" and pick a random hue from it. You could also pick random saturation and lightness, but you'll probably want to keep your saturation near 1 and your lightness around 0.5, though you can play with them.
Below is some ActionScript code to convert HSL to RGB. I haven't touched Python in a while or I'd post the Python version. (A rough Python equivalent is sketched after the ActionScript code.)
I find that greenish is something like 0.47*PI to 0.8*PI.
/**
@param h hue [0, 2PI]
@param s saturation [0,1]
@param l lightness [0,1]
@return object {r,g,b} {[0,1],[0,1][0,1]}
*/
public function hslToRGB(h:Number, s:Number, l:Number):Color
{
var q:Number = (l<0.5)?l*(1+s):l+s-l*s;
var p:Number = 2*l-q;
var h_k:Number = h/(Math.PI*2);
var t_r:Number = h_k+1/3;
var t_g:Number = h_k;
var t_b:Number = h_k-1/3;
if (t_r < 0) ++t_r; else if (t_r > 1) --t_r;
if (t_g < 0) ++t_g; else if (t_g > 1) --t_g;
if (t_b < 0) ++t_b; else if (t_b > 1) --t_b;
var c:Color = new Color();
if (t_r < 1/6) c.r = p+((q-p)*6*t_r);
else if (t_r < 1/2) c.r = q;
else if (t_r < 2/3) c.r = p+((q-p)*6*(2/3-t_r));
else c.r = p;
if (t_g < 1/6) c.g = p+((q-p)*6*t_g);
else if (t_g < 1/2) c.g = q;
else if (t_g < 2/3) c.g = p+((q-p)*6*(2/3-t_g));
else c.g = p;
if (t_b < 1/6) c.b = p+((q-p)*6*t_b);
else if (t_b < 1/2) c.b = q;
else if (t_b < 2/3) c.b = p+((q-p)*6*(2/3-t_b));
else c.b = p;
return c;
}
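A rough Python equivalent of the above using the standard library; this is just a sketch, with the hue range taken from the 0.47*PI to 0.8*PI note and rescaled because colorsys expects hue in the 0-1 range:
import colorsys
import math
import random

h = random.uniform(0.47 * math.pi, 0.8 * math.pi) / (2 * math.pi)
l = random.uniform(0.4, 0.6)  # lightness near 0.5
s = random.uniform(0.8, 1.0)  # saturation near 1
r, g, b = colorsys.hls_to_rgb(h, l, s)  # note the h, l, s argument order
color = tuple(int(x * 255) for x in (r, g, b))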
A:
The simplest way to do this is to make sure that the red and blue components are the same, like this: (Forgive my Python)
rb = randint(100, 200)
color = (rb, randint(120, 255), rb)
A:
I'd go with the HSV approach everyone else mentioned. Another approach would be to get a nice high-resolution photo with some greenery in it, crop out the non-green parts, and pick random pixels from it using PIL.
|
How to generate random 'greenish' colors
|
Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:
color = (randint(100, 200), randint(120, 255), randint(100, 200))
That mostly works, but I get brownish colors a lot.
|
[
"Simple solution: Use the HSL or HSV color space instead of rgb (convert it to RGB afterwards if you need this). The difference is the meaning of the tuple: Where RGB means values for Red, Green and Blue, in HSL the H is the color (120 degree or 0.33 meaning green for example) and the S is for saturation and the V for the brightness. So keep the H at a fixed value (or for even more random colors you could randomize it by add/sub a small random number) and randomize the S and the V. See the wikipedia article.\n",
"As others have suggested, generating random colours is much easier in the HSV colour space (or HSL, the difference is pretty irrelevant for this)\nSo, code to generate random \"green'ish\" colours, and (for demonstration purposes) display them as a series of simple coloured HTML span tags:\n#!/usr/bin/env python2.5\n\"\"\"Random green colour generator, written by dbr, for\nhttp://stackoverflow.com/questions/1586147/how-to-generate-random-greenish-colors\n\"\"\"\n\ndef hsv_to_rgb(h, s, v):\n \"\"\"Converts HSV value to RGB values\n Hue is in range 0-359 (degrees), value/saturation are in range 0-1 (float)\n\n Direct implementation of:\n http://en.wikipedia.org/wiki/HSL_and_HSV#Conversion_from_HSV_to_RGB\n \"\"\"\n h, s, v = [float(x) for x in (h, s, v)]\n\n hi = (h / 60) % 6\n hi = int(round(hi))\n\n f = (h / 60) - (h / 60)\n p = v * (1 - s)\n q = v * (1 - f * s)\n t = v * (1 - (1 - f) * s)\n\n if hi == 0:\n return v, t, p\n elif hi == 1:\n return q, v, p\n elif hi == 2:\n return p, v, t\n elif hi == 3:\n return p, q, v\n elif hi == 4:\n return t, p, v\n elif hi == 5:\n return v, p, q\n\ndef test():\n \"\"\"Check examples on..\n http://en.wikipedia.org/wiki/HSL_and_HSV#Examples\n ..work correctly\n \"\"\"\n def verify(got, expected):\n if got != expected:\n raise AssertionError(\"Got %s, expected %s\" % (got, expected))\n\n verify(hsv_to_rgb(0, 1, 1), (1, 0, 0))\n verify(hsv_to_rgb(120, 0.5, 1.0), (0.5, 1, 0.5))\n verify(hsv_to_rgb(240, 1, 0.5), (0, 0, 0.5))\n\ndef main():\n \"\"\"Generate 50 random RGB colours, and create some simple coloured HTML\n span tags to verify them.\n \"\"\"\n test() # Run simple test suite\n\n from random import randint, uniform\n\n for i in range(50):\n # Tweak these values to change colours/variance\n h = randint(90, 140) # Select random green'ish hue from hue wheel\n s = uniform(0.2, 1)\n v = uniform(0.3, 1)\n\n r, g, b = hsv_to_rgb(h, s, v)\n\n # Convert to 0-1 range for HTML output\n r, g, b = [x*255 for x in (r, g, b)]\n\n print \"<span style='background:rgb(%i, %i, %i)'> </span>\" % (r, g, b)\n\nif __name__ == '__main__':\n main()\n\nThe output (when viewed in a web-browser) should look something along the lines of:\n\nEdit: I didn't know about the colorsys module. Instead of the above hsv_to_rgb function, you could use colorsys.hsv_to_rgb, which makes the code much shorter (it's not quite a drop-in replacement, as my hsv_to_rgb function expects the hue to be in degrees instead of 0-1):\n#!/usr/bin/env python2.5\nfrom colorsys import hsv_to_rgb\nfrom random import randint, uniform\n\nfor x in range(50):\n h = uniform(0.25, 0.38) # Select random green'ish hue from hue wheel\n s = uniform(0.2, 1)\n v = uniform(0.3, 1)\n\n r, g, b = hsv_to_rgb(h, s, v)\n\n # Convert to 0-1 range for HTML output\n r, g, b = [x*255 for x in (r, g, b)]\n\n print \"<span style='background:rgb(%i, %i, %i)'> </span>\" % (r, g, b)\n\n",
"Check out the colorsys module:\nhttp://docs.python.org/library/colorsys.html\nUse the HSL or HSV color space. Randomize the hue to be close to green, then choose completely random stuff for the saturation and V (brightness).\n",
"If you stick with RGB, you basically just need to make sure the G value is greater than the R and B, and try to keep the blue and red values similar so that the hue doesn't go too crazy. Extending from Slaks, maybe something like (I know next to nothing about Python):\ngreenval = randint(100, 255)\nredval = randint(20,(greenval - 60))\nblueval = randint((redval - 20), (redval + 20))\ncolor = (redval, greenval, blueval)\n\n",
"So in this case you are lucky enough to want variations on a primary color, but for artistic uses like this it is better to specify color wheel coordinates rather than primary color magnitudes.\nYou probably want something from the colorsys module like:\ncolorsys.hsv_to_rgb(h, s, v)\n Convert the color from HSV coordinates to RGB coordinates.\n\n",
"The solution with HSx color space is a very good one. However, if you need something extremely simplistic and have no specific requirements about the distribution of the colors (like uniformity), a simplistic RGB-based solution would be just to make sure that G value is greater than both R and B\nrr = randint(100, 200)\nrb = randint(100, 200)\nrg = randint(max(rr, rb) + 1, 255)\n\nThis will give you \"greenish\" colors. Some of them will be ever so slightly greenish. You can increase the guaranteed degree of greenishness by increasing (absolutely or relatively) the lower bound in the last randint call.\n",
"What you want is to work in terms of HSL instead of RGB. You could find a range of hue that satisfies \"greenish\" and pick a random hue from it. You could also pick random saturation and lightness but you'll probably want to keep your saturation near 1 and your lightness around 0.5 but you can play with them.\nBelow is some actionscript code to convert HSL to RGB. I haven't touched python in a while or it'd post the python version.\nI find that greenish is something like 0.47*PI to 0.8*PI.\n /**\n@param h hue [0, 2PI]\n@param s saturation [0,1]\n@param l lightness [0,1]\n@return object {r,g,b} {[0,1],[0,1][0,1]}\n*/\npublic function hslToRGB(h:Number, s:Number, l:Number):Color\n{\n var q:Number = (l<0.5)?l*(1+s):l+s-l*s;\n var p:Number = 2*l-q;\n var h_k:Number = h/(Math.PI*2);\n var t_r:Number = h_k+1/3;\n var t_g:Number = h_k;\n var t_b:Number = h_k-1/3;\n if (t_r < 0) ++t_r; else if (t_r > 1) --t_r;\n if (t_g < 0) ++t_g; else if (t_g > 1) --t_g;\n if (t_b < 0) ++t_b; else if (t_b > 1) --t_b;\n var c:Color = new Color();\n if (t_r < 1/6) c.r = p+((q-p)*6*t_r);\n else if (t_r < 1/2) c.r = q;\n else if (t_r < 2/3) c.r = p+((q-p)*6*(2/3-t_r));\n else c.r = p;\n if (t_g < 1/6) c.g = p+((q-p)*6*t_g);\n else if (t_g < 1/2) c.g = q;\n else if (t_g < 2/3) c.g = p+((q-p)*6*(2/3-t_g));\n else c.g = p;\n if (t_b < 1/6) c.b = p+((q-p)*6*t_b);\n else if (t_b < 1/2) c.b = q;\n else if (t_b < 2/3) c.b = p+((q-p)*6*(2/3-t_b));\n else c.b = p;\n return c;\n}\n\n",
"The simplest way to do this is to make sure that the red and blue components are the same, like this: (Forgive my Python)\nrb = randint(100, 200)\ncolor = (rb, randint(120, 255), rb)\n\n",
"I'd go with with the HSV approach everyone else mentioned. Another approach would be to get a nice high resolution photo which some greenery in it, crop out the non-green parts, and pick random pixels from it using PIL.\n"
] |
[
54,
21,
16,
9,
5,
3,
1,
0,
0
] |
[] |
[] |
[
"colors",
"language_agnostic",
"python",
"random"
] |
stackoverflow_0001586147_colors_language_agnostic_python_random.txt
|
Q:
Python| How can I make this variable global without initializing it as 'global'
I have this code here. The only part I can add code to is in main______ AFTER the 'i=1' line. This script will be executed multiple times and will have some variable (might not be 'i', could be 'xy', 'var', anything), incrementing by 1 each time. I have gotten this to work by declaring 'i' as global above the method, but unfortunately, I can't keep it that way.
Is there a way in which I can make 'i' function as a global variable within the above-mentioned parameters?
def main______():
    try:
        i+=1
    except NameError:
        i=1
main______()
A:
If you want to use a global variable you have to declare it as global. What's wrong with that?
If you need to store state between calls, you should be using a class
>>> class F():
...     def __init__(self):
...         self.i=0
...     def __call__(self):
...         print self.i
...         self.i+=1
...
>>> f=F()
>>> f()
0
>>> f()
1
>>> f()
2
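If neither global nor a class is allowed, one hedged alternative (not from the answer above) is to keep the counter on the function object itself, since Python functions are ordinary objects:
def main______():
    try:
        main______.i += 1
    except AttributeError:
        main______.i = 1   # first call: the attribute does not exist yet

main______()
main______()
print main______.i   # prints 2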
|
Python| How can I make this variable global without initializing it as 'global'
|
I have this code here. The only part I can add code to is in main______ AFTER the 'i=1' line. This script will be executed multiple times and will have some variable (might not be 'i', could be 'xy', 'var', anything), incrementing by 1 each time. I have gotten this to work by declaring 'i' as global above the method, but unfortunately, I can't keep it that way.
Is there a way in which I can make 'i' function as a global variable within the above-mentioned parameters?
def main______():
    try:
        i+=1
    except NameError:
        i=1
main______()
|
[
"If you want to use a global variable you have to declare it as global. What's wrong with that?\nIf you need to store state between calls, you should be using a class\n>>> class F():\n... def __init__(self):\n... self.i=0\n... def __call__(self):\n... print self.i\n... self.i+=1\n... \n>>> f=F()\n>>> f()\n0\n>>> f()\n1\n>>> f()\n2\n\n"
] |
[
1
] |
[] |
[] |
[
"global",
"python"
] |
stackoverflow_0001590712_global_python.txt
|
Q:
Preserving whitespace with Pygments
I'm currently writing an application that uses Pygments to perform syntax highlighting. The problem I'm having is any code I process with Pygments has the leading and trailing whitespace in the file removed, and a single line break added to the end. Is there a way to make Pygments preserve the whitespace?
A:
I think you want to make your own lexer, e.g.
lexer = lexers.get_lexer_by_name("python", stripnl=False)
and explicitly pass it to pygments.highlight. See the lexers' reference here.
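A minimal end-to-end sketch of that (the sample code string is made up; stripall=False is the companion lexer option that controls stripping of all leading/trailing whitespace):
from pygments import highlight
from pygments.lexers import get_lexer_by_name
from pygments.formatters import HtmlFormatter

code = "\n\ndef foo():\n    return 42\n\n"
lexer = get_lexer_by_name("python", stripnl=False, stripall=False)
print highlight(code, lexer, HtmlFormatter())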
|
Preserving whitespace with Pygments
|
I'm currently writing an application that uses Pygments to perform syntax highlighting. The problem I'm having is any code I process with Pygments has the leading and trailing whitespace in the file removed, and a single line break added to the end. Is there a way to make Pygments preserve the whitespace?
|
[
"I think you want to make your own lexer, e.g.\nlexer = lexers.get_lexer_by_name(\"python\", stripnl=False)\n\nand explicitly pass it to pygment.highlight. See the lexers' reference here.\n"
] |
[
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001591024_python.txt
|
Q:
Scheduled tasks in Win32
I have a Scheduled Task on a WinXP SP2 machine that is set up to run a python script:
Daily
Start time: 12:03 AM
Schedule task daily: every 1 day
Start date: some time in the past
Repeat task: every 5 minutes
Until: Duration 24 hours
Basically, I want the script to run every five minutes, forever.
My problem is the task runs sometime after 23:47 every night (presumably after 23:55) and does not run after that. What am I doing wrong? Alternatively, is there a different method you can suggest other than using Windows scheduled tasks?
A:
you can schedule it from another script and kick this off once a day or after each reboot:
#!/usr/bin/env python

import subprocess
import time

interval = 300 # secs

while True:
    p = subprocess.Popen(['pythonw.exe', 'foo.py'])
    time.sleep(interval)
This way you can do sub-minute intervals also.
A:
On the first pane (labeled "Task") do you have "Run only if logged on" unchecked and "Enabled (scheduled task runs at specified time)" checked?
I've run python jobs via Windows scheduled task with settings very similar to what you show.
A:
Also, for the past year or so I've seen a common bug where Scheduled Tasks on Server 2003 or XP do not run if either of the following checkboxes are on:
"Don't start the task if the computer is running on batteries"
"Stop the task if battery mode begins"
It seems that Windows gets a little confused if you have a battery (on a laptop) or a UPS (on a server, for example), whether or not your utility power is working.
Also, as a rule I would trim down the time or uncheck the option to "Stop the task if it runs for X minutes" when you're running it so often.
A:
From what I know, Scheduled Tasks is horrible. You should use something with better control like
http://cronw.sourceforge.net/ or any other implementation of cron for Windows.
A:
Until: Duration 24 hours
That shuts it off at the end of the first day.
Remove that, see if it keeps going. It should, and you shouldn't need to install Python in the process. :)
A:
At the risk of not answering your question, can I suggest that if what you have to run is important or even critical then Windows task-Scheduler is not the way to run it.
There are so many awful flaws when using the task-scheduler. Let's just start with the obvious ones:
There is no logging. There is no way to investigate what happens when things go wrong. There's no way to distribute work across PCs. There's no fault-tolerance. It's Windows only and the interface is crappy.
If any of the above is a problem for you you need something a bit more sophisticated. My suggestion is that you try Hudson, a.k.a. Sun's continuous integration server.
In addition to all of the above it can do cron-style scheduling, with automatic expiry of logs. It can be set to jabber or email on failure and you can even make it auto diagnose what went wrong with your process if you can make it produce some XML output.
Please please, do not use Windows Scheduled tasks. There are many better things to use, and I speak from experience when I say that I never regretted dumping the built-in scheduler.
A:
What version of Windows are you running?
Did you check the "Settings" tab to make sure all of the options are de-selected?
You might also consider a more feature rich scheduler such as System Scheduler from Splinterware Software
|
Scheduled tasks in Win32
|
I have a Scheduled Task on a WinXP SP2 machine that is set up to run a python script:
Daily
Start time: 12:03 AM
Schedule task daily: every 1 day
Start date: some time in the past
Repeat task: every 5 minutes
Until: Duration 24 hours
Basically, I want the script to run every five minutes, forever.
My problem is the task runs sometime after 23:47 every night (presumably after 23:55) and does not run after that. What am I doing wrong? Alternatively, is there a different method you can suggest other than using Windows scheduled tasks?
|
[
"you can schedule it from another script and kick this off once a day or after each reboot:\n#!/usr/bin/env python\n\nimport subprocess\n\ninterval = 300 # secs\n\nwhile True:\n p = subprocess.Popen(['pythonw.exe', 'foo.py'])\n time.sleep(interval)\n\nThis way you can do sub-minute intervals also.\n",
"On the first pane (labeled \"Task\") do you have \"Run only if logged on\" unchecked and \"Enabled (scheduled task runs at specified time\" checked?\nI've run python jobs via Windows scheduled task with settings very similar to what you show.\n",
"Also, for the past year or so I've seen a common bug where Scheduled Tasks on Server 2003 or XP do not run if either of the following checkboxes are on:\n\n\"Don't start the task if the computer is running on batteries\"\n\"Stop the task if battery mode begins\"\n\nIt seems that Windows gets a little confused if you have a battery (on a laptop) or a UPS (on a server, for example), whether or not your utility power is working.\nAlso, as a rule I would trim down the time or uncheck the option to \"Stop the task if it runs for X minutes\" when you're running it so often.\n",
"From what I know, Scheduled Tasks is horrible. You should use something with better control like \nhttp://cronw.sourceforge.net/ or any other implementation of cron for Windows.\n",
"Until: Duration 24 hours\nThat shuts it off at the end of the first day.\nRemove that, see if it keeps going. It should, and you shouldn't need to install Python in the process. :)\n",
"At the risk of not answering your question, can I suggest that if what you have to run is important or even critical then Windows task-Scheduler is not the way to run it. \nThere are so many awful flows when using the task-scheduler. Lets just start with the obvious ones:\nThere is no logging. There is no way to investigate what happens when things go wrong. There's no way to distribute work across PCs. There's no fault-tolerance. It's Windows only and the interface is crappy. \nIf any of the above is a problem for you you need something a bit more sophisticated. My suggestion is that you try Hudson, a.k.a. Sun's continuous integration server. \nIn addition to all of the above it can do cron-style scheduling, with automatic expiry of logs. It can be set to jabber or email on failure and you can even make it auto diagnose what went wrong with your process if you can make it produce some XML output.\nPlease please, do not use Windows Scheduled tasks. There are many better things to use, and I speak from experience when I say that I never regretted dumping the built-in scheduler.\n",
"What version of Windows are you running?\nDid you check the \"Settings\" tab to make sure all of the options are de-selected?\nYou might also consider a more feature rich scheduler such as System Scheduler from Splinterware Software\n"
] |
[
3,
1,
1,
1,
1,
1,
0
] |
[] |
[] |
[
"python",
"scheduled_tasks",
"windows_xp"
] |
stackoverflow_0001590474_python_scheduled_tasks_windows_xp.txt
|
Q:
How to prepare a django project for future changes
As I work on my first django powered site, I am constantly learning new things and making all sorts of changes and additions to my apps as I go. I try to follow DRY and pythonic principles and be smart in my coding but eventually I will have to take the site live and am certain that not long after I do, something new and exciting will come down the pipe and I will want to implement it.
Preparing for the future:
With this in mind, do folks have any suggestions about how I can prepare my code now to be as future-ready as possible for these currently unforeseen/unknown upgrades/additions to my code base?
Hindsight is 20/20:
What do you wish you had done at the start that would have made your life easier now that your site is up and running?
Little Things I've Learned (examples):
use UTC as the default timezone (and use datetime.datetime.utcnow())
use South to aid future database changes (haven't done it yet, but it seems wise)
not hard code links in my templates (use get_absolute_url() and reverse lookups; see the sketch after this list)
create a separate tools app to contain small re-usable templatetags and utility functions that I may want to use in future projects (no need to decouple them later)
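For the reverse-lookup tip above, a minimal sketch (the model and URL pattern names are hypothetical):
from django.db import models
from django.core.urlresolvers import reverse

class Article(models.Model):
    slug = models.SlugField()

    def get_absolute_url(self):
        # resolve the URL from the named pattern instead of hard-coding '/articles/...'
        return reverse('article_detail', kwargs={'slug': self.slug})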
These are small tips, and some straight from the django-docs, but I think they help .
How about you? What are your best practices for a new app or project that prepare you for the future?
A:
Deploy into a pure environment using virtualenv.
Document requirements using a pip requirements file.
I'm sure others will suggest their deployment strategies, but making these changes were big positives for me.
A:
Learn and use South at the outset, so when you make major DB schema changes, you'll have a migration tool already in place. Otherwise, you'll find you end up running two versions side by side while trying to figure out how to port the data, and it gets very VERY messy.
http://south.aeracode.org/
A:
Not sure how relevant this is outside of the wonderful world of Webfaction.
Use Django checked out from Django's svn repository, rather than whatever your host installed for you when creating a Django app, so you can update Django to get security fixes by running svn up.
I had to do this a few days ago, and whilst it wasn't too painful (remove the Django installation, then run an SVN checkout, then restart Apache), doing it for all my various projects was a bit irritating - would have been much happier to just run svn up.
A:
Listen to James Bennett: Read Practical Django Projects, follow http://b-list.org/. Search youtube for his djangocon talk on reusable apps. Read his code (on bitbucket).
An example of advice I've gotten from him: Dependency injection on your views will make your apps much more reusable. A concrete example—refactor this situation-specific view:
def user_login_view(request):
    context = {
        'login_form': forms.LoginForm
    }
    return render_to_response('accounts/login.html', context)
with this generic view:
def user_login_view(request, form=models.LoginForm, template_name='accounts/login.html'):
    context = {
        'login_form': form,
    }
    return render_to_response(template_name, context)
Better still, give your view a generic name like "form_view", rename your form 'form' instead of 'login_form', and pass in your parameters explicitly. But those changes alter the functionality, and so aren't a pure refactoring. Once you've refactored, then you can start incrementally changing other things.
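To see the payoff, here is a hypothetical reuse of the generic view with a different form and template (PasswordResetForm is an assumed name):
def password_reset_view(request):
    return user_login_view(request,
                           form=models.PasswordResetForm,
                           template_name='accounts/password_reset.html')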
|
How to prepare a django project for future changes
|
As I work on my first django powered site, I am constantly learning new things and making all sorts of changes and additions to my apps as I go. I try to follow DRY and pythonic principles and be smart in my coding but eventually I will have to take the site live and am certain that not long after I do, something new and exciting will come down the pipe and I will want to implement it.
Preparing for the future:
With this in mind, do folks have any suggestions about how I can prepare my code now to be as future-ready as possible for these currently unforeseen/unknown upgrades/additions to my code base?
Hindsight is 20/20:
What do you wish you had done at the start that would have made your life easier now that your site is up and running?
Little Things I've Learned (examples):
use UTC as the default timezone (and use datetime.datetime.utcnow())
use South to aid future database changes (haven't done it yet, but it seems wise)
not hard code links in my templates (use get_absolute_url() and reverse lookups)
create a separate tools app to contain small re-usable templatetags and utility functions that I may want to use in future projects (no need to decouple them later)
These are small tips, and some straight from the django-docs, but I think they help .
How about you? What are your best practices for a new app or project that prepare you for the future?
|
[
"\nDeploy into a pure environment using virtualenv.\nDocument requirements using a pip requirements file.\n\nI'm sure others will suggest their deployment strategies, but making these changes were big positives for me.\n",
"Learn and use South at the outset, so when you make major DB schema changes, you'll have a migration tool already in place. Otherwise, you'll find you end up running two versions side by side while trying to figure out how to port the data, and it gets very VERY messy.\nhttp://south.aeracode.org/\n",
"Not sure how relevant this is outside of the wonderful world of Webfaction.\nUse Django checked out from Django's svn repository, rather than whatever your host installed for you when creating a Django app, so you can update Django to get security fixes by running svn up.\nI had to do this a few days ago, and whilst it wasn't too painful (remove the Django installation, then run an SVN checkout, then restart Apache), doing it for all my various projects was a bit irritating - would have been much happier to just run svn up.\n",
"Listen to James Bennett: Read Practical Django Projects, follow http://b-list.org/. Search youtube for his djangocon talk on reusable apps. Read his code (on bitbucket).\nAn example of advice I've gotten from him: Dependency injection on your views will make your apps much more reusable. A concrete example—refactor this situation-specific view:\ndef user_login_view(request):\n context = {\n 'login_form': forms.LoginForm\n }\n return render_to_response('accounts/login.html', context)\n\nwith this generic view:\ndef user_login_view(request, form=models.LoginForm, template_name='accounts/login.html'):\n context = {\n 'login_form': form,\n }\n return render_to_response(template_name, context)\n\nBetter still, give your view a generic name like \"form_view\", rename your form 'form' instead of 'login_form', and pass in your parameters explicity. But those changes alter the functionality, and so aren't a pure refactoring. Once you've refactored, then you can start incrementally changing other things.\n"
] |
[
8,
7,
5,
4
] |
[
"\"something will come up and I'll wish I had implemented it earlier\"\nThat's the definition of a good site. One that evolves and changes.\n\"future-ready as possible ?\"\nWhat can this possibly mean? What specific things are you worried about? Technology is always changing. A good site is always evolving. What do you want to prevent? Do you want to prevent technical change? Do you want to prevent your site from evolving?\nThere will always be change. It will always be devastating to previous technology choices you made.\nYou cannot prevent, stop or even reduce the impact of change except by refusing to participate in new technology.\n"
] |
[
-2
] |
[
"database",
"django",
"python"
] |
stackoverflow_0001588570_database_django_python.txt
|
Q:
Cron job to connect to sql server and run a sproc using python
I want to learn python, and my task is to run a SQL Server 2008 stored procedure via a cron job.
Can someone step through a script for me in python?
A:
I am assuming you mean Microsoft's SQL server...
#! /usr/bin/python
import pymssql
con = pymssql.connect (host='xxxxx',user='xxxx',
                       password='xxxxx',database='xxxxx')
cur = con.cursor()
query = "DECLARE @id INT; EXECUTE sp_GetUserID; SELECT @id;"
cur.execute(query)
outputparameter = cur.fetchall()
con.commit()
con.close()
Taken from http://coding.derkeiler.com/Archive/Python/comp.lang.python/2008-10/msg02620.html (copyright retained)
Put that in a script and run it from cron...
Check this question too.
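For the cron half, a crontab entry along these lines runs the script nightly at 2am (the schedule and paths are hypothetical):
0 2 * * * /usr/bin/python /home/user/run_sproc.py >> /home/user/run_sproc.log 2>&1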
|
Cron job to connect to sql server and run a sproc using python
|
I want to learn python, and my task is to run a SQL Server 2008 stored procedure via a cron job.
Can someone step through a script for me in python?
|
[
"I am assuming you mean Microsoft's SQL server...\n#! /usr/bin/python\n\nimport pymssql\ncon = pymssql.connect (host='xxxxx',user='xxxx',\n password='xxxxx',database='xxxxx')\ncur = con.cursor()\nquery = \"DECLARE @id INT; EXECUTE sp_GetUserID; SELECT @id;\"\ncur.execute(query)\noutputparameter = cur.fetchall()\ncon.commit()\ncon.close()\n\nTaken from http://coding.derkeiler.com/Archive/Python/comp.lang.python/2008-10/msg02620.html (copyright retained)\nPut that in a script and run it from cron...\nCheck this question too.\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001591477_python.txt
|
Q:
From PHP workers to Python threads
Right now I'm running 50 PHP (in CLI mode) individual workers (processes) per machine that are waiting to receive their workload (job). For example, the job of resizing an image. In the workload they receive the image (binary data) and the desired size. The worker does its work and returns the resized image back. Then it waits for more jobs (it loops in a smart way). I'm presuming that I have the same executable, libraries and classes loaded and instantiated 50 times. Am I correct? Because this does not sound very effective.
What I'd like to have now is one process that handles all this work and being able to use all available CPU cores while having everything loaded only once (to be more efficient). I presume a new thread would be started for each job and after it finishes, the thread would stop. More jobs would be accepted if there are less than 50 threads doing the work. If all 50 threads are busy, no additional jobs are accepted.
I am using a lot of libraries (for Memcached, Redis, MogileFS, ...) to have access to all the various components that the system uses and Python is pretty much the only language apart from PHP that has support for all of them.
Can Python do what I want and will it be faster and more efficient than the current PHP solution?
A:
Linux has shared libraries, so those 50 php processes use mostly the same libraries.
You don't sound like you even have a problem at all.
"this does not sound very effective." is not a problem description, if anything those words are a problem on their own. Writing code needs a real reason, else you're just wasting time and/or money.
Python is a fine language and won't perform worse than php. Python's multiprocessing module will probably help a lot too. But there isn't much to gain if the php implementation is not completely insane. So why even bother spending time on it when everything works? That is usually the goal, not a reason to rewrite ...
A:
Most probably - yes. But don't assume you have to do multithreading. Have a look at the multiprocessing module. It already has an implementation of a Pool included, which is what you could use. And it basically solves the GIL problem (multithreading can run only 1 "standard python code" at any time - that's a very simplified explanation).
It will still fork a process per job, but in a different way than starting it all over again. All the initialisations done and libraries loaded before entering the worker process will be inherited in a copy-on-write way. You won't do more initialisation than necessary, and you will not waste memory on the same library/class if you didn't actually make it different from the pre-pool state.
So yes - looking only at this part, python will be wasting less resources and will use a "nicer" worker-pool model. Whether it will really be faster / less CPU-abusing, is hard to tell without testing, or at least looking at the code. Try it yourself.
Added: If you're worried about memory usage, python may also help you a bit, since it has a "proper" garbage collector, while in php GC is a not a priority and not that good (and for a good reason too).
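A minimal sketch of that pool model (the resize job itself is a stand-in):
from multiprocessing import Pool

def resize_image(job):
    # stand-in worker: real code would resize job['data'] to job['size']
    return job

if __name__ == '__main__':
    pool = Pool(processes=4)   # roughly one worker process per CPU core
    jobs = [{'data': 'img%d' % i, 'size': (100, 100)} for i in range(8)]
    print pool.map(resize_image, jobs)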
A:
If you are on a sane operating system then shared libraries should only be loaded once and shared among all processes using them. Memory for data structures and connection handles will obviously be duplicated, but the overhead of stopping and starting the systems may be greater than keeping things up while idle. If you are using something like gearman it might make sense to let several workers stay up even if idle and then have a persistent monitoring process that will start new workers if all the current workers are busy up until a threshold such as the number of available CPUs. That process could then kill workers in a LIFO manner after they have been idle for some period of time.
|
From PHP workers to Python threads
|
Right now I'm running 50 PHP (in CLI mode) individual workers (processes) per machine that are waiting to receive their workload (job). For example, the job of resizing an image. In the workload they receive the image (binary data) and the desired size. The worker does its work and returns the resized image back. Then it waits for more jobs (it loops in a smart way). I'm presuming that I have the same executable, libraries and classes loaded and instantiated 50 times. Am I correct? Because this does not sound very effective.
What I'd like to have now is one process that handles all this work and being able to use all available CPU cores while having everything loaded only once (to be more efficient). I presume a new thread would be started for each job and after it finishes, the thread would stop. More jobs would be accepted if there are less than 50 threads doing the work. If all 50 threads are busy, no additional jobs are accepted.
I am using a lot of libraries (for Memcached, Redis, MogileFS, ...) to have access to all the various components that the system uses and Python is pretty much the only language apart from PHP that has support for all of them.
Can Python do what I want and will it be faster and more efficient than the current PHP solution?
|
[
"Linux has shared libraries, so those 50 php processes use mostly the same libraries. \nYou don't sound like you even have a problem at all.\n\"this does not sound very effective.\" is not a problem description, if anything those words are a problem on their own. Writing code needs a real reason, else you're just wasting time and/or money.\nPython is a fine language and won't perform worse than php. Python's multiprocessing module will probably help a lot too. But there isn't much to gain if the php implementation is not completly insane. So why even bother spending time on it when everything works? That is usually the goal, not a reason to rewrite ...\n",
"Most probably - yes. But don't assume you have to do multithreading. Have a look at the multiprocessing module. It already has an implementation of a Pool included, which is what you could use. And it basically solves the GIL problem (multithreading can run only 1 \"standard python code\" at any time - that's a very simplified explanation).\nIt will still fork a process per job, but in a different way than starting it all over again. All the initialisations done- and libraries loaded before entering the worker process will be inherited in a copy-on-write way. You won't do more initialisations than necessary and you will not waste memory for the same libarary/class if you didn't actually make it different from the pre-pool state.\nSo yes - looking only at this part, python will be wasting less resources and will use a \"nicer\" worker-pool model. Whether it will really be faster / less CPU-abusing, is hard to tell without testing, or at least looking at the code. Try it yourself.\nAdded: If you're worried about memory usage, python may also help you a bit, since it has a \"proper\" garbage collector, while in php GC is a not a priority and not that good (and for a good reason too).\n",
"If you are on a sane operating system then shared libraries should only be loaded once and shared among all processes using them. Memory for data structures and connection handles will obviously be duplicated, but the overhead of stopping and starting the systems may be greater than keeping things up while idle. If you are using something like gearman it might make sense to let several workers stay up even if idle and then have a persistent monitoring process that will start new workers if all the current workers are busy up until a threshold such as the number of available CPUs. That process could then kill workers in a LIFO manner after they have been idle for some period of time.\n"
] |
[
4,
4,
1
] |
[] |
[] |
[
"multithreading",
"php",
"python"
] |
stackoverflow_0001591555_multithreading_php_python.txt
|
Q:
os.popen subprocess conversion
This snippet gets me the dotted quad of my BSD network interface.
I would like to figure out how to use the subprocess module instead.
ifcfg_lines = os.popen("/sbin/ifconfig fxp0").readlines()
x = string.split(ifcfg_lines[3])[1]
Seems as if I can't use subprocess in exactly the same way.
I don't think I want shell=True or PIPE.
What should I do to make the output indexable?
Thanks.
A:
from subprocess import Popen, PIPE
ifcfg_lines = Popen("/sbin/ifconfig fxp0",shell=True,stdout=PIPE).stdout.readlines()
x = string.split(ifcfg_lines[3])[1]
For a little more elegance, hide the details:
def getBSDIP():
    from subprocess import Popen, PIPE
    import string

    CURRENT = Popen("/sbin/ifconfig fxp0", shell=True,stdout=PIPE).stdout.readlines()
    return(string.split(CURRENT[3])[1])
If you are going to use subprocess to do this then elegance is a bit limited because you are essentially doing something like screenscraping, except here you are scriptscraping. If you want a truly general solution, use the socket library, i.e. let Python handle the portability.
Often, when you look at a bit of code and you wish that there was a better cleaner way to do it, this means that you need to question your assumptions, and change the algorithm or architecture of the solution.
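Since the question says shell=True isn't wanted, here is a sketch that avoids it by passing the command as an argument list instead of a shell string:
from subprocess import Popen, PIPE

proc = Popen(['/sbin/ifconfig', 'fxp0'], stdout=PIPE)
lines = proc.communicate()[0].splitlines()
x = lines[3].split()[1]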
A:
Why not use something like:
import socket
ipadr = socket.gethostbyname(socket.gethostname())
as in Finding Local IP Addresses in Python and remove the subprocess dependency? There are also some other suggestions in that question.
I also found a post on the netifaces package that aims to do this with better cross-platform support.
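A short sketch of the netifaces approach, using the interface name from the question:
import netifaces

addrs = netifaces.ifaddresses('fxp0')
print addrs[netifaces.AF_INET][0]['addr']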
|
os.popen subprocess conversion
|
This snippet gets me the dotted quad of my BSD network interface.
I would like to figure out how to use the subprocess module instead.
ifcfg_lines = os.popen("/sbin/ifconfig fxp0").readlines()
x = string.split(ifcfg_lines[3])[1]
Seems as if I can't use subprocess in exactly the same way.
I don't think I want shell=True or PIPE.
What should I do to make the output indexable?
Thanks.
|
[
"from subprocess import Popen, PIPE\n\nifcfg_lines = Popen(\"/sbin/ifconfig fxp0\",shell=True,stdout=PIPE).stdout.readlines()\nx = string.split(ifcfg_lines[3])[1]\n\nFor a little more elegance, hide the details:\ndef getBSDIP():\n from subprocess import Popen, PIPE\n import string\n\n CURRENT = Popen(\"/sbin/ifconfig fxp0\", shell=True,stdout=PIPE).stdout.readlines()\n return(string.split(CURRENT[3])[1]) \n\nIf you are going to use subprocess to do this then elegance is a bit limited because you are essentially doing something like screenscraping, except here you are scriptscraping. If you want a truly general solution, use the socket library, i.e. let Python handle the portability.\nOften, when you look at a bit of code and you wish that there was a better cleaner way to do it, this means that you need to question your assumptions, and change the algorithm or architecture of the solution.\n",
"Why not use something like:\nimport socket\nipadr = socket.gethostbyname(socket.gethostname())\n\nas in Finding Local IP Addresses in Python and remove the subprocess dependency? There are also some other suggestions in that question.\nI also found a post on the netifaces package that does aims to do this with better cross-platform support.\n"
] |
[
1,
0
] |
[] |
[] |
[
"indexing",
"python",
"string",
"subprocess"
] |
stackoverflow_0001591798_indexing_python_string_subprocess.txt
|
Q:
PHP Sockets or Python, Perl, Bash Sockets?
I'm trying to implement a socket server that will run in most shared PHP hosting.
The requirements are that the Socket server can be installed, started and stopped from PHP automatically without the user doing anything. It doesn't matter what language the socket server is written in, as long as it will run on the majority of shared hosting globally.
Currently, I've written a Socket Server with PHP that implements an Object Cache:
http://code.google.com/p/php-object-cache/
source: http://code.google.com/p/php-object-cache/source/browse/trunk/socket.class.php
However, PHP has to be compiled with sockets support, and not many servers run with PHP sockets support.
My real question is: what language should I implement the socket server in, so that it has maximum platform support and can be invoked from within PHP?
In other words, what scripting language is the most common on PHP enabled Servers?
Or do I have to write the socket server in a compiled language to have it work across all servers?
Let's leave IIS out of the picture at the moment, just Linux servers. I don't think many PHP sites are running on IIS...
edit:
Sorry I think my question is not clear.
I'd like to know what language is best suited for creating a socket server, given the following requirements:
The language must exist in shared hosting, alongside PHP running in Apache (not CLI).
The sockets support must be enabled natively, not via a required extension.
PHP must be able to write the daemon to file as well as start and stop the daemon.
I'm not asking for a solution for a single server. It has to run natively on the majority of shared hosting servers.
A:
Any server can be stopped or started by PHP under Linux. Of course, if you are running a server which accepts sockets from the internet, then you can just connect directly to the server and tell it to shutdown. No need to go via PHP!
As for "starting a server from PHP", well, under Linux, anything can be started from pretty much anything. Just shell out to start the process and have it drop into daemon mode.
I'm a Perl fan myself. Not surprisingly, there's a
Perl Daemon library available.
If your hosting provider offers Perl script support, then you probably have permission to use "system" or backticks command. Then you can very likely start a daemon. However, you will need to use a non-privileged port (over 1024). Also, you should ASK THEM FIRST! They may not appreciate you tying up ports on their server. This is very definitely something you should discuss with your hosting provider.
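To illustrate the "connect directly and tell it to shut down" point, a bare-bones sketch of such a daemon in Python (the port and protocol are made up):
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('127.0.0.1', 11311))   # non-privileged port, localhost only
srv.listen(5)
while True:
    conn, addr = srv.accept()
    data = conn.recv(4096)
    if data.strip() == 'shutdown':
        conn.close()
        break
    conn.sendall('echo: ' + data)   # real cache logic would go here
    conn.close()
srv.close()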
A:
It really depends on what the install requirements are. Often the easiest and most standard way to write a socket server is to write an inet.d service. This is a standard daemon on my unix machines, and it will fork a process and handle the socket level details. If you want your service to run on port below 1024 on Unix, this is one of the easier ways to get it done. However, the initial install requires root to configure inet.d.
If your shared hosting allows PHP to do an exec call, then you could start the daemon that way. Keep in mind though, it'll need to run above port 1024. You next need to decide if your program is going to be multi-threaded or multi-process. Typically Java programs are multi-threaded, while an Apache instance is normally multi-process.
Lastly, the host may have a firewall in place. This helps prevent shared hosting accounts from becoming part of a bot-net. If the firewall rules don't allow connections to other ports, you won't be able to connect to it remotely.
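For the inet.d route, the service itself stays tiny, because inetd accepts the connection and hands it to the process as stdin/stdout. A sketch:
#!/usr/bin/env python
import sys

for line in sys.stdin:
    sys.stdout.write('echo: ' + line)
    sys.stdout.flush()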
|
PHP Sockets or Python, Perl, Bash Sockets?
|
I'm trying to implement a socket server that will run in most shared PHP hosting.
The requirements are that the Socket server can be installed, started and stopped from PHP automatically without the user doing anything. It doesn't matter what language the socket server is written in, as long as it will run on the majority of shared hosting globally.
Currently, I've written a Socket Server with PHP that implements an Object Cache:
http://code.google.com/p/php-object-cache/
source: http://code.google.com/p/php-object-cache/source/browse/trunk/socket.class.php
However, PHP has to be compiled with sockets support, and not many servers run with PHP sockets support.
My real question is: what language should I implement the socket server in, so that it has maximum platform support and can be invoked from within PHP?
In other words, what scripting language is the most common on PHP enabled Servers?
Or do I have to write the socket server in a compiled language to have it work across all servers?
Let's leave IIS out of the picture at the moment, just Linux servers. I don't think many PHP sites are running on IIS...
edit:
Sorry I think my question is not clear.
I'd like to know what language is best suited for creating a socket server, given the following requirements:
The language must exist in shared hosting, alongside PHP running in Apache (not CLI).
The sockets support must be enabled natively, not via a required extension.
PHP must be able to write the daemon to file as well as start and stop the daemon.
I'm not asking for a solution for a single server. It has to run natively on the majority of shared hosting servers.
|
[
"Any server can be stopped or started by PHP under Linux. Of course, if you are running a server which accepts sockets from the internet, then you can just connect directly to the server and tell it to shutdown. No need to go via PHP!\nAs for \"starting a server from PHP\", well, under Linux, anything can be started from pretty much anything. Just shell out to start the process and have it drop into daemon mode.\nI'm a Perl fan myself. Not surprisingly, there's a \nPerl Daemon library available.\nIf your hosting provider offers Perl script support, then you probably have permission to use \"system\" or backticks command. Then you can very likely start a daemon. However, you will need to use a non-privileged port (over 1024). Also, you should ASK THEM FIRST! They may not appreciate you tying up ports on their server. This is very definitely something you should discuss with your hosting provider.\n",
"It really depends on what the install requirements are. Often the easiest and most standard way to write a socket server is to write an inet.d service. This is a standard daemon on my unix machines, and it will fork a process and handle the socket level details. If you want your service to run on port below 1024 on Unix, this is one of the easier ways to get it done. However, the initial install requires root to configure inet.d. \nIf you shared hosting allows PHP to do an exec call, then you could start the daemon that way. Keep in mind though, it'll need to run above port 1024. You next need to decide if your program is going to be multi-threaded or multi-process. Typically Java programs are multi-threaded, while an Apache instance is normally multi-process.\nLastly, the host may have a firewall in place. This helps prevent shared hosting accounts from becoming part of a bot-net. If the firewall rules don't allow connections to other ports, you won't be able to connect to it remotely.\n"
] |
[
7,
2
] |
[] |
[] |
[
"bash",
"perl",
"php",
"python",
"sockets"
] |
stackoverflow_0001047991_bash_perl_php_python_sockets.txt
|
Q:
Subclass lookup
(I'm developing in Python 3.1, so if there's some shiny new 3.x feature I should know about for this, please let me know!)
I've got a class (we'll just call it "Packet") that serves as the parent for a bunch of child classes representing each of a few dozen packet types in a legacy client-server protocol over which I have no control. (The packets often have wildly differing behavior, so I just gave them each their own class to make my life easier.)
When receiving a packet, I'll have a simple "dispatch" function that checks the packet header to determine the type, then hands it off to the class that knows how to deal with it.
I do not want to maintain the lookup table by hand -- it's inelegant and just asking for trouble. Instead, I'd like the table built at runtime by examining all of the subclasses of Packet, which will have class variables specifying what packet type they correspond to, e.g.:
class Login(Packet):
    type_id = 0x01
I thought, of course, about iterating through object.__subclasses__(), but I've found slightly varying viewpoints on the safety and propriety of using it for things like this, including implications that it is implementation-specific and may not work in places other than CPython, and that it may disappear from CPython in the future. Am I being too paranoid? Is __subclasses__ considered a "real" part of the language now?
If not, what's a good "pythonic" way to approach this?
A:
"I'd like the table built at runtime by examining all of the subclasses of Packet,"
This is guaranteed to cause endless problems. This kind of thing puts a strange constraint on your subclasses. You can't use any abstract superclasses to simplify things.
Here's a specific example of what won't work if you "automatically" discover the subclasses.
class MessagePacket( object ):
    """superclass. Ignore me, I'm abstract."""
class Type_01( MessagePacket ):
    type_id = 0x01
class Type_023( MessagePacket ):
    """superclass with common features for type 2 and 3. Ignore me, I'm abstract."""
class Type_02( Type_023 ):
    type_id = 0x02
class Type_03( Type_023 ):
    type_id = 0x03
class Type_04( MessagePacket ):
    """superclass for subtypes of 4. Ignore me, I'm abstract."""
    type_id = 0x04
class Type_04A( Type_04 ):
    discriminator = lambda x: x[23] == 'a' or x[35] == 42
class Type_04B( Type_04 ):
    discriminator = lambda x: True
That should be enough to show that "automatic discovery" is doomed from the outset.
The correct solution is to have a Factory class which embodies the correct subclass hierarchy, exploiting all features based on -- well -- manual design.
class Factory( object ):
    def __init__( self, *subclass_list ):
        self.subclass = dict( (s.type_id,s) for s in subclass_list )
    def parse( self, packet ):
        if packet.type_id == 0x04:
            # special subclass logic goes here (e.g. pick Type_04A vs. Type_04B)
            pass
        return self.subclass[packet.type_id]( packet )
It doesn't seem too onerous a burden to include the following:
factory= Factory( Subclass1, Subclass2, ... SubclassN )
And when you add subclasses, add to the list of subclasses that are actually being used.
A:
__subclasses__ IS part of the Python language, and implemented by IronPython and Jython as well as CPython (no pypy at hand to test, right now, but I'd be astonished if they have broken that!-). Whatever gave you the impression that __subclasses__ was at all problematic?! I see a comment by @gnibbler in the same vein and I'd like to challenge that: can you post URLs about __subclasses__ not being a crucial part of the Python language?!
A:
>>> class PacketMeta(type):
...     def __init__(self,*args,**kw):
...         if self.type_id is not None:
...             self.subclasses[self.type_id]=self
...         return type.__init__(self,*args,**kw)
...
>>> class Packet(object):
...     __metaclass__=PacketMeta
...     subclasses={}
...     type_id = None
...
>>> class Login(Packet):
...     type_id = 0x01
...
>>> class Logout(Packet):
...     type_id = 0x02
...
>>>
>>> Packet.subclasses
{1: <class '__main__.Login'>, 2: <class '__main__.Logout'>}
>>>
If you prefer to use the __subclasses__() you can do something like this
>>> class Packet(object):
...     pass
...
>>> class Login(Packet):
...     type_id = 0x01
...
>>> class Logout(Packet):
...     type_id = 0x02
...
>>> def packetfactory(packet_id):
...     for x in Packet.__subclasses__():
...         if x.type_id==packet_id:
...             return x
...
>>> packetfactory(0x01)
<class '__main__.Login'>
>>> packetfactory(0x02)
<class '__main__.Logout'>
A:
I guess you can use Python metaclasses (Python ≥ 2.2) to share such information between classes; that would be quite pythonic. Take a look at the implementation of Google's Protocol Buffers. Here is the tutorial showing metaclasses at work. By the way, the domain of Protocol Buffers is similar to yours.
|
Subclass lookup
|
(I'm developing in Python 3.1, so if there's some shiny new 3.x feature I should know about for this, please let me know!)
I've got a class (we'll just call it "Packet") that serves as the parent for a bunch of child classes representing each of a few dozen packet types in a legacy client-server protocol over which I have no control. (The packets often have wildly differing behavior, so I just gave them each their own class to make my life easier.)
When receiving a packet, I'll have a simple "dispatch" function that checks the packet header to determine the type, then hands it off to the class that knows how to deal with it.
I do not want to maintain the lookup table by hand -- it's inelegant and just asking for trouble. Instead, I'd like the table built at runtime by examining all of the subclasses of Packet, which will have class variables specifying what packet type they correspond to, e.g.:
class Login(Packet):
    type_id = 0x01
I thought, of course, about iterating through object.__subclasses__(), but I've found slightly varying viewpoints on the safety and propriety of using it for things like this, including implications that it is implementation-specific and may not work in places other than CPython, and that it may disappear from CPython in the future. Am I being too paranoid? Is __subclasses__ considered a "real" part of the language now?
If not, what's a good "pythonic" way to approach this?
|
[
"\"I'd like the table built at runtime by examining all of the subclasses of Packet,\" \nThis is guaranteed to cause endless problems. This kind of thing puts a strange constraint on your subclasses. You can't use any abstract superclasses to simplify things.\nHere's a specific example of what won't work if you \"automatically\" discover the subclasses.\nclass MessagePacket( object ):\n \"\"\"superclass. Ignore me, I'm abstract.\"\"\"\nclass Type_01( MessagePacket ):\n type_id = 0x01\nclass Type_023( MessagePacket ):\n \"\"\"superclass with common features for type 2 and 3. Ignore me, I'm abstract.\"\"\"\nclass Type_02( Type_023 ):\n type_id = 0x02\nclass Type_03( Type_023 ):\n type_id = 0x03\nclass Type_04( MessagePacket ):\n \"\"\"superclass for subtypes of 4. Ignore me, I'm abstract.\"\"\"\n type_id = 0x04\nclass Type_04A( Type_04 ):\n discriminator = lambda x: x[23] == 'a' or x[35] == 42\nclass Type_04B( Type_04 ):\n discriminator = lambda x: True\n\nThat should be enough to show that \"automatic discovery\" is doomed from the outset.\nThe correct solution is to have a Factory class which embodies the correct subclass hierarchy, exploiting all features based on -- well -- manual design.\nclass Factory( object ):\n def __init__( self, *subclass_list ):\n self.subclass = dict( (s.type_id,s) for s in subclass_list )\n def parse( self, packet ):\n if packet.type_id == 04:\n # special subclass logic\n return self.subclass[packet.type_id]( packet )\n\nIt doesn't seem too onerous a burden to include the following:\nfactory= Factory( Subclass1, Subclass2, ... SubclassN )\n\nAnd when you add subclasses, add to the list of subclasses that are actually being used.\n",
"__subclasses__ IS part of the Python language, and implemented by IronPython and Jython as well as CPython (no pypy at hand to test, right now, but I'd be astonished if they have broken that!-). Whatever gave you the impression that __subclasses__ was at all problematic?! I see a comment by @gnibbler in the same vein and I'd like to challenge that: can you post URLs about __subclasses__ not being a crucial part of the Python language?!\n",
"\n>>> class PacketMeta(type):\n... def __init__(self,*args,**kw):\n... if self.type_id is not None:\n... self.subclasses[self.type_id]=self\n... return type.__init__(self,*args,**kw)\n... \n>>> class Packet(object):\n... __metaclass__=PacketMeta\n... subclasses={}\n... type_id = None\n... \n>>> class Login(Packet):\n... type_id = 0x01\n... \n>>> class Logout(Packet):\n... type_id = 0x02\n... \n>>> \n>>> Packet.subclasses\n{1: <class '__main__.Login'>, 2: <class '__main__.Logout'>}\n>>> \n\nIf you prefer to use the __subclasses__() you can do something like this\n>>> class Packet(object):\n... pass\n... \n>>> class Login(Packet):\n... type_id = 0x01\n... \n>>> class Logout(Packet):\n... type_id = 0x02\n... \ndef packetfactory(packet_id):\n for x in Packet.__subclasses__():\n if x.type_id==packet_id:\n return x\n... \n>>> packetfactory(0x01)\n<class '__main__.Login'>\n>>> packetfactory(0x02)\n<class '__main__.Logout'>\n\n",
"I guess you can use Python metaclasses (Python ≥ 2.2) to share such an information between classes, that would be quite pythonic. Take a look at the implementation of the Google's Protocol Buffers. Here is the tutorial showing metaclasses at work. By the way, the domain of Protocol Buffers is similar to yours.\n"
] |
[
3,
3,
2,
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0001592089_python_python_3.x.txt
|
Q:
Python binary data reading
A urllib2 request receives a binary response as below:
00 00 00 01 00 04 41 4D 54 44 00 00 00 00 02 41
97 33 33 41 99 5C 29 41 90 3D 71 41 91 D7 0A 47
0F C6 14 00 00 01 16 6A E0 68 80 41 93 B4 05 41
97 1E B8 41 90 7A E1 41 96 8F 57 46 E6 2E 80 00
00 01 16 7A 53 7C 80 FF FF
Its structure is:
DATA, TYPE, DESCRIPTION
00 00 00 01, 4 bytes, Symbol Count =1
00 04, 2 bytes, Symbol Length = 4
41 4D 54 44, 6 bytes, Symbol = AMTD
00, 1 byte, Error code = 0 (OK)
00 00 00 02, 4 bytes, Bar Count = 2
FIRST BAR
41 97 33 33, 4 bytes, Close = 18.90
41 99 5C 29, 4 bytes, High = 19.17
41 90 3D 71, 4 bytes, Low = 18.03
41 91 D7 0A, 4 bytes, Open = 18.23
47 0F C6 14, 4 bytes, Volume = 3,680,608
00 00 01 16 6A E0 68 80, 8 bytes, Timestamp = November 23,2007
SECOND BAR
41 93 B4 05, 4 bytes, Close = 18.4629
41 97 1E B8, 4 bytes, High = 18.89
41 90 7A E1, 4 bytes, Low = 18.06
41 96 8F 57, 4 bytes, Open = 18.82
46 E6 2E 80, 4 bytes, Volume = 2,946,325
00 00 01 16 7A 53 7C 80, 8 bytes, Timestamp = November 26,2007
TERMINATOR
FF FF, 2 bytes,
How to read binary data like this?
Thanks in advance.
Update:
I tried the struct module on the first 6 bytes with the following code:
struct.unpack('ih', response.read(6))
(16777216, 1024)
But it should output (1, 4). I took a look at the manual but have no clue what was wrong.
A:
So here's my best shot at interpreting the data you're giving...:
import datetime
import struct

class Printable(object):
    specials = ()
    def __str__(self):
        resultlines = []
        for pair in self.__dict__.items():
            if pair[0] in self.specials: continue
            resultlines.append('%10s %s' % pair)
        return '\n'.join(resultlines)

head_fmt = '>IH6sBH'
head_struct = struct.Struct(head_fmt)
class Header(Printable):
    specials = ('bars',)
    def __init__(self, symbol_count, symbol_length,
                 symbol, error_code, bar_count):
        self.__dict__.update(locals())
        self.bars = []
        del self.self

bar_fmt = '>5fQ'
bar_struct = struct.Struct(bar_fmt)
class Bar(Printable):
    specials = ('header',)
    def __init__(self, header, close, high, low,
                 open, volume, timestamp):
        self.__dict__.update(locals())
        self.header.bars.append(self)
        del self.self
        self.timestamp /= 1000.0
        self.timestamp = datetime.date.fromtimestamp(self.timestamp)

def showdata(data):
    terminator = '\xff' * 2
    assert data[-2:] == terminator
    head_data = head_struct.unpack(data[:head_struct.size])
    try:
        assert head_data[4] * bar_struct.size + head_struct.size == \
               len(data) - len(terminator)
    except AssertionError:
        print 'data length is %d' % len(data)
        print 'head struct size is %d' % head_struct.size
        print 'bar struct size is %d' % bar_struct.size
        print 'number of bars is %d' % head_data[4]
        print 'head data:', head_data
        print 'terminator:', terminator
        print 'so, something is wrong, since',
        print head_data[4] * bar_struct.size + head_struct.size, '!=',
        print len(data) - len(terminator)
        raise

    head = Header(*head_data)
    for i in range(head.bar_count):
        bar_substr = data[head_struct.size + i * bar_struct.size:
                          head_struct.size + (i+1) * bar_struct.size]
        bar_data = bar_struct.unpack(bar_substr)
        Bar(head, *bar_data)
    assert len(head.bars) == head.bar_count
    print head
    for i, x in enumerate(head.bars):
        print 'Bar #%s' % i
        print x

datas = '''
00 00 00 01 00 04 41 4D 54 44 00 00 00 00 02 41
97 33 33 41 99 5C 29 41 90 3D 71 41 91 D7 0A 47
0F C6 14 00 00 01 16 6A E0 68 80 41 93 B4 05 41
97 1E B8 41 90 7A E1 41 96 8F 57 46 E6 2E 80 00
00 01 16 7A 53 7C 80 FF FF
'''

data = ''.join(chr(int(x, 16)) for x in datas.split())
showdata(data)
this emits:
symbol_count 1
 bar_count 2
    symbol AMTD
error_code 0
symbol_length 4
Bar #0
    volume 36806.078125
 timestamp 2007-11-22
      high 19.1700000763
       low 18.0300006866
     close 18.8999996185
      open 18.2299995422
Bar #1
    volume 29463.25
 timestamp 2007-11-25
      high 18.8899993896
       low 18.0599994659
     close 18.4629001617
      open 18.8199901581
...which seems to be pretty close to what you want, net of some output formatting details. Hope this helps!-)
A:
>>> data
'\x00\x00\x00\x01\x00\x04AMTD\x00\x00\x00\x00\x02A\x9733A\x99\\)A\x90=qA\x91\xd7\nG\x0f\xc6\x14\x00\x00\x01\x16j\xe0h\x80A\x93\xb4\x05A\x97\x1e\xb8A\x90z\xe1A\x96\x8fWF\xe6.\x80\x00\x00\x01\x16zS|\x80\xff\xff'
>>> from struct import unpack, calcsize
>>> scount, slength = unpack("!IH", data[:6])
>>> assert scount == 1
>>> symbol, error_code = unpack("!%dsb" % slength, data[6:6+slength+1])
>>> assert error_code == 0
>>> symbol
'AMTD'
>>> bar_count = unpack("!I", data[6+slength+1:6+slength+1+4])
>>> bar_count
(2,)
>>> bar_format = "!5fQ"
>>> from collections import namedtuple
>>> Bar = namedtuple("Bar", "Close High Low Open Volume Timestamp")
>>> b = Bar(*unpack(bar_format, data[6+slength+1+4:6+slength+1+4+calcsize(bar_format)]))
>>> b
Bar(Close=18.899999618530273, High=19.170000076293945, Low=18.030000686645508, Open=18.229999542236328, Volume=36806.078125, Timestamp=1195794000000L)
>>> import time
>>> time.ctime(b.Timestamp//1000)
'Fri Nov 23 08:00:00 2007'
>>> int(b.Volume*100 + 0.5)
3680608
A:
>>> struct.unpack('ih', response.read(6))
(16777216, 1024)
You are unpacking big-endian data on a little-endian machine. Try this instead:
>>> struct.unpack('!IH', response.read(6))
(1L, 4)
This tells unpack to consider the data in network-order (big-endian). Also, the values of counts and lengths cannot be negative, so you should use the unsigned variants in your format string.
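struct.calcsize is a handy sanity check that a format string matches the documented byte counts before unpacking real data:
import struct

assert struct.calcsize('!IH') == 6    # symbol count (4) + symbol length (2)
assert struct.calcsize('!5fQ') == 28  # one bar: five 4-byte floats + an 8-byte timestamp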
A:
Take a look at struct.unpack in the struct module.
A:
Use pack/unpack functions from "struct" package. More info here http://docs.python.org/library/struct.html
Bye!
A:
As it was already mentioned, struct is the module you need to use.
Please read its documentation to learn about byte ordering, etc.
In your example you need to do the following (as your data is big-endian and unsigned):
>>> import struct
>>> x = '\x00\x00\x00\x01\x00\x04'
>>> struct.unpack('>IH', x)
(1, 4)
|
Python binary data reading
|
A urllib2 request receives a binary response as below:
00 00 00 01 00 04 41 4D 54 44 00 00 00 00 02 41
97 33 33 41 99 5C 29 41 90 3D 71 41 91 D7 0A 47
0F C6 14 00 00 01 16 6A E0 68 80 41 93 B4 05 41
97 1E B8 41 90 7A E1 41 96 8F 57 46 E6 2E 80 00
00 01 16 7A 53 7C 80 FF FF
Its structure is:
DATA, TYPE, DESCRIPTION
00 00 00 01, 4 bytes, Symbol Count =1
00 04, 2 bytes, Symbol Length = 4
41 4D 54 44, 6 bytes, Symbol = AMTD
00, 1 byte, Error code = 0 (OK)
00 00 00 02, 4 bytes, Bar Count = 2
FIRST BAR
41 97 33 33, 4 bytes, Close = 18.90
41 99 5C 29, 4 bytes, High = 19.17
41 90 3D 71, 4 bytes, Low = 18.03
41 91 D7 0A, 4 bytes, Open = 18.23
47 0F C6 14, 4 bytes, Volume = 3,680,608
00 00 01 16 6A E0 68 80, 8 bytes, Timestamp = November 23,2007
SECOND BAR
41 93 B4 05, 4 bytes, Close = 18.4629
41 97 1E B8, 4 bytes, High = 18.89
41 90 7A E1, 4 bytes, Low = 18.06
41 96 8F 57, 4 bytes, Open = 18.82
46 E6 2E 80, 4 bytes, Volume = 2,946,325
00 00 01 16 7A 53 7C 80, 8 bytes, Timestamp = November 26,2007
TERMINATOR
FF FF, 2 bytes,
How to read binary data like this?
Thanks in advance.
Update:
I tried struct module on first 6 bytes with following code:
struct.unpack('ih', response.read(6))
(16777216, 1024)
But it should output (1, 4). I took a look at the manual but have no clue what went wrong.
|
[
"So here's my best shot at interpreting the data you're giving...:\nimport datetime\nimport struct\n\nclass Printable(object):\n specials = ()\n def __str__(self):\n resultlines = []\n for pair in self.__dict__.items():\n if pair[0] in self.specials: continue\n resultlines.append('%10s %s' % pair)\n return '\\n'.join(resultlines)\n\nhead_fmt = '>IH6sBH'\nhead_struct = struct.Struct(head_fmt)\nclass Header(Printable):\n specials = ('bars',)\n def __init__(self, symbol_count, symbol_length,\n symbol, error_code, bar_count):\n self.__dict__.update(locals())\n self.bars = []\n del self.self\n\nbar_fmt = '>5fQ'\nbar_struct = struct.Struct(bar_fmt)\nclass Bar(Printable):\n specials = ('header',)\n def __init__(self, header, close, high, low,\n open, volume, timestamp):\n self.__dict__.update(locals())\n self.header.bars.append(self)\n del self.self\n self.timestamp /= 1000.0\n self.timestamp = datetime.date.fromtimestamp(self.timestamp)\n\ndef showdata(data):\n terminator = '\\xff' * 2\n assert data[-2:] == terminator\n head_data = head_struct.unpack(data[:head_struct.size])\n try:\n assert head_data[4] * bar_struct.size + head_struct.size == \\\n len(data) - len(terminator)\n except AssertionError:\n print 'data length is %d' % len(data)\n print 'head struct size is %d' % head_struct.size\n print 'bar struct size is %d' % bar_struct.size\n print 'number of bars is %d' % head_data[4]\n print 'head data:', head_data\n print 'terminator:', terminator\n print 'so, something is wrong, since',\n print head_data[4] * bar_struct.size + head_struct.size, '!=',\n print len(data) - len(terminator)\n raise\n\n head = Header(*head_data)\n for i in range(head.bar_count):\n bar_substr = data[head_struct.size + i * bar_struct.size:\n head_struct.size + (i+1) * bar_struct.size]\n bar_data = bar_struct.unpack(bar_substr)\n Bar(head, *bar_data)\n assert len(head.bars) == head.bar_count\n print head\n for i, x in enumerate(head.bars):\n print 'Bar #%s' % i\n print x\n\ndatas = '''\n00 00 00 01 00 04 41 4D 54 44 00 00 00 00 02 41\n97 33 33 41 99 5C 29 41 90 3D 71 41 91 D7 0A 47\n0F C6 14 00 00 01 16 6A E0 68 80 41 93 B4 05 41\n97 1E B8 41 90 7A E1 41 96 8F 57 46 E6 2E 80 00\n00 01 16 7A 53 7C 80 FF FF\n'''\n\ndata = ''.join(chr(int(x, 16)) for x in datas.split())\nshowdata(data)\n\nthis emits:\nsymbol_count 1\n bar_count 2\n symbol AMTD\nerror_code 0\nsymbol_length 4\nBar #0\n volume 36806.078125\n timestamp 2007-11-22\n high 19.1700000763\n low 18.0300006866\n close 18.8999996185\n open 18.2299995422\nBar #1\n volume 29463.25\n timestamp 2007-11-25\n high 18.8899993896\n low 18.0599994659\n close 18.4629001617\n open 18.8199901581\n\n...which seems to be pretty close to what you want, net of some output formatting details. Hope this helps!-)\n",
">>> data\n'\\x00\\x00\\x00\\x01\\x00\\x04AMTD\\x00\\x00\\x00\\x00\\x02A\\x9733A\\x99\\\\)A\\x90=qA\\x91\\xd7\\nG\\x0f\\xc6\\x14\\x00\\x00\\x01\\x16j\\xe0h\\x80A\\x93\\xb4\\x05A\\x97\\x1e\\xb8A\\x90z\\xe1A\\x96\\x8fWF\\xe6.\\x80\\x00\\x00\\x01\\x16zS|\\x80\\xff\\xff'\n>>> from struct import unpack, calcsize\n>>> scount, slength = unpack(\"!IH\", data[:6])\n>>> assert scount == 1\n>>> symbol, error_code = unpack(\"!%dsb\" % slength, data[6:6+slength+1])\n>>> assert error_code == 0\n>>> symbol\n'AMTD'\n>>> bar_count = unpack(\"!I\", data[6+slength+1:6+slength+1+4])\n>>> bar_count\n(2,)\n>>> bar_format = \"!5fQ\" \n>>> from collections import namedtuple\n>>> Bar = namedtuple(\"Bar\", \"Close High Low Open Volume Timestamp\") \n>>> b = Bar(*unpack(bar_format, data[6+slength+1+4:6+slength+1+4+calcsize(bar_format)]))\n>>> b\nBar(Close=18.899999618530273, High=19.170000076293945, Low=18.030000686645508, Open=18.229999542236328, Volume=36806.078125, Timestamp=1195794000000L)\n>>> import time\n>>> time.ctime(b.Timestamp//1000)\n'Fri Nov 23 08:00:00 2007'\n>>> int(b.Volume*100 + 0.5)\n3680608\n\n",
"\n>>> struct.unpack('ih', response.read(6))\n(16777216, 1024)\n\n\nYou are unpacking big-endian data on a little-endian machine. Try this instead:\n>>> struct.unpack('!IH', response.read(6))\n(1L, 4)\n\nThis tells unpack to consider the data in network-order (big-endian). Also, the values of counts and lengths can not be negative, so you should should use the unsigned variants in your format string.\n",
"Take a look at the struct.unpack in the struct module.\n",
"Use pack/unpack functions from \"struct\" package. More info here http://docs.python.org/library/struct.html\nBye!\n",
"As it was already mentioned, struct is the module you need to use.\nPlease read its documentation to learn about byte ordering, etc.\nIn your example you need to do the following (as your data is big-endian and unsigned):\n>>> import struct\n>>> x = '\\x00\\x00\\x00\\x01\\x00\\x04'\n>>> struct.unpack('>IH', x)\n(1, 4)\n\n"
] |
[
10,
6,
5,
2,
1,
0
] |
[] |
[] |
[
"binary_data",
"python"
] |
stackoverflow_0001591920_binary_data_python.txt
|
Q:
Qt being now released under LGPL, would you recommend it over wxWidgets?
I am quite a heavy user of wxWidgets, partly because of licensing reasons.
How do you see the future of wxWidgets in light of the recent announcement that Qt is now being released under the LGPL?
Do you think wxWidgets is still a good technical choice for new projects? Or would you recommend adopting Qt, because it is going to be a de-facto standard?
I am also interested in the possible implications this will have on their bindings for the most common scripting languages (e.g. PyQt, wxPython, wxRuby). Why is PyQt so under-used when it has a professional-grade designer and wxPython does not?
Related:
https://stackoverflow.com/questions/443546/qt-goes-lgpl-on-windows-is-it-good-enough-to-use-instead-of-mfc
A:
For those of us who are drawn to wxWidgets because it is the cross-platform library that uses native controls for a proper look and feel, the licensing change of Qt has little to no consequence.
Edit:
Regarding
Qt not having native controls but native drawing functions
let me quote the wxWidgets wiki page comparing toolkits:
Qt doesn't have true native ports like wxWidgets does. What we mean by this is that even though Qt draws them quite realistically, Qt draws its own widgets on each platform. It's worth mentioning though that Qt comes with special styles for Mac OS X and Windows XP and Vista that use native APIs (Appearance Manager on Mac OS X, UxTheme on Windows XP) for drawing standard widget primitives (e.g. scrollbars or buttons) exactly like any native application. Event handling, the resulting visual feedback and widget layout are always implemented by Qt.
A:
I'm currently using PyQt at work and I find myself totally satisfied.
You have better documentation (IMHO), better event managing (the signal-slot pattern is somewhat more powerful than the old simple-callback style), and importing your custom widgets into a graphical designer like Qt Designer is far easier.
As far as I can tell, Qt Designer is more powerful than any wxPython counterpart, like Boa Constructor or pyGlade.
You also have great support for translating a program's strings into different languages (better support than wxLocale at least, and you can use a tool like Qt Linguist, which is fully integrated into the Qt system).
I'm using wxPython for some hobby work, but I'm still a noob there. I think its greatest advantage over PyQt is having a native look & feel on different platforms. This is a huge point if you are developing Windows/Linux applications, for example. Actually, you could use "skins" to obtain a native look & feel with Windows Qt applications, but I have no idea how to achieve that (sorry, I've never used Qt on Windows :D).
A:
Honestly, I don't think that people will massively switch away from WxWidgets.
For python, there are PyQt bindings and WxPython bindings. Despite Qt being much more practical than WxWidgets, the majority of GUI python open source programs are written with WxWidgets. Since those programs are open source, the GPL vs LGPL did not matter that much in their choice of toolkit.
The same goes for Gtk. Many open source applications are written in Gtk, on windows, despite Gtk being very difficult to work with on windows. With Qt, those applications would be a lot easier to maintain on a cross platform basis, but it has not happened.
So, choice of toolkit is influenced by many parameters, licensing being only one of them.
I still don't understand why Qt is not more mainstream, because it's in my opinion the easiest and more practical GUI toolkit ever written.
A:
Please note that, as of Jan 2009, while Qt 4.5 was to be available under LGPL, Riverbank Computing hadn't made any announcement about licensing for future versions of PyQt. PyQt is still only commercial/GPLv2/GPLv3.
As noted in comments for this answer, Nokia announced the LGPL-licensed PySide project in August 2009.
A:
Qt is very comprehensive and high quality framework. I am sure that many new projects that would have used wxWidgets will now use LGPL Qt instead. But projects that are already using wxWidgets will no doubt continue to use wxWidgets rather than doing a massive re-write.
A:
I chose wxPython for 2 main reasons:
Boa Constructor,
which is still a beta product, gives me unified control over 100% of the process, whereas PyQt indeed has a better designer, but there's no connection to editing the "event handlers".
My ideal IDE designs, creates events, lets me edit just the functional code needed, and runs; without "compiling UICs", without switching editors, without going into the command line.
While for large-scale applications it matters very little, my current domain is fast, small-scale programs.
Licensing...
it doesn't matter right now, but it will once I start vending my stuff on a small scale.
Autocompletion inside event-handler code doesn't seem to work in Qt Designer. I might be missing something, yet the "broken" process described above prevents it from being a RAD tool.
A:
I was never able to set up Qt to cross-compile. I remember seeing something from Trolltech saying that they don't officially support cross compilation, although I can't find it now.
There are many guides and such detailing how to get Qt to cross-compile, so it's possible (likely) that I was doing something wrong.
When choosing a framework, I recommend considering and testing out its cross-compilation abilities.
|
Qt being now released under LGPL, would you recommend it over wxWidgets?
|
I am quite a heavy user of wxWidgets, partly because of licensing reasons.
How do you see the future of wxWidgets in light of the recent announcement that Qt is now being released under the LGPL?
Do you think wxWidgets is still a good technical choice for new projects? Or would you recommend adopting Qt, because it is going to be a de-facto standard?
I am also interested in the possible implications this will have on their bindings for the most common scripting languages (e.g. PyQt, wxPython, wxRuby). Why is PyQt so under-used when it has a professional-grade designer and wxPython does not?
Related:
https://stackoverflow.com/questions/443546/qt-goes-lgpl-on-windows-is-it-good-enough-to-use-instead-of-mfc
|
[
"For those of us who are drawn to wxWidgets because it is the cross-platform library that uses native controls for proper look and feel the licensing change of Qt has little to no consequences.\nEdit:\nRegarding\n\nQt not having native controls but native drawing functions\n\nlet me quote the wxWidgets wiki page comparing toolkits:\n\nQt doesn't have true native ports like wxWidgets does. What we mean by this is that even though Qt draws them quite realistically, Qt draws its own widgets on each platform. It's worth mentioning though that Qt comes with special styles for Mac OS X and Windows XP and Vista that use native APIs (Appearance Manager on Mac OS X, UxTheme on Windows XP) for drawing standard widget primitives (e.g. scrollbars or buttons) exactly like any native application. Event handling, the resulting visual feedback and widget layout are always implemented by Qt.\n\n",
"I'm currently using pyqt at work and I find myself totally satisfied.\nYou have better documentation (IMHO), better event managing (signal-slot pattern is somehow more powerful than the old simple-callback style), and importing your custom widget in a graphical designer like qt-designer is far easier.\nAs far as I can tell qt-designer is more powerful than any wxpython counterpart, like Boa Constructor and pyGlade).\nYou also have great support for translating program's strings in different languages (better support than wxLocale at least, and you can use a tool like Qt-Linguist which is fully integrated in the qt system).\nI'm using wxpython in some hobbistic works, but I'm still a noob there. I think its greater advantage over pyqt is to have a native look&feel on different platforms. This is a huge point if you are developing windows/linux applications, for example. Actually you could use \"skins\" to obtain a native look&feel with windows-qt applications but I have no idea on how to achieve that (sorry, I've never used qt on windows :D).\n",
"Honestly, I don't think that people will massively switch away from WxWidgets.\nFor python, there are PyQt bindings and WxPython bindings. Despite Qt being much more practical than WxWidgets, the majority of GUI python open source programs are written with WxWidgets. Since those programs are open source, the GPL vs LGPL did not matter that much in their choice of toolkit.\nThe same goes for Gtk. Many open source applications are written in Gtk, on windows, despite Gtk being very difficult to work with on windows. With Qt, those applications would be a lot easier to maintain on a cross platform basis, but it has not happened.\nSo, choice of toolkit is influenced by many parameters, licensing being only one of them.\nI still don't understand why Qt is not more mainstream, because it's in my opinion the easiest and more practical GUI toolkit ever written.\n",
"Please note that, as of Jan 2009, while Qt 4.5 was to be available under LGPL, Riverbank Computing hadn't made any announcement about licensing for future versions of PyQt. PyQt is still only commercial/GPLv2/GPLv3.\nAs noted in comments for this answer, Nokia announced the LGPL-licensed PySide project in August 2009.\n",
"Qt is very comprehensive and high quality framework. I am sure that many new projects that would have used wxWidgets will now use LGPL Qt instead. But projects that are already using wxWidgets will no doubt continue to use wxWidgets rather than doing a massive re-write.\n",
"I chose wxPython for 2 main reasons:\n\nBoa Constructor, \nwhich is still a beta product, gives me unified control over 100% of the process, whereas PyQt indeed has better designer, but there's no connection between editing \"event handlers\".\n\nMy ideal IDE designs, creates events, let me edit just the functional code needed, and run; without \"compiling UICs\", without switching editors, without going into the command line.\nWhile for Large scale applications it matters very little, my current domain is fast and small scale programs.\n\nLicensing... \nit doesn't matter right now, but it will once I start vending my stuff on small scale.\nautocompletion inside event functional code doesn't seem to work in QTDesigner, for event code. I might be missing something, yet the \"broken\" process described above prevent it from being a RAD.\n\n",
"I was never able to setup Qt to cross compile. I remember seeing something from Trolltech saying that they don't officially support cross compilation, although I can't find it now.\nThere are many guides and such detailing how to get Qt to cross compile, so its possible (likely) that I was doing something wrong.\nWhen choosing a framework, I recommend considering and testing out their cross compilation abilities. \n"
] |
[
17,
13,
8,
8,
3,
3,
2
] |
[] |
[] |
[
"python",
"qt",
"wxpython",
"wxwidgets"
] |
stackoverflow_0000464463_python_qt_wxpython_wxwidgets.txt
|
Q:
Rendering different part of templates according to the request values in Django
In my view, to render my template, I receive different parameters through my request.
According to these parameters I need to render different "parts" of my template.
For example, let's say that I receive in my request
to_render = ["table", "bar_chart"]
I want to render a partial template for the table and another for the bar_chart
to_render = ["bar_chart", "line_chart"]
Then I will want to render a partial template for the bar_chart and another for the line_chart
Can I define this in my view?
Or do I have to manage this in my template?
Thank you!
A:
just manage it in your template
A:
Sure, you can manage this in your view. The template API shows clearly how to use the templating system in Python.
|
Rendering different part of templates according to the request values in Django
|
In my view, to render my template, I receive different parameters through my request.
According to these parameters I need to render different "parts" of my template.
For example, let's say that I receive in my request
to_render = ["table", "bar_chart"]
I want to render a partial template for the table and another for the bar_chart
to_render = ["bar_chart", "line_chart"]
Then I will want to render a partial template for the bar_chart and another for the line_chart
Can I define this in my view?
Or do I have to manage this in my template?
Thank you!
|
[
"just manage it in your template\n",
"Sure, you can manage this in your view. The template API shows clearly how to use the templating system in Python.\n"
] |
[
2,
1
] |
[] |
[] |
[
"django",
"django_templates",
"django_views",
"python"
] |
stackoverflow_0001592331_django_django_templates_django_views_python.txt
|
Q:
How to make all combinations of the elements in an array?
I have a list. It contains x lists, each with y elements.
I want to pair each element with all the other elements, just once, (a,b = b,a)
EDIT: this has been criticized as being too vague, so I'll describe the background.
My function produces random equations and, using genetic techniques, mutates and crossbreeds them, selecting for fitness.
After a number of iterations, it returns a list of 12 objects, sorted by the fitness of their 'equation' attribute.
Using the 'parallel python' module to run this function 8 times, a list containing 8 lists, each of 12 objects (each with an equation attribute), is returned.
Now, within each list, the 12 objects have already been cross-bred with each other.
I want to cross-breed each object in a list with all the other objects in all the other lists, but not with the objects within its own list, with which it has already been cross-bred. (whew!)
A:
itertools.product is your friend.
About removing the duplicates, try a set of sets.
Now it's a little bit clearer what you want:
import itertools
def recombinate(families):
"families is the list of 8 elements, each one with 12 individuals"
for fi, fj in itertools.combinations(families, 2):
for pair in itertools.product(fi, fj):
yield pair
basically, take all the 2-combinations of families (of those produced in parallel) and for each pair of families, yield all the pairs of elements.
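For example, with three toy families of two strings each standing in for the real objects:
families = [['a1', 'a2'], ['b1', 'b2'], ['c1', 'c2']]
for x, y in recombinate(families):
    print x, y
# yields a1 b1, a1 b2, a2 b1, a2 b2, then the a-c and b-c pairs;
# elements are never paired with members of their own family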
A:
You haven't made it completely clear what you need, but it sounds like itertools has what you're after. Perhaps what you want is an itertools.combinations of the itertools.product of the lists in your big list.
@fortran: you can't have a set of sets. You can have a set of frozensets, but depending on what it really means to have duplicates here, that might not be what is needed.
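To illustrate the frozenset point: frozensets are hashable, and two orderings of the same pair collapse into a single set member, which gives order-insensitive de-duplication:
>>> s = set()
>>> s.add(frozenset([1, 2]))
>>> s.add(frozenset([2, 1]))   # the same unordered pair, so nothing new is added
>>> s
set([frozenset([1, 2])])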
A:
First of all, please don't refer to this as an "array". You are using a list of lists. In Python, an array is a different type of data structure, provided by the array module.
Also, your application sounds suspiciously like a matrix. If you are really doing matrix manipulations, you should investigate the Numpy package.
At first glance your problem sounded like something that the zip() function or itertools.izip() would solve. You should definitely read through the docs for the itertools module, because it has various list manipulations and they will run faster than anything you could write yourself in Python.
|
How to make all combinations of the elements in an array?
|
I have a list. It contains x lists, each with y elements.
I want to pair each element with all the other elements, just once, (a,b = b,a)
EDIT: this has been criticized as being too vague, so I'll describe the background.
My function produces random equations and, using genetic techniques, mutates and crossbreeds them, selecting for fitness.
After a number of iterations, it returns a list of 12 objects, sorted by the fitness of their 'equation' attribute.
Using the 'parallel python' module to run this function 8 times, a list containing 8 lists, each of 12 objects (each with an equation attribute), is returned.
Now, within each list, the 12 objects have already been cross-bred with each other.
I want to cross-breed each object in a list with all the other objects in all the other lists, but not with the objects within its own list, with which it has already been cross-bred. (whew!)
|
[
"itertools.product is your friend.\nabout removing the duplicates, try with a set of sets.\nNow it's a little bit clearer what you want:\nimport itertools\n\ndef recombinate(families):\n \"families is the list of 8 elements, each one with 12 individuals\"\n for fi, fj in itertools.combinations(families, 2):\n for pair in itertools.product(fi, fj):\n yield pair\n\nbasically, take all the 2-combinations of families (of those produced in parallel) and for each pair of families, yield all the pairs of elements.\n",
"You haven't made it completely clear what you need. It sounds like itertools should have what you need. Perhaps what you wish is an itertools.combinations of the itertools.product of the lists in your big list. \n@fortran: you can't have a set of sets. You can have a set of frozensets, but depending on what it really means to have duplicates here, that might not be what is needed.\n",
"First of all, please don't refer to this as an \"array\". You are using a list of lists. In Python, an array is a different type of data structure, provided by the array module. \nAlso, your application sounds suspiciously like a matrix. If you are really doing matrix manipulations, you should investigate the Numpy package.\nAt first glance your problem sounded like something that the zip() function would solve or itertools.izip(). You should definitely read through the docs for the itertools module because it has various list manipulations and they will run faster than anything you could write yourself in Python.\n"
] |
[
7,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001591762_python.txt
|
Q:
How to install 64-bit Python on Solaris?
I am trying to install Python 2.6 on Solaris by building the source on a Solaris machine. I installed one this way and it appears to be 32-bit. I downloaded the source tarball offered for Linux/Unix for this purpose. Everything works well, but I need 64-bit Python.
I looked at the Python download site and there is no separate installation for 64-bit Python.
That makes me think that there must be some option for the configure and/or install commands. I tried reading the README.txt of the distribution but could not find any info. I am very new to installations on "Unix"-like systems.
How can I install 64-bit Python on Solaris?
A:
It's currently an acknowledged bug that Solaris 64-bit support is suboptimal, but that bug report looks to contain some flags that you might want to use. See also this mailing list posting.
A:
I would strongly suggest seeing if you can get away with the 32-bit version of Python. If you're new to compiling stuff on Solaris, this will save you many headaches. However, it is possible, and I do have a working 64-bit version of Python. I'm using cc: Sun C 5.8 2005/10/13 to compile. Additionally, I've already compiled 64-bit versions of readline and ncurses.
My configure line looks like this:
../Python-2.6.1/configure CCSHARED="-KPIC" LDSHARED="cc -xarch=generic64 -G -KPIC" LDFLAGS="-xarch=generic64 -L/opt/tools/lib -R/opt/tools/lib -L/opt/tools/ssl/lib -ltermcap -lz -R $ORIGIN/../lib" CC="cc" CPP="cc -xarch=generic64 -E -I/opt/tools/include -I/opt/tools/include/ncurses -I/opt/tools/include/readline" BASECFLAGS="-xarch=generic64 -I/opt/tools/include -I/opt/tools/include/ncurses" OPT="-xO5" CFLAGS="-xarch=generic64 -I/opt/tools/include -I/opt/tools/include/ncurses -I/opt/tools/include/readline" CXX="CC -xarch=generic64 -I/opt/tools/include -I/opt/tools/include/ncurses" --prefix=/opt/tools/python-2.6.1 --enable-64-bit --without-gcc --disable-ipv6 --with-ssl=openssl --with-ncurses --with-readline
Additionally, I modified these two lines in Modules/Setup.local to include the required locations:
readline readline.c -I/opt/tools/include/readline -L/opt/tools/lib -lreadline -ltermcap
_ssl _ssl.c -I/opt/tools/ssl/include -L/opt/tools/ssl/lib -lssl -lcrypto
Now, just pray you don't need to compile in some Sybase bindings or some other 64-bit libraries.
|
How to install 64-bit Python on Solaris?
|
I am trying to install Python 2.6 on Solaris by building the source on a Solaris machine. I installed one this way and it appears to be 32-bit. I downloaded the source tarball offered for Linux/Unix for this purpose. Everything works well, but I need 64-bit Python.
I looked at the Python download site and there is no separate installation for 64-bit Python.
That makes me think that there must be some option for the configure and/or install commands. I tried reading the README.txt of the distribution but could not find any info. I am very new to installations on "Unix"-like systems.
How can I install 64-bit Python on Solaris?
|
[
"It's currently an acknowledged bug that Solaris 64-bit support is suboptimal, but that bug report looks to contain some flags that you might want to use. See also this mailing list posting.\n",
"I would strongly suggest seeing if you can get away with the 32 bit version of Python. If your new to compiling stuff on Solaris, this will save you many headaches. However, it is possible, and I do have a working 64 bit version of Python. I'm using cc: Sun C 5.8 2005/10/13 to compile. Additionally, I've already compiled 64-bit version of readline and ncurses.\nMy configure line looks like this:\n../Python-2.6.1/configure CCSHARED=\"-KPIC\" LDSHARED=\"cc -xarch=generic64 -G -KPIC\" LDFLAGS=\"-xarch=generic64 -L/opt/tools/lib -R/opt/tools/lib -L/opt/tools/ssl/lib -ltermcap -lz -R $ORIGIN/../lib\" CC=\"cc\" CPP=\"cc -xarch=generic64 -E -I/opt/tools/include -I/opt/tools/include/ncurses -I/opt/tools/include/readline\" BASECFLAGS=\"-xarch=generic64 -I/opt/tools/include -I/opt/tools/include/ncurses\" OPT=\"-xO5\" CFLAGS=\"-xarch=generic64 -I/opt/tools/include -I/opt/tools/include/ncurses -I/opt/tools/include/readline\" CXX=\"CC -xarch=generic64 -I/opt/tools/include -I/opt/tools/include/ncurses\" --prefix=/opt/tools/python-2.6.1 --enable-64-bit --without-gcc --disable-ipv6 --with-ssl=openssl --with-ncurses --with-readline\n\nAdditionally, I modified these two lines in Modules/Setup.local to include the required locations:\nreadline readline.c -I/opt/tools/include/readline -L/opt/tools/lib -lreadline -ltermcap\n_ssl _ssl.c -I/opt/tools/ssl/include -L/opt/tools/ssl/lib -lssl -lcrypto\n\nNow, just pray you don't need to compile in some Sybase bindings or some other 64-bit libraries.\n"
] |
[
3,
3
] |
[] |
[] |
[
"python",
"solaris"
] |
stackoverflow_0001396678_python_solaris.txt
|
Q:
Decoding Mac OS text in Python
I'm writing some code to parse RTF documents, and need to handle the various codepages they can use. Python comes with decoders for all the necessary Windows codepages, but I'm not sure how to handle the Mac ones:
# 77: "10000", # Mac Roman
# 78: "10001", # Mac Shift Jis
# 79: "10003", # Mac Hangul
# 80: "10008", # Mac GB2312
# 81: "10002", # Mac Big5
# 83: "10005", # Mac Hebrew
# 84: "10004", # Mac Arabic
# 85: "10006", # Mac Greek
# 86: "10081", # Mac Turkish
# 87: "10021", # Mac Thai
# 88: "10029", # Mac East Europe
# 89: "10007", # Mac Russian
Does Python have any built-in support for these? If not, is there a cross-platform pure-Python library that will handle them?
A:
You can use the Python codecs for these, which are known by their names 'mac-roman', 'mac-turkish', etc.
>>> 'foo'.decode('mac-turkish')
u'foo'
You'll have to refer to them by their names; the numbers you've got in your question don't appear in the source files. For more information, look at $pylib/encodings/mac_*.py.
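The codecs work in both directions; for example, byte 0x80 maps to Ä in Mac Roman:
>>> '\x80'.decode('mac-roman')
u'\xc4'
>>> u'\xc4'.encode('mac-roman')
'\x80'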
A:
It seems that at least the Mac Roman and Mac Turkish encodings exist in the Python stdlib, under the names macroman and macturkish. See http://svn.python.org/projects/python/trunk/Lib/encodings/aliases.py for a complete list of encoding aliases in the most up-to-date Python.
A:
No.
However, unicode.org provides codec description files that you can use to generate modules that will parse those codecs. Included with python source distributions is a script that will convert these files: Python-x.x/Tools/unicode/gencodec.py.
|
Decoding Mac OS text in Python
|
I'm writing some code to parse RTF documents, and need to handle the various codepages they can use. Python comes with decoders for all the necessary Windows codepages, but I'm not sure how to handle the Mac ones:
# 77: "10000", # Mac Roman
# 78: "10001", # Mac Shift Jis
# 79: "10003", # Mac Hangul
# 80: "10008", # Mac GB2312
# 81: "10002", # Mac Big5
# 83: "10005", # Mac Hebrew
# 84: "10004", # Mac Arabic
# 85: "10006", # Mac Greek
# 86: "10081", # Mac Turkish
# 87: "10021", # Mac Thai
# 88: "10029", # Mac East Europe
# 89: "10007", # Mac Russian
Does Python have any built-in support for these? If not, is there a cross-platform pure-Python library that will handle them?
|
[
"You can use the python codecs for these that are known by their names 'mac-roman', 'mac-turkish', etc.\n>>> 'foo'.decode('mac-turkish')\nu'foo'\n\nYou'll have to refer to them by their names, these numbers you've got in your question don't appear in the source files. For more information look at $pylib/encodings/mac_*.py.\n",
"It seems that at least Mac Roman and Mac Turkish encodings exist in Python stdlib, under names macroman and macturkish. See http://svn.python.org/projects/python/trunk/Lib/encodings/aliases.py for a complete list of encoding aliases in the most up-to-date Python.\n",
"No.\nHowever, unicode.org provides codec description files that you can use to generate modules that will parse those codecs. Included with python source distributions is a script that will convert these files: Python-x.x/Tools/unicode/gencodec.py.\n"
] |
[
9,
3,
1
] |
[] |
[] |
[
"macos",
"python"
] |
stackoverflow_0001592925_macos_python.txt
|
Q:
How to use twistedweb with django on windows
I'm looking for a super easy way to deploy a Django application on Windows.
Basically my plan is to set up any Python web server with my app on it and then bundle everything together into a single executable using py2exe.
I've tried using CherryPy; however, the newest (3.1.2) server doesn't work on Windows XP with NOD32 antivirus installed.
So I decided to give Twisted a try. I've only found Django On Twisted, but it seems to be quite old (2008) and it uses the twistd command, which is a bit hard to pack into a single executable.
Has anyone got a working snippet or a good source of info?
A:
I would rather suggest Portable LightTPD (i.e. the .zip) and Portable Python. It is very easy to set up LightTPD for FastCGI, and very easy to set up sqlite and FastCGI with Django in the Portable Python distro. This is probably your fastest and simplest route to getting an easily-deployable Django app going. If you aren't using it already, you probably want the Django book to help speed things along.
Instant Django has Python 2.6.2 integrated, so perhaps that would serve your needs better.
A:
I've found a quite nice blog entry describing how to run Django on Twisted trunk.
Here is an example that merges Twisted with the Django app into one file, so it can be used from an executable created by py2exe:
# bootstrap your django instance
from django.core.handlers.wsgi import WSGIHandler
application = WSGIHandler()
import sys
sys.argv += '-no web --wsgi=<module_name>.application --port=8081'.split()
from twisted.scripts.twistd import run
run()
|
How to use twistedweb with django on windows
|
I'm looking for a super easy way to deploy a Django application on Windows.
Basically my plan is to set up any Python web server with my app on it and then bundle everything together into a single executable using py2exe.
I've tried using CherryPy; however, the newest (3.1.2) server doesn't work on Windows XP with NOD32 antivirus installed.
So I decided to give Twisted a try. I've only found Django On Twisted, but it seems to be quite old (2008) and it uses the twistd command, which is a bit hard to pack into a single executable.
Has anyone got a working snippet or a good source of info?
|
[
"I would rather suggest Portable LightTPD (i.e. the .zip) and Portable Python. It is very easy to set up LightTPD for FastCGI, and very easy to set up sqlite and FastCGI with Django in the Portable Python distro. This is probably your fastest and simplest route to getting an easily-deployable Django app going. If you aren't using it already, you probably want the Django book to help speed things along.\nInstant Django has Python 2.6.2 integrated, so perhaps that would serve your needs better.\n",
"I've found quite nice blog entry describing how to run django on twisted trunk.\nHere is an example that merge twisted with django app into one file so it can be used from file created by py2exe:\n# bootstrap your django instance\n\nfrom django.core.handlers.wsgi import WSGIHandler\napplication = WSGIHandler()\n\nimport sys\nsys.argv += '-no web --wsgi=<module_name>.application --port=8081'.split() \n\nfrom twisted.scripts.twistd import run\nrun()\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"django",
"python",
"twisted",
"windows"
] |
stackoverflow_0001590651_django_python_twisted_windows.txt
|
Q:
Python Queue get()/task_done() issue
My consumer side of the queue:
m = queue.get()
queue.task_done()
<rest of the program>
Questions:
Does task_done() effectively pop m off the queue and release whatever locks the consumer has on the queue?
I need to use m during the rest of the program. Is it safe, or do I need to copy it before I call task_done(), or is m usable after task_done()?
be happy
A:
No, queue.get() pops the item off the queue. After you do that, you can do whatever you want with it, as long as the producer works like it should and doesn't touch it anymore. queue.task_done() is called only to notify the queue that you are done with something (it doesn't even know about the specific item, it just counts unfinished items in the queue), so that queue.join() knows the work is finished.
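A minimal sketch of the usual pattern (using Python 2's Queue module; process() and produce_items() are hypothetical placeholders, not part of the question's code):
import Queue
import threading
q = Queue.Queue()
def worker():
    while True:
        m = q.get()        # removes the item; it is now exclusively the consumer's
        process(m)         # hypothetical work; m stays valid as long as we hold a reference
        q.task_done()      # only decrements the queue's unfinished-task counter
for i in range(4):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()
for item in produce_items():   # hypothetical producer
    q.put(item)
q.join()                       # returns once task_done() has been called for every put()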
|
Python Queue get()/task_done() issue
|
My consumer side of the queue:
m = queue.get()
queue.task_done()
<rest of the program>
Questions:
Does task_done() effectively pop m off the queue and release whatever locks the consumer has on the queue?
I need to use m during the rest of the program. Is it safe, or do I need to copy it before I call task_done(), or is m usable after task_done()?
be happy
|
[
"No, queue.get() pops the item off the queue. After you do that, you can do whatever you want with it, as long as the producer works like it should and doesn't touch it anymore. queue.task_done() is called only to notify the queue that you are done with something (it doesn't even know about the specific item, it just counts unfinished items in the queue), so that queue.join() knows the work is finished.\n"
] |
[
58
] |
[] |
[] |
[
"multithreading",
"python",
"queue"
] |
stackoverflow_0001593299_multithreading_python_queue.txt
|
Q:
How to get all related/parent instances from set of child instances without looping through latter set
Please regard the following Django models:
class ParentModel(models.Model):
    ...
class ChildModel(models.Model):
    parent = models.ForeignKey(ParentModel, related_name='children')
Let's assume there is a certain subset of all children in the database available as a queryset (call it the 1st set).
Now, I'd like to gain access to the subset of all parents (call it the 2nd set) that said children from the 1st set relate to.
How do you do that without looping through the 1st set at the Python level (potentially causing a linear number of DB hits), i.e., with only one or two DB hits?
Thank you!
A:
Assuming you have a queryset called children:
ParentModel.objects.filter(children__in=children)
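Note that if several children in the queryset share a parent, the underlying join can yield that parent more than once; appending .distinct() removes the duplicates:
parents = ParentModel.objects.filter(children__in=children).distinct()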
|
How to get all related/parent instances from set of child instances without looping through latter set
|
Please regard the following Django models:
class ParentModel(models.Model):
    ...
class ChildModel(models.Model):
    parent = models.ForeignKey(ParentModel, related_name='children')
Let's assume there is a certain subset of all children in the database available as a queryset (call it the 1st set).
Now, I'd like to gain access to the subset of all parents (call it the 2nd set) that said children from the 1st set relate to.
How do you do that without looping through the 1st set at the Python level (potentially causing a linear number of DB hits), i.e., with only one or two DB hits?
Thank you!
|
[
"Assuming you have a queryset called children:\nParentModel.objects.filter(children__in=children)\n\n"
] |
[
4
] |
[] |
[] |
[
"django",
"django_views",
"foreign_key_relationship",
"python"
] |
stackoverflow_0001593306_django_django_views_foreign_key_relationship_python.txt
|
Q:
How do I check if the python debug option is set from within a script
If I'm in debug mode, I want to do different things than when I'm not.
if DEBUG:
STORED_DATA_FILE = os.path.join(TEMP_DIR, 'store.dat')
LOG_LEVEL = logging.DEBUG
print "debug mode"
else:
STORED_DATA_FILE = os.path.join(SCRIPT_PATH, 'store.dat')
LOG_LEVEL = logging.INFO
print "not debug mode"
then:
python script.py
not debug mode
python -d script.py
debug mode
How can I detect that? It certainly isn't using the __debug__ variable.
A:
You can use python -O with the __debug__ variable,
where -O means optimise, so __debug__ is False.
-d turns on debugging for the parser, which is not what you want.
A:
Parser debug mode is enabled with the -d command-line option or the PYTHONDEBUG environment variable and, starting from Python 2.6, is reflected in sys.flags.debug. But are you sure this is what you are looking for?
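Both can be checked at runtime; a small sketch (sys.flags needs Python 2.6+):
import sys
if sys.flags.debug:   # True when the interpreter was started with -d
    print "parser debug mode"
if __debug__:         # False only when started with -O or -OO
    print "optimizations are off, __debug__ blocks run"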
|
How do I check if the python debug option is set from within a script
|
If I'm in debug mode, I want to do different things than when I'm not.
if DEBUG:
STORED_DATA_FILE = os.path.join(TEMP_DIR, 'store.dat')
LOG_LEVEL = logging.DEBUG
print "debug mode"
else:
STORED_DATA_FILE = os.path.join(SCRIPT_PATH, 'store.dat')
LOG_LEVEL = logging.INFO
print "not debug mode"
then:
python script.py
not debug mode
python -d script.py
debug mode
How can I detect that? It certainly isn't using the __debug__ variable.
|
[
"you can use python -O with the __debug__ variable\nwhere -O means optimise. so __debug__ is false\n-d turns on debugging for the parser, which is not what you want\n",
"Parser debug mode is enabled with -d commandline option or PYTHONDEBUG environment variable and starting from python 2.6 is reflected in sys.flags.debug. But are you sure this is what you are looking for?\n"
] |
[
14,
7
] |
[] |
[] |
[
"python",
"script_debugging"
] |
stackoverflow_0001593274_python_script_debugging.txt
|
Q:
Constructors in Python
I need help in writing code for a Python constructor method.
This constructor method would take the following three parameters:
x, y, angle
What is an example of this?
A:
class MyClass(object):
def __init__(self, x, y, angle):
self.x = x
self.y = y
self.angle = angle
The constructor is always written as a function called __init__(). It must always take as its first argument a reference to the instance being constructed. This is typically called self. The rest of the arguments are up to the programmer.
The object on the first line is the superclass, i.e. this says that MyClass is a subclass of object. This is normal for Python class definitions.
You access fields (members) of the instance using the self. syntax.
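Constructing and inspecting an instance then looks like this:
>>> m = MyClass(3.0, 4.0, 90)
>>> m.x, m.y, m.angle
(3.0, 4.0, 90)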
A:
Constructors are declared with __init__(self, other parameters), so in this case:
def __init__(self, x, y, angle):
self.x = x
self.y = y
self.angle = angle
You can read more about this here: Class definition in python
A:
See the Python tutorial.
|
Constructors in Python
|
I need help in writing code for a Python constructor method.
This constructor method would take the following three parameters:
x, y, angle
What is an example of this?
|
[
"class MyClass(object):\n def __init__(self, x, y, angle):\n self.x = x\n self.y = y\n self.angle = angle\n\nThe constructor is always written as a function called __init__(). It must always take as its first argument a reference to the instance being constructed. This is typically called self. The rest of the arguments are up to the programmer.\nThe object on the first line is the superclass, i.e. this says that MyClass is a subclass of object. This is normal for Python class definitions.\nYou access fields (members) of the instance using the self. syntax.\n",
"Constructors are declared with __init__(self, other parameters), so in this case:\ndef __init__(self, x, y, angle):\n self.x = x\n self.y = y\n self.angle = angle\n\nYou can read more about this here: Class definition in python\n",
"See the Python tutorial.\n"
] |
[
19,
6,
2
] |
[
"\nclass MyClass(SuperClass):\n def __init__(self, *args, **kwargs):\n super(MyClass, self).__init__(*args, **kwargs)\n # do initialization\n\n"
] |
[
-4
] |
[
"constructor",
"python"
] |
stackoverflow_0001593441_constructor_python.txt
|
Q:
getpos() coding
I just wanted to know how to write the getpos() command, which must return an (x, y) tuple of the current position.
Does it start like this:
def getpos(x 100, y 100)
I'm not sure; need help.
A:
This is a bit underspecified, but this might work:
def getpos(self):
return (self.x, self.y)
This is how to return a tuple, from values assumed to be instance variables.
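For example, on a small hypothetical class that stores its position in x and y:
class Turtle(object):
    def __init__(self, x=100, y=100):
        self.x = x
        self.y = y
    def getpos(self):
        return (self.x, self.y)

t = Turtle()
print t.getpos()   # prints (100, 100)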
A:
In Python you can't force a return type on a function from its header. The return type can change from one call to another.
When you write:
def getpos(x,y):
It means that the function receives 2 parameters, which inside the function are called x and y. No type is forced on them either.
Just write the function so it returns the tuple, e.g.:
def getpos():
x = 100
y = 100
return (x,y)
The syntax that you used (def getpos(x 100, y 100) ) does not have any meaning that I know.
|
getpos() coding
|
I just wanted to know how to write the getpos() command, which must return an (x, y) tuple of the current position.
Does it start like this:
def getpos(x 100, y 100)
I'm not sure; need help.
|
[
"This is a bit underspecified, but this might work:\ndef getpos(self):\n return (self.x, self.y)\n\nThis is how to return a tuple, from values assumed to be instance variables.\n",
"In Python you can't force a return type on a function from its header. The return type can change from one call to another.\nWhen you write:\ndef getpos(x,y):\n\nIt means that the function receives 2 parameters, that inside the function are called x & y. No type is forced on them either.\nJust write the function so it returns the tuple, e.g.:\ndef getpos():\n x = 100\n y = 100\n return (x,y)\n\nThe syntax that you used (def getpos(x 100, y 100) ) does not have any meaning that I know.\n"
] |
[
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001593599_python.txt
|
Q:
String replacing in a file by given position
I have a file opened in 'ab+' mode.
What I need to do is replace some bytes in the file with another string's bytes, such that:
FILE:
thisissomethingasperfectlygood.
string:
01234
So, for example, I seek to position (4, 0) and I want to write 01234 in place of "issom" in the file. The result would be:
this01234ethingasperfectlygood.
There are some solutions on the net, but all of them (at least those I could find) are based on "first find a string in the file and then replace it with another one". Because my case is based on seeking, I am confused about the solution.
A:
You could mmap() your file and then use slice notation to update specific byte ranges in the file. The example here should help.
A:
You can use mmap for that:
import os, mmap
f = os.open("afile", os.O_RDWR)
m = mmap.mmap(f, 0)   # map the whole file
m[4:9] = "01234"      # the replacement must be exactly as long as the slice
m.flush()             # push the change back to disk
m.close()
os.close(f)
|
String replacing in a file by given position
|
I have a file opened in 'ab+' mode.
What I need to do is replace some bytes in the file with another string's bytes, such that:
FILE:
thisissomethingasperfectlygood.
string:
01234
So, for example, I seek to position (4, 0) and I want to write 01234 in place of "issom" in the file. The result would be:
this01234ethingasperfectlygood.
There are some solutions on the net, but all of them (at least those I could find) are based on "first find a string in the file and then replace it with another one". Because my case is based on seeking, I am confused about the solution.
|
[
"You could mmap() your file and then use slice notation to update specific byte ranges in the file. The example here should help.\n",
"You can use mmap for that\nimport os,mmap\nf=os.open(\"afile\",os.O_RDWR)\nm=mmap.mmap(f,0)\nm[4:9]=\"01234\"\nos.close(f)\n\n"
] |
[
2,
2
] |
[] |
[] |
[
"python",
"replace",
"seek",
"string"
] |
stackoverflow_0001593576_python_replace_seek_string.txt
|
Q:
python constructing functions on the fly
I have several little functions f1, f2, f3 and a function f.
I want f to be a "container" for f1, f2, f3: to perform some of the operations f1, f2, f3, depending on the program configuration (for example f1 and f2, or f1 and f3, or all three) and nothing more.
I see two simple solutions: first, to add some if's in the function f:
if configuration_f1_f2:
f1()
f2()
second, I can add a list of operations in f:
for op in operations:
op()
and add f1, f2, f3 to operations or remove them, depending on the configuration.
But can I somehow construct the code of f dynamically, adding to it exactly the calls of f1, f2 and f3 that I need, without any if's or lists or for's? I mean something like on-the-fly code manipulation. So if my configuration is "f1 and f3", I set the code of f so that it is
f1()
f3()
and when my configuration changes to "f2 and f3", I modify the code of f to
f2()
f3()
Can I manipulate the code of the function that way?
A:
If f1, f2 etc. are functions with side effects, then you should use an explicit for loop (no fancy map solution). Perhaps you want something like this?
configurations = {
'config_1': (f1, f2, f3),
'config_2': (f1, f2),
}
def f(config='config_1'):
for op in configurations[config]:
op()
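For example, assuming f1, f2 and f3 are defined before the configurations dict, the first variant dispatches like this:
f()             # the default 'config_1' runs f1, f2 and f3
f('config_2')   # runs only f1 and f2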
If f1, f2 etc. receive arguments, then perhaps this is a more suitable definition of f:
def f(config, *args, **kwargs):
for op in configurations[config]:
op(*args, **kwargs)
A:
If you're brave, you can construct a function definition as a string and pass it to the exec statement. For example:
func = "def f():\n"
if config_f1:
func += " f1()\n"
if config_f2:
func += " f2()\n"
exec func in globals()
At this point, you should have a new global f() that executes the appropriate bits of code.
A:
Create a dictionary:
fun_dict = {'f1': f1, 'f2': f2}
then parse your config and run the function from the dict:
fun_dict[parsed_name]()
A:
There can be various ways to achieve what you want, but for that you need to fully define the problem and context.
A way could be like this, and most of the others will be variants of it, using a dict to look up the function we need:
def f1(): print "f1"
def f2(): print "f2"
def f3(): print "f3"
def f(fList):
for f in fList: globals()[f]()
f(["f1", "f2"])
f("f1 f3 f1 f3 f2 f1 f3 f2".split())
A:
Use objects and the Command design pattern.
class Function( object ):
    pass
class F1( Function ):
    def __call__( self ):
        pass  # whatever `f1` used to do
class F2( Function ):
    def __call__( self ):
        pass  # whatever `f2` used to do
class F3( Function ):
    def __call__( self ):
        pass  # whatever `f3` used to do
class Sequence( Function ):
    def __init__( self, *someList ):
        self.sequence = someList
    def __call__( self ):
        for f in self.sequence:
            f()
f = Sequence( F1(), F2(), F3() )
f()
That's how it's done. No "constructing a function on the fly"
|
python constructing functions on the fly
|
I have several little functions f1, f2, f3 and a function f.
I want f to be a "container" for f1, f2, f3: to perform some of the operations f1, f2, f3, depending on the program configuration (for example f1 and f2, or f1 and f3, or all three) and nothing more.
I see two simple solutions: first, to add some if's in the function f:
if configuration_f1_f2:
f1()
f2()
second, I can add a list of operations in f:
for op in operations:
op()
and add f1, f2, f3 to operations or remove them, depending on the configuration.
But can I somehow construct the code of f dynamically, adding to it exactly the calls of f1, f2 and f3 that I need, without any if's or lists or for's? I mean something like on-the-fly code manipulation. So if my configuration is "f1 and f3", I set the code of f so that it is
f1()
f3()
and when my configuration changes to "f2 and f3", I modify the code of f to
f2()
f3()
Can I manipulate the code of the function that way?
|
[
"If f1, f2 etc. are functions with side effects, than you should use an explicit for loop (no fancy map solution). Perhaps you want something like this?\nconfigurations = {\n 'config_1': (f1, f2, f3),\n 'config_2': (f1, f2),\n}\n\ndef f(config='config_1'):\n for op in configurations[config]:\n op()\n\nIf f1, f2 etc. receive arguments, then perhaps this is a more suitable definition of f:\ndef f(config, *args, **kwargs):\n for op in configurations[config]:\n op(*args, **kwargs)\n\n",
"If you're brave, you can construct a function definition as a string and pass it to the exec statement. For example:\nfunc = \"def f():\\n\"\nif config_f1:\n func += \" f1()\\n\"\nif config_f2:\n func += \" f2()\\n\"\nexec func in globals()\n\nAt this point, you should have a new global f() that executes the appropriate bits of code.\n",
"Create dictionary\nfun_dict = {'f1': f1, 'f2': f2}\n\nthen parse your config and run function from dict:\nfun_dict[parsed_name]()\n\n",
"There can be various ways to achieve what you want, but for that you need to fully define the problem and context\na way could be like this, and most of the other will be variant of this, using a dict to lookup the function we need\ndef f1(): print \"f1\"\ndef f2(): print \"f2\"\ndef f3(): print \"f3\"\n\ndef f(fList):\n for f in fList: globals()[f]()\n\nf([\"f1\", \"f2\"])\nf(\"f1 f3 f1 f3 f2 f1 f3 f2\".split())\n\n",
"Use objects and the Command design pattern.\nclass Function( object ):\n pass\n\nclass F1( Function ):\n def __call__( self ):\n whatever `f1` used to do\n\n class F2( Function ):\n def __call__( self ):\n whatever `f1` used to do\n\n class Sequence( Function ):\n def __init__( self, *someList ):\n self.sequence= someList\n def __call__( self ):\n for f in self.sequence:\n f()\n\nf= myDynamicOperation( F1(), F2(), F3() )\nf()\n\nThat's how it's done. No \"constructing a function on the fly\"\n"
] |
[
3,
3,
1,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001593572_python.txt
|
Q:
Python: Get name of instantiating class?
Example:
class Class1:
def __init__(self):
self.x = Class2('Woo!')
class Class2:
def __init__(self, word):
print word
meow = Class1()
How do I derive the class name that created the self.x instance? In other words, if I was given the instance self.x, how do I get the name 'Class1'? Using self.x.__class__.__name__ will obviously only give you the Class2 name. Is this even possible? Thanks.
A:
You can't, unless you pass an instance of the 'creator' to the Class2() constructor. e.g.
class Class1(object):
def __init__(self, *args, **kw):
self.x = Class2("Woo!", self)
class Class2(object):
def __init__(self, word, creator, *args, **kw):
self._creator = creator
print word
This creates an inverse link between the classes for you.
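With that link stored, the lookup the question asks for becomes trivial:
>>> meow = Class1()
Woo!
>>> meow.x._creator.__class__.__name__
'Class1'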
A:
Set a variable on the class in question in your __init__() method that you then retrieve later on.
You'll get better answers if you ask better questions. This one is pretty unclear.
A:
Your question is very similar to the one answered here. Note that you can determine who created the instance in its constructor, but not afterwards. Anyway, the best way is to pass the creator into the constructor explicitly.
|
Python: Get name of instantiating class?
|
Example:
class Class1:
def __init__(self):
self.x = Class2('Woo!')
class Class2:
def __init__(self, word):
print word
meow = Class1()
How do I derive the class name that created the self.x instance? In other words, if I was given the instance self.x, how do I get the name 'Class1'? Using self.x.__class__.__name__ will obviously only give you the Class2 name. Is this even possible? Thanks.
|
[
"You can't, unless you pass an instance of the 'creator' to the Class2() constructor. e.g.\nclass Class1(object):\n def __init__(self, *args, **kw):\n self.x = Class2(\"Woo!\", self)\n\nclass Class2(object):\n def __init__(self, word, creator, *args, **kw):\n self._creator = creator\n print word\n\nThis creates an inverse link between the classes for you\n",
"Set a variable on the class in question in your __init__() method that you then retrieve later on.\nYou'll get better answers if you ask better questions. This one is pretty unclear.\n",
"Your question is very similar to answered here. Note, that you can determine who created the instance in its constructor, but not afterwards. Anyway, the best way is to pass creator into constructor explicitly.\n"
] |
[
6,
1,
0
] |
[] |
[] |
[
"class",
"instance",
"python"
] |
stackoverflow_0001593632_class_instance_python.txt
|
Q:
forward and back command
I wanted to know how to write forward and back commands in a superclass. I'm not sure, but I gave it a try; I don't know if it's right or wrong. Some help please:
def forward(self):
return (self.100)
def back(self):
return (self.50)
A:
def forward(self):
self.position += self.distance
return (self.position)
def back(self):
self.position -= self.distance
return (self.position)
EDIT:
I assumed you are doing something like the progress bar of an installer app, where some operations advance progress (copying files), and if the user cancels the install, other operations roll it back (deleting files). Try asking a proper question, as other users have commented.
|
forward and back command
|
I wanted to know how to write forward and back commands in a superclass. I'm not sure, but I gave it a try; I don't know if it's right or wrong. Some help please:
def forward(self):
return (self.100)
def back(self):
return (self.50)
|
[
"def forward(self):\n self.position += self.distance\n return (self.position)\n\ndef back(self):\n self.position -= self.distance\n return (self.position)\n\nEDIT:\nI assumed you are doing something like progress bar of install app, where some operation advances progress (copying files), and if user cancels install others operation make rollback (deleting files). Try ask a proper question as other users comment.\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001593818_python.txt
|
Q:
SVN hook environment issues with Python script
I am experiencing issues with my SVN post-commit hook and the fact that it is executed with an empty environment. Everything was working fine till about two weeks ago when my systems administrator upgraded a few things on the server.
My post-commit hook executes a Python script that uses a SVN module to email information about the commit to me. After the recent upgrades, however, Python cannot find the SVN module when executed via the hook. When executed by hand (ie with all environment variables intact) everything works fine.
I have tried setting the PYTHONPATH variable in my post-commit hook directly (PYTHONPATH=/usr/local/lib/svn-python), but that makes no difference.
How can I tell Python where the module is located?
A:
Your system administrator might have forgotten to execute this command.
echo /usr/local/lib/svn-python \
> /usr/local/lib/python2.x/site-packages/subversion.pth
This is written in subversion/bindings/swig/INSTALL in the source distribution.
A:
Got it! I missed the export in my post-commit hook script!
It should have been:
export PYTHONPATH=/usr/local/lib/svn-python
Problem solved :)
|
SVN hook environment issues with Python script
|
I am experiencing issues with my SVN post-commit hook and the fact that it is executed with an empty environment. Everything was working fine till about two weeks ago when my systems administrator upgraded a few things on the server.
My post-commit hook executes a Python script that uses an SVN module to email information about the commit to me. After the recent upgrades, however, Python cannot find the SVN module when executed via the hook. When executed by hand (i.e., with all environment variables intact) everything works fine.
I have tried setting the PYTHONPATH variable in my post-commit hook directly (PYTHONPATH=/usr/local/lib/svn-python), but that makes no difference.
How can I tell Python where the module is located?
|
[
"Your system administrator might have forgotten to execute this command.\necho /usr/local/lib/svn-python \\\n> /usr/local/lib/python2.x/site-packages/subversion.pth\n\nThis is written in subversion/bindings/swig/INSTALL in the source distribution.\n",
"Got it! I missed the export in my post-commit hook script!\nIt should have been:\nexport PYTHONPATH=/usr/local/lib/svn-python\nProblem solved :)\n"
] |
[
1,
1
] |
[] |
[] |
[
"python",
"svn"
] |
stackoverflow_0001576784_python_svn.txt
|
Q:
Get the request uri outside of a RequestHandler in Google App Engine (Python)
So, within a webapp.RequestHandler subclass I would use self.request.uri to get the request URI. But I can't access this outside of a RequestHandler, so that approach doesn't work. Any ideas?
I'm running Python and I'm new at it as well as GAE.
A:
You should generally be doing everything within some sort of RequestHandler or the equivalent in your non-WebApp framework. However, if you really insist on being stuck in the early 1990s and writing plain CGI scripts, the environment variables SERVER_NAME and PATH_INFO may be what you want; see a CGI reference for more info.
A:
Since using a request outside the code that handles it is meaningless, I assume you'd like to access it from some method called by the handler without passing the request to it. Your choices are:
Refactor the code so that the request is passed to it.
When the former is not possible, use a hack: define a global threading.local(), store the request somewhere in the request handler, and access it in your method (see the sketch below).
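A minimal sketch of the threading.local() hack from option 2; the handler and helper names here are hypothetical:
import threading

from google.appengine.ext import webapp

_local = threading.local()  # one storage slot per serving thread

def current_uri():
    # Reads the stashed request; fails outside an active request.
    return _local.request.uri

class MyHandler(webapp.RequestHandler):
    def get(self):
        _local.request = self.request  # stash the request on entry
        self.response.out.write(current_uri())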
|
Get the request uri outside of a RequestHandler in Google App Engine (Python)
|
So, within a webapp.RequestHandler subclass I would use self.request.uri to get the request URI. But I can't access this outside of a RequestHandler, so that approach doesn't work. Any ideas?
I'm running Python and I'm new at it as well as GAE.
|
[
"You should generally be doing everything within some sort of RequestHandler or the equivalent in your non-WebApp framework. However, if you really insist on being stuck in the early 1990s and writing plain CGI scripts, the environment variables SERVER_NAME and PATH_INFO may be what you want; see a CGI reference for more info.\n",
"Since using request outside code handling it is meaningless I assume you'd like to access it from some method called by handler without passing request to it. Your choices are:\n\nRefactor code so that request is passed to it.\nWhen the former is not possible use a hack by defining a global threading.local(), storing request somewhere in request handler and access it in your method.\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001593483_google_app_engine_python.txt
|
Q:
angles commands in superclass
How do you write a command that turns left or right by an angle in a superclass? Is it like this:
def left(self):
self.position += self.angle
return (self.position)
Is it the same as the forward and back commands?
A:
It looks like you are interested in something like the Logo turtle. Look at http://docs.python.org/library/turtle.html
If so, the left function doesn't change the position of the turtle, only its orientation.
def left(self, angle):
self.angle -= angle*2*math.pi/360
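A sketch of how left could sit next to forward in a minimal turtle-like class (the attribute names are assumptions, and math.radians performs the same degree-to-radian conversion; note the snippet above needs import math):
import math

class Turtle(object):
    def __init__(self):
        self.x = self.y = 0.0
        self.angle = 0.0  # heading, in radians

    def left(self, angle):
        # Turning changes only the heading, not the position.
        self.angle -= math.radians(angle)

    def forward(self, distance):
        # Moving changes the position along the current heading.
        self.x += distance * math.cos(self.angle)
        self.y += distance * math.sin(self.angle)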
|
angles commands in superclass
|
How do you write a command that turns left or right by an angle in a superclass? Is it like this:
def left(self):
self.position += self.angle
return (self.position)
Is it the same as the forward and back commands?
|
[
"It looks like you are interested by something like the logo turtle. Look at http://docs.python.org/library/turtle.html\nIf so, the left function doesn't change the position of the turtle but its orientation.\ndef left(self, angle):\n self.angle -= angle*2*math.pi/360\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001594090_python.txt
|
Q:
Permissions for a site only
I have a multilingual Django project. Every language is a different subdomain.
So we've decided to use the "sites" application and to create one different site for every language.
On that project, I also have a "pages" application, which is quite similar to a CMS. The user can create pages with content and they'll be displayed in the appropriate language site.
Now I'm looking to be able to manage advanced permissions. What I need to do is to allow, in the admin application a user only to create and update pages for one (or many) specific language/site.
What would be the cleanest way to do something like that?
Edit: Here is the solution I've adopted, adapted from Chris's answer.
I created a decorator that checks whether the user is in the group that has access to the language.
See Chris' accepted answer for an example of this.
In a "normal" view, I do the following :
def view(request):
# Whatever you wanna do
return render_to_response('page.html', {}, RequestContext(request))
view = group_required(view)
If the user is in the group, the wrapped view runs normally. Otherwise, it returns an "Access denied" response.
And in my admin, I do the following :
class PageAdmin(admin.ModelAdmin):
list_display = ('title', 'published')
fieldsets = [
(None, {'fields': ['title', 'slug', 'whatever_field_you_have']}),
]
def has_add_permission(self, request):
return in_group_required(request)
admin.site.register(Page, PageAdmin)
Here in_group_required is a method similar to the group_required mentioned above, but one that returns only True or False depending on whether we have access.
And because they are used quite a lot in the previous examples, here is what I have in my in_group and group_required methods.
def group_required(func):
def _decorator(request, *args, **kwargs):
if not in_group(request):
return HttpResponse("Access denied")
        return func(request, *args, **kwargs)
return _decorator
def in_group(request):
language = Language.objects.get(site__domain__exact=request.get_host())
for group in language.group.all():
if request.user in group.user_set.all():
return True
return False
A:
You could create a Group (http://docs.djangoproject.com/en/dev/topics/auth/)
per site / language and add the users to the groups accordingly.
Then, you can check whether the request.user belongs to the appropriate group.
(You can do this with a decorator:
def group_required(func):
def _decorator(request, *args, **kwargs):
hostname = request.META.get('HTTP_HOST')
lang = hostname.split(".")[0]
        if lang not in request.user.groups.values_list('name', flat=True):
            return HttpResponse("Access denied")
        return func(request, *args, **kwargs)
return _decorator
(Correct / modify the code to match your requirements...)
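For example, with the decorator above in scope, a hypothetical view could be wrapped like this (Python 2.4+ syntax):
from django.http import HttpResponse

@group_required
def my_view(request):
    # Runs only when the decorator let the request through.
    return HttpResponse("Members only")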
A:
You can override has_add_permission (and related methods) in your ModelAdmin class.
(with code similar to that shown above)
A:
If you want to filter the Page objects on the admin index of your page-application,
you can override the method queryset() in ModelAdmin.
This QuerySet returns only those Page objects that belong to a Site (and therefore Group)
of which the request.user is a member.
Page.objects.filter(site__name__in=request.user.groups.values_list('name', flat=True))
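A sketch of that override in context, assuming (as in the decorator above) that group names match site names; Page stands in for your page model:
from django.contrib import admin

from pages.models import Page  # hypothetical import path

class PageAdmin(admin.ModelAdmin):
    def queryset(self, request):
        qs = super(PageAdmin, self).queryset(request)
        if request.user.is_superuser:
            return qs  # superusers keep full access
        # Keep only pages whose site name matches one of the user's groups.
        names = request.user.groups.values_list('name', flat=True)
        return qs.filter(site__name__in=names)

admin.site.register(Page, PageAdmin)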
|
Permissions for a site only
|
I have a multilingual Django project. Every language is a different subdomain.
So we've decided to use the "sites" application and to create one different site for every language.
On that project, I also have a "pages" application, which is quite similar to a CMS. The user can create pages with content and they'll be displayed in the appropriate language site.
Now I'm looking to be able to manage advanced permissions. What I need to do is to allow, in the admin application a user only to create and update pages for one (or many) specific language/site.
What would be the cleanest way to do something like that?
Edit: Here is the solution I've adopted, adapted from Chris's answer.
I created a decorator that checks whether the user is in the group that has access to the language.
See Chris' accepted answer for an example of this.
In a "normal" view, I do the following :
def view(request):
# Whatever you wanna do
return render_to_response('page.html', {}, RequestContext(request))
view = group_required(view)
If the user is in the group, the wrapped view runs normally. Otherwise, it returns an "Access denied" response.
And in my admin, I do the following :
class PageAdmin(admin.ModelAdmin):
list_display = ('title', 'published')
fieldsets = [
(None, {'fields': ['title', 'slug', 'whatever_field_you_have']}),
]
def has_add_permission(self, request):
return in_group_required(request)
admin.site.register(Page, PageAdmin)
Here in_group_required is a method similar to the group_required mentioned above, but one that returns only True or False depending on whether we have access.
And because they are used quite a lot in the previous examples, here is what I have in my in_group and group_required methods.
def group_required(func):
def _decorator(request, *args, **kwargs):
if not in_group(request):
return HttpResponse("Access denied")
        return func(request, *args, **kwargs)
return _decorator
def in_group(request):
language = Language.objects.get(site__domain__exact=request.get_host())
for group in language.group.all():
if request.user in group.user_set.all():
return True
return False
|
[
"You could create a Group (http://docs.djangoproject.com/en/dev/topics/auth/)\nper site / language and add the users to the groups accordingly.\nThen, you can check if the request.user.groups belongs to the group.\n(You can do this with a decorator:\ndef group_required(func):\n def _decorator(request, *args, **kwargs):\n hostname = request.META.get('HTTP_HOST')\n lang = hostname.split(\".\")[0]\n if not lang in request.user.groups:\n return HttpResponse(\"Access denied\")\n return func(*args, **kwargs)\n return _decorator\n\n(Correct / modify the code to match your requirements...)\n",
"You can override has_add_permission (and related methods) in your ModelAdmin class.\n(With similar code like shown above)\n",
"If you want to filter the Page objects on the admin index of your page-application,\nyou can override the method queryset() in ModelAdmin.\nThis QuerySet returns only those Page objects, that belong to a Site (and therefore Group)\nof which the request.user is a member.\nPages.objects.filter(site__name__in=request.user.groups)\n\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"django",
"permissions",
"python"
] |
stackoverflow_0001593423_django_permissions_python.txt
|
Q:
Including non-Python files with setup.py
How do I make setup.py include a file that isn't part of the code? (Specifically, it's a license file, but it could be any other thing.)
A:
http://docs.python.org/distutils/setupscript.html#installing-additional-files is all you should need.
Since you mentioned a license file, you can include additional meta-data (such as a license) this way.
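A minimal sketch using the data_files option described there (the file name and target path are assumptions):
from distutils.core import setup

setup(
    name='example',
    version='0.1',
    py_modules=['example'],
    license='MIT',
    # Copies LICENSE.txt into <prefix>/share/doc/example at install time.
    data_files=[('share/doc/example', ['LICENSE.txt'])],
)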
|
Including non-Python files with setup.py
|
How do I make setup.py include a file that isn't part of the code? (Specifically, it's a license file, but it could be any other thing.)
|
[
"http://docs.python.org/distutils/setupscript.html#installing-additional-files is all you should need.\nSince you mentioned a license file, you can include additional meta-data (such as a license) this way.\n"
] |
[
4
] |
[] |
[] |
[
"distutils",
"python"
] |
stackoverflow_0001594838_distutils_python.txt
|
Q:
Why do simple math operations on floating point return unexpected (inaccurate) results in VB.Net and Python?
x = 4.2 - 0.1
vb.net gives 4.1000000000000005
python gives 4.1000000000000005
Excel gives 4.1
Google calc gives 4.1
What is the reason this happens?
A:
Float/double precision.
You must remember that 4.1 = 4 + 1/10, and 1/10 is an infinitely repeating fraction in binary, much like 1/9 is in decimal.
A:
>>> x = 4.2 - 0.1
>>> x
4.1000000000000005
>>> print(x)
4.1
This happens because of how numbers are stored internally.
Computers represent numbers in binary, instead of the decimal we humans are used to. With floating point numbers, computers have to make an approximation to the closest binary floating point value.
Almost all machines today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N where J is an integer containing exactly 53 bits.
If you print the number, it will show a rounded approximation rather than the full stored value. For example, the value actually stored for 0.1 is 0.1000000000000000055511151231257827021181583404541015625.
If you really need a base 10 based number (if you don't know the answer to this question, you don't), you could use (in Python) decimal.Decimal:
>>> from decimal import Decimal
>>> Decimal("4.2") - Decimal("0.1")
Decimal("4.1")
Binary floating-point arithmetic holds many surprises like this. The problem with “0.1” is explained in precise detail in the “Representation Error” section of the Python tutorial. See The Perils of Floating Point for a more complete account of other common surprises.
As that says near the end, “there are no easy answers.” Still, don’t be unduly wary of floating-point! The errors in Python float operations are inherited from the floating-point hardware, and on most machines are on the order of no more than 1 part in 2**53 per operation. That’s more than adequate for most tasks, but you do need to keep in mind that it’s not decimal arithmetic, and that every float operation can suffer a new rounding error.
While pathological cases do exist, for most casual use of floating-point arithmetic you’ll see the result you expect in the end if you simply round the display of your final results to the number of decimal digits you expect. str() usually suffices, and for finer control see the str.format() method’s format specifiers in Format String Syntax.
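For example, rounding only the display is enough to hide the representation error:
>>> x = 4.2 - 0.1
>>> x
4.1000000000000005
>>> '%.1f' % x           # formats the display; the stored value is unchanged
'4.1'
>>> '{0:.1f}'.format(x)  # str.format equivalent
'4.1'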
A:
There is no problem, really. It is just the way floats work (their internal binary representation). Anyway:
>>> from decimal import Decimal
>>> Decimal('4.2')-Decimal('0.1')
Decimal('4.1')
A:
In VB.NET, you can avoid this problem by using the Decimal type instead:
Dim x As Decimal = 4.2D - 0.1D
The result is 4.1.
|
Why do simple math operations on floating point return unexpected (inaccurate) results in VB.Net and Python?
|
x = 4.2 - 0.1
vb.net gives 4.1000000000000005
python gives 4.1000000000000005
Excel gives 4.1
Google calc gives 4.1
What is the reason this happens?
|
[
"Float/double precision.\nYou must remember that in binary, 4.1 = 4 + 1/10. 1/10 is an infinitely repeating sum in binary, much like 1/9 is an infinite sum in decimal.\n",
">>> x = 4.2 - 0.1 \n>>> x\n4.1000000000000005\n\n>>>>print(x)\n4.1\n\nThis happens because of how numbers are stored internally.\nComputers represent numbers in binary, instead of decimal, as us humans are used to. With floating point numbers, computers have to make an approximation to the closest binary floating point value.\n\nAlmost all machines today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2***N* where J is an integer containing exactly 53 bits.\n\nIf you print the number, it will show the approximation, truncated to a normal value. For example, the real value of 0.1 is 0.1000000000000000055511151231257827021181583404541015625.\nIf you really need a base 10 based number (if you don't know the answer to this question, you don't), you could use (in Python) decimal.Decimal:\n>>> from decimal import Decimal\n>>> Decimal(\"4.2\") - Decimal(\"0.1\")\nDecimal(\"4.1\")\n\n\nBinary floating-point arithmetic holds many surprises like this. The problem with “0.1” is explained in precise detail below, in the “Representation Error” section. See The Perils of Floating Point for a more complete account of other common surprises.\nAs that says near the end, “there are no easy answers.” Still, don’t be unduly wary of floating-point! The errors in Python float operations are inherited from the floating-point hardware, and on most machines are on the order of no more than 1 part in 2**53 per operation. That’s more than adequate for most tasks, but you do need to keep in mind that it’s not decimal arithmetic, and that every float operation can suffer a new rounding error.\nWhile pathological cases do exist, for most casual use of floating-point arithmetic you’ll see the result you expect in the end if you simply round the display of your final results to the number of decimal digits you expect. str() usually suffices, and for finer control see the str.format() method’s format specifiers in Format String Syntax.\n\n",
"There is no problem, really. It is just the way floats work (their internal binary representation). Anyway:\n>>> from decimal import Decimal\n>>> Decimal('4.2')-Decimal('0.1')\nDecimal('4.1')\n\n",
"In vb.net, you can avoid this problem by using Decimal type instead:\nDim x As Decimal = 4.2D - 0.1D\n\nThe result is 4.1 .\n"
] |
[
15,
10,
4,
2
] |
[] |
[] |
[
"floating_point",
"python",
"vb.net"
] |
stackoverflow_0001594985_floating_point_python_vb.net.txt
|