Q:
How to count possibilities in python lists
Given a list like this:
num = [1, 2, 3, 4, 5]
There are 10 three-element combinations:
[123, 124, 125, 134, 135, 145, 234, 235, 245, 345]
How can I generate this list?
A:
Use itertools.combinations:
import itertools
num = [1, 2, 3, 4, 5]
combinations = []
for combination in itertools.combinations(num, 3):
    combinations.append(int("".join(str(i) for i in combination)))
# => [123, 124, 125, 134, 135, 145, 234, 235, 245, 345]
print len(combinations)
# => 10
Edit
You can skip int(), join(), and str() if you are only interested in the number of combinations. itertools.combinations() yields tuples, which may be all you need.
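A more compact spelling of the same approach, as an editor's sketch (not part of the original answer), using a list comprehension:
import itertools

num = [1, 2, 3, 4, 5]
combinations = [int("".join(str(i) for i in c))
                for c in itertools.combinations(num, 3)]
print(combinations)       # => [123, 124, 125, 134, 135, 145, 234, 235, 245, 345]
print(len(combinations))  # => 10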
A:
You are talking about combinations. There are n!/(k! * (n - k)!) ways to take k elements from a list of n elements. So:
>>> num = [1, 2, 3, 4, 5]
>>> fac = lambda n: 1 if n < 2 else n * fac(n - 1)
>>> combos = lambda n, k: fac(n) / fac(k) / fac(n - k)
>>> combos(len(num), 3)
10
Use itertools.combinations only if you actually want to generate all combinations. Not if you just want to know the number of different combinations.
Also, there are more efficient ways to calculate the number of combinations than using the code shown above. For example,
>>> from operator import truediv, mul
>>> from itertools import starmap
>>> from functools import reduce
>>> combos = lambda n, k: reduce(mul, starmap(truediv, zip(range(n, n - k, -1), range(k, 0, -1))))
>>> combos(len(num), 3)
10.0
(Note that this code uses floating point division!)
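As an editor's aside (not in the original answer): Python 2.6+ ships math.factorial, which gives an integer-exact version without the recursion or the float division:
from math import factorial

def combos(n, k):
    # integer-exact binomial coefficient: n! / (k! * (n - k)!)
    return factorial(n) // (factorial(k) * factorial(n - k))

print(combos(5, 3))  # => 10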
A:
I believe you are looking for the binomial coefficient.
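(Editor's note: on Python 3.8 and later, the standard library computes the binomial coefficient directly; a minimal sketch:)
from math import comb  # Python 3.8+

print(comb(5, 3))  # => 10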
A:
itertools.combinations():
Return r length subsequences of elements from the input iterable.
Combinations are emitted in lexicographic sort order. So, if the input iterable is sorted, the combination tuples will be produced in sorted order.
Elements are treated as unique based on their position, not on their value. So if the input elements are unique, there will be no repeat values in each combination.
>>> import itertools
>>> num = [1, 2, 3, 4, 5]
>>> [i for i in itertools.combinations(num, 3)]
[(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (1, 4, 5), (2, 3, 4), (2, 3, 5),
(2, 4, 5), (3, 4, 5)]
>>>
Q:
How can I update an attribute created by a base class' mutable default argument, without modifying that argument?
I've found a strange issue with subclassing and dictionary updates in new-style classes:
Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on
win32
>>> class a(object):
...     def __init__(self, props={}):
...         self.props = props
...
>>> class b(a):
...     def __init__(self, val = None):
...         super(b, self).__init__()
...         self.props.update({'arg': val})
...
>>> class c(b):
...     def __init__(self, val):
...         super(c, self).__init__(val)
...
>>> b_inst = b(2)
>>> b_inst.props
{'arg': 2}
>>> c_inst = c(3)
>>> c_inst.props
{'arg': 3}
>>> b_inst.props
{'arg': 3}
>>>
In debugging the second call (c(3)), you can see that within a's constructor self.props already equals {'arg': 2}, and when b's constructor runs after that, it becomes {'arg': 3} for both objects!
Also, the constructors are called in this order:
a, b # for b(2)
c, a, b # for c(3)
If you replace self.props.update() with self.props = {'arg': val} in b's constructor, everything works as expected.
But I really need to update this property, not replace it.
A:
props should not have a default value like that. Do this instead:
class a(object):
    def __init__(self, props=None):
        if props is None:
            props = {}
        self.props = props

This is a common Python "gotcha".
A:
Your problem is in this line:
def __init__(self, props={}):
{} is a mutable type, and in Python default argument values are evaluated only once. That means all your instances are sharing the same dictionary object!
To fix this, change it to:
class a(object):
    def __init__(self, props=None):
        if props is None:
            props = {}
        self.props = props
A:
The short version: Do this:
class a(object):
    def __init__(self, props=None):
        self.props = props if props is not None else {}

class b(a):
    def __init__(self, val = None):
        super(b, self).__init__()
        self.props.update({'arg': val})

class c(b):
    def __init__(self, val):
        super(c, self).__init__(val)
The long version:
The function definition is evaluated exactly once, so every time you call the function the same default argument is used. For this to work the way you expected, the default arguments would have to be evaluated every time the function is called. But instead, Python creates the function object once and attaches the defaults to that object (as func_obj.func_defaults).
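A quick illustration of the shared default, added by the editor (not part of the original answer); func_defaults is the Python 2 name, renamed to __defaults__ in Python 3:
def f(d={}):
    d['n'] = d.get('n', 0) + 1
    return d

print(f())              # => {'n': 1}
print(f())              # => {'n': 2} -- the very same dict object
print(f.func_defaults)  # => ({'n': 2},)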
Q:
simplifying data structures and condition statements in python code
I was wondering if there are any ways to simplify the following piece of code. As you can see, there are numerous dicts being used, as well as condition statements to weed out bad input data. Note that the trip rate values are not all entered yet; the dicts are just copied and pasted for now.
EDIT
In any of the rates dicts, for an entry (x, y): z, the x and y values are correct; the z values are not, as they're just copied and pasted.
This code runs, in case you want to copy, paste, and test it.
import math
# step 1.4 return trip rates
def trip_rates( population_stratification, analysis_type, low_income, medium_income, high_income ):
    ''' this function returns the proper trip rate tuple to be used based on input
    data
    ADPT = Average Daily Person Trips per Household
    pph = person per household
    veh_hh = vehicles per household
    (param_1, param_2): ADPT
    '''
    li = low_income
    mi = medium_income
    hi = high_income
    # table 5 -
    if analysis_type == 1:
        if population_stratification == 1:
            rates = {( li, 1 ):3.6, ( li, 2 ):6.5, ( li, 3 ):9.1, ( li, 4 ):11.5, ( li, 5 ): 13.8,
                     ( mi, 1 ):3.9, ( mi, 2 ):7.3, ( mi, 3 ):10.0, ( mi, 4 ):13.1, ( mi, 5 ): 15.9,
                     ( hi, 1 ):4.5, ( mi, 2 ):9.2, ( mi, 3 ):12.2, ( mi, 4 ):14.8, ( mi, 5 ): 18.2}
            return rates
        if population_stratification == 2:
            rates = {
                ( li, 1 ):3.1, ( li, 2 ):6.3, ( li, 3 ):9.4, ( li, 4 ):12.5, ( li, 5 ): 14.7,
                ( mi, 1 ):4.8, ( mi, 2 ):7.2, ( mi, 3 ):10.1, ( mi, 4 ):13.3, ( mi, 5 ): 15.5,
                ( hi, 1 ):4.9, ( mi, 2 ):7.7, ( mi, 3 ):12.5, ( mi, 4 ):13.8, ( mi, 5 ): 16.7
            }
            return rates
        if population_stratification == 3: #TODO: input actual rate
            rates = {
                ( li, 1 ):3.6, ( li, 2 ):6.5, ( li, 3 ):9.1, ( li, 4 ):11.5, ( li, 5 ): 13.8,
                ( mi, 1 ):3.9, ( mi, 2 ):7.3, ( mi, 3 ):10.0, ( mi, 4 ):13.1, ( mi, 5 ): 15.9,
                ( hi, 1 ):4.5, ( mi, 2 ):9.2, ( mi, 3 ):12.2, ( mi, 4 ):14.8, ( mi, 5 ): 18.2
            }
            return rates
        if population_stratification == 4: #TODO: input actual rate
            rates = {
                ( li, 1 ):3.1, ( li, 2 ):6.3, ( li, 3 ):9.4, ( li, 4 ):12.5, ( li, 5 ): 14.7,
                ( mi, 1 ):4.8, ( mi, 2 ):7.2, ( mi, 3 ):10.1, ( mi, 4 ):13.3, ( mi, 5 ): 15.5,
                ( hi, 1 ):4.9, ( mi, 2 ):7.7, ( mi, 3 ):12.5, ( mi, 4 ):13.8, ( mi, 5 ): 16.7
            }
            return rates
    #table 6
    elif analysis_type == 2:
        if population_stratification == 1: #TODO: Change rates
            rates = {
                ( 0, 1 ):3.6, ( 0, 2 ):6.5, ( 0, 3 ):9.1, ( 0, 4 ):11.5, ( 0, 5 ): 13.8,
                ( 1, 1 ):3.9, ( 1, 2 ):7.3, ( 1, 3 ):10.0, ( 1, 4 ):13.1, ( 1, 5 ): 15.9,
                ( 2, 1 ):4.5, ( 2, 2 ):9.2, ( 2, 3 ):12.2, ( 2, 4 ):14.8, ( 2, 5 ): 18.2,
                ( 3, 1 ):4.5, ( 3, 2 ):9.2, ( 3, 3 ):12.2, ( 3, 4 ):14.8, ( 3, 5 ): 18.2
            }
            return rates
        if population_stratification == 2: #TODO: Change rates
            rates = {
                ( 0, 1 ):3.6, ( 0, 2 ):6.5, ( 0, 3 ):9.1, ( 0, 4 ):11.5, ( 0, 5 ): 13.8,
                ( 1, 1 ):3.9, ( 1, 2 ):7.3, ( 1, 3 ):10.0, ( 1, 4 ):13.1, ( 1, 5 ): 15.9,
                ( 2, 1 ):4.5, ( 2, 2 ):9.2, ( 2, 3 ):12.2, ( 2, 4 ):14.8, ( 2, 5 ): 18.2,
                ( 3, 1 ):4.5, ( 3, 2 ):9.2, ( 3, 3 ):12.2, ( 3, 4 ):14.8, ( 3, 5 ): 18.2
            }
            return rates
        if population_stratification == 3: #TODO: Change rates
            rates = {
                ( 0, 1 ):3.6, ( 0, 2 ):6.5, ( 0, 3 ):9.1, ( 0, 4 ):11.5, ( 0, 5 ): 13.8,
                ( 1, 1 ):3.9, ( 1, 2 ):7.3, ( 1, 3 ):10.0, ( 1, 4 ):13.1, ( 1, 5 ): 15.9,
                ( 2, 1 ):4.5, ( 2, 2 ):9.2, ( 2, 3 ):12.2, ( 2, 4 ):14.8, ( 2, 5 ): 18.2,
                ( 3, 1 ):4.5, ( 3, 2 ):9.2, ( 3, 3 ):12.2, ( 3, 4 ):14.8, ( 3, 5 ): 18.2
            }
            return rates
        if population_stratification == 4: #TODO: Change rates
            rates = {
                ( 0, 1 ):3.6, ( 0, 2 ):6.5, ( 0, 3 ):9.1, ( 0, 4 ):11.5, ( 0, 5 ): 13.8,
                ( 1, 1 ):3.9, ( 1, 2 ):7.3, ( 1, 3 ):10.0, ( 1, 4 ):13.1, ( 1, 5 ): 15.9,
                ( 2, 1 ):4.5, ( 2, 2 ):9.2, ( 2, 3 ):12.2, ( 2, 4 ):14.8, ( 2, 5 ): 18.2,
                ( 3, 1 ):4.5, ( 3, 2 ):9.2, ( 3, 3 ):12.2, ( 3, 4 ):14.8, ( 3, 5 ): 18.2
            }
            return rates
    # table 7
    elif analysis_type == 3:
        if population_stratification == 1: #TODO: input actual rate
            rates = {
                ( li, 0 ):3.6, ( li, 1 ):6.5, ( li, 2 ):9.1, ( li, 3 ):11.5,
                ( mi, 0 ):3.9, ( mi, 1 ):7.3, ( mi, 2 ):10.0, ( mi, 3 ):13.1,
                ( hi, 0 ):4.5, ( mi, 1 ):9.2, ( mi, 2 ):12.2, ( mi, 3 ):14.8
            }
            return rates
        if population_stratification == 2: #TODO: input actual rate
            rates = {
                ( li, 0 ):3.6, ( li, 1 ):6.5, ( li, 2 ):9.1, ( li, 3 ):11.5,
                ( mi, 0 ):3.9, ( mi, 1 ):7.3, ( mi, 2 ):10.0, ( mi, 3 ):13.1,
                ( hi, 0 ):4.5, ( mi, 1 ):9.2, ( mi, 2 ):12.2, ( mi, 3 ):14.8
            }
            return rates
        if population_stratification == 3: #TODO: input actual rate
            rates = {
                ( li, 0 ):3.6, ( li, 1 ):6.5, ( li, 2 ):9.1, ( li, 3 ):11.5,
                ( mi, 0 ):3.9, ( mi, 1 ):7.3, ( mi, 2 ):10.0, ( mi, 3 ):13.1,
                ( hi, 0 ):4.5, ( mi, 1 ):9.2, ( mi, 2 ):12.2, ( mi, 3 ):14.8
            }
            return rates
        if population_stratification == 4: #TODO: input actual rate
            rates = {
                ( li, 0 ):3.6, ( li, 1 ):6.5, ( li, 2 ):9.1, ( li, 3 ):11.5,
                ( mi, 0 ):3.9, ( mi, 1 ):7.3, ( mi, 2 ):10.0, ( mi, 3 ):13.1,
                ( hi, 0 ):4.5, ( mi, 1 ):9.2, ( mi, 2 ):12.2, ( mi, 3 ):14.8
            }
            return rates
def interpolate( population_stratification, analysis_type, low_income, medium_income, high_income, x, y ):
    #get rates dict
    rates = trip_rates( population_stratification, analysis_type, low_income, medium_income, high_income )
    # dealing with x parameters
    #when using income levels, x_1 and x_2 are li, mi, or hi
    if analysis_type == 1 or analysis_type == 2 or analsis_type == 4:
        if x < high_income and x >= medium_income:
            x_1 = medium_income
            x_2 = high_income
        elif x < medium_income:
            x_1 = low_income
            x_2 = medium_income
        else:
            x_1 = high_income
            x_2 = high_income
    if analysis_type == 3:
        if x >= 3:
            x_1 = 3
            x_2 = 3
        else:
            x_1 = int( math.floor( x ) )
            x_2 = int( math.ceil( x ) )
    # dealing with y parameters
    #when using persons per household, max number y = 5
    if analysis_type == 1 or analysis_type == 4:
        if y >= 5:
            y_1 = 5
            y_2 = 5
        else:
            y_1 = int( math.floor( y ) )
            y_2 = int( math.ceil( y ) )
    elif analysis_type == 2 or analysis_type == 3:
        if y >= 5:
            y_1 = 5
            y_2 = 5
        else:
            y_1 = int( math.floor( y ) )
            y_2 = int( math.ceil( y ) )
    # denominator
    z = ( ( x_2 - x_1 ) * ( y_2 - y_1 ) )
    result = ( ( ( rates[( x_1, y_1 )] ) * ( ( x_2 - x ) * ( y_2 - y ) ) / ( z ) ) +
               ( ( rates[( x_2, y_1 )] ) * ( ( x - x_1 ) * ( y_2 - y ) ) / ( z ) ) +
               ( ( rates[( x_1, y_2 )] ) * ( ( x_2 - x ) * ( y - y_1 ) ) / ( z ) ) +
               ( ( rates[( x_2, y_2 )] ) * ( ( x - x_1 ) * ( y - y_1 ) ) / ( z ) ) )
    return result
#test
low_income = 20000 #this is calculated using exchange rates
medium_income = 40000 # this is calculated using exchange rates
high_income = 60000 # this is calculated using exchange rates
population_stratification = 1 #inputed by user
analysis_type = 1 #inputed by user
x = 35234.34 #test income
y = 3.5 # test pph
print interpolate( population_stratification, analysis_type, low_income, medium_income, high_income, x, y )
A:
Well, where to start? Here is just a first observation:
You have a lot of data there, and it seems code and data are mixed into each other.
Data and code should be separate. Data is an external source, something you modify or read in. You could probably adapt your code to quickly parse data from a good editable representation into a representation useful for your algorithms. I suspect your code will be shorter, clearer, and less error prone (did you notice that all of the 'rates' dictionaries have duplicate keys, and are missing a lot of 'hi' keys?).
If you need better abstractions such as matrices and arrays of data, look into numpy.
Edit 1
Did you count your number of dimensions? You have a many-dimensional matrix here with four dimensions:
analysis_type, population_stratification, income_level, index
If I see it right, this is a 3x4x3x3 (= 108 entries) "matrix" or "lookup table". If this is the data your model builds on, fine. But can't you put those numbers in a file, or a table that you read in? Your code would be next to trivial.
Edit 2
Ok, I'll bite for some minor Python style: testing for values in a set or a range.
Instead of:
if analysis_type == 1 or analysis_type == 2 or analsis_type == 4:
you can use
if analysis_type in (1, 2, 4):
or even use readable names (CUBIC, ...) as suggested in the other answer.
Instead of:
if x < high_income and x >= medium_income:
you can use chained comparisons; Python is one of the few programming languages where comparisons chain to make natural if statements:
if medium_income <= x < high_income:
Edit 3
More important than small code fixes is, of course, code design and refactoring. Edit 2 can only give you some polish.
You should learn to loathe duplicate code.
Also, you have quite a lot of branches in one function. That is a good sign you should break it up into multiple functions. It can also reduce duplication. For example, when one variable like analysis_type can totally change what the function does, why have two different behaviors in one function? You shouldn't have the whole program in one function. Perhaps analysis_type == 3 is better expressed in its own function (as an example)?
Do you understand that your function trip_rates basically does an array lookup, where the lookup is hardcoded as a chain of if ...: return ... statements and the array is written out in full inside the function? What if trip_rates could be implemented like this? Would it be possible?
data_model = compute_table(low_income, ...)
return data_model[analysis_type][population_stratification]
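To make that concrete, here is an editor's sketch (with placeholder rate values, like the question's, and made-up helper names) of how trip_rates could become a plain dictionary lookup once the tables are defined as data:
def build_tables(li, mi, hi):
    # Each table is keyed by (param_1, param_2); the rate values are placeholders.
    table_5_pop_1 = {(li, 1): 3.6, (li, 2): 6.5,  # ...remaining entries elided
                     (hi, 1): 4.5}
    return {
        (1, 1): table_5_pop_1,
        # (1, 2): table_5_pop_2, (2, 1): table_6_pop_1, etc.
    }

def trip_rates(population_stratification, analysis_type, li, mi, hi):
    tables = build_tables(li, mi, hi)
    return tables[(analysis_type, population_stratification)]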
A:
Along with kaizer's suggestion about separating data and code, here are some simple cleanups:
The code
if y >= 5:
    y_1 = 5
    y_2 = 5
else:
    y_1 = int( math.floor( y ) )
    y_2 = int( math.ceil( y ) )
can be written as
min(5, int(math.floor(y)))
or
int(math.floor(min(5, y)))
or even made a function:
def limitedInt(v, maxV):
    return min(maxV, int(math.floor(v)))
Also, I would recommend that instead of saying analysis_type == 1 you say something like
analysis_type == CUBIC (i.e., a name that describes the analysis type) and set the name to 1. This will not simplify so much as make the code easier to read.
You might find the book Refactoring by Martin Fowler or the Refactoring Workbook by William Wake a good way to learn about cleaning up code (the website is also available, but without knowing about the "code smells" described in the books, it is not as helpful).
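Putting the two suggestions together, a brief editor's sketch (CUBIC is just the placeholder name used above; limited_int is the helper from this answer, fixed to use its arguments):
import math

CUBIC = 1  # placeholder constant; pick names that fit the domain

def limited_int(v, max_v):
    # clamp v to max_v, then truncate
    return min(max_v, int(math.floor(v)))

analysis_type, y = CUBIC, 3.5  # sample inputs
if analysis_type == CUBIC:
    y_1 = limited_int(y, 5)
    y_2 = min(5, int(math.ceil(y)))
assert (y_1, y_2) == (3, 4)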
Q:
How to create objects dynamically in an elegant way in python?
I have two classes that I would like to merge into a composite. These two classes will continue to be used standalone, and I don't want to modify them.
For some reasons, I want to let my composite class create the objects. I am thinking about something like the code below (it is just an example), but I think it is complex and I don't like it very much. I guess it could be improved by some techniques and tricks that I'm not aware of.
Please note that the composite is designed to manage a lot of different classes with different constructor signatures.
What would you recommend in order to improve this code?
class Parent:
    def __init__(self, x):
        self.x = x

class A(Parent):
    def __init__(self, x, a="a", b="b", c="c"):
        Parent.__init__(self, x)
        self.a, self.b, self.c = a, b, c
    def do(self):
        print self.x, self.a, self.b, self.c

class D(Parent):
    def __init__(self, x, d):
        Parent.__init__(self, x)
        self.d = d
    def do(self):
        print self.x, self.d

class Composite(Parent):
    def __init__(self, x, list_of_classes, list_of_args):
        Parent.__init__(self, x)
        self._objs = []
        for i in xrange(len(list_of_classes)):
            self._objs.append(self._make_object(list_of_classes[i], list_of_args[i]))
    def _make_object(self, the_class, the_args):
        if the_class is A:
            a = the_args[0] if len(the_args)>0 else "a"
            b = the_args[1] if len(the_args)>1 else "b"
            c = the_args[2] if len(the_args)>2 else "c"
            return the_class(self.x, a, b, c)
        if the_class is D:
            return the_class(self.x, the_args[0])
    def do(self):
        for o in self._objs: o.do()

compo = Composite("x", [A, D, A], [(), ("hello",), ("A", "B", "C")])
compo.do()
A:
You could shorten it by removing the type-checking _make_object and letting the class constructors take care of the default arguments, e.g.
class Composite(Parent):
    def __init__(self, x, list_of_classes, list_of_args):
        Parent.__init__(self, x)
        self._objs = [
            the_class(self.x, *the_args)
            for the_class, the_args
            in zip(list_of_classes, list_of_args)
            if isinstance(the_class, Parent.__class__)
        ]

    def do(self):
        for o in self._objs: o.do()
This would also allow you to use it with new classes without modifying its code.
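For reference (an editor's addition), the shortened Composite accepts the same call as the original example:
compo = Composite("x", [A, D, A], [(), ("hello",), ("A", "B", "C")])
compo.do()
# x a b c
# x hello
# x A B C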
Q:
Java equivalent of a Python functionality -> set(string)
I want to mimic a Python functionality in Java.
In Python, if I want the unique characters in a string, I can just do:
text = "i am a string"
print set(text) # o/p is set(['a', ' ', 'g', 'i', 'm', 'n', 's', 'r', 't'])
How can I do this in Java trivially or directly?
A:
String str = "i am a string";
System.out.println(new HashSet<String>(Arrays.asList(str.split(""))));
EDIT: For those who object that they aren't exactly equivalent because str.split will include an empty string in the set, we can do it even more verbosely:
String str = "i am a string";
Set<String> set = new HashSet<String>(Arrays.asList(str.split("")));
set.remove("");
System.out.println(set);
But of course it depends on what you need to accomplish.
Q:
python file-like buffer object
I've written a buffer class that provides a file-like interface with read, write, seek, tell, and flush methods to a simple string in memory. Of course it is incomplete (e.g. I didn't write readline). Its purpose is to be filled by a background thread from some external data source, but let a user treat it like a file. I'd expect it to contain a relatively small amount of data (maybe 50K max).
Is there a better way to do this instead of writing it from scratch?
A:
You can use the standard Python modules StringIO or cStringIO to obtain an in-memory buffer which implements the file interface.
cStringIO is implemented in C, and will be faster, so you should use that version if possible.
If you're using Python 3, you should use io.StringIO instead of StringIO and io.BytesIO instead of cStringIO.
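A minimal usage sketch (an editor's addition, in the Python 2 spelling):
from StringIO import StringIO  # on Python 3: from io import StringIO

buf = StringIO()
buf.write("hello world")
buf.seek(0)
print(buf.read())      # => hello world
print(buf.getvalue())  # => hello world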
A:
I think you might be looking for StringIO.
Q:
How do I unit test Django Views?
I want to begin integrating unit tests into my Django projects and I've discovered unit testing a view to be tricky because of the way Django implements views with functions.
For example, each function is a view/page in Django if the function has a URL.
How do I unit test Django views?
A:
I'm not sure how testing a view is tricky.
You just use the test client.
Code coverage is easy. You reason about how a URL request maps to a code path and make the appropriate URL requests.
You can, if you want, call the view functions "manually" by creating a Request object and examining the Response object, but this is too much work.
If you have doubts about your code coverage, that's a good thing. It means you have code you can't easily map to a URL (which is all a user can ever see of a web application.) If you have code that doesn't map to a URL, you should probably either (a) delete the code or (b) refactor it into a separate module.
We have lots of modules outside our view functions. Our view functions import these modules. We test these "outside the view function" modules with ordinary unittest.
Here's a typical structure.
some_big_product/
|-- __init__.py
|-- settings.py
|-- urls.py
|-- logging.ini
|-- other_global_files.py
|-- an_app_1/
| |-- __init__.py
| |-- urls.py
| |-- models.py
| |-- views.py
| |-- tests.py <-- the generic Django testing
| |-- app_specific_module.py
| |-- app_specific_package/
| | |-- __init__.py
| |-- test_app_specific_module.py <-- unittest
| |-- test_app_specific_package.py
|-- generic_module.py
|-- generic_package/
| |-- __init__.py
|-- tests/
| |-- test_this.py
| |-- test_that.py
| |-- test_all.py <-- not always practical
|-- scripts/
    |-- run_tests.sh
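For illustration, a minimal test-client sketch (an editor's addition; the URL "/" and the expected status are placeholders, not from the original answer):
from django.test import TestCase

class HomepageTest(TestCase):
    def test_homepage_renders(self):
        # "/" is a placeholder; use one of your project's mapped URLs
        response = self.client.get("/")
        self.assertEqual(response.status_code, 200)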
A:
django.test.client should have everything you need for basic unit testing of the view. I also really like twill and selenium for testing the full stack.
A:
You could try tddspry - a collection of helpers to test Django with nosetests and twill. Nose also has a coverage plugin which generates pretty reports of the coverage.
Q:
Default encoding of exception messages
The following code examines the behaviour of the float() method when fed a non-ascii symbol:
import sys
try:
    float(u'\xbd')
except ValueError as e:
    print sys.getdefaultencoding() # in my system, this is 'ascii'
    print e[0].decode('latin-1') # u'invalid literal for float(): ' followed by the 1/2 (one half) character
    print unicode(e[0]) # raises "UnicodeDecodeError: 'ascii' codec can't decode byte 0xbd in position 29: ordinal not in range(128)"
My question: why is the error message e[0] encoded in Latin-1? The default encoding is Ascii, and this seems to be what unicode() expects.
Platform is Ubuntu 9.04, Python 2.6.2
A:
e[0] isn't encoded with latin-1; it just so happens that the byte \xbd, when decoded as latin-1, is the character U+00BD.
The conversion occurs in Objects/floatobject.c.
First, the unicode string must be converted to a byte string. This is performed using PyUnicode_EncodeDecimal():
if (PyUnicode_EncodeDecimal(PyUnicode_AS_UNICODE(v),
                            PyUnicode_GET_SIZE(v),
                            s_buffer,
                            NULL))
    return NULL;
which is implemented in unicodeobject.c. It doesn't perform any sort of character set conversion, it just writes bytes with values equal to the unicode ordinals of the string. In this case, U+00BD -> 0xBD.
The statement formatting the error is:
PyOS_snprintf(buffer, sizeof(buffer),
              "invalid literal for float(): %.200s", s);
where s contains the byte string created earlier. PyOS_snprintf() writes a byte string, and s is a byte string, so it just includes it directly.
A:
Very good question!
I took the liberty to dig into Python's source code, which is a mere command away on properly set up linux distributions (apt-get source python2.5)
Damn, John Millikin beat me to it. That's right, PyUnicode_EncodeDecimal is the answer; it does this here:
/* (Loop ch in the unicode string) */
if (Py_UNICODE_ISSPACE(ch)) {
    *output++ = ' ';
    ++p;
    continue;
}
decimal = Py_UNICODE_TODECIMAL(ch);
if (decimal >= 0) {
    *output++ = '0' + decimal;
    ++p;
    continue;
}
if (0 < ch && ch < 256) {
    *output++ = (char)ch;
    ++p;
    continue;
}
/* All other characters are considered unencodable */
collstart = p;
collend = p+1;
while (collend < end) {
    if ((0 < *collend && *collend < 256) ||
        !Py_UNICODE_ISSPACE(*collend) ||
        Py_UNICODE_TODECIMAL(*collend))
        break;
}
See, it leaves all unicode code points < 256 in place, which are the latin-1 characters, based on Unicode's backward compatibility.
Addendum
With this in place, you can verify by trying other non-latin-1 characters, it will throw a different exception:
>>> float(u"ħ")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'decimal' codec can't encode character u'\u0127' in position 0: invalid decimal Unicode string
A:
The ASCII encoding only includes the bytes with values <= 127. The range of characters represented by these bytes is identical in most encodings; in other words, "A" is chr(65) in ASCII, in latin-1, in UTF-8, and so on.
The one half symbol, however, is not part of the ASCII character set, so when Python tries to encode this symbol into ASCII, it can do nothing but fail.
Update: Here's what happens (I assume we're talking CPython):
float(u'\xbd') leads to PyFloat_FromString in floatobject.c being called. This function, given a unicode object, in turn calls PyUnicode_EncodeDecimal in unicodeobject.c. From skimming over the code, I gather that this function turns the unicode object into a byte string by replacing every character whose unicode code point is < 256 with the byte of that value; i.e. the one half character, having code point 189, is turned into chr(189).
Then, PyFloat_FromString does its work as usual. At this moment, it's working with a regular byte string which happens to contain a byte outside the ASCII range. It doesn't care about this; it just finds a byte that's not a digit, a period, or the like, so it raises the value error.
The argument to this exception is a string
"invalid literal for float(): " + evil_string
That's fine; an exception message is, after all, a string. It's only when you try to decode this string, using the default encoding ASCII, that this turns into a problem.
A:
From experimenting with your code snippet, it would seem I have the same behavior on my platform (Py2.6 on OS X 10.5).
Since you established that e[0] is encoded with latin-1, the correct way to convert it to unicode is to do .decode('latin-1'), and not unicode(e[0]).
Update: So it sounds like e[0] does not have a valid encoding. Definitely not latin-1. Because of that, as mentioned elsewhere in the comments, you'll have to call repr(e[0]) if you need to display this error message without causing a cascading exception.
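A small illustration of the repr() workaround (an editor's sketch, Python 2):
try:
    float(u'\xbd')
except ValueError as e:
    # repr() gives an ASCII-safe display of the byte string,
    # e.g. 'invalid literal for float(): \xbd'
    print repr(e.args[0])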
Q:
Managing Python Path When Moving Code from Development Computer to Target
I have a python project with this directory structure and these files:
/home/project_root
|---__init__.py
|---setup
|   |---__init__.py
|   |---configs.py
|---test_code
|   |---__init__.py
|   |---tester.py
The tester script imports from setup/configs.py with the reference "setup.configs". It runs fine on my development machine.
This works on the development (Linux) computer. When I move this to another (Linux) computer, I set the PYTHONPATH with
PYTHONPATH = "/home/project_root"
But when I run tester.py, it can't find the configs module. And when I run the interactive Python interpreter, sys.path doesn't include the /home/project_root directory. But /home/project_root does appear when I echo $PYTHONPATH.
What am I doing wrong here?
(I don't want to rely on the .bashrc file to set the PYTHONPATH for the target machine -- the code is for a Django application, and will eventually be run by www-data. And, I know that the apache configuration for Django includes a specification of the PYTHONPATH, but I don't want to use that here as I'm first trying to make sure the code passes its unit tests in the target machine environment.)
CURIOUSER AND CURIOUSER
This seems to be a userid and permissions problem.
- When launched by a command from an ordinary user, the interpreter can import modules as expected.
- When launched by sudo (I'm running Ubuntu here), the interpreter cannot import modules as expected.
- I've been calling the test script with sudo, as the files are owned by www-data (b/c they'll be called by the user running apache as part of the Django application).
- After changing the files' ownership to that of an ordinary user, the test script does run without import errors (albeit, into all sorts of userid related walls).
Sorry to waste your time. This question should be closed.
A:
Stick this in the tester script right before the import setup.configs
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), os.path.pardir))
sys.path is a list of all the directories the Python interpreter looks in when importing a Python module.
This will add the parent directory, which contains the setup module, to the beginning of that list, which means that the local directory will be checked first. That is important if you have your module installed system-wide. More info on that here: sys doc.
EDIT: You could also put a .pth file in /usr/local/lib/python2.X/site-packages/. A .pth file is simply a text file with a directory path on each line that the Python interpreter will search in. So just add a file with this line in it:
/home/project_root
A:
Try explicitly setting your python path in your scripts. If you don't want to have to change it, you could always add something like "../" to the path in tester. That is to say:
sys.path.append("../")
A:
(I don't want to rely on the .bashrc file to set the PYTHONPATH for the target machine -- the code is for a Django application, and will eventually be run by www-data. And, I know that the apache configuration for Django includes a specification of the PYTHONPATH, but I don't want to use that here as I'm first trying to make sure the code passes its unit tests in the target machine environment.)
If the code is for a Django application, is there a reason you're not testing it in the context of a Django project? Testing it in the context of a Django project gives a couple of benefits:
Django's manage.py will set up your Python environment for you. It'll add the appropriate project paths to sys.path, and it'll set the environment variable DJANGO_SETTINGS_MODULE correctly.
Django's libraries include ample unit testing facilities, and you can easily extend that functionality to include your own testing facilities. Executing tests in a Django project is as easy as executing a single command via manage.py.
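For example (an editor's note, not part of the original answer; an_app_1 is the app name from the directory listing in the first answer):
python manage.py test an_app_1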
Q:
Split a list of dates by another list of dates
I have a number of nodes in a network. The nodes send status information every hour to indicate that they are alive. So I have a list of nodes and the time when they were last alive. I want to graph the number of alive nodes over time.
The list of nodes is sorted by the time they were last alive, but I can't figure out a nice way to count how many are alive at each date.
from datetime import datetime, timedelta
seen = [ n.last_seen for n in c.nodes ] # a list of datetimes
seen.sort()
start = seen[0]
end = seen[-1]
diff = end - start
num_points = 100
step = diff / num_points
num = len( c.nodes )
dates = [ start + i * step for i in range( num_points ) ]
What i want is basically
alive = [ len([ s for s in seen if s > date]) for date in dates ]
but that's not really efficient. The solution should use the fact that the seen list is sorted and not loop over the whole list for every date.
A:
this generator traverses the list only once:
def get_alive(seen, dates):
    c = len(seen)
    for date in dates:
        for s in seen[len(seen) - c:]:  # skip the items already counted out
            if s >= date: # replaced your > for >= as it seems to make more sense
                yield c
                break
            else:
                c -= 1
        else:
            yield c  # every remaining item was < date; c is 0 here
A:
The python bisect module will find the correct index for you, and you can deduce the number of items before and after.
If I'm understanding right, that would be O(len(dates) * log(len(seen))).
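For instance, something along these lines (my own sketch, assuming as in the question that both lists are sorted ascending):
import bisect

def alive_counts(seen, dates):
    # bisect_right returns the index of the first element greater than date,
    # so everything from that index onward satisfies s > date
    return [len(seen) - bisect.bisect_right(seen, date) for date in dates]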
Edit 1
It should be possible to do in one pass, just like SilentGhost demonstrates. However, itertools.groupby works fine with sorted data, so it should be able to do something here, perhaps like this (this is more than O(n) but could be improved):
import itertools
# numbers are easier to make up now
seen = [-1, 10, 12, 15, 20, 75]
dates = [5, 15, 25, 50, 100]
def finddate(s, dates):
"""Find the first date in @dates larger than s"""
for date in dates:
if s < date:
break
return date
for date, group in itertools.groupby(seen, key=lambda s: finddate(s, dates)):
print date, list(group)
A:
I took SilentGhost's generator solution a bit further using an explicit iterator over the items. This is the linear-time solution I was thinking of.
def splitter(items, breaks):
    """ assuming `items` and `breaks` are sorted """
    c = len(items)
    items = iter(items)
    item = next(items, None)
    for breaker in breaks:
        # drop the items that fall before this break point
        while item is not None and item < breaker:
            c -= 1
            item = next(items, None)
        yield c  # the number of items that are >= breaker
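A quick sanity check with made-up numbers (my own example, not part of the original answer):
seen = [1, 4, 6, 9, 12]
dates = [0, 5, 10, 15]
print list(splitter(seen, dates))  # [5, 3, 1, 0]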
|
Split a list of dates by another list of dates
|
I have a number of nodes in a network. The nodes send status information every hour to indicate that they are alive. So I have a list of nodes and the time when they were last alive. I want to graph the number of alive nodes over time.
The list of nodes is sorted by the time they were last alive, but I can't figure out a nice way to count how many are alive at each date.
from datetime import datetime, timedelta
seen = [ n.last_seen for n in c.nodes ] # a list of datetimes
seen.sort()
start = seen[0]
end = seen[-1]
diff = end - start
num_points = 100
step = diff / num_points
num = len( c.nodes )
dates = [ start + i * step for i in range( num_points ) ]
What i want is basically
alive = [ len([ s for s in seen if s > date]) for date in dates ]
but that's not really efficient. The solution should use the fact that the seen list is sorted and not loop over the whole list for every date.
|
[
"this generator traverses the list only once:\ndef get_alive(seen, dates):\n c = len(seen)\n for date in dates:\n for s in seen[-c:]:\n if s >= date: # replaced your > for >= as it seems to make more sense\n yield c\n break\n else:\n c -= 1\n\n",
"The python bisect module will find the correct index for you, and you can deduct the number of items before and after.\nIf I'm understanding right, that would be O(dates) * O(log(seen))\n\nEdit 1\nIt should be possible to do in one pass, just like SilentGhost demonstrates. However,itertools.groupby works fine with sorted data, it should be able to do something here, perhaps like this (this is more than O(n) but could be improved):\nimport itertools\n\n# numbers are easier to make up now\nseen = [-1, 10, 12, 15, 20, 75]\ndates = [5, 15, 25, 50, 100]\n\ndef finddate(s, dates):\n \"\"\"Find the first date in @dates larger than s\"\"\"\n for date in dates:\n if s < date:\n break\n return date\n\n\nfor date, group in itertools.groupby(seen, key=lambda s: finddate(s, dates)):\n print date, list(group)\n\n",
"I took SilentGhosts generator solution a bit further using explicit iterators. This is the linear time solution i was thinking of. \ndef splitter( items, breaks ):\n \"\"\" assuming `items` and `breaks` are sorted \"\"\"\n c = len( items )\n\n items = iter(items)\n item = items.next()\n breaks = iter(breaks)\n breaker = breaks.next()\n\n while True:\n if breaker > item:\n for it in items:\n c -= 1\n if it >= breaker:\n item = it\n yield c\n break\n else:# no item left that is > the current breaker\n yield 0 # 0 items left for the current breaker\n # and 0 items left for all other breaks, since they are > the current\n for _ in breaks:\n yield 0 \n break # and done\n else:\n yield c\n for br in breaks:\n if br > item:\n breaker = br\n break\n yield c\n else:\n # there is no break > any item in the list\n break\n\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"performance",
"python"
] |
stackoverflow_0001368802_performance_python.txt
|
Q:
referencing class methods in class lists in Python
I am writing a class in Python 2.6.2 that contains a lookup table. Most cases are simple enough that the table contains data. Some of the cases are more complex and I want to be able to call a function. However, I'm running into some trouble referencing the function.
Here's some sample code:
class a:
lut = [1,
3,
17,
[12,34],
5]
Where lut is static, and is expected to be constant as well.
and now I want to do the following:
class a:
def spam0(self):
return (some_calculation_based_on_self)
def spam1(self):
return (a_different_calculation_based_on_self)
lut = [1,
3,
17,
[12,34],
5,
self.spam0,
self.spam1]
This doesn't compile because self.spam0 and self.spam1 are undefined. I tried using a.spam but that is also undefined. How can I set lut[5] to return a reference to self.spam?
Edit: This is what I plan to do:
(continuing the definition of class a):
import inspect
# continue to somewhere in the definition of class a
def __init__(self, param):
self.param = param
def eggs(self):
tmp = self.lut[self.param]
if inspect.isfunction(tmp): # if tmp is a function such as spam0
    return tmp() # call it here
return tmp
So I want to either return a simple value or run some extra code, depending on the parameter.
Edit: lut doesn't have to be a class property, but the methods spam0 and spam1 do need to access the class members, so they have to belong to the class.
I'm not sure that this is the best way to do this. I'm still in the process of working this out.
A:
In the class body, you're creating the class; there is no self, so you obviously cannot yet refer to self.anything. But also within that body there is as yet no a: the name a gets bound AFTER the class body is done. So, although that's a tad less obvious, in the body of class a you cannot refer to a.anything either, yet.
What you CAN refer to are bare names of class attributes that have already been bound: for example, you could simply use 5, spam] at the end of your list lut. spam will be there as a function, as you say at one point that you want; not as a method, and most definitely NOT as a class method (I don't see classmethod ANYWHERE in your code, why do you think a class method would magically spring into existence unless you explicitly wrap a function in the classmethod builtin, directly or by decorator?) -- I suspect your use of "class method" does not actually refer to the class-method type (though it's certainly confusing;-).
So, if you later need to call that function on some instance x of a, you'll be calling e.g. a.lut[-1](x), with the argument explicitly there.
If you need to do something subtler it may be possible to get a bound or unbound method of some sort at various points during processing (after the class creation is done, or, if you want a bound instance method, only after a specific instance is instantiated). But you don't explain clearly and completely enough what exactly it is that you want to do, for us to offer very detailed help on these later-stage alternatives.
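To make the distinction concrete, here is a minimal sketch (the class and method names mirror the question; the rest is my own illustration):
class a:
    def spam(self):
        return "called on %r" % self
    lut = [1, 3, 17, [12, 34], 5, spam]  # spam is stored as a plain function

x = a()
print a.lut[-1](x)  # the function from the list needs the instance passed explicitly
print x.spam()      # attribute access on an instance gives a bound method instead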
A:
While you are in the scope of the class, you can just write
class A:
def spam(self):
pass
lut = [1, 2, 3, spam]
a = A()
print a.lut
gives
[1, 2, 3, <function spam at 0xb7bb764c>]
Don't forget that this is a function in your lookup table, not a number as you probably intended. You probably want to solve another problem.
A:
Remember that there's no such thing as static, or constant, in Python. Just make it easy to read. Here's an example which generates a cached version of lut per object:
class A(object):
def __init__(self):
self.__cached_lut = None
def spam0(self):
return (some_calculation_based_on_self)
def spam1(self):
return (a_different_calculation_based_on_self)
@property
def lut(self):
if self.__cached_lut is None:
self.__cached_lut = [1,
3,
17,
[12,34],
5,
self.spam0(),
self.spam1()]
return self.__cached_lut
a = A()
print a.lut
A:
I'm not sure I understand the question. Is this what you mean?
class a:
lut = [1,
3,
17,
[12,34],
5]
def __init__(self):
self.lut.append(self.spam)
def spam(self, a):
print "this is %s" % a
b = a()
b.lut[-1](4)
this will output: "this is 4".
A:
Trivial:
class a:
@classmethod
def spam(cls):
# not really pass, but you get the idea
pass
lut = [1,
3,
17,
[12,34],
5,
spam]
assert a().lut[-1] == a.spam
assert a.spam() is None
A:
You want the function to be bound to the class. None of the answers seem to address this.
This won't be done automatically; when you do this:
class a:
def spam(self): print self
lut = [1, spam]
lut[1] is spam itself, not a bound method to an object, so you can't simply call lut[1](); you'd have to call lut[1](self).
If you specifically want to be able to include functions in the list that can be called directly, you need to arrange for the functions to be bound, which means referencing them from an instance and not the class. To do this, you'd probably want to initialize this list from __init__:
class a:
def spam(self): print self
def __init__(self):
self.lut = [1, self.spam]
and now self.lut[1]() is correct, since it's a bound method.
This all has the advantage that other functions can be placed in the list for other purposes, possibly bound to other objects or otherwise not expecting a parameter.
It has the disadvantage that you aren't reusing the list between instances; this may or may not matter to you.
|
referencing class methods in class lists in Python
|
I am writing a class in Python 2.6.2 that contains a lookup table. Most cases are simple enough that the table contains data. Some of the cases are more complex and I want to be able to call a function. However, I'm running into some trouble referencing the function.
Here's some sample code:
class a:
lut = [1,
3,
17,
[12,34],
5]
Where lut is static, and is expected to be constant as well.
and now I want to do the following:
class a:
def spam0(self):
return (some_calculation_based_on_self)
def spam1(self):
return (a_different_calculation_based_on_self)
lut = [1,
3,
17,
[12,34],
5,
self.spam0,
self.spam1]
This doesn't compile because self.spam0 and self.spam1 are undefined. I tried using a.spam but that is also undefined. How can I set lut[5] to return a reference to self.spam?
Edit: This is what I plan to do:
(continuing the definition of class a):
import inspect
# continue to somewhere in the definition of class a
def __init__(self, param):
self.param = param
def eggs(self):
tmp = self.lut[self.param]
if inspect.isfunction(tmp): # if tmp is a function such as spam0
    return tmp() # call it here
return tmp
So I want to either return a simple value or run some extra code, depending on the parameter.
Edit: lut doesn't have to be a class property, but the methods spam0 and spam1 do need to access the class members, so they have to belong to the class.
I'm not sure that this is the best way to do this. I'm still in the process of working this out.
|
[
"In the clas body, you're creating the class; there is no self, so you obviously cannot yet refer to self.anything. But also within that body there is as yet no a: name a gets bound AFTER the class body is done. So, although that's a tad less obvious, in the body of class a you cannot refer to a.anything either, yet.\nWhat you CAN refer to are bare names of class attributes that have already been bound: for example, you could simply use 5, spam] at the end of your list lut. spam will be there as a function, as you say at one point that you want; not as a method, and most definitely NOT as a class method (I don't see classmethod ANYWHERE in your code, why do you think a class method would magically spring into existence unless you explicitly wrap a function in the classmethod builtin, directly or by decorator?) -- I suspect your use of \"class method\" does not actually refer to the class-method type (though it's certainly confusing;-).\nSo, if you later need to call that function on some instance x of a, you'll be calling e.g. a.lut[-1](x), with the argument explicitly there.\nIf you need to do something subtler it may be possible to get a bound or unbound method of some sort at various points during processing (after the class creation is done, or, if you want a bound instance method, only after a specific instance is instantiated). But you don't explain clearly and completely enough what exactly it is that you want to do, for us to offer very detailed help on this later-stage alternatives.\n",
"While you are in the scope of the class, you can just write\nclass A:\n def spam(self):\n pass\n\n lut = [1, 2, 3, spam]\n\na = A()\nprint a.lut\n\ngives\n[1, 2, 3, <function spam at 0xb7bb764c>]\n\nDon't forget that this is a function in your lookup table, not a number as you probably intended. You probably want to solve another problem.\n",
"Remember that there's no such thing as static, or constant, in Python. Just make it easy to read. Here's an example which generates a cached version of lut per object:\nclass A(object):\n def __init__(self):\n self.__cached_lut = None\n\n def spam0(self):\n return (some_calculation_based_on_self)\n\n def spam1(self):\n return (a_different_calculation_based_on_self)\n\n @property\n def lut(self):\n if self.__cached_lut is None:\n self.__cached_lut = [1,\n 3,\n 17,\n [12,34],\n 5,\n self.spam0()\n self.spam1()]\n return self.__cached_lut\n\na = A()\nprint a.lut\n\n",
"I'm not sure I understand the question. Is this what you mean?\nclass a:\n lut = [1,\n 3,\n 17,\n [12,34],\n 5]\n\n def __init__(self):\n self.lut.append(self.spam)\n\n def spam(self, a):\n print \"this is %s\" % a\n\nb = a()\nb.lut[-1](4)\n\nthis will output: \"this is 4\".\n",
"Trivial:\nclass a:\n @classmethod\n def spam(cls):\n # not really pass, but you get the idea\n pass\n\n lut = [1,\n 3,\n 17,\n [12,34],\n 5,\n spam]\n\n\nassert a().lut[-1] == a.spam\nassert a.spam() is None\n\n",
"You want the function to be bound to the class. None of the answers seem to address this.\nThis won't be done automatically; when you do this:\nclass a:\n def spam(self): print self\n lut = [1, spam]\n\nlut[1] is spam itself, not a bound method to an object, so you can't simply call lut[1](); you'd have to call lut[1](self).\nIf you specifically want to be able to include functions in the list that can be called directly, you need to arrange for the functions to be bound, which means referencing them from an instance and not the class. To do this, you'd probably want to initialize this list from __init__:\nclass a:\n def spam(self): print self\n def __init__(self):\n self.lut = [1, self.spam]\n\nand now self.lut[1]() is correct, since it's a bound method.\nThis all has the advantage that other functions can be placed in the list for other purposes, possibly bound to other objects or otherwise not expecting a parameter.\nIt has the disadvantage that you aren't reusing the list between instances; this may or may not matter to you.\n"
] |
[
4,
3,
1,
0,
0,
0
] |
[
"You're going to need to move the definition of a.lut outside of the definition of a. \nclass a():\n def spam():pass\n\na.lut = [1,2,3,a.spam]\n\nIf you think about it, this makes perfect sense. Using self wouldn't work because self is actually only defined for class methods for which you use the parameter \"self\". self has no special meaning in Python and is not a reserved word; it is simply the conventional argument passed to bound class methods, but could also be this, or foo, or whatever you want.\nReferring to a.lut doesn't work because the definition of a isn't complete yet. How should Python know, at that point in the code, what a is? In the scope of a's class definition, a itself is still undefined.\n"
] |
[
-1
] |
[
"class",
"python",
"reference"
] |
stackoverflow_0001351669_class_python_reference.txt
|
Q:
How does Django determine if an uploaded image is valid?
I'm trying to add images to my models in my Django app.
models.py
class ImageMain(models.Model):
"""This is the Main Image of the product"""
product = models.ForeignKey(Product)
photo = models.ImageField(upload_to='products')
In development mode, every time I try to upload the image via Django admin, I keep getting:
Upload a valid image. The file you uploaded was either not an image or a corrupted image.
The jpg I'm trying to upload can be viewed with OS X Preview, so it would seem to be valid.
It seems the problem is Python Imaging Library doesn't recognize it as an image. Why would that be happening with a valid image?
PIL is installed, verified via the Django shell.
A:
According to Django's source code, these three lines are responsible for verifying images:
from PIL import Image
trial_image = Image.open(file)
trial_image.verify()
The image type could be unsupported by PIL. Check the list of supported formats here
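As a standalone illustration, a rough equivalent of that check might look like this (my own sketch; "photo.jpg" is a placeholder path):
from PIL import Image

def looks_like_image(path):
    # verify() raises an exception on truncated or corrupt image data
    try:
        im = Image.open(path)
        im.verify()
        return True
    except Exception:
        return False

print looks_like_image("photo.jpg")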
A:
Did you try uploading other image formats like GIF or PNG? It might be that your PIL was not built with the jpeg lib properly. I had a similar issue with Django on Ubuntu. If you have ever seen the error message decoder jpeg not available, check this link. Relevant line from the link:
$ cd libImaging
$ ./configure --with-jpeg=/somelib/lib --with-zlib=/somelib/lib
A:
I looked into the Django source, in django/forms/fields.py, in the ImageField class. Django actually does use PIL to determine when an image is valid.
|
How does Django determine if an uploaded image is valid?
|
I'm trying to add images to my models in my Django app.
models.py
class ImageMain(models.Model):
"""This is the Main Image of the product"""
product = models.ForeignKey(Product)
photo = models.ImageField(upload_to='products')
In development mode, every time I try to upload the image via Django admin, I keep getting:
Upload a valid image. The file you uploaded was either not an image or a corrupted image.
The jpg I'm trying to upload can be viewed with OS X Preview, so it would seem to be valid.
It seems the problem is Python Imaging Library doesn't recognize it as an image. Why would that be happening with a valid image?
PIL is installed, verified via the Django shell.
|
[
"According to Django's source code. Those three lines are responsible for verifying images:\nfrom PIL import Image\ntrial_image = Image.open(file)\ntrial_image.verify()\n\nThe image type could be unsupported by PIL. Check the list of supported formats here\n",
"Did you try uploading image format like gif or png? It might be that your PIL was not built with the jpeg lib properly. I had a similar issue with Django on Ubuntu. If you have ever seen the error message decoder jpeg not available, check this link. Relevant line from the link:\n$ cd libImaging\n$ ./configure --with-jpeg=/somelib/lib --with-zlib=/somelib/lib\n\n",
"I looked into the Django source, in django/forms/fields.py, in the ImageField class. Django actually does use PIL to determine when an image is valid.\n"
] |
[
12,
2,
0
] |
[] |
[] |
[
"django",
"python",
"python_imaging_library"
] |
stackoverflow_0001368724_django_python_python_imaging_library.txt
|
Q:
How would you query Picasa from a Google App Engine app? Data API or Url Fetch?
How would you query Picasa from a Google App Engine app? Data API or Url Fetch? What are the pros and cons of using either method?
[Edit]
I would like to be able to query a specific album in Picasa and list all the photos in it.
Code examples to do this in python are much appreciated.
A:
Your question is a little off, since the Data API is exposed through RESTful URLs, so both methods are ultimately a "URL Fetch".
The Data API works quite well, though. It gives you access to nearly all the functionality of Picasa, and responses are sent back and forth in well-formed, well-documented XML. Google's documentation for the API is very good.
If you only need access to limited publicly available content (like a photostream) then you can do this at a very basic level by just fetching the feed url and parsing that... but even for that you can get the same data with more configuration options via the Data API URLs.
I'm not sure what code samples you'd like... it really depends what you actually want to do with Picasa.
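For what it's worth, a bare-bones sketch of listing an album's photos from App Engine could look like this (the public feed URL pattern is taken from the Picasa Data API docs of the era, and USER_ID and ALBUM_ID are placeholders):
from xml.etree import ElementTree
from google.appengine.api import urlfetch

ATOM = '{http://www.w3.org/2005/Atom}'
url = ('http://picasaweb.google.com/data/feed/api/'
       'user/USER_ID/albumid/ALBUM_ID?kind=photo')

result = urlfetch.fetch(url)
feed = ElementTree.fromstring(result.content)
for entry in feed.findall(ATOM + 'entry'):
    # each Atom entry is one photo; print its title
    print entry.find(ATOM + 'title').text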
|
How would you query Picasa from a Google App Engine app? Data API or Url Fetch?
|
How would you query Picasa from a Google App Engine app? Data API or Url Fetch? What are the pros and cons of using either method?
[Edit]
I would like to be able to query a specific album in Picasa and list all the photos in it.
Code examples to do this in python are much appreciated.
|
[
"Your question is a little off, since the Data API is exposed through RESTful URLs, so both methods are ultimately a \"URL Fetch\".\nThe Data API works quite well, though. It gives you access to nearly all the functionality of Picasa, and responses are sent back and forth in well-formed, well-documented XML. Google's documentation for the API is very good.\nIf you only need access to limited publicly available content (like a photostream) then you can do this at a very basic level by just fetching the feed url and parsing that... but even for that you can get the same data with more configuration options via the Data API URLs.\nI'm not sure what code samples you'd like... it really depends what you actually want to do with Picasa.\n"
] |
[
2
] |
[] |
[] |
[
"api",
"google_app_engine",
"picasa",
"python",
"urlfetch"
] |
stackoverflow_0001369861_api_google_app_engine_picasa_python_urlfetch.txt
|
Q:
Python .pth Files Aren't Working
Directories listed in my .pth configuration file aren't appearing in sys.path.
The contents of configuration file, named some_code_dirs.pth:
/home/project
Paths to the file:
/usr/lib/python2.6/site-packages/some_code_dirs.pth
/usr/lib/python2.6/some_code_dirs.pth
Check on sys variables in the python interpreter:
>>> print sys.prefix
'/usr'
>>> print sys.exec_prefix
'/usr'
All this seems to be as required by the Python documentation, but sys.path doesn't include the /home/project directory.
Note that the interpreter does add the directory after:
>>> site.addsitedir('/usr/lib/python2.6/site-packages')
What am I missing here?
A:
What OS are you using? On my Ubuntu 9.04 system that directory is not in sys.path.
Try putting it into /usr/lib/python2.6/dist-packages. Notice that it is dist instead of site.
A:
I had a similar problem a while ago. Check the encoding of your pth-file. It seems that pth-files are silently ignored if encoded in UTF-8 with BOM.
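A quick way to test for that (my own snippet; adjust the path to your .pth file):
data = open('/usr/lib/python2.6/site-packages/some_code_dirs.pth', 'rb').read(3)
print data == '\xef\xbb\xbf'  # True means the file starts with a UTF-8 BOM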
|
Python .pth Files Aren't Working
|
Directories listed in my .pth configuration file aren't appearing in sys.path.
The contents of configuration file, named some_code_dirs.pth:
/home/project
Paths to the file:
/usr/lib/python2.6/site-packages/some_code_dirs.pth
/usr/lib/python2.6/some_code_dirs.pth
Check on sys variables in the python interpreter:
>>> print sys.prefix
'/usr'
>>> print sys.exec_prefix
'/usr'
All this seems to be as required by the Python documentation, but sys.path doesn't include the /home/project directory.
Note that the interpreter does add the directory after:
>>> site.addsitedir('/usr/lib/python2.6/site-packages')
What am I missing here?
|
[
"What OS are you using? On my Ubuntu 9.04 system that directory is not in sys.path.\nTry putting it into /usr/lib/python2.6/dist-packages. Notice that it is dist instead of site.\n",
"I had a similar problem a while ago. Check the encoding of your pth-file. It seems that pth-files are silently ignored if encoded in UTF-8 with BOM.\n"
] |
[
4,
0
] |
[] |
[] |
[
"python",
"pythonpath"
] |
stackoverflow_0001369947_python_pythonpath.txt
|
Q:
Why is my "exploded" Python code actually running faster?
I'm in an introductory comp-sci class (after doing web programming for years) and became curious about how much speed I was gaining, if any, with my one-liners.
for line in lines:
numbers.append(eval(line.strip().split()[0]))
So I wrote the same thing with painfully explicit assignments and ran them against each other.
for line in lines:
a = line.split()
b = a[0]
c = b.strip()
d = eval(c)
numbers.append(d)
The second one runs a consistent 30ms faster (on my FreeBSD shell account; see Edit #2) with an input file of 100K lines! Of course this is on a total run time of 3 seconds, so the percentage isn't large...but I'm really surprised to see all those explicit named assignments somehow helping.
There's a recent thread on the performance of functions as opposed to inline code, but this seems even more basic. What gives? Should I be writing lovingly fulsomely verbose code and telling my sneering colleagues it's for performance reasons? (Thankfully, a list-comprehension version runs about 10ms faster yet, so my cherished compactness isn't totally out the window.)
EDIT: Thanks for the tip on my sloppy expansion of the code. You're all correct that the second one should really be:
for line in lines:
a = line.strip()
b = a.split()
c = b[0]
d = eval(c)
numbers.append(d)
However, even once I've fixed that, my timings are 2.714s, 2.652s, and 2.624s respectively, for the one-liner, the fully-exploded form, and the list comprehension (not pictured). So my question stands!
EDIT #2: It's interesting that the answer doesn't seem to be obvious even to a bunch of knowledgeable folks, which makes me feel a bit better about the question! I'll probably play with dis a bit on my own now, in this and similar circumstances, and see what turns up. By all means keep tinkering with the thread if you want, but I'm going to declare my received answer to be "huh, that's interesting; must be something deep." Especially since the behavior isn't consistent across machines, as steveha pointed out -- the slight difference goes the other direction on my Debian and Windows installs. Thanks to everyone who's contributed!
A:
Your code isn't exploded in the same order. The compact version goes:
A > B > C > D > E
while your exploded version goes
B > C > A > D > E
The effect is that strip() is being deferred 2 steps down, which may affect performance depending on what the input is.
A:
Frankly speaking, the first version, where everything is in one line, is a pain to read.
The second one is maybe a little too verbose (something in the middle would be appreciated) but it is definitely better.
I would not care too much about micro optimizations because of Python internals, and focus only on readable code.
By the way: the two (initial) versions are not doing the same thing.
In the former, you first strip, then split, while in the latter you first split and then strip (furthermore, only the first element).
Again, I think you overlooked this because the former version is quite difficult to focus on.
Then, analyzing the two (updated) versions with dis (the python disassembler) showed no real difference between the two, only the order in which the function names are looked up. It is possible that this may have an impact on performance.
While we are on this, you could get some performance improvement just by binding eval to a local variable, before the loop. I would expect that after that change, there should be no difference in time between the two versions.
For example:
eval_ = eval
for line in lines:
a = line.strip()
b = a.split()
c = b[0]
d = eval_(c)
numbers.append(d)
We are mostly talking about micro-optimizations, but this aliasing is actually a technique that may be very useful in several circumstances.
A:
The method calls are also not in the same order:
for line in lines:
numbers.append(eval(line.strip().split()[0]))
should be:
for line in lines:
numbers.append(eval(line.split()[0].strip()))
A:
I agree with Roberto Liffredo; don't worry about that small of a performance improvement; code that is easier to understand, debug, and change is its own reward.
As for what's going on: the terse code and the expanded code don't do quite the same things. line.strip().split() first strips the line and then splits it; your expanded code splits the line first, and then calls strip() on the first word from the line. Now, the strip() isn't needed here; it's stripping white space from the end of the line, and words returned by split() never have any. Thus, in your expanded version, strip() has absolutely no work to do.
Without benchmarking, I can't be certain, but I think that strip() having no work to do is the key. In the one-line version, strip() sometimes has work to do; so it will strip the whitespace, building a new string object, and then return that string object. Then, that new string object will be split and discarded. The extra work of creating and discarding string objects is likely what is making the one-line solution slower. Compare that with the expanded version, where strip() simply looks at the string, decides it has no work to do, and returns the string unmodified.
In summary, I predict that a one-liner equivalent to your expanded code will be slightly faster than your expanded code. Try benchmarking this:
for line in lines:
numbers.append(eval(line.split()[0].strip()))
If you want to be completely thorough, you could benchmark both versions with the strip() removed completely. You just don't need it. Or, you could pre-process your input file, making sure that there is no leading or trailing white space on any input line, and thus never any work for strip() to do, and you will probably see the benchmarks work as you would expect.
If you really want to make a hobby out of optimizing for speed here, you could call split with a "maxsplit" argument; you don't need to process the whole string as you are throwing away everything after the first split. Thus you could call split(None, 1). You can get rid of the strip(), of course. And you would then have:
for line in lines:
numbers.append(eval(line.split(None, 1)[0]))
If you knew the numbers were always integers, you could call int() instead of eval(), for a speed improvement and security improvement.
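Putting those two suggestions together, the integers-only loop would look like this (my own untested variant of the code above):
for line in lines:
    numbers.append(int(line.split(None, 1)[0]))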
A:
Also, sometimes it's tricky to run benchmarks. Did you re-run the benchmarks multiple times and take the best of several runs? Is there any chance that caching effects give a performance advantage to the second Python program you run? Have you tried making your input file ten times bigger, so your program will take about ten times longer to run?
A:
I haven't benchmarked it, but one factor in the time differences is that you have to do several variable lookups in the second function.
From Python Patterns - An Optimization Anecdote:
This is because local variable lookups are much faster than global or built-in variable lookups: the Python "compiler" optimizes most function bodies so that for local variables, no dictionary lookup is necessary, but a simple array indexing operation is sufficient.
So, local variable lookups do have a cost associated. Let's take a look at the disassembled functions:
First, making sure I have the same defined functions as you:
>>> def a(lines):
for line in lines:
numbers.append(eval(line.strip().split()[0]))
>>> def b(lines):
for line in lines:
a = line.strip()
b = a.split()
c = b[0]
d = eval(c)
numbers.append(d)
Now, let's compare their disassembled values:
>>> import dis
>>> dis.dis(a)
2 0 SETUP_LOOP 49 (to 52)
3 LOAD_FAST 0 (lines)
6 GET_ITER
>> 7 FOR_ITER 41 (to 51)
10 STORE_FAST 1 (line)
3 13 LOAD_GLOBAL 0 (numbers)
16 LOAD_ATTR 1 (append)
19 LOAD_GLOBAL 2 (eval)
22 LOAD_FAST 1 (line)
25 LOAD_ATTR 3 (strip)
28 CALL_FUNCTION 0
31 LOAD_ATTR 4 (split)
34 CALL_FUNCTION 0
37 LOAD_CONST 1 (0)
40 BINARY_SUBSCR
41 CALL_FUNCTION 1
44 CALL_FUNCTION 1
47 POP_TOP
48 JUMP_ABSOLUTE 7
>> 51 POP_BLOCK
>> 52 LOAD_CONST 0 (None)
55 RETURN_VALUE
>>> dis.dis(b)
2 0 SETUP_LOOP 73 (to 76)
3 LOAD_FAST 0 (lines)
6 GET_ITER
>> 7 FOR_ITER 65 (to 75)
10 STORE_FAST 1 (line)
3 13 LOAD_FAST 1 (line)
16 LOAD_ATTR 0 (strip)
19 CALL_FUNCTION 0
22 STORE_FAST 2 (a)
4 25 LOAD_FAST 2 (a)
28 LOAD_ATTR 1 (split)
31 CALL_FUNCTION 0
34 STORE_FAST 3 (b)
5 37 LOAD_FAST 3 (b)
40 LOAD_CONST 1 (0)
43 BINARY_SUBSCR
44 STORE_FAST 4 (c)
6 47 LOAD_GLOBAL 2 (eval)
50 LOAD_FAST 4 (c)
53 CALL_FUNCTION 1
56 STORE_FAST 5 (d)
7 59 LOAD_GLOBAL 3 (numbers)
62 LOAD_ATTR 4 (append)
65 LOAD_FAST 5 (d)
68 CALL_FUNCTION 1
71 POP_TOP
72 JUMP_ABSOLUTE 7
>> 75 POP_BLOCK
>> 76 LOAD_CONST 0 (None)
79 RETURN_VALUE
It's a lot of information, but we can see the second method is riddled with STORE_FAST, LOAD_FAST pairs due to the local variables being used. That might be enough to cause your small timing differences, perhaps in addition to the different operation order, as others have mentioned.
A:
A one-liner doesn't mean smaller or faster code. And I would expect that the eval() line would throw off performance measurements quite a bit.
Do you see similar performance differences without eval?
A:
Okay, enough theorizing. I created a file with one million lines, with random amounts of white space (0 to 4 spaces, usually 0) at beginning and end of each line. And I ran your one-liner, your expanded version, and my own list comprehension version (as fast as I know how to make it).
My results? (Each one is the best of three trials):
one-line: 13.208s
expanded: 26.321s
listcomp: 13.024s
I tested under Ubuntu 9.04, 32-bit, with Python 2.6.2 (CPython, of course).
So I am completely unable to explain why you saw the expanded one running faster, given that it ran half as fast on my computer.
Here's the Python program I used to generate my test data:
import random
f = open("/tmp/junk.txt", "w")
r = random.Random()
def randws():
n = r.randint(0, 10) - 4
if n < 0 or n > 4:
n = 0
return " " * n
for i in xrange(1000000):
s0 = randws()
n = r.randint(0, 256)
s1 = randws()
f.write("%s%d%s\n" % (s0, n, s1))
Here's my list comprehension program:
lines = open("/tmp/junk.txt")
numbers = [eval(line.split(None, 1)[0]) for line in lines]
P.S. Here is a nice, fast version that can handle both int and float values.
lines = open("/tmp/junk.txt")
def val(x):
try:
return int(x)
except ValueError:
pass
try:
return float(x)
except StandardError:
return 0
numbers = [val(line.split(None, 1)[0]) for line in lines]
Its best-of-three time was: 2.161s
|
Why is my "exploded" Python code actually running faster?
|
I'm in an introductory comp-sci class (after doing web programming for years) and became curious about how much speed I was gaining, if any, with my one-liners.
for line in lines:
numbers.append(eval(line.strip().split()[0]))
So I wrote the same thing with painfully explicit assignments and ran them against each other.
for line in lines:
a = line.split()
b = a[0]
c = b.strip()
d = eval(c)
numbers.append(d)
The second one runs a consistent 30ms faster (on my FreeBSD shell account; see Edit #2) with an input file of 100K lines! Of course this is on a total run time of 3 seconds, so the percentage isn't large...but I'm really surprised to see all those explicit named assignments somehow helping.
There's a recent thread on the performance of functions as opposed to inline code, but this seems even more basic. What gives? Should I be writing lovingly fulsomely verbose code and telling my sneering colleagues it's for performance reasons? (Thankfully, a list-comprehension version runs about 10ms faster yet, so my cherished compactness isn't totally out the window.)
EDIT: Thanks for the tip on my sloppy expansion of the code. You're all correct that the second one should really be:
for line in lines:
a = line.strip()
b = a.split()
c = b[0]
d = eval(c)
numbers.append(d)
However, even once I've fixed that, my timings are 2.714s, 2.652s, and 2.624s respectively, for the one-liner, the fully-exploded form, and the list comprehension (not pictured). So my question stands!
EDIT #2: It's interesting that the answer doesn't seem to be obvious even to a bunch of knowledgeable folks, which makes me feel a bit better about the question! I'll probably play with dis a bit on my own now, in this and similar circumstances, and see what turns up. By all means keep tinkering with the thread if you want, but I'm going to declare my received answer to be "huh, that's interesting; must be something deep." Especially since the behavior isn't consistent across machines, as steveha pointed out -- the slight difference goes the other direction on my Debian and Windows installs. Thanks to everyone who's contributed!
|
[
"Your code isn't exploded in the same order. The compact version goes:\nA > B > C > D > E \n\nwhile your exploded version goes \nB > C > A > D > E\n\nThe effect is that strip() is being deferred 2 steps down, which may affect performance depending on what the input is.\n",
"Frankly speaking, the first version, where everything is in one line, is a pain to read.\nThe second one is maybe a little too verbose (something in the middle would be appreciated) but it is definitely better.\nI would not care too much about micro optimizations because of Python internals, and focus only on readable code.\nBy the way: the two (initial) versions are not doing the same thing.\nIn the former, you first strip, then split, while in the latter you first split and then strip (furthermore, only the first element).\nAgain, I think you overlooked this because the former version is quite difficult to focus on.\nThen, analyzing the two (updated) versions with dis (python disassembler) showed no real difference between the two codes, only the order how the function names are being looked up. It is possible that this may have an impact on performance. \nWhile we are on this, you could get some performance improvement just by binding eval to a local variable, before the loop. I would expect that after that change, there should be no difference in time between the two versions.\nFor example:\neval_ = eval\nfor line in lines:\n a = line.strip()\n b = a.split()\n c = b[0]\n d = eval_(c)\n numbers.append(d)\n\nWe are mostly talking about micro-optimizations, but this aliasing is actually a technique that may be very useful in several circumstances.\n",
"The method calls are also not in the same order:\nfor line in lines:\n numbers.append(eval(line.strip().split()[0]))\n\nshould be:\nfor line in lines:\n numbers.append(eval(line.split()[0].strip()))\n\n",
"I agree with Roberto Liffredo; don't worry about that small of a performance improvement; code that is easier to understand, debug, and change is its own reward.\nAs for what's going on: the terse code and the expanded code don't do quite the same things. line.strip().split() first strips the line and then splits it; your expanded code splits the line first, and then calls strip() on the first word from the line. Now, the strip() isn't needed here; it's stripping white space from the end of the line, and words returned by split() never have any. Thus, in your expanded version, strip() has absolutely no work to do.\nWithout benchmarking, I can't be certain, but I think that strip() having no work to do is the key. In the one-line version, strip() sometimes has work to do; so it will strip the whitespace, building a new string object, and then return that string object. Then, that new string object will be split and discarded. The extra work of creating and discarding string objects is likely what is making the one-line solution slower. Compare that with the expanded version, where strip() simply looks at the string, decides it has no work to do, and returns the string unmodified.\nIn summary, I predict that a one-liner equivalent to your expanded code will be slightly faster than your expanded code. Try benchmarking this:\nfor line in lines:\n numbers.append(eval(line.split()[0].strip()))\n\nIf you want to be completely thorough, you could benchmark both versions with the strip() removed completely. You just don't need it. Or, you could pre-process your input file, making sure that there is no leading or trailing white space on any input line, and thus never any work for strip() to do, and you will probably see the benchmarks work as you would expect.\nIf you really want to make a hobby out of optimizing for speed here, you could call split with a \"maxsplit\" argument; you don't need to process the whole string as you are throwing away everything after the first split. Thus you could call split(None, 1). You can get rid of the strip(), of course. And you would then have:\nfor line in lines:\n numbers.append(eval(line.split(None, 1)[0]))\n\nIf you knew the numbers were always integers, you could call int() instead of eval(), for a speed improvement and security improvement.\n",
"Also, sometimes it's tricky to run benchmarks. Did you re-run the benchmarks multiple times and take the best of several runs? Is there any chance that caching effects give a performance advantage to the second Python program you run? Have you tried making your input file ten times bigger, so your program will take about ten times longer to run?\n",
"I haven't benchmarked it, but one factor in the time differences is that you have to do several variable lookups in the second function.\nFrom Python Patterns - An Optimization Anecdote:\n\nThis is because local variable lookups are much faster than global or built-in variable lookups: the Python \"compiler\" optimizes most function bodies so that for local variables, no dictionary lookup is necessary, but a simple array indexing operation is sufficient. \n\nSo, local variable lookups do have a cost associated. Let's take a look at the disassembled functions:\nFirst, making sure I have the same defined functions as you:\n>>> def a(lines):\n for line in lines:\n numbers.append(eval(line.strip().split()[0]))\n\n>>> def b(lines):\n for line in lines:\n a = line.strip()\n b = a.split()\n c = b[0]\n d = eval(c)\n numbers.append(d)\n\nNow, let's compare their disassembled values:\n>>> import dis\n>>> dis.dis(a)\n 2 0 SETUP_LOOP 49 (to 52)\n 3 LOAD_FAST 0 (lines)\n 6 GET_ITER \n >> 7 FOR_ITER 41 (to 51)\n 10 STORE_FAST 1 (line)\n\n 3 13 LOAD_GLOBAL 0 (numbers)\n 16 LOAD_ATTR 1 (append)\n 19 LOAD_GLOBAL 2 (eval)\n 22 LOAD_FAST 1 (line)\n 25 LOAD_ATTR 3 (strip)\n 28 CALL_FUNCTION 0\n 31 LOAD_ATTR 4 (split)\n 34 CALL_FUNCTION 0\n 37 LOAD_CONST 1 (0)\n 40 BINARY_SUBSCR \n 41 CALL_FUNCTION 1\n 44 CALL_FUNCTION 1\n 47 POP_TOP \n 48 JUMP_ABSOLUTE 7\n >> 51 POP_BLOCK \n >> 52 LOAD_CONST 0 (None)\n 55 RETURN_VALUE \n>>> dis.dis(b)\n 2 0 SETUP_LOOP 73 (to 76)\n 3 LOAD_FAST 0 (lines)\n 6 GET_ITER \n >> 7 FOR_ITER 65 (to 75)\n 10 STORE_FAST 1 (line)\n\n 3 13 LOAD_FAST 1 (line)\n 16 LOAD_ATTR 0 (strip)\n 19 CALL_FUNCTION 0\n 22 STORE_FAST 2 (a)\n\n 4 25 LOAD_FAST 2 (a)\n 28 LOAD_ATTR 1 (split)\n 31 CALL_FUNCTION 0\n 34 STORE_FAST 3 (b)\n\n 5 37 LOAD_FAST 3 (b)\n 40 LOAD_CONST 1 (0)\n 43 BINARY_SUBSCR \n 44 STORE_FAST 4 (c)\n\n 6 47 LOAD_GLOBAL 2 (eval)\n 50 LOAD_FAST 4 (c)\n 53 CALL_FUNCTION 1\n 56 STORE_FAST 5 (d)\n\n 7 59 LOAD_GLOBAL 3 (numbers)\n 62 LOAD_ATTR 4 (append)\n 65 LOAD_FAST 5 (d)\n 68 CALL_FUNCTION 1\n 71 POP_TOP \n 72 JUMP_ABSOLUTE 7\n >> 75 POP_BLOCK \n >> 76 LOAD_CONST 0 (None)\n 79 RETURN_VALUE \n\nIt's a lot of information, but we can see the second method is riddled with STORE_FAST, LOAD_FAST pairs due to the local variables being used. That might be enough to cause your small timing differences, (perhaps) in addition to the different operation order as others have mentioned.\n",
"A one liner doesn't mean smaller or faster code. And I would expect that the eval() line would throw off performance measurements quite a bit. \nDo you see similar performance differences without eval ?\n",
"Okay, enough theorizing. I created a file with one million lines, with random amounts of white space (0 to 4 spaces, usually 0) at beginning and end of each line. And I ran your one-liner, your expanded version, and my own list comprehension version (as fast as I know how to make it).\nMy results? (Each one is the best of three trials):\none-line: 13.208s\nexpanded: 26.321s\nlistcomp: 13.024s\n\nI tested under Ubuntu 9.04, 32-bit, with Python 2.6.2 (CPython, of course).\nSo I am completely unable to explain why you saw the expanded one running faster, given that it ran half as fast on my computer.\nHere's the Python program I used to generate my test data:\nimport random\n\nf = open(\"/tmp/junk.txt\", \"w\")\n\nr = random.Random()\n\ndef randws():\n n = r.randint(0, 10) - 4\n if n < 0 or n > 4:\n n = 0\n return \" \" * n\n\nfor i in xrange(1000000):\n s0 = randws()\n n = r.randint(0, 256)\n s1 = randws()\n f.write(\"%s%d%s\\n\" % (s0, n, s1))\n\nHere's my list comprehension program:\nlines = open(\"/tmp/junk.txt\")\n\nnumbers = [eval(line.split(None, 1)[0]) for line in lines]\n\nP.S. Here is a nice, fast version that can handle both int and float values.\nlines = open(\"/tmp/junk.txt\")\n\ndef val(x):\n try:\n return int(x)\n except ValueError:\n pass\n\n try:\n return float(x)\n except StandardError:\n return 0\n\nnumbers = [val(line.split(None, 1)[0]) for line in lines]\n\nIts best-of-three time was: 2.161s\n"
] |
[
15,
7,
4,
3,
3,
3,
2,
2
] |
[] |
[] |
[
"optimization",
"python"
] |
stackoverflow_0001369697_optimization_python.txt
|
Q:
random.choice not random
I'm using Python 2.5 on Linux, in multiple parallel FCGI processes. I use
chars = string.ascii_letters + string.digits
cookie = ''.join([random.choice(chars) for x in range(32)])
to generate distinct cookies. Assuming that the RNG is seeded from /dev/urandom, and that the sequence of random numbers comes from the Mersenne twister, I would expect that there is practically zero chance of collision.
However, I do see regular collisions, even though only a few (<100) users are logged in at any time.
Why are the random numbers not more random?
A:
It shouldn't be generating duplicates.
import random
chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
def gen():
return ''.join([random.choice(chars) for x in range(32)])
test = [gen() for i in range(100000)]
print len(test), len(set(test)) # 100000 100000
The chance of duplicates is significant with chars = "ab"; 126 duplicates in 1000000 iterations. It's nonexistent with 62.
That said, this isn't a good way to generate cookies, because session cookies need to be unpredictable, to avoid attacks involving stealing other people's session cookies. The Mersenne Twister is not designed for generating secure random numbers. This is what I do:
import os, hashlib
def gen():
return hashlib.sha1(os.urandom(512)).hexdigest()
test = [gen() for i in range(100000)]
print len(test), len(set(test))
... which should be very secure (which is to say, difficult to take a string of session cookies and guess other existing session cookies from them).
A:
This is definitely not a normal collision scenario:
32 characters with 62 options per character is equivalent to 190 bits (log2(62) * 32)
According to the birthday paradox, you should be receiving a collision naturally once every 2**95 cookies, which means never
Could this be a concurrency issue?
If so, use different random.Random instances for each thread
Can save these instances in thread-local storage (threading.local())
On linux, Python should seed them using os.urandom() - not system time - so you should get different streams for each thread.
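A minimal sketch of that suggestion (the helper name is mine):
import random, threading

_tls = threading.local()

def thread_rng():
    # lazily create one independently seeded Random per thread
    rng = getattr(_tls, 'rng', None)
    if rng is None:
        rng = _tls.rng = random.Random()  # seeds from os.urandom() where available
    return rng

chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
cookie = ''.join(thread_rng().choice(chars) for x in range(32))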
A:
I don't know how your FCGI processes are being spawned, but is it possible that it's using fork() after the Python interpreter has started (and the random module has been imported by something), hence effectively seeding two processes' random._inst from the same source?
Maybe put some debugging in to check that it is correctly seeding from urandom, and not falling back to the less rigorous time-based seed?
eta re comment: man! That's me stumped then; if the RNG always has different state at startup I can't see how you could possibly get collisions. Weird. Would have to put in a lot of state logging to investigate the particular cases which result in collisions, I guess, which sounds like a lot of work trawling through logs. Could it be (1a) the FCGI server usually doesn't fork, but occasionally does (maybe under load, or something)?
Or (3) some higher-level problem such as a broken HTTP proxy passing the same Set-Cookie to multiple clients?
A:
I had to erase my original answer, which suggested that the generator is not seeded from /dev/urandom, since its source (for Python 3.x) clearly says that it is:
def seed(self, a=None):
"""Initialize internal state from hashable object.
None or no argument seeds from current time or from an operating
system specific randomness source if available.
If a is not None or an int or long, hash(a) is used instead.
"""
if a is None:
try:
a = int(_hexlify(_urandom(16)), 16)
except NotImplementedError:
import time
a = int(time.time() * 256) # use fractional seconds
super().seed(a)
self.gauss_next = None
I therefore humbly accept that there are mysteries in the world that I may not be able to decipher.
|
random.choice not random
|
I'm using Python 2.5 on Linux, in multiple parallel FCGI processes. I use
chars = string.ascii_letters + string.digits
cookie = ''.join([random.choice(chars) for x in range(32)])
to generate distinct cookies. Assuming that the RNG is seeded from /dev/urandom, and that the sequence of random numbers comes from the Mersenne twister, I would expect that there is practically zero chance of collision.
However, I do see regular collisions, even though only a few (<100) users are logged in at any time.
Why are the random numbers not more random?
|
[
"It shouldn't be generating duplicates.\nimport random\nchars = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\"\ndef gen():\n return ''.join([random.choice(chars) for x in range(32)])\n\ntest = [gen() for i in range(100000)]\nprint len(test), len(set(test)) # 100000 100000\n\nThe chances of duplicates is significant with chars = \"ab\"; 126 duplicates in 1000000 iterations. It's nonexistant with 62.\nThat said, this isn't a good way to generate cookies, because session cookies need to be unpredictable, to avoid attacks involving stealing other people's session cookies. The Mersenne Twister is not designed for generating secure random numbers. This is what I do:\nimport os, hashlib\ndef gen():\n return hashlib.sha1(os.urandom(512)).hexdigest()\n\ntest = [gen() for i in range(100000)]\nprint len(test), len(set(test))\n\n... which should be very secure (which is to say, difficult to take a string of session cookies and guess other existing session cookies from them).\n",
"This is definitely not a normal collision scenario:\n\n32 characters with 62 options per character is equivalent to 190 bits (log2(62) * 32)\nAccording to the birthday paradox, you should be receiving a collision naturally once every 2**95 cookies, which means never\n\nCould this be a concurrency issue?\n\nIf so, use different random.Random instances for each thread\nCan save these instances in thread-local storage (threading.local())\nOn linux, Python should seed them using os.urandom() - not system time - so you should get different streams for each thread.\n\n",
"\nI don't know how your FCGI processes are being spawned, but is it possible that it's using fork() after the Python interpreter has started (and the random module has been imported by something), hence effectively seeding two processes' random._insts from the same source?\nMaybe put some debugging in to check that it is correctly seeding from urandom, and not falling back to the less rigorous time-based seed?\n\neta re comment: man! That's me stumped then; if the RNG always has different state at startup I can't see how you could possibly get collisions. Weird. Would have to put in a lot of state logging to investigate the particular cases which result in collisions, I guess, which sounds like a lot of work trawling through logs. Could it be (1a) the FCGI server usually doesn't fork, but occasionally does (maybe under load, or something)?\nOr (3) some higher-level problem such as a broken HTTP proxy passing the same Set-Cookie to multiple clients?\n",
"I had to erase my original answer, which suggested that generator is not seeded from /dev/urandom, since its source (for Python 3.x) clearly says that it is:\ndef seed(self, a=None):\n \"\"\"Initialize internal state from hashable object.\n\n None or no argument seeds from current time or from an operating\n system specific randomness source if available.\n\n If a is not None or an int or long, hash(a) is used instead.\n \"\"\"\n\n if a is None:\n try:\n a = int(_hexlify(_urandom(16)), 16)\n except NotImplementedError:\n import time\n a = int(time.time() * 256) # use fractional seconds\n\n super().seed(a)\n self.gauss_next = None\n\nI therefore humbly accept that there are mysteries in the world that I may not be able to decipher.\n"
] |
[
13,
4,
1,
0
] |
[
"To avoid the problem, you can use a sequence of cookies, that are guaranteed to be different (you can e.g. use a set). Each time you give a cookie to someone, you take it from the sequence and you add another to it. Another option is to generate a UUID and use that as a cookie.\nAnother way to avoid the problem could be to hold a private key, and use a (e.g. MD5) checksum of the private key, with a counter value joined to it. The probability for collisions will then be very low. To be safer, add a few more variables to the checksum, like the current time, the ip address of the user, ...\nLibraries to generate cookies exist. Any WSGI implementation probably contains a cookie generator.\nIf you're only interested in how random your strings are, you could generate a file with, say, one million cookies and perform randomness checks on that file. This, however, is not what I would recommend.\n"
] |
[
-4
] |
[
"python",
"random"
] |
stackoverflow_0001366047_python_random.txt
|
Q:
Django - flush response?
I am sending an AJAX request to a Django view that can potentially take a lot of time. It goes through some well-defined steps, however, so I would like to print status indicators to the user, letting them know when it has finished doing a certain thing and has moved on to the next.
If I was using PHP it might look like this, using the flush function:
do_something();
print 'Done doing something!';
flush();
do_something_else();
print 'Done doing something else!';
flush();
How would I go about doing the same with Django? Looking at the documentation I see that HttpResponse objects have a flush method, but all it has to say is that "This method makes an HttpResponse instance a file-like object." - I'm not sure that's what I want. I'm having a hard time wrapping my head around how this could be done in Django since I have to return the response and don't really have control over when the content goes to the browser.
A:
Most webservers (eg. FCGI/SCGI) do their own buffering, HTTP clients do their own, and so on. It's very difficult to actually get data flushed out in this way and for the client to actually receive it, because it's not a typical operation.
The closest to what you're trying to do would be to pass an iterator to HttpResponse, and to do the work in a generator; something like this:
def index(request):
    def do_work():
        step_1()
        yield "step 1 complete"
        step_2()
        yield "step 2 complete"
        step_3()
        yield "step 3 complete"
    return HttpResponse(do_work())
... but this won't necessarily flush. (Not tested code, but you get the idea; see http://docs.djangoproject.com/en/dev/ref/request-response/#passing-iterators.)
Most of the infrastructure is simply not expecting a piecemeal response. Even if Django isn't buffering, your front-end server might be, and the client probably is, too. That's why most things use pull updates for this: a separate interface to query the status of a long-running request.
(I'd like to be able to do reliable push updates for this sort of thing, too...)
A:
I'm not sure you need to use the flush() function.
Your AJAX request should just go to a django view.
If your steps can be broken down, keep it simple and create a view for each step.
That way once one process completes you can update the user and start the next request via AJAX.
views.py
def do_something(request):
    # stuff here
    return HttpResponse()

def do_something_else(request):
    # more stuff
    return HttpResponse()
|
Django - flush response?
|
I am sending an AJAX request to a Django view that can potentially take a lot of time. It goes through some well-defined steps, however, so I would like to print status indicators to the user, letting them know when it has finished doing a certain thing and has moved on to the next.
If I was using PHP it might look like this, using the flush function:
do_something();
print 'Done doing something!';
flush();
do_something_else();
print 'Done doing something else!';
flush();
How would I go about doing the same with Django? Looking at the documentation I see that HttpResponse objects have a flush method, but all it has to say is that "This method makes an HttpResponse instance a file-like object." - I'm not sure that's what I want. I'm having a hard time wrapping my head around how this could be done in Django since I have to return the response and don't really have control over when the content goes to the browser.
|
[
"Most webservers (eg. FCGI/SCGI) do their own buffering, HTTP clients do their own, and so on. It's very difficult to actually get data flushed out in this way and for the client to actually receive it, because it's not a typical operation.\nThe closest to what you're trying to do would be to pass an iterator to HttpResponse, and to do the work in a generator; something like this:\ndef index(request):\n def do_work():\n step_1()\n yield \"step 1 complete\"\n step_2()\n yield \"step 2 complete\"\n step_3()\n yield \"step 3 complete\"\n return HttpResponse(do_work())\n\n... but this won't necessarily flush. (Not tested code, but you get the idea; see http://docs.djangoproject.com/en/dev/ref/request-response/#passing-iterators.)\nMost of the infrastructure is simply not expecting a piecemeal response. Even if Django isn't buffering, your front-end server might be, and the client probably is, too. That's why most things use pull updates for this: a separate interface to query the status of a long-running request.\n(I'd like to be able to do reliable push updates for this sort of thing, too...)\n",
"I'm not sure you need to use the flush() function.\nYour AJAX request should just go to a django view.\nIf your steps can be broken down, keep it simple and create a view for each step.\nThat way one one process completes you can update the user and start the next request via AJAX.\nviews.py\ndef do_something(request):\n # stuff here\n return HttpResponse()\n\ndef do_something_else(request):\n # more stuff\n return HttpResponse()\n\n"
] |
[
9,
4
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001371020_django_python.txt
|
Q:
Limit choices to
I have a model named Project which has an m2m field users. I have a task model with an FK project, and it has a field assigned_to. How can I limit the choices of assigned_to to only the users of the current project?
A:
You could do this another way, using this nifty form factory trick.
def make_task_form(project):
    class _TaskForm(forms.ModelForm):
        assigned_to = forms.ModelChoiceField(
            queryset=project.users.all())

        class Meta:
            model = Task

    return _TaskForm
Then from your view code you can do something like this:
project = Project.objects.get(id=234)
form_class = make_task_form(project)
...
form = form_class(request.POST)
A:
You need to create a custom form for the admin.
Your form should contain a ModelChoiceField in which you can specify a queryset parameter that defines what the available choices are. This form can be a ModelForm.
(the following example assumes users have an FK to your Project model)
forms.py
from django import forms
class TaskForm(forms.ModelForm):
    assigned_to = forms.ModelChoiceField(queryset=User.objects.filter(user__project=project))

    class Meta:
        model = Task
Then assign the form to the ModelAdmin.
admin.py
from django.contrib import admin
from models import Task
from forms import TaskForm
class TaskAdmin(admin.ModelAdmin):
    form = TaskForm

admin.site.register(Task, TaskAdmin)
|
Limit choices to
|
I have a model named Project which has an m2m field users. I have a task model with an FK project, and it has a field assigned_to. How can I limit the choices of assigned_to to only the users of the current project?
|
[
"You could do this another way, using this nifty form factory trick.\ndef make_task_form(project):\n class _TaskForm(forms.Form):\n assigned_to = forms.ModelChoiceField(\n queryset=User.objects.filter(user__project=project))\n\n class Meta:\n model = Task\n return _TaskForm\n\nThen from your view code you can do something like this:\nproject = Project.objects.get(id=234)\nform_class = make_task_form(project)\n...\nform = form_class(request.POST)\n\n",
"You need to create a custom form for the admin.\nYour form should contain a ModelChoiceField in which you can specify a queryset parameter that defines what the available choices are. This form can be a ModelForm.\n(the following example assumes users have an FK to your Project model)\nforms.py\nfrom django import forms\n\nclass TaskForm(forms.ModelForm):\n assigned_to = forms.ModelChoiceField(queryset=Users.objects.filter(user__project=project))\n\n class Meta:\n model = Task\n\nThen assign the form to the ModelAdmin.\nadmin.py\nfrom django.contrib import admin\nfrom models import Task\nfrom forms import TaskForm\n\nclass TaskAdmin(admin.ModelAdmin):\n form = TaskForm\nadmin.site.register(Task, TaskAdmin)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"django_models",
"python"
] |
stackoverflow_0001239433_django_models_python.txt
|
Q:
REST / JSON / XML-RPC / SOAP
Sorry for being the 100000th person to ask the same question. But I guess my case is slightly distinctive.
The application is that we'd like to have an Android phone client on 3g and a light python web service server.
The phone would do most of the work and do a lot of uploading, pictures, GPS, etc etc. The server just has to respond with an 'ok' per upload.
I want to use the lightest method, easiest on the battery. But reading about all these protocols is a bit confusing since they all sound the same.
Are they all on the same levels? Or can JSON be a RESTful thing etc?
So as described, the key here is uploading. Does all the input for a REST transaction have to be in a URI? i.e. http://www.server.com/upload/0x81d058f82ac13.
XML-RPC and SOAP sound decently similar from Googling too.
A:
REST mandates the general semantics and concepts. The transport and encodings are up to you. It was originally formulated around XML, but JSON is totally applicable.
XML-RPC / SOAP are different mechanisms, but mostly the same idea: how to map OO APIs on top of XML and HTTP. IMHO, they're disgusting from a design point of view. I was so relieved when I found out about REST. In your case, I'm sure that all those layers would mean a lot more CPU demand.
I'd say go REST, using JSON for encoding; but if your requirements really are as simple as just uploading, then you can simply use HTTP (which can be RESTful in design even without adding any specific library).
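To make that last point concrete, here is a minimal client-side sketch in Python of a plain-HTTP upload; the endpoint URL, the content type and the 'ok' reply are illustrative assumptions, not part of the question:
# Hypothetical example: POST raw bytes to an upload endpoint and read
# the server's short acknowledgement (e.g. 'ok'). No RPC layer involved.
import urllib2

def upload(data):
    request = urllib2.Request('http://www.server.com/upload/',
                              data=data,
                              headers={'Content-Type': 'application/octet-stream'})
    response = urllib2.urlopen(request)
    return response.read()   # the server's acknowledgement body

photo = open('photo.jpg', 'rb').read()
print upload(photo)          # expected to print: ok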
|
REST / JSON / XML-RPC / SOAP
|
Sorry for being the 100000th person to ask the same question. But I guess my case is slightly distinctive.
The application is that we'd like to have an Android phone client on 3g and a light python web service server.
The phone would do most of the work and do a lot of uploading, pictures, GPS, etc etc. The server just has to respond with an 'ok' per upload.
I want to use the lightest method, easiest on the battery. But reading about all these protocols is a bit confusing since they all sound the same.
Are they all on the same levels? Or can JSON be a RESTful thing etc?
So as described, the key here is uploading. Does all the input for a REST transaction have to be in a URI? i.e. http://www.server.com/upload/0x81d058f82ac13.
XML-RPC and SOAP sound decently similar from Googling too.
|
[
"REST mandates the general semantics and concepts. The transport and encodings are up to you. They were originally formulated on XML, but JSON is totally applicable.\nXML-RPC / SOAP are different mechanisms, but mostly the same ideas: how to map OO APIs on top of XML and HTTP. IMHO, they're disgusting from a design view. I was so relieved when found about REST. In your case, i'm sure that the lots of layers would mean a lot more CPU demand.\nI'd say go REST, using JSON for encoding; but if your requirements are really that simple as just uploading, then you can use simply HTTP (which might be RESTful in design even without adding any specific library)\n"
] |
[
7
] |
[] |
[] |
[
"android",
"python",
"service"
] |
stackoverflow_0001371312_android_python_service.txt
|
Q:
How to distribute and execute platform-specific unit tests?
We have a python project that we want to start testing using buildbot. Its unit tests include tests that should only work on some platforms. So, we've got tests that should pass on all platforms, tests that should only run on 1 specific platform, tests that should pass on platforms A, B, C and tests that pass on B and D.
What is the best way of doing this? Simple suites would be a hassle, since, as described, each test can have a different list of target platforms. I thought about adding "@run_on" and "@ignore_on" decorators that would match platforms to test methods. Is there anything better?
A:
On a couple of occasions I have used this very simple approach in test modules:
import sys
import unittest
# note: test sys.platform with startswith, since 'win' in sys.platform
# would also match 'darwin' (Mac OS X)
if sys.platform.startswith('win'):
    class TestIt(unittest.TestCase):
        ...

if sys.platform.startswith('linux'):
    class TestIt(unittest.TestCase):
        ...
A:
Sounds like a handy place for a test loader.
Check out http://docs.python.org/library/unittest.html#unittest.TestLoader.loadTestsFromName
If you provide some suitable naming conventions you can probably create suites based on your test naming conventions.
If I've got test A that executes on AIX, Linux (all) and 32 bit Windows, test B that runs on Windows 64, Linux 64 and Solaris, and test C that runs on everything but HPUX and test D that runs on everything... What possible naming-convention is there for this?
class TestA_AIX_Linux2_Win32( unittest.TestCase ):
class TestB_Win64_Linux64_Solaris( unittest.TestCase ):
class TestC_AIX_Linux2_Win32_Win64_Linux64_Solaris( unittest.TestCase ):
class TestD_All( unittest.TestCase ):
The hard part is the "not HP/UX". Avoiding negative logic makes your life simpler. In this case, you simply list all OS's that are not HP/UX. The list is fairly short and grows slowly.
The "All" tests are simply a separate text search that's merged in with the current platform's list of tests to create a complete suite.
You could try something like
class TextC_XHPUX( unittest.TestCase ):
Your text matching rule is normally "_someOSName"; your exceptions would be a weird text filter to mess around with the "_X" name.
"We can't have a positive list of OS's. What if we add a new OS? Do we have to rename every test to explicitly include it?" Yes. The new operating system market is slow to evolve, it's not that painful to manage.
The alternative is to include information within each class (i.e., a class-level function) or a decorator and use a customized class loader that evaluates the class-level function.
A:
We've decided to go with decorators that, using the platform module and others, check whether the test should be executed, and if not simply let it pass (though we saw that Python 2.7 already has a SkipTest exception in its trunk that can be raised in such cases to skip the test).
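A minimal sketch of what such a decorator might look like (the name run_on and the platform prefixes are illustrative assumptions, not our actual code):
import sys
import unittest

def run_on(*platforms):
    def decorator(test_method):
        def wrapper(self, *args, **kwargs):
            # silently pass on platforms the test does not target;
            # on Python 2.7+ you could raise unittest.SkipTest here instead
            if not any(sys.platform.startswith(p) for p in platforms):
                return
            return test_method(self, *args, **kwargs)
        return wrapper
    return decorator

class TestFeature(unittest.TestCase):
    @run_on('linux', 'aix')
    def test_only_on_linux_or_aix(self):
        self.assertTrue(True)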
|
How to distribute and execute platform-specific unit tests?
|
We have a python project that we want to start testing using buildbot. Its unit tests include tests that should only work on some platforms. So, we've got tests that should pass on all platforms, tests that should only run on 1 specific platform, tests that should pass on platforms A, B, C and tests that pass on B and D.
What is the best way of doing this? Simple suites would be a hassle, since, as described, each test can have a different list of target platforms. I thought about adding "@run_on" and "@ignore_on" decorators that would match platforms to test methods. Is there anything better?
|
[
"On a couple of occasions I have used this very simple approach in test modules:\nimport sys\nimport unittest\n\nif 'win' in sys.platform:\n class TestIt(unittest.TestCase):\n ...\n\nif 'linux' in sys.platform:\n class TestIt(unittest.TestCase):\n ...\n\n",
"Sounds like a handy place for a test loader.\nCheck out http://docs.python.org/library/unittest.html#unittest.TestLoader.loadTestsFromName\nIf you provide some suitable naming conventions you can probably create suites based on your test naming conventions.\nIf I've got test A that executes on AIX, Linux (all) and 32 bit Windows, test B that runs on Windows 64, Linux 64 and Solaris, and test C that runs on everything but HPUX and test D that runs on everything... What possible naming-convention is there for this?\nclass TestA_AIX_Linux2_Win32( unittest.TestCase ):\n\nclass TestB_Win64_Linux64_Solaris( unittest.TestCase ):\n\nclass TestC_AIX_Linux2_Win32_Win64_Linux64_Solaris( unittest.TestCase ):\n\nclass TestD_All( unittest.TestCase ):\n\nThe hard part is the \"not HP/UX\". Avoiding negative logic makes your life simpler. In this case, you simply list all OS's that are not HP/UX. The list is fairly short and grows slowly.\nThe \"All\" tests are simply a separate text search that's merged in with the current platform's list of tests to create a complete suite.\nYou could try something like\nclass TextC_XHPUX( unittest.TestCase ):\n\nYour text matching rule is normally \"_someOSName\"; your exceptions would be a weird text filter to mess around with the the \"_X\" name.\n\"We can't have a positive list of OS's. What if we add a new OS? Do we have to rename every test to explicitly include it?\" Yes. The new operating system market is slow to evolve, it's not that painful to manage.\nThe alternative is to include information within each class (i.e., a class-level function) or a decorator and use a customized class loader that evaluates the class-level function.\n",
"We've decided to go with decorators that, using platform module and others, check whether the tests should be executed, and if not simply let it pass (though, we saw that python2.7 already has in its trunk a SkipTest exception that could be raised in such cases, to ignore the test).\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"buildbot",
"python",
"unit_testing"
] |
stackoverflow_0001199493_buildbot_python_unit_testing.txt
|
Q:
Django: retrieve all galleries containing one public photo at least
Excuse me for my ugly English!
Imagine these very simple models :
class Photo(models.Model):
    is_public = models.BooleanField('Public', default=False)

class Gallery(models.Model):
    photos = models.ManyToManyField('Photo', related_name='galleries', null=True, blank=True)
I need to select all Gallery instances which contain at least one public photo (and if possible adding a photos__count attribute which contains the number of public photos).
I tried this query :
Gallery.objects.all()\
.annotate(Count('photos'))\
.filter(photos__is_public=True)
It seems to be okay, but :
- the query is strange
- the added attribute photos__count on each gallery will contain the total number of photos on this gallery, instead of the number of public photos in this gallery.
I think that the hard-coded SQL query I need is this:
SELECT `gallery`.* , COUNT(`gallery_photos`.`photo_id`)
FROM `gallery`
INNER JOIN `gallery_photos` ON (`gallery`.`id` = `gallery_photos`.`gallery_id`)
INNER JOIN `photo` ON (`gallery_photos`.`photo_id` = `photo`.`id`)
WHERE `photo`.`is_public` = True
GROUP BY gallery.id ;
Any idea to fix it ?
Thank you ! ;-)
A:
This should do it:
Edit, updated to add count:
SELECT `gallery`.*, `a`.`count`
FROM `gallery`
INNER JOIN (
    SELECT `gallery_photos`.`gallery_id` AS `id`, COUNT(*) AS `count`
    FROM `gallery_photos`
    INNER JOIN `photo` ON (`gallery_photos`.`photo_id` = `photo`.`id`)
    WHERE `photo`.`is_public` = True
    GROUP BY `gallery_photos`.`gallery_id`
) `a` ON `gallery`.`id` = `a`.`id`
A:
I would try:
Gallery.objects.filter(photos__is_public=True).annotate(Count('photos'))
I believe you just got your filter ordering wrong but I have not set up your models to test that assumption.
Try number two:
Gallery.objects.exclude(photos__is_public=False).annotate(Count('photos'))
That should exclude all galleries where none of the photos are public and return a count of what is and isn't public.
A:
This?
Gallery.objects.filter(photos__is_public=True)\
.annotate(Count('photos__is_public'))
A:
The django documentation is at
http://docs.djangoproject.com/en/dev/topics/db/aggregation/#order-of-annotate-and-filter-clauses
and my experience says that the following query:
Gallery.objects.filter(photos__is_public=True).annotate(Count('photos'))
would give you galleries with at least one photo that is public and a count of only photos that are public. The only thing is that it will exclude galleries with zero public photos but it sounds like you don't care about that. Have you tested the above query?
If it still doesn't return the right data then annotate is probably changing the returned data by causing it to return only galleries where all are public. In that case you could use the "extra" method to get the count you want.
Gallery.objects.filter(photos__is_public=True).extra(select={
    "photo_count": """
        SELECT COUNT(`photo`.`id`)
        FROM `gallery_photos`
        INNER JOIN `photo` ON `gallery_photos`.`photo_id` = `photo`.`id`
        WHERE `gallery_photos`.`gallery_id` = `gallery`.`id`
          AND `photo`.`is_public` = True
    """})
Jason Christa's exclude method may also work.
|
Django: retrieve all galleries containing one public photo at least
|
Excuse me for my ugly English!
Imagine these very simple models :
class Photo(models.Model):
    is_public = models.BooleanField('Public', default=False)

class Gallery(models.Model):
    photos = models.ManyToManyField('Photo', related_name='galleries', null=True, blank=True)
I need to select all Gallery instances which contain at least one public photo (and if possible adding a photos__count attribute which contains the number of public photos).
I tried this query :
Gallery.objects.all()\
.annotate(Count('photos'))\
.filter(photos__is_public=True)
It seems to be okay, but :
- the query is strange
- the added attribute photos__count on each gallery will contain the total number of photos on this gallery, instead of the number of public photos in this gallery.
I think that the hard-coded SQL query I need is this:
SELECT `gallery`.* , COUNT(`gallery_photos`.`photo_id`)
FROM `gallery`
INNER JOIN `gallery_photos` ON (`gallery`.`id` = `gallery_photos`.`gallery_id`)
INNER JOIN `photo` ON (`gallery_photos`.`photo_id` = `photo`.`id`)
WHERE `photo`.`is_public` = True
GROUP BY gallery.id ;
Any idea to fix it ?
Thank you ! ;-)
|
[
"This should do it:\nEdit, updated to add count:\nSELECT `gallery`.*, 'a'.'count' \nFROM `gallery` \ninner join (\n select `gallery`.`id`, count(*) as count\n from `gallery_photos` \n INNER JOIN `photo` ON (`gallery_photos`.`photo_id` = `photo`.`id`) \n where `photo`.`is_public` = True\n group by `gallery`.`id`\n) a on `gallery`.`id` = 'a'.'id'\nWHERE `photo`.`is_public` = True \n\n",
"I would try:\nGallery.objects.filter(photos__is_public=True).annotate(Count('photos'))\n\nI believe you just got your filter ordering wrong but I have not set up your models to test that assumption.\nTry number two:\nGallery.objects.exclude(photos__is_public=False).annotate(Count('photos'))\n\nThat should be exclude all galleries where none of the photos are public and return a count of what is and isn't public.\n",
"This?\n\nGallery.objects.filter(photos__is_public=True)\\\n .annotate(Count('photos__is_public'))\n\n",
"The django documentation is at \nhttp://docs.djangoproject.com/en/dev/topics/db/aggregation/#order-of-annotate-and-filter-clauses\nand my experience says that the following query:\nGallery.objects.filter(photos__is_public=True).annotate(Count('photos'))\n\nwould give you galleries with at least one photo that is public and a count of only photos that are public. The only thing is that it will exclude galleries with zero public photos but it sounds like you don't care about that. Have you tested the above query?\nIf it still doesn't return the right data then annotate is probably changing the returned data by causing it to return only galleries where all are public. In that case you could use the \"extra\" method to get the count you want.\nGallery.objects.filter(photos__is_public=True).extra(select={\n \"photo_count\": \"\"\"\n SELECT COUNT(`gallery_photos.id`)\n FROM `gallery_photos`\n WHERE `gallery_photos.gallery_id` `gallery.id AND\n `gallery_photos.is_public = True\n \"\"\"})\n\nJason Christa's exclude method may also work.\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"count",
"django",
"django_models",
"python",
"sql"
] |
stackoverflow_0001366943_count_django_django_models_python_sql.txt
|
Q:
query for values based on date w/ Django ORM
I have a bunch of objects that have a value and a date field:
obj1 = Obj(date='2009-8-20', value=10)
obj2 = Obj(date='2009-8-21', value=15)
obj3 = Obj(date='2009-8-23', value=8)
I want this returned:
[10, 15, 0, 8]
or better yet, an aggregate of the total up to that point:
[10, 25, 25, 33]
It would be best to get this data directly from the database, but otherwise I can do the totaling pretty easily with a for loop.
I'm using Django's ORM and also Postgres
edit:
Just to note that my example only covers a few days, but in practice I have hundreds of objects covering a couple of decades... What I'm trying to do is create a line graph showing how the sum of all my objects has grown over time (a very long time).
A:
This one isn't tested, since it's a bit too much of a pain to set up a Django table to test with:
from datetime import date, timedelta

# http://www.ianlewis.org/en/python-date-range-iterator
def datetimeRange(from_date, to_date=None):
    while to_date is None or from_date <= to_date:
        yield from_date
        from_date = from_date + timedelta(days=1)

start = date(2009, 8, 20)
end = date(2009, 8, 23)
objects = Obj.objects.filter(date__gte=start)
objects = objects.filter(date__lte=end)

results = {}
for o in objects:
    results[o.date] = o.value

return [results.get(day, 0) for day in datetimeRange(start, end)]
This avoids running a separate query for every day.
A:
from datetime import datetime

result_list = []
for day in range(20, 24):
    try:
        result = Obj.objects.get(date=datetime(2009, 8, day))
        result_list.append(result.value)
    except Obj.DoesNotExist:
        result_list.append(0)
return result_list

If you have more than one Obj per date, get() will raise MultipleObjectsReturned; switch to filter() and iterate over the results in that case.
A:
If you loop through Obj.objects.get 100 times, you're doing 100 SQL queries. Obj.objects.filter will return the results in one SQL query, but you also select all model fields. The right way to do this is to use Obj.objects.values_list, which will do this with a single query and only select the 'value' field.
start_date = date(2009, 8, 20)
end_date = date(2009, 8, 23)
objects = Obj.objects.filter(date__range=(start_date,end_date))
# values_list and 'value' aren't related. 'value' should be whatever field you're querying
val_list = list(objects.values_list('value', flat=True))  # list() so the items can be reassigned below
# val_list = [10, 15, 8]
To do a running aggregate of val_list, you can do this (not certain that this is the most pythonic way)
for i in xrange(len(val_list)):
    if i > 0:
        val_list[i] = val_list[i] + val_list[i-1]
# val_list = [10,25,33]
EDIT: If you need to account for missing days, @Glenn Maynard's answer is actually pretty good, although I prefer the dict() syntax:
objects = Obj.objects.filter(date__range=(start_date,end_date)).values('date','value')
val_dict = dict((obj['date'],obj['value']) for obj in objects)
# I'm stealing datetimeRange from @Glenn Maynard
val_list = [val_dict.get(day, 0) for day in datetimeRange(start_date, end_date)]
# val_list = [10,15,0,8]
|
query for values based on date w/ Django ORM
|
I have a bunch of objects that have a value and a date field:
obj1 = Obj(date='2009-8-20', value=10)
obj2 = Obj(date='2009-8-21', value=15)
obj3 = Obj(date='2009-8-23', value=8)
I want this returned:
[10, 15, 0, 8]
or better yet, an aggregate of the total up to that point:
[10, 25, 25, 33]
It would be best to get this data directly from the database, but otherwise I can do the totaling pretty easily with a for loop.
I'm using Django's ORM and also Postgres
edit:
Just to note that my example only covers a few days, but in practice I have hundreds of objects covering a couple of decades... What I'm trying to do is create a line graph showing how the sum of all my objects has grown over time (a very long time).
|
[
"This one isn't tested, since it's a bit too much of a pain to set up a Django table to test with:\nfrom datetime import date, timedelta\n# http://www.ianlewis.org/en/python-date-range-iterator\ndef datetimeRange(from_date, to_date=None):\n while to_date is None or from_date <= to_date:\n yield from_date\n from_date = from_date + timedelta(days = 1)\n\nstart = date(2009, 8, 20)\nend = date(2009, 8, 23)\nobjects = Obj.objects.filter(date__gte=start)\nobjects = objects.filter(date__lte=end)\n\nresults = {}\nfor o in objects:\n results[o.date] = o.value\n\nreturn [results.get(day, 0) for day in datetimeRange(start, end)]\n\nThis avoids running a separate query for every day.\n",
"result_list = []\nfor day in range(20,24): \n result = Obj.objects.get(date=datetime(2009, 08, day))\n if result:\n result_list.append(result.value)\n else:\n result_list.append(0)\nreturn result_list\n\nIf you have more than one Obj per date, you'll need to check len(obj) and iterate over them in case it's more than 1.\n",
"If you loop through a Obj.objects.get 100 times, you're doing 100 SQL queries. Obj.objects.filter will return the results in one SQL query, but you also select all model fields. The right way to do this is to use Obj.objects.values_list, which will do this with a single query, and only select the 'values' field.\nstart_date = date(2009, 8, 20)\nend_date = date(2009, 8, 23)\n\nobjects = Obj.objects.filter(date__range=(start_date,end_date))\n# values_list and 'value' aren't related. 'value' should be whatever field you're querying\nval_list = objects.values_list('value',flat=True)\n# val_list = [10, 15, 8]\n\nTo do a running aggregate of val_list, you can do this (not certain that this is the most pythonic way)\nfor i in xrange(len(val_list)):\n if i > 0:\n val_list[i] = val_list[i] + val_list[i-1]\n\n# val_list = [10,25,33]\n\nEDIT: If you need to account for missing days, @Glenn Maynard's answer is actually pretty good, although I prefer the dict() syntax:\nobjects = Obj.objects.filter(date__range=(start_date,end_date)).values('date','value')\nval_dict = dict((obj['date'],obj['value']) for obj in objects)\n# I'm stealing datetimeRange from @Glenn Maynard\nval_list = [val_dict.get(day, 0) for day in datetimeRange(start_date, end_date)]\n# val_list = [10,15,0,8]\n\n"
] |
[
4,
0,
0
] |
[] |
[] |
[
"django",
"django_orm",
"python"
] |
stackoverflow_0001371280_django_django_orm_python.txt
|
Q:
Python Changing module variables in another module
Say I am importing the module 'foo' into the module 'bar'.
Is it possible for me to change a global variable in foo inside bar?
Let the global variable in foo be 'arbit'.
Change arbit so that if bar were to call a function of foo that uses this variable, the updated variable is used rather than the one before that.
A:
You should be able to do:
import foo
foo.arbit = 'new value'
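For instance, a quick sketch (module names taken from the question) showing that a function in foo picks up the rebound global at call time:
# foo.py
arbit = 'old value'

def show():
    print arbit          # looks up the module-level name at call time

# bar.py
import foo

foo.show()               # prints: old value
foo.arbit = 'new value'  # rebinds the global in foo's namespace
foo.show()               # prints: new value

Note that this would not work with from foo import arbit, which copies the reference into the importing module's own namespace.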
|
Python Changing module variables in another module
|
Say I am importing the module 'foo' into the module 'bar'.
Is it possible for me to change a global variable in foo inside bar?
Let the global variable in foo be 'arbit'.
Change arbit so that if bar were to call a function of foo that uses this variable, the updated variable is used rather than the one before that.
|
[
"You should be able to do:\nimport foo\nfoo.arbit = 'new value'\n\n"
] |
[
9
] |
[] |
[] |
[
"module",
"python"
] |
stackoverflow_0001372486_module_python.txt
|
Q:
What non web-oriented Python frameworks exist?
I'm looking for a good framework on which to base my applications development.
In PHP I use Symfony, in ActionScript PureMVC, they are all MVC frameworks.
I'm looking for a Python framework oriented towards general-purpose application development, not web applications. I mean just applications, services, daemons and so on.
Sometimes I don't have a real view to implement, just an RPC service. Other times I have to code against a serial port, implement a command scheduler, or whatever.
What is the best open source software I can think of as a standard base for my needs? Why do you think your suggestion will fulfill my requirements over its competitors?
EDIT:
For "general purpose" I mean not being strongly bounded to be with or without a GUI, being a daemon or a command-line application, being multiprocess/multithread or not. Being general, giving a good architecture structure, not being a particular tool.
EDIT 2:
I want to clarify that the question is about the possible existence of one or more "frameworks" that are not bound to any particular use case, but that provide a good, well-standardized starting structure/architecture with some best practices applied: a guideline able to guide the architectural planning of the application itself, not its behavior regarding the tasks it performs.
I think this question is not so subjective; it may just be poorly expressed because of my English, but I believe it is legitimate.
A:
For network services needing to handle numerous connections asynchronously, a great many people favor Twisted.
Outside of that (and web applications), however, there's simply less need for overarching frameworks in Python than with many other languages -- the core language itself is expressive, powerful, and comes with batteries included; why add anything?
A:
Check out the Zope Component Architecture. It's an architecture to use and reuse components. It's mostly used in web applications because it's used in Zope (as the name implies) but it is in no way web specific.
I wrote a quick intro to it:
http://regebro.wordpress.com/2007/11/16/a-python-component-architecture/
Here is an online book about it: http://www.muthukadan.net/docs/zca.html
And here is a non-online book: http://www.amazon.com/dp/354076447X
A:
I would guess what you're looking for might be the Enthought Tool Suite (ETS), particularly Envisage (extensible plug-in architecture for scientific applications).
A:
"not being bounded to be with or without a GUI" doesn't make a lot of sense.
GUI's -- generally -- are quite complex and require a framework. Folks use tkinter, pyQT, pyGTK, wxWidgets, etc. to build GUI's.
"daemon or a cmd line app" does not require a framework of any kind. This is already part of the standard library.
"being multiprocess/multithread or not" is already part of the standard library.
Since, "general" doesn't have much meaning, there are several answers:
For GUI development, yes, there are many frameworks. "Best" is subjective.
For non-GUI development, there are no "additional" frameworks to speak of.
For "event driven networking", there is twisted.
For "Object-Relational Mapping", there are several. "Best" is subjective.
A:
I'm having difficulty imagining what a "framework" would be that unifies "with or without a GUI, being a daemon or a cmd line app, being multiprocess/multithread or not". What do you expect such a framework to provide?
Frameworks are built to encapsulate various basic tasks - GUI, or web, or asynchronicity, or whatever - so that, as you say, users don't have to reinvent them. But you're explicitly excluding all the things that make a framework a framework, so I can't see what you're left with.
About the only thing you don't exclude is database access (ORM). If that's all you want, look at sqlalchemy.
A:
Python's core language and standard library are an amazing framework by themselves.
Only languages which are deficient in some way need a framework for efficient development of applications (example: JavaScript needs jQuery or Prototype).
The general approach with Python is:
Check the standard library; it probably has what you need.
If there's some large component that isn't in the standard library, there's probably a specific library that helps with it.
A:
Python bindings to GObject and GLib provide an application framework not bound to GUI or anything-- however, if it should be bound to a UI, GTK+ comes closer.
GLib provides functions such as an application main loop, events, signals and callbacks. GObject implements the base class for objects with connectable signal slots.
GLib also offers a lot of Filesystem abstraction, including VFS, trash handling, directory monitoring, file metadata.
The python reference is here:
http://library.gnome.org/devel/pygobject/stable/index.html
A:
I don't think what you are asking for exists. Frameworks provide a common frame for similar applications, whereas you are asking for something for all applications. Almost by definition, such a thing can't exist.
Instead, for each application type, unless you find a framework for that specific type of app, you provide the framework yourself and use libraries to provide common functionality shared across applications. Python has many good libraries that come as standard and more can be found at PyPi.
|
What non web-oriented Python frameworks exist?
|
I'm looking for a good framework on which to base my applications development.
In PHP I use Symfony, in ActionScript PureMVC, they are all MVC frameworks.
I'm looking for a Python framework oriented towards general-purpose application development, not web applications. I mean just applications, services, daemons and so on.
Sometimes I don't have a real view to implement, just an RPC service. Other times I have to code against a serial port, implement a command scheduler, or whatever.
What is the best open source software I can think of as a standard base for my needs? Why do you think your suggestion will fulfill my requirements over its competitors?
EDIT:
For "general purpose" I mean not being strongly bounded to be with or without a GUI, being a daemon or a command-line application, being multiprocess/multithread or not. Being general, giving a good architecture structure, not being a particular tool.
EDIT 2:
I want to clarify that the question is about the possible existence of one or more "frameworks" that are not bound to any particular use case, but that provide a good, well-standardized starting structure/architecture with some best practices applied: a guideline able to guide the architectural planning of the application itself, not its behavior regarding the tasks it performs.
I think this question is not so subjective; it may just be poorly expressed because of my English, but I believe it is legitimate.
|
[
"For network services needing to handle numerous connections asynchronously, a great many people favor Twisted.\nOutside of that (and web applications), however, there's simply less need for overarching frameworks in Python than with many other languages -- the core language itself is expressive, powerful, and comes with batteries included; why add anything?\n",
"Check out the Zope Component Architecture. It's an architecture to use and reuse components. It's mostly used in web applications because it's used in Zope (as the name implies) but it is in no way web specific.\nI wrote a quick intro to it:\nhttp://regebro.wordpress.com/2007/11/16/a-python-component-architecture/\nHere is an online book about it: http://www.muthukadan.net/docs/zca.html\nAnd here is a non-online book: http://www.amazon.com/dp/354076447X\n",
"I would guess what you're looking for might be the Enthought Tool Suite (ETS), particularly Envisage (extensible plug-in architecture for scientific applications).\n",
"\"not being bounded to be with or without a GUI\" doesn't make a lot of sense.\nGUI's -- generally -- are quite complex and require a framework. Folks use tkinter, pyQT, pyGTK, wxWidgets, etc. to build GUI's.\n\"daemon or a cmd line app\" does not require a framework of any kind. This is already part of the standard library.\n\"being multiprocess/multithread or not\" is already part of the standard library.\nSince, \"general\" doesn't have much meaning, there are several answers:\n\nFor GUI development, yes, there are many frameworks. \"Best\" is subjective.\nFor non-GUI development, there are no \"additional\" frameworks to speak of. \nFor \"event driven networking\", there is twisted.\nFor \"Object-Relational Mapping\", there are several. \"Best\" is subjective.\n\n",
"I'm having difficulty imagining what a \"framework\" would be that unifies \"with or without a GUI, being a daemon or a cmd line app, being multiprocess/multithread or not\". What do you expect such a framework to provide?\nFrameworks are built to encapsulate various basic tasks - GUI, or web, or asynchronicity, or whatever - so that, as you say, users don't have to reinvent them. But you're explicitly excluding all the things that make a framework a framework, so I can't see what you're left with.\nAbout the only thing you don't exclude is database access (ORM). If that's all you want, look at sqlalchemy.\n",
"Python's core language and standard library are an amazing framework by themselves.\nOnly languages which are deficient in some way need a framework for efficient development of applications (example: JavaScript needs jQuery or Prototype).\nThe general approach with Python is:\n\nCheck the standard library; it probably has what you need.\nIf there's some large component that isn't in the standard library, there's probably a specific library that help with it.\n\n",
"Python bindings to GObject and GLib provide an application framework not bound to GUI or anything-- however, if it should be bound to a UI, GTK+ comes closer.\nGLib provides functions such as an application main loop, events, signals and callbacks. GObject implements the base class for objects with connectable signal slots.\nGLib also offers a lot of Filesystem abstraction, including VFS, trash handling, directory monitoring, file metadata.\nThe python reference is here:\nhttp://library.gnome.org/devel/pygobject/stable/index.html\n",
"I don't think what you are asking for exists. Frameworks provide a common frame for similar applications, whereas you are asking for something for all applications. Almost by definition, such a thing can't exist.\nInstead, for each application type, unless you find a framework for that specific type of app, you provide the framework yourself and use libraries to provide common functionality shared across applications. Python has many good libraries that come as standard and more can be found at PyPi.\n"
] |
[
9,
6,
6,
3,
3,
1,
0,
0
] |
[] |
[] |
[
"frameworks",
"python"
] |
stackoverflow_0001368364_frameworks_python.txt
|
Q:
Aliasing a class in Python
I am writing a class to implement an algorithm. This algorithm has three levels of complexity. It makes sense to me to implement the classes like this:
class level0:
    def calc_algorithm(self):
        # level 0 algorithm
        pass
    # more level0 stuff

class level1(level0):
    def calc_algorithm(self):
        # level 1 algorithm
        pass
    # more level1 stuff

class level2(level1):
    def calc_algorithm(self):
        # level 2 algorithm
        pass
    # more level2 stuff
Where I expect that calc_algorithm will be overridden in each class. Depending on a certain commandline option I want to run either level 0, level 1 or level 2 on the data. This is how I call the algorithm:
for offset in itertools.product(*map(xrange, (dim[0] - 1, dim[1] - 1, dim[2] - 1))):
    algorithm(offset).calc_algorithm()
Where algorithm is either level0, level1 or level2.
The way I'd do it in other languages is:
for offset in itertools.product(*map(xrange, (dim[0] - 1, dim[1] - 1, dim[2] - 1))):
    if (level == 0):
        level0(offset).calc_algorithm()
    else:
        if (level == 1):
            level1(offset).calc_algorithm()
        else:
            level2(offset).calc_algorithm()
Is there a Pythonic way to alias a class to refer to another class, so that I could do:
algorithm = (level == 0) and level0 or (level == 1) and level1 or level2
and then call algorithm as above?
Just for comparison, in Specman, which is an Aspect Oriented language I could write the class like this:
struct algorithm {
    level: uint;
    // common stuff for all levels
    calc_algorithm() is empty;
    when (level == 0) {
        calc_algorithm() is only {
            // level 0 algorithm
        };
    };
    when (level == 1) {
        calc_algorithm() is only {
            // level 1 algorithm
        };
    };
    when (level == 2) {
        calc_algorithm() is only {
            // level 2 algorithm
        };
    };
};
And then once I set the level struct member, I can use the rest of the class transparently.
A:
Are you looking for something along these lines?
dispatch = {0: level0, 1: level1, 2: level2}
dispatch[level](offset).calc_algorithm()
Keys (and offset), obviously, could come from command line.
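For instance, a rough sketch of wiring that up with optparse (the option name is an assumption, and dim comes from the question's context):
import itertools
from optparse import OptionParser

parser = OptionParser()
parser.add_option('-l', '--level', type='int', default=0,
                  help='algorithm complexity level (0, 1 or 2)')
options, args = parser.parse_args()

algorithm = dispatch[options.level]   # pick the class once
for offset in itertools.product(*map(xrange, (dim[0] - 1, dim[1] - 1, dim[2] - 1))):
    algorithm(offset).calc_algorithm()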
A:
dispatch = {0: level0, 1: level1, 2: level2}
algo = dispatch[level](offset)  # "calling" a class constructs an instance.
algo.calc_algorithm()
If you like introspection more:
class_name = "level%d" % level
klass = globals()[class_name]
algo = klass(offset)
algo.calc_algorithm()
A:
The key is that — unlike in some other languages where classes are ‘special’ and you'd have to have some unusual means of ‘aliasing’ them — in Python, classes are themselves first-class objects which are referenced by perfectly normal variables.
So ‘a’ can be aliased as ‘b’ as simply as saying ‘b=a’.
Is there a Pythonic way to alias a class to refer to another class, so that I could do:
algorithm = (level == 0) and level0 or (level == 1) and level1 or level2
Well yes, that's exactly right and will already work just as you wrote it!
...although modern Python has the if/else expression so today you'd generally say:
algorithm= level0 if level==0 else level1 if level==1 else level2
but then a sequence access would probably be simpler than two conditionals:
algorithm= (level0, level1, level2)[level]
A:
Personally, I wouldn't create 3 classes, but one class with 3 different calculation methods, and dynamically change the official calculation method (the calculation interface) as needed.
For example:
class algorithm:
    def __init__(self, level=0):
        self.level = level
        self.calcFunctions = {0: self.calcLevel0, 1: self.calcLevel1, 2: self.calcLevel2}
        # initial value for calc function
        self.setLevel(level)

    def setLevel(self, newlevel):
        self.level = newlevel
        self.calc = self.calcFunctions[newlevel]

    def calcLevel0(self):
        """Level 0 calc algorithm"""
        # ...
        pass

    def calcLevel1(self):
        """Level 1 calc algorithm"""
        # ...
        pass

    def calcLevel2(self):
        """Level 2 calc algorithm"""
        # ...
        pass


# class in use:

# level (taken from command line in your case)
level = 1

alg = algorithm()
alg.setLevel(level)
alg.calc()
If you don't need to change the calc function dynamically during execution, you could also just pass level to the class constructor.
|
Aliasing a class in Python
|
I am writing a class to implement an algorithm. This algorithm has three levels of complexity. It makes sense to me to implement the classes like this:
class level0:
    def calc_algorithm(self):
        # level 0 algorithm
        pass
    # more level0 stuff

class level1(level0):
    def calc_algorithm(self):
        # level 1 algorithm
        pass
    # more level1 stuff

class level2(level1):
    def calc_algorithm(self):
        # level 2 algorithm
        pass
    # more level2 stuff
Where I expect that calc_algorithm will be overridden in each class. Depending on a certain commandline option I want to run either level 0, level 1 or level 2 on the data. This is how I call the algorithm:
for offset in itertools.product(*map(xrange, (dim[0] - 1, dim[1] - 1, dim[2] - 1))):
    algorithm(offset).calc_algorithm()
Where algorithm is either level0, level1 or level2.
The way I'd do it in other languages is:
for offset in itertools.product(*map(xrange, (dim[0] - 1, dim[1] - 1, dim[2] - 1))):
    if (level == 0):
        level0(offset).calc_algorithm()
    else:
        if (level == 1):
            level1(offset).calc_algorithm()
        else:
            level2(offset).calc_algorithm()
Is there a Pythonic way to alias a class to refer to another class, so that I could do:
algorithm = (level == 0) and level0 or (level == 1) and level1 or level2
and then call algorithm as above?
Just for comparison, in Specman, which is an Aspect Oriented language I could write the class like this:
struct algorithm {
    level: uint;
    // common stuff for all levels
    calc_algorithm() is empty;
    when (level == 0) {
        calc_algorithm() is only {
            // level 0 algorithm
        };
    };
    when (level == 1) {
        calc_algorithm() is only {
            // level 1 algorithm
        };
    };
    when (level == 2) {
        calc_algorithm() is only {
            // level 2 algorithm
        };
    };
};
And then once I set the level struct member, I can use the rest of the class transparently.
|
[
"Are you looking for something along these lines?\ndispatch = {0: level0, 1: level1, 2:level2}\ndispatch[offset].calc_algorithm\n\nKeys (and offset), obviously, could come from command line.\n",
"dispatch = {0:level0, 1:level1, 2:level2}\nalgo = dispatch[offset]() # \"calling\" a class constructs an instance.\nalgo.calc_algorithm()\n\nIf you like introspection more:\nclass_name = \"level%d\" % offset\nklass = globals()[class_name]\nalgo = klass()\nalgo.calc_algorithm()\n\n",
"The key is that — unlike in some other languages where classes are ‘special’ and you'd have to have some unusual means of ‘aliasing’ them — in Python, classes are themselves first-class objects which are referenced by perfectly normal variables.\nSo ‘a’ can be aliased as ‘b’ as simply as saying ‘b=a’.\n\nIs there a Pythonic way to alias a class to refer to another class, so that I could do:\nalgorithm = (level == 0) and level0 or (level == 1) and level1 or level2\n\nWell yes, that's exactly right and will already work just as you wrote it!\n...although modern Python has the if/else expression so today you'd generally say:\nalgorithm= level0 if level==0 else level1 if level==1 else level2\n\nbut then a sequence access would probably be simpler than two conditionals:\nalgorithm= (level0, level1, level2)[level]\n\n",
"Personally, I wouldn't create 3 classes, but one class with 3 different calculation methods, and dinamically change the official calculation method (the calculation interface) as needed.\nFor example:\nclass algorithm:\n def __init__(self, level = 0):\n\n self.level = level\n self.calcFunctions = {0: self.calcLevel0, 1: self.calcLevel1, 2:self.calcLevel2}\n #initial value for calc function\n self.setLevel(level)\n\n def setLevel(self, newlevel):\n self.level = newlevel\n self.calc = self.calcFunctions[level] \n\n def calcLevel0(self):\n \"\"\"Level 0 calc algorithm\"\"\"\n #...\n pass\n\n def calcLevel1(self):\n \"\"\"Level 1 calc algorithm\"\"\"\n #...\n pass\n\n def calcLevel2(self):\n \"\"\"Level 2 calc algorithm\"\"\"\n #...\n pass\n\n\n#class in use:\n\n#level (taken from command line in your case)\nlevel = 1\n\nalg = algorithm()\nalg.setLevel(level)\nalg.calc()\n\nIf you don't need to change the calcFunction dinamically during execution, you could also pass level to class constructor\n"
] |
[
11,
4,
3,
2
] |
[] |
[] |
[
"class",
"python"
] |
stackoverflow_0001369534_class_python.txt
|
Q:
Algorithm to filter out phrases contained in other phrases
Given a set of phrases, I would like to filter out every phrase that contains any of the other phrases. Contained here means that if a phrase contains all the words of another phrase, it should be filtered out. Order of the words within the phrase does not matter.
What I have so far is this:
Sort the set by the number of words in each phrase.
For each phrase X in the set:
    For each phrase Y in the rest of the set:
        If all the words in X are in Y then X is contained in Y, discard Y.
This is slow given a list of about 10k phrases.
Any better options?
A:
You could build an index which maps words to phrases and do something like:
let matched = set of all phrases
for each word in the searched phrase
    let wordMatch = all phrases containing the current word
    let matched = intersection of matched and wordMatch
After this, matched would contain all phrases matching all words in the target phrase. It could be pretty well optimized by initializing matched to the set of all phrases containing only words[0], and then only iterating over words[1..words.length]. Filtering phrases which are too short to match the target phrase may improve performance, too.
Unless I'm mistaken, a simple implementation has a worst case complexity (when the search phrase matches all phrases) of O(n·m), where n is the number of words in the search phrase, and m is the number of phrases.
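A direct Python translation of that pseudocode might look like this (function and variable names are illustrative):
from collections import defaultdict

def build_index(phrases):
    # map each word to the set of phrases containing it
    index = defaultdict(set)
    for phrase in phrases:
        for word in phrase.split():
            index[word].add(phrase)
    return index

def phrases_containing_all(index, phrase):
    words = phrase.split()
    # start from the phrases matching the first word, then intersect
    matched = set(index.get(words[0], ()))
    for word in words[1:]:
        matched &= index.get(word, set())
        if not matched:
            break    # early exit: nothing can match any more
    return matched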
A:
Your algo is quadratic in the number of phrases; that's probably what's slowing it down. Here I index phrases by words to get below quadratic in the common case.
# build index
foreach phrase: foreach word: phrases[word] += phrase
# use index to filter out phrases that contain all the words
# from another phrase
foreach phrase:
    foreach word:
        if first word:
            siblings = phrases[word]
        else:
            siblings = siblings intersection phrases[word]
    # siblings now contains any phrase that has at least all our words
    remove each sibling from the output set of phrases

# done!
A:
This is the problem of finding minimal values of a set of sets. The naive algorithm and problem definition looks like this:
set(s for s in sets if not any(other < s for other in sets))
There are sub-quadratic algorithms to do this (such as this), but given that N is 10000 the efficiency of the implementation probably matters more. The optimal approach depends heavily on the distribution of the input data. Given that the input sets are natural language phrases that mostly differ, the approach suggested by redtuna should work well. Here's a python implementation of that algorithm.
from collections import defaultdict

def find_minimal_phrases(phrases):
    # Make the phrases hashable
    phrases = map(frozenset, phrases)

    # Create a map to find all phrases containing a word
    phrases_containing = defaultdict(set)
    for phrase in phrases:
        for word in phrase:
            phrases_containing[word].add(phrase)

    minimal_phrases = []
    found_superphrases = set()
    # in sorted by length order to find minimal sets first thanks to the
    # fact that a.superset(b) implies len(a) > len(b)
    for phrase in sorted(phrases, key=len):
        if phrase not in found_superphrases:
            connected_phrases = [phrases_containing[word] for word in phrase]
            connected_phrases.sort(key=len)
            superphrases = reduce(set.intersection, connected_phrases)
            found_superphrases.update(superphrases)
            minimal_phrases.append(phrase)
    return minimal_phrases
This is still quadratic, but on my machine it runs in 350ms for a set of 10k phrases containing 50% of minimal values with words from an exponential distribution.
A:
Sort the words within each phrase into a canonical form, i.e. 'Z A' -> 'A Z'; then, going from the shortest phrases to the longer ones, it is easy to eliminate the phrases that contain an already-kept phrase.
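The normalization step itself is a one-liner (a small sketch; containment tests on the normalized forms still have to be done explicitly):
normalize = lambda phrase: tuple(sorted(phrase.split()))

print normalize('Z A')                                   # ('A', 'Z')
print set(normalize('A Z')) <= set(normalize('B A Z'))   # True: contained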
|
Algorithm to filter out phrases contained in other phrases
|
Given a set of phrases, I would like to filter out every phrase that contains any of the other phrases. Contained here means that if a phrase contains all the words of another phrase, it should be filtered out. Order of the words within the phrase does not matter.
What I have so far is this:
Sort the set by the number of words in each phrase.
For each phrase X in the set:
    For each phrase Y in the rest of the set:
        If all the words in X are in Y then X is contained in Y, discard Y.
This is slow given a list of about 10k phrases.
Any better options?
|
[
"You could build an index which maps words to phrases and do something like:\n\nlet matched = set of all phrases\nfor each word in the searched phrase\n let wordMatch = all phrases containing the current word\n let matched = intersection of matched and wordMatch\n\nAfter this, matched would contain all phrases matching all words in the target phrase. It could be pretty well optimized by initializing matched to the set of all phrases containing only words[0], and then only iterating over words[1..words.length]. Filtering phrases which are too short to match the target phrase may improve performance, too.\nUnless I'm mistaken, a simple implementation has a worst case complexity (when the search phrase matches all phrases) of O(n·m), where n is the number of words in the search phrase, and m is the number of phrases.\n",
"Your algo is quadratic in the number of phrases, that's probably what's slowing it down. Here I index phrases by words to get below quadratic in the common case.\n# build index\nforeach phrase: foreach word: phrases[word] += phrase\n\n# use index to filter out phrases that contain all the words\n# from another phrase\nforeach phrase:\n foreach word: \n if first word:\n siblings = phrases[word]\n else\n siblings = siblings intersection phrases[word]\n # siblings now contains any phrase that has at least all our words\n remove each sibling from the output set of phrases \n\n# done!\n\n",
"This is the problem of finding minimal values of a set of sets. The naive algorithm and problem definition looks like this:\nset(s for s in sets if not any(other < s for other in sets))\n\nThere are sub-quadratic algorithms to do this (such as this), but given that N is 10000 the efficiency of the implementation probably matters more. The optimal approach depends heavily on the distribution of the input data. Given that the input sets are natural language phrases that mostly differ, the approach suggested by redtuna should work well. Here's a python implementation of that algorithm. \nfrom collections import defaultdict\n\ndef find_minimal_phrases(phrases):\n # Make the phrases hashable\n phrases = map(frozenset, phrases)\n\n # Create a map to find all phrases containing a word\n phrases_containing = defaultdict(set)\n for phrase in phrases:\n for word in phrase:\n phrases_containing[word].add(phrase)\n\n minimal_phrases = []\n found_superphrases = set()\n # in sorted by length order to find minimal sets first thanks to the\n # fact that a.superset(b) implies len(a) > len(b)\n for phrase in sorted(phrases, key=len):\n if phrase not in found_superphrases:\n connected_phrases = [phrases_containing[word] for word in phrase]\n connected_phrases.sort(key=len)\n superphrases = reduce(set.intersection, connected_phrases)\n found_superphrases.update(superphrases)\n minimal_phrases.append(phrase)\n return minimal_phrases\n\nThis is still quadratic, but on my machine it runs in 350ms for a set of 10k phrases containing 50% of minimal values with words from an exponential distribution.\n",
"sort phrases by their contents, i.e., 'Z A' -> 'A Z', then eliminating phrases is easy going from shortest to longer ones.\n"
] |
[
1,
1,
1,
0
] |
[] |
[] |
[
"algorithm",
"c#",
"c++",
"java",
"python"
] |
stackoverflow_0001372531_algorithm_c#_c++_java_python.txt
|
Q:
Is there a python library/module for creating a multi ssh connection?
I've been searching for a library that can handle multiple SSH connections at once. Ruby has a Net::SSH::Multi module that allows this. However, I'd rather code this in Python; is there a similar SSH module for Python?
A:
Paramiko is Python's SSH library.
I've never tried concurrent connections with Paramiko, but this answer says it's possible, and this little script seems to make multiple connections in different threads.
The Paramiko mailing list also confirms it's possible to make multiple connections by forking -- there was a security issue regarding that, and it was patched in early 2008.
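For illustration only, a minimal sketch of running one command on several hosts with Paramiko, one thread per host (the hosts, credentials and command are placeholders):
import threading
import paramiko

def run(host, user, password, cmd):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    stdin, stdout, stderr = client.exec_command(cmd)
    print host, stdout.read()          # collect the command's output
    client.close()

threads = [threading.Thread(target=run, args=(h, "user", "secret", "uptime"))
           for h in ("host1", "host2")]
for t in threads: t.start()
for t in threads: t.join()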
|
Is there a python library/module for creating a multi ssh connection?
|
I've been searching for a library that can handle multiple SSH connections at once. Ruby has a Net::SSH::Multi module that allows this. However, I'd rather code this in Python; is there a similar SSH module for Python?
|
[
"Paramiko is Python's SSH library.\nI've never tried concurrent connections with Paramiko, but this answer says it's possible, and this little script seems to make multiple connections in different threads.\nThe Paramiko mailing list also confirms it's possible to make multiple connections by forking -- there was a security issue regarding that, and it was patched in early 2008.\n"
] |
[
2
] |
[] |
[] |
[
"python",
"ruby",
"ssh"
] |
stackoverflow_0001372657_python_ruby_ssh.txt
|
Q:
How to insert a row with autoincrement id in a multi-primary-key table?
I am writing a turbogears2 application. I have a table like this:
class Order(DeclarativeBase):
__tablename__ = 'order'
# id of order
id = Column(Integer, autoincrement=True, primary_key=True)
# buyer's id
buyer_id = Column(Integer, ForeignKey('user.user_id',
onupdate="CASCADE", ondelete="CASCADE"), primary_key=True)
I want to insert a new row into this table, but I get a "Field 'order_id' doesn't have a default value" error. It seems that I have to set the order id manually, because I have two primary keys. My question is: how can I insert a row that generates a new ID automatically?
If I generate the id manually, I run into a problem. For example:
maxId = DBSession.query(func.max(Order)).one()[0]
newOrder = Order(id=maxId + 1, buyer_id=xxx)
DBSession.add(newOrder)
Adding a new order this way seems OK; however, we get a problem if two requests run this code at almost the same time.
If requests a and b run this code in the following order:
a.maxId = DBSession.query(func.max(Order)).one()[0]
b.maxId = DBSession.query(func.max(Order)).one()[0]
b.newOrder = Order(id=maxId + 1, buyer_id=xxx)
b.DBSession.add(newOrder)
a.newOrder = Order(id=maxId + 1, buyer_id=xxx)
a.DBSession.add(newOrder)
Then request a might fail, because there is already an order with the same id in the table. I can catch the exception and try again, but I am wondering: is there a better way to do this?
Sometimes the id is not a simple integer; we might need an order id like this:
2009090133 stands for the 33rd order on 2009-09-01
In that case, autoincrement is not usable, so I have no choice but to assign the order id manually. So my other question is: is there a better way than catching the exception and retrying the insert?
A:
If you want sequential numbers per buyer for your orders then you'll have to serialize the transactions inserting to one buyer. You can do that by acquiring an exclusive lock on the buyer row:
sess.query(Buyer.id).with_lockmode('update').get(xxx)
order_id = sess.query(func.max(Order.id)+1).filter_by(buyer_id=xxx).scalar() or 1
sess.add(Order(id=order_id, buyer_id=xxx))
Using this pattern, when two transactions try to insert an order for one buyer in parallel, one of them will block on the first line until the other transaction completes or fails.
A:
You should be using a default on your column definitions
id = Column(Integer, default = sqlexpression)
Where sqlexpression can be a SQL expression. Here is the documentation. For autoincrement you should use the SQL expression coalesce((select max(order.id) from order), 0) + 1. For ease you could import sqlalchemy.sql.text so the id column could look something like
id = Column(Integer, default = text("coalesce((select max(order.id) from order), 0) + 1"))
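As for the question's catch-and-retry idea, a minimal sketch (DBSession, Order, func and xxx are as in the question; the retry bound is arbitrary):
from sqlalchemy.exc import IntegrityError

for attempt in range(5):                          # bounded retries
    max_id = DBSession.query(func.max(Order.id)).scalar() or 0
    try:
        DBSession.add(Order(id=max_id + 1, buyer_id=xxx))
        DBSession.flush()                         # force the INSERT now
        break                                     # success
    except IntegrityError:
        DBSession.rollback()                      # lost the race; try again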
|
How to insert a row with autoincrement id in a multi-primary-key table?
|
I am writing a turbogears2 application. I have a table like this:
class Order(DeclarativeBase):
__tablename__ = 'order'
# id of order
id = Column(Integer, autoincrement=True, primary_key=True)
# buyer's id
buyer_id = Column(Integer, ForeignKey('user.user_id',
onupdate="CASCADE", ondelete="CASCADE"), primary_key=True)
I want to insert a new row into this table, but I get a "Field 'order_id' doesn't have a default value" error. It seems that I have to set the order id manually, because I have two primary keys. My question is: how can I insert a row that generates a new ID automatically?
If I generate the id manually, I run into a problem. For example:
maxId = DBSession.query(func.max(Order)).one()[0]
newOrder = Order(id=maxId + 1, buyer_id=xxx)
DBSession.add(newOrder)
Adding a new order this way seems OK; however, we get a problem if two requests run this code at almost the same time.
If requests a and b run this code in the following order:
a.maxId = DBSession.query(func.max(Order)).one()[0]
b.maxId = DBSession.query(func.max(Order)).one()[0]
b.newOrder = Order(id=maxId + 1, buyer_id=xxx)
b.DBSession.add(newOrder)
a.newOrder = Order(id=maxId + 1, buyer_id=xxx)
a.DBSession.add(newOrder)
Then request a might fail, because there is already an order with the same id in the table. I can catch the exception and try again, but I am wondering: is there a better way to do this?
Sometimes the id is not a simple integer; we might need an order id like this:
2009090133 stands for the 33rd order on 2009-09-01
In that case, autoincrement is not usable, so I have no choice but to assign the order id manually. So my other question is: is there a better way than catching the exception and retrying the insert?
|
[
"If you want sequential numbers per buyer for your orders then you'll have to serialize the transactions inserting to one buyer. You can do that by acquiring exclusive lock on the buyer row:\nsess.query(Buyer.id).with_lockmode('update').get(xxx)\norder_id = sess.query(func.max(Order.id)+1).filter_by(buyer_id=xxx).scalar() or 1\nsess.add(Order(id=order_id, buyer_id=xxx))\n\nUsing this pattern when two transactions try to insert order for one buyer in parallel one of them will block on the first line until the other transaction completes or fails.\n",
"You should be using a default on your column definitions\nid = Column(Integer, default = sqlexpression)\n\nWhere sqlexpression can be a sql expression. Here is the documentation. For autoincrement you should use the sql expression coalesce(select max(order.id) from order,0) + 1. For ease you could import sqlalchemy.sql.text so the id column could look something like\nid = Column(Integer, default = text(\"coalesce(select max(order.id) from order,0) + 1\"))\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"database",
"python",
"sql",
"sqlalchemy"
] |
stackoverflow_0001372525_database_python_sql_sqlalchemy.txt
|
Q:
Reading a Django model's field options
Is it possible to read a Django model's fields' options? For example, with the model:
class MyModel(models.Model):
source_url = models.URLField(max_length=500)
...
i.e. how would I programmatically read the 'max_length' option from, say, within a view or form.
My current workaround is to define a separate class attribute:
class MyModel(models.Model):
SOURCE_URL_MAX_LENGTH=500
source_url = models.URLField(max_length=SOURCE_URL_MAX_LENGTH)
...
I can then access that from anywhere that imports models.MyModel, e.g.:
from models import MyModel
max_length = MyModel.SOURCE_URL_MAX_LENGTH
A:
Do it this way.
from models import MyModel
try:
max_length = MyModel._meta.get_field('source_url').max_length
except:
max_length = None
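If you need other options besides max_length, the same _meta API exposes every field object; a small illustrative sketch:
for field in MyModel._meta.fields:
    print field.name, getattr(field, 'max_length', None)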
|
Reading a Django model's field options
|
Is it possible to read a Django model's fields' options? For example, with the model:
class MyModel(models.Model):
source_url = models.URLField(max_length=500)
...
i.e. how would I programmatically read the 'max_length' option from, say, within a view or form.
My current workaround is to define a separate class attribute:
class MyModel(models.Model):
SOURCE_URL_MAX_LENGTH=500
source_url = models.URLField(max_length=SOURCE_URL_MAX_LENGTH)
...
I can then access that from anywhere that imports models.MyModel, e.g.:
from models import MyModel
max_length = MyModel.SOURCE_URL_MAX_LENGTH
|
[
"Do it this way.\nfrom models import MyModel\ntry:\n max_length = MyModel._meta.get_field('source_url').max_length\nexcept:\n max_length = None\n\n"
] |
[
5
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001372706_django_python.txt
|
Q:
How to convert tab separated, pipe separated to CSV file format in Python
I have a text file (.txt) which could be in tab-separated or pipe-separated format, and I need to convert it into CSV format. I am using Python 2.6. Can anyone suggest how to identify the delimiter in a text file, read the data, and then convert it into a comma-separated file?
Thanks in advance
A:
I fear that you can't identify the delimiter without knowing what it is. The problem with CSV is that, quoting ESR:
the Microsoft version of CSV is a textbook example of how not to design a textual file format.
The delimiter needs to be escaped in some way if it can appear in fields. Without knowing how the escaping is done, automatically identifying it is difficult. Escaping could be done the UNIX way, using a backslash '\', or the Microsoft way, using quotes which then must be escaped, too. This is not a trivial task.
So my suggestion is to get full documentation from whoever generates the file you want to convert. Then you can use one of the approaches suggested in the other answers or some variant.
Edit:
Python provides csv.Sniffer that can help you deduce the format of your DSV. If your input looks like this (note the quoted delimiter in the first field of the second row):
a|b|c
"a|b"|c|d
foo|"bar|baz"|qux
You can do this:
import csv
csvfile = open("csvfile.csv")
dialect = csv.Sniffer().sniff(csvfile.read(1024))
csvfile.seek(0)
reader = csv.DictReader(csvfile, dialect=dialect)
for row in reader:
print row,
# => {'a': 'a|b', 'c': 'd', 'b': 'c'} {'a': 'foo', 'c': 'qux', 'b': 'bar|baz'}
# write records using other dialect
A:
Your strategy could be the following:
parse the file with BOTH a tab-separated csv reader and a pipe-separated csv reader
calculate some statistics on resulting rows to decide which result set is the one you want to write. An idea could be counting the total number of fields in the two record sets (expecting that tab and pipe are not so common). Another one (if your data is strongly structured and you expect the same number of fields in each line) could be measuring the standard deviation of the number of fields per line and taking the record set with the smallest standard deviation.
In the following example you find the simpler statistic (total number of fields)
import csv
piperows= []
tabrows = []
#parsing | delimiter
f = open("file", "rb")
readerpipe = csv.reader(f, delimiter = "|")
for row in readerpipe:
piperows.append(row)
f.close()
#parsing TAB delimiter
f = open("file", "rb")
readertab = csv.reader(f, delimiter = "\t")
for row in readertab:
tabrows.append(row)
f.close()
#in this example, we use the total number of fields as the indicator (but it's not guaranteed to work! it depends on the nature of your data)
#count total fields
totfieldspipe = reduce (lambda x,y: x+ y, [len(f) for f in piperows])
totfieldstab = reduce (lambda x,y: x+ y, [len(f) for f in tabrows])
if totfieldspipe > totfieldstab:
yourrows = piperows
else:
yourrows = tabrows
#the var yourrows contains the rows, now just write them in any format you like
A:
Like this
from __future__ import with_statement
import csv
import re
with open( input, "r" ) as source:
with open( output, "wb" ) as destination:
writer= csv.writer( destination )
for line in source:
writer.writerow( re.split( '[\t|]', line ) )
A:
for line in open("file"):
line=line.strip()
if "|" in line:
print ','.join(line.split("|"))
else:
print ','.join(line.split("\t"))
A:
I would suggest taking some of the example code from the existing answers, or perhaps better use the csv module from python and change it to first assume tab separated, then pipe separated, and produce two output files which are comma separated. Then you visually examine both files to determine which one you want and pick that.
If you actually have lots of files, then you need to try to find a way to detect which file is which.
One of the examples has this:
if "|" in line:
This may be enough: if the first line of a file contains a pipe, then maybe the whole file is pipe separated, else assume a tab separated file.
Alternatively fix the file to contain a key field in the first line which is easily identified - or maybe the first line contains column headers which can be detected.
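Putting that first-line heuristic together with the csv module, a minimal sketch (the file names are placeholders):
import csv

src = open("input.txt", "rb")
first_line = src.readline()
delim = "|" if "|" in first_line else "\t"   # heuristic from above
src.seek(0)
dst = open("output.csv", "wb")
csv.writer(dst).writerows(csv.reader(src, delimiter=delim))
dst.close()
src.close()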
|
How to convert tab separated, pipe separated to CSV file format in Python
|
I have a text file (.txt) which could be in tab-separated or pipe-separated format, and I need to convert it into CSV format. I am using Python 2.6. Can anyone suggest how to identify the delimiter in a text file, read the data, and then convert it into a comma-separated file?
Thanks in advance
|
[
"I fear that you can't identify the delimiter without knowing what it is. The problem with CSV is, that, quoting ESR:\n\nthe Microsoft version of CSV is a textbook example of how not to design a textual file format.\n\nThe delimiter needs to be escaped in some way if it can appear in fields. Without knowing, how the escaping is done, automatically identifying it is difficult. Escaping could be done the UNIX way, using a backslash '\\', or the Microsoft way, using quotes which then must be escaped, too. This is not a trivial task.\nSo my suggestion is to get full documentation from whoever generates the file you want to convert. Then you can use one of the approaches suggested in the other answers or some variant.\nEdit:\nPython provides csv.Sniffer that can help you deduce the format of your DSV. If your input looks like this (note the quoted delimiter in the first field of the second row):\na|b|c\n\"a|b\"|c|d\nfoo|\"bar|baz\"|qux\n\nYou can do this:\nimport csv\n\ncsvfile = open(\"csvfile.csv\")\ndialect = csv.Sniffer().sniff(csvfile.read(1024))\ncsvfile.seek(0)\n\nreader = csv.DictReader(csvfile, dialect=dialect)\nfor row in reader:\n print row,\n# => {'a': 'a|b', 'c': 'd', 'b': 'c'} {'a': 'foo', 'c': 'qux', 'b': 'bar|baz'}\n# write records using other dialect\n\n",
"Your strategy could be the following:\n\nparse the file with BOTH a tab-separated csv reader and a pipe-separated csv reader\ncalculate some statistics on resulting rows to decide which resultset is the one you want to write. An idea could be counting the total number of fields in the two recordset (expecting that tab and pipe are not so common). Another one (if your data is strongly structured and you expect the same number of fields in each line) could be measuring the standard deviation of number of fields per line and take the record set with the smallest standard deviation.\n\nIn the following example you find the simpler statistic (total number of fields)\nimport csv\n\npiperows= []\ntabrows = []\n\n#parsing | delimiter\nf = open(\"file\", \"rb\")\nreaderpipe = csv.reader(f, delimiter = \"|\")\nfor row in readerpipe:\n piperows.append(row)\nf.close()\n\n#parsing TAB delimiter\nf = open(\"file\", \"rb\")\nreadertab = csv.reader(f, delimiter = \"\\t\")\nfor row in readerpipe:\n tabrows.append(row)\nf.close()\n\n#in this example, we use the total number of fields as indicator (but it's not guaranteed to work! it depends by the nature of your data)\n#count total fields\ntotfieldspipe = reduce (lambda x,y: x+ y, [len(f) for f in piperows])\ntotfieldstab = reduce (lambda x,y: x+ y, [len(f) for f in tabrows])\n\nif totfieldspipe > totfieldstab:\n yourrows = piperows\nelse:\n yourrows = tabrows\n\n\n#the var yourrows contains the rows, now just write them in any format you like\n\n",
"Like this\nfrom __future__ import with_statement \nimport csv\nimport re\nwith open( input, \"r\" ) as source:\n with open( output, \"wb\" ) as destination:\n writer= csv.writer( destination )\n for line in input:\n writer.writerow( re.split( '[\\t|]', line ) )\n\n",
"for line in open(\"file\"):\n line=line.strip()\n if \"|\" in line:\n print ','.join(line.split(\"|\"))\n else:\n print ','.join(line.split(\"\\t\"))\n\n",
"I would suggest taking some of the example code from the existing answers, or perhaps better use the csv module from python and change it to first assume tab separated, then pipe separated, and produce two output files which are comma separated. Then you visually examine both files to determine which one you want and pick that.\nIf you actually have lots of files, then you need to try to find a way to detect which file is which.\nOne of the examples has this:\nif \"|\" in line:\n\nThis may be enough: if the first line of a file contains a pipe, then maybe the whole file is pipe separated, else assume a tab separated file.\nAlternatively fix the file to contain a key field in the first line which is easily identified - or maybe the first line contains column headers which can be detected.\n"
] |
[
6,
1,
0,
0,
0
] |
[] |
[] |
[
"csv",
"python"
] |
stackoverflow_0001366775_csv_python.txt
|
Q:
Weird python behaviour on machine with ARM CPU
What could possibly cause this weird python behaviour?
Python 2.6.2 (r262:71600, May 31 2009, 03:55:41)
[GCC 3.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> .1
1251938906.2350719
>>> .1
0.23507189750671387
>>> .1
0.0
>>> .1
-1073741823.0
>>> .1
-1073741823.0
>>> .1
-1073741823.0
>>>
It gives the same output for 0.1, 0.5, 5.1, 0.0, etc. Integers are echoed back at me correctly, but anything with a decimal point gives me the crazy numbers.
This is a python binary compiled for ARM, installed via Optware on a Synology DiskStation 101j.
Has anyone seen anything like this before?
A:
Maybe it's compiled for the wrong VFP version.
Or your ARM has no VFP and needs to use software emulation instead, but the python binary tries to use hardware.
EDIT
Your DS-101j is built on the FW IXP420 BB CPU, which is an Intel XScale (armv5b) (link). It has no hardware floating-point support. And the "b" in armv5b stands for Big Endian. Some people have had build problems, because gcc generates little-endian code by default. Maybe this is the problem with your software FP lib. Check this search for more info.
A:
As zxcat said, this sounds like you're running on an ARM with no hardware-floating point and a busted soft-float library. A quick search didn't turn up what ARM variant is in the DS101j; anyone know?
|
Weird python behaviour on machine with ARM CPU
|
What could possibly cause this weird python behaviour?
Python 2.6.2 (r262:71600, May 31 2009, 03:55:41)
[GCC 3.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> .1
1251938906.2350719
>>> .1
0.23507189750671387
>>> .1
0.0
>>> .1
-1073741823.0
>>> .1
-1073741823.0
>>> .1
-1073741823.0
>>>
It gives the same output for 0.1, 0.5, 5.1, 0.0, etc. Integers are echoed back at me correctly, but anything with a decimal point gives me the crazy numbers.
This is a python binary compiled for ARM, installed via Optware on a Synology DiskStation 101j.
Has anyone seen anything like this before?
|
[
"Maybe it's compiled for the wrong VFP version.\nOr your ARM has no VFP and needs to use software emulation instead, but the python binary tries to use hardware.\n\nEDIT\nYour DS-101j build on FW IXP420 BB cpu, which is Intel XScale (armv5b) (link). It has no hardware floating-point support. And \"b\" in armv5b stands for Big Endian. Some people has build problems, because gcc generates little endian code by default. Maybe this is the problem of your software FP lib. Check this search for more info.\n",
"As zxcat said, this sounds like you're running on an ARM with no hardware-floating point and a busted soft-float library. A quick search didn't turn up what ARM variant is in the DS101j; anyone know?\n"
] |
[
8,
0
] |
[] |
[] |
[
"arm",
"floating_point",
"python"
] |
stackoverflow_0001371228_arm_floating_point_python.txt
|
Q:
Looking for the "Hello World" of ctypes unicode processing (including both Python and C code)
Can someone show me a really simple Python ctypes example involving Unicode strings including the C code?
Say, a way to take a Python Unicode string and pass it to a C function which concatenates it with itself and returns the result to Python, which prints it.
A:
This program uses ctypes to call wcsncat from Python. It concatenates a and b into a buffer that is not quite long enough for a + b + (null terminator) to demonstrate the safer n version of concatenation.
You must pass create_unicode_buffer() instead of passing a regular immutable u"unicode string" for non-const wchar_t* parameters, otherwise you will probably get a segmentation fault.
If the function you need to talk to returns UCS-2 and sizeof(wchar_t) == 4 then you will not be able to use create_unicode_buffer() because it converts between wchar_t and Python's internal Unicode representation. In that case you might be able to use some combination of result.create_string_buffer() and result.decode('UCS2') or just create an array of c_short and u''.join(unichr(c) for c in buffer). I had to do that to debug an ODBC driver.
example.py:
#!/usr/bin/env python
#-*- encoding: utf-8 -*-
import sys
from ctypes import *
example = cdll.LoadLibrary(".libs/libexample.so")
example.its_an_example.restype = c_wchar_p
example.its_an_example.argtypes = (c_wchar_p, c_wchar_p, c_uint)
buf = create_unicode_buffer(19) # writable, unlike u"example".
buf[0] = u"\u0000"
a = u"あがぃいぅ ☃ "
b = u"個人 相命理 網上聯盟"
print example.its_an_example(buf, a, len(buf) - len(buf.value) - 1)
print example.its_an_example(buf, b, len(buf) - len(buf.value) - 1)
print buf.value # you may have to .encode("utf-8") before printing
sys.stdout.write(buf.value.encode("utf-8") + "\n")
example.c:
#include <stdlib.h>
#include <wchar.h>
wchar_t *its_an_example(wchar_t *dest, const wchar_t *src, size_t n) {
return wcsncat(dest, src, n);
}
Makefile: (ensure the indentation is one tab character, not spaces):
all:
libtool --mode=compile gcc -g -O -c example.c
libtool --mode=link gcc -g -O -o libexample.la example.lo \
-rpath /usr/local/lib
|
Looking for the "Hello World" of ctypes unicode processing (including both Python and C code)
|
Can someone show me a really simple Python ctypes example involving Unicode strings including the C code?
Say, a way to take a Python Unicode string and pass it to a C function which concatenates it with itself and returns the result to Python, which prints it.
|
[
"This program uses ctypes to call wcsncat from Python. It concatenates a and b into a buffer that is not quite long enough for a + b + (null terminator) to demonstrate the safer n version of concatenation.\nYou must pass create_unicode_buffer() instead of passing a regular immutable u\"unicode string\" for non-const wchar_t* parameters, otherwise you will probably get a segmentation fault.\nIf the function you need to talk to returns UCS-2 and sizeof(wchar_t) == 4 then you will not be able to use unicode_buffer() because it converts between wchar_t to Python's internal Unicode representation. In that case you might be able to use some combination of result.create_string_buffer() and result.decode('UCS2') or just create an array of c_short and u''.join(unichr(c) for c in buffer). I had to do that to debug an ODBC driver.\nexample.py:\n#!/usr/bin/env python\n#-*- encoding: utf-8 -*-\nimport sys\nfrom ctypes import *\nexample = cdll.LoadLibrary(\".libs/libexample.so\")\nexample.its_an_example.restype = c_wchar_p\nexample.its_an_example.argtypes = (c_wchar_p, c_wchar_p, c_uint)\nbuf = create_unicode_buffer(19) # writable, unlike u\"example\".\nbuf[0] = u\"\\u0000\"\na = u\"あがぃいぅ ☃ \"\nb = u\"個人 相命理 網上聯盟\"\nprint example.its_an_example(buf, a, len(buf) - len(buf.value) - 1)\nprint example.its_an_example(buf, b, len(buf) - len(buf.value) - 1)\nprint buf.value # you may have to .encode(\"utf-8\") before printing\nsys.stdout.write(buf.value.encode(\"utf-8\") + \"\\n\")\n\nexample.c:\n#include <stdlib.h>\n#include <wchar.h>\n\nwchar_t *its_an_example(wchar_t *dest, const wchar_t *src, size_t n) {\n return wcsncat(dest, src, n);\n}\n\nMakefile: (ensure the indentation is one tab character, not spaces):\nall:\n libtool --mode=compile gcc -g -O -c example.c\n libtool --mode=link gcc -g -O -o libexample.la example.lo \\\n -rpath /usr/local/lib\n\n"
] |
[
6
] |
[
"Untested, but I think this should work.\ns = \"inputstring\"\nmydll.my_c_fcn.restype = c_char_p\nresult = mydll.my_c_fcn(s)\nprint result\n\nAs for memory management, my understanding is that your c code needs to manage the memory it creates. That is, it should not free the input string, but eventually needs to free the return string.\n",
"from ctypes import *\n\nbuffer = create_string_buffer(128)\ncdll.msvcrt.strcat(buffer, \"blah\")\nprint buffer.value\n\n\nNote: I understand that the Python code is easy, but what I'm struggling with is the C code. Does it need to free its input string? Will its output string get freed by Python on its behalf?\n\nNo, you need to manually free the buffer yourself. What people normally do is copy the python string immediately from buffer.value, and then free the buffer.\n\nCan you post the C code? – mike 2 hours ago\n\n#include <string.h>\n\nchar* mystrcat(char* buffer) {\n strcat(buffer, \"blah\");\n return buffer;\n}\n\n"
] |
[
-1,
-1
] |
[
"c",
"ctypes",
"python"
] |
stackoverflow_0000890793_c_ctypes_python.txt
|
Q:
How to get/set local variables of a function (from outside) in Python?
If I have a function (in Python 2.5.2) like:
def sample_func():
a = 78
b = range(5)
#c = a + b[2] - x
My questions are:
How to get the local variables (a,b) of the function from outside without using locals() inside the function? (kind of reflection)
Is it possible to set a local variable (say x) from outside so that the commented line works? (I know it sounds weird).
Thanks in advance.
EDIT:
Everyone is asking for a use-case. But it is a weird situation. (Don't blame me, I did not create it). Here is the scenario:
I have an encrypted python source file containing a python function.
A C extension module decrypts it and builds that function in-memory.
A main python program first calls the C extension with that encrypted file location.
Then the main program calls the function that has been built in-memory (by the C extension)
But the main program needs to know the local variables of that function (Don't ask me why, it was not me)
For some (damn) reason, the main program needs to set a variable too (weirdest of all)
A:
No. A function that isn't being run doesn't have locals; it's just a function. Asking how to modify a function's locals when it's not running is like asking how to modify a program's heap when it's not running.
You can modify constants, though, if you really want to.
def func():
a = 10
print a
co = func.func_code
modified_consts = list(co.co_consts)
for idx, val in enumerate(modified_consts):
if modified_consts[idx] == 10: modified_consts[idx] = 15
modified_consts = tuple(modified_consts)
import types
modified_code = types.CodeType(co.co_argcount, co.co_nlocals, co.co_stacksize, co.co_flags, co.co_code, modified_consts, co.co_names, co.co_varnames, co.co_filename, co.co_name, co.co_firstlineno, co.co_lnotab)
modified_func = types.FunctionType(modified_code, func.func_globals)
# 15:
modified_func()
It's a hack, because there's no way to know which constant in co.co_consts is which; this uses a sentinel value to figure it out. Depending on whether you can constrain your use cases enough, that might be enough.
A:
I'm not sure what your use-case is, but this may work better as a class. You can define the __call__ method to make a class behave like a function.
e.g.:
>>> class sample_func(object):
... def __init__(self):
... self.a = 78
... self.b = range(5)
... def __call__(self):
... print self.a, self.b, self.x
...
>>> f = sample_func()
>>> print f.a
78
>>> f.x = 3
>>> f()
78 [0, 1, 2, 3, 4] 3
(this is based on your toy example, so the code doesn't make much sense. If you give more details, we may be able to provide better advice)
A:
Not sure if this is what you mean, but as functions are objects in Python you can bind variables to a function object and access them from 'outside':
def fa():
print 'x value of fa() when entering fa(): %s' % fa.x
print 'y value of fb() when entering fa(): %s' % fb.y
fa.x += fb.y
print 'x value of fa() after calculation in fa(): %s' % fa.x
print 'y value of fb() after calculation in fa(): %s' % fb.y
fa.count +=1
def fb():
print 'y value of fb() when entering fb(): %s' % fb.y
print 'x value of fa() when entering fa(): %s' % fa.x
fb.y += fa.x
print 'y value of fb() after calculation in fb(): %s' % fb.y
print 'x value of fa() after calculation in fb(): %s' % fa.x
print 'From fb() is see fa() has been called %s times' % fa.count
fa.x,fb.y,fa.count = 1,1,1
for i in range(10):
fa()
fb()
Please excuse me if I am terribly wrong... I'm a Python and programming beginner myself...
A:
The function's locals change whenever the function is run, so there's little meaning to access them while the function isn't running.
A:
Expecting a variable in a function to be set by an outside function BEFORE that function is called is such bad design that the only real answer I can recommend is changing the design. A function that expects its internal variables to be set before it is run is useless.
So the real question you have to ask is why does that function expect x to be defined outside the function? Does the original program that the function used to belong to set a global variable that the function would have had access to? If so, then it might be as easy as suggesting to the original authors of that function that they instead allow x to be passed in as an argument. A simple change in your sample function would make the code work in both situations:
def sample_func(x_local=None):
if x_local is None:
x_local = x
a = 78
b = range(5)
c = a + b[2] - x_local
This will allow the function to accept a parameter from your main function the way you want to use it, but it will not break the other program as it will still use the globally defined x if the function is not given any arguments.
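One small piece of introspection that does work on a plain function object: the compiled code object lists the names (not the values) of its locals. A minimal sketch:
def sample_func():
    a = 78
    b = range(5)

# co_varnames gives the local variable names; values only exist during a call
print sample_func.func_code.co_varnames   # ('a', 'b')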
|
How to get/set local variables of a function (from outside) in Python?
|
If I have a function (in Python 2.5.2) like:
def sample_func():
a = 78
b = range(5)
#c = a + b[2] - x
My questions are:
How to get the local variables (a,b) of the function from outside without using locals() inside the function? (kind of reflection)
Is it possible to set a local variable (say x) from outside so that the commented line works? (I know it sounds weird).
Thanks in advance.
EDIT:
Everyone is asking for a use-case. But it is a weird situation. (Don't blame me, I did not create it). Here is the scenario:
I have an encrypted python source file containing a python function.
A C extension module decrypts it and builds that function in-memory.
A main python program first calls the C extension with that encrypted file location.
Then the main program calls the function that has been built in-memory (by the C extension)
But the main program needs to know the local variables of that function (Don't ask me why, it was not me)
For some (damn) reason, the main program needs to set a variable too (weirdest of all)
|
[
"No. A function that isn't being run doesn't have locals; it's just a function. Asking how to modify a function's locals when it's not running is like asking how to modify a program's heap when it's not running.\nYou can modify constants, though, if you really want to.\ndef func():\n a = 10\n print a\n\nco = func.func_code\nmodified_consts = list(co.co_consts)\nfor idx, val in enumerate(modified_consts):\n if modified_consts[idx] == 10: modified_consts[idx] = 15\n\nmodified_consts = tuple(modified_consts)\n\nimport types\nmodified_code = types.CodeType(co.co_argcount, co.co_nlocals, co.co_stacksize, co.co_flags, co.co_code, modified_consts, co.co_names, co.co_varnames, co.co_filename, co.co_name, co.co_firstlineno, co.co_lnotab)\nmodified_func = types.FunctionType(modified_code, func.func_globals)\n# 15:\nmodified_func()\n\nIt's a hack, because there's no way to know which constant in co.co_consts is which; this uses a sentinel value to figure it out. Depending on whether you can constrain your use cases enough, that might be enough.\n",
"I'm not sure what your use-case is, but this may work better as a class. You can define the __call__ method to make a class behave like a function.\ne.g.:\n>>> class sample_func(object):\n... def __init__(self):\n... self.a = 78\n... self.b = range(5)\n... def __call__(self):\n... print self.a, self.b, self.x\n... \n>>> f = sample_func()\n>>> print f.a\n78\n>>> f.x = 3\n>>> f()\n78 [0, 1, 2, 3, 4] 3\n\n(this is based on your toy example, so the code doesn't make much sense. If you give more details, we may be able to provide better advice)\n",
"Not sure if this is what you mean, but as functions are objects in Python you can bind variables to a function object and access them from 'outside':\ndef fa():\n print 'x value of fa() when entering fa(): %s' % fa.x\n print 'y value of fb() when entering fa(): %s' % fb.y\n fa.x += fb.y\n print 'x value of fa() after calculation in fa(): %s' % fa.x\n print 'y value of fb() after calculation in fa(): %s' % fb.y\n fa.count +=1\n\n\ndef fb():\n print 'y value of fb() when entering fb(): %s' % fb.y\n print 'x value of fa() when entering fa(): %s' % fa.x\n fb.y += fa.x\n print 'y value of fb() after calculation in fb(): %s' % fb.y\n print 'x value of fa() after calculation in fb(): %s' % fa.x\n print 'From fb() is see fa() has been called %s times' % fa.count\n\n\nfa.x,fb.y,fa.count = 1,1,1\n\nfor i in range(10):\n fa()\n fb()\n\nPlease excuse me if I am terribly wrong... I´m a Python and programming beginner myself...\n",
"The function's locals change whenever the function is run, so there's little meaning to access them while the function isn't running.\n",
"Expecting a variable in a function to be set by an outside function BEFORE that function is called is such bad design that the only real answer I can recommend is changing the design. A function that expects its internal variables to be set before it is run is useless.\nSo the real question you have to ask is why does that function expect x to be defined outside the function? Does the original program that function use to belong to set a global variable that function would have had access to? If so, then it might be as easy as suggesting to the original authors of that function that they instead allow x to be passed in as an argument. A simple change in your sample function would make the code work in both situations:\ndef sample_func(x_local=None):\n if not x_local:\n x_local = x\n a = 78\n b = range(5)\n c = a + b[2] - x_local\n\nThis will allow the function to accept a parameter from your main function the way you want to use it, but it will not break the other program as it will still use the globally defined x if the function is not given any arguments.\n"
] |
[
18,
9,
3,
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001360721_python.txt
|
Q:
save python output to log
I have a python script that runs this code:
strpath = "sudo svnadmin create /svn/repos/" + short_name
os.popen (strpath, 'w')
How can I get the output of that command stored in a variable or written to a log file in the current directory?
I know there may not be any output, but if there is, I need to know.
A:
Use the 'r' mode to open the pipe instead:
f = os.popen (strpath, 'r')
for line in f:
print line
f.close()
See the documentation for os.popen() for more information.
The subprocess module is a better way to execute external commands like this, because it allows much more control over the process execution, as well as providing access to both input and output streams simultaneously.
A:
If you don't need to write to the command, then just change the 'w' to 'r':
strpath = "sudo svnadmin create /svn/repos/" + short_name
output = os.popen (strpath, 'r')
for line in output.readlines():
# do stuff
Alternatively if you don't want to process line-by-line:
strpath = "sudo svnadmin create /svn/repos/" + short_name
output = os.popen (strpath, 'r')
outtext = output.read()
However, I'd echo Greg's suggestion of looking at the subprocess module instead.
A:
you can also do it in the shell command itself:
strpath = "sudo svnadmin create /svn/repos/" + short_name + ' > log.txt'
|
save python output to log
|
I have a python script that runs this code:
strpath = "sudo svnadmin create /svn/repos/" + short_name
os.popen (strpath, 'w')
How can I get the output of that command stored in a variable or written to a log file in the current directory?
I know there may not be any output, but if there is, I need to know.
|
[
"Use the 'r' mode to open the pipe instead:\nf = os.popen (strpath, 'r')\nfor line in f:\n print line\nf.close()\n\nSee the documentation for os.popen() for more information.\nThe subprocess module is a better way to execute external commands like this, because it allows much more control over the process execution, as well as providing access to both input and output streams simultaneously.\n",
"If you don't need to write to the command, then just change the 'w' to 'r':\nstrpath = \"sudo svnadmin create /svn/repos/\" + short_name\noutput = os.popen (strpath, 'r')\n\nfor line in output.readlines():\n # do stuff\n\nAlternatively if you don't want to process line-by-line:\nstrpath = \"sudo svnadmin create /svn/repos/\" + short_name\noutput = os.popen (strpath, 'r')\n\nouttext = output.read()\n\nHowever, I'd echo Greg's suggestion of looking at the subprocess module instead.\n",
"you can do it in your bash-part:\nstrpath = \"sudo svnadmin create /svn/repos/\" + short_name + ' > log.txt'\n\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"logging",
"python",
"variables"
] |
stackoverflow_0001375283_logging_python_variables.txt
|
Q:
What is this piece of Python code doing?
The following is a snippet of Python code I found that solves a mathematical problem. What exactly is it doing? I wasn't too sure what to Google for.
x, y = x + 3 * y, 4 * x + 1 * y
Is this a special Python syntax?
A:
x, y = x + 3 * y, 4 * x + 1 * y
is the equivalent of:
x = x + 3 * y
y = 4 * x + 1 * y
EXCEPT that it uses the original values for x and y in both calculations - because the new values for x and y aren't assigned until both calculations are complete.
The generic form is:
x,y = a,b
where a and b are expressions the values of which get assigned to x and y respectively. You can actually assign any tuple (set of comma-separated values) to any tuple of variables of the same size - for instance,
x,y,z = a,b,c
would also work, but
w,x,y,z = a,b,c
would not because the number of values in the right-hand tuple doesn't match the number of variables in the left-hand tuple.
A:
It's an assignment to a tuple, also called sequence unpacking. Probably it's clearer when you add parenthesis around the tuples:
(x, y) = (x + 3 * y, 4 * x + 1 * y)
The value x + 3 * y is assigned to x and the value 4 * x + 1 * y is assigned to y.
It is equivalent to this:
x_new = x + 3 * y
y_new = 4 * x + 1 * y
x = x_new
y = y_new
A:
I also recently saw this referred to as "simultaneous assignment", which seems to capture the spirit of several of the answers.
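The classic illustration of simultaneous assignment is swapping two variables without a temporary (a well-known idiom, shown here for completeness):
x, y = 1, 2
x, y = y, x    # both right-hand values are evaluated before either is assigned
print x, y     # prints: 2 1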
|
What is this piece of Python code doing?
|
The following is a snippet of Python code I found that solves a mathematical problem. What exactly is it doing? I wasn't too sure what to Google for.
x, y = x + 3 * y, 4 * x + 1 * y
Is this a special Python syntax?
|
[
"x, y = x + 3 * y, 4 * x + 1 * y\n\nis the equivalent of:\nx = x + 3 * y\ny = 4 * x + 1 * y\n\nEXCEPT that it uses the original values for x and y in both calculations - because the new values for x and y aren't assigned until both calculations are complete.\nThe generic form is:\nx,y = a,b\n\nwhere a and b are expressions the values of which get assigned to x and y respectively. You can actually assign any tuple (set of comma-separated values) to any tuple of variables of the same size - for instance,\nx,y,z = a,b,c\n\nwould also work, but\nw,x,y,z = a,b,c\n\nwould not because the number of values in the right-hand tuple doesn't match the number of variables in the left-hand tuple.\n",
"It's an assignment to a tuple, also called sequence unpacking. Probably it's clearer when you add parenthesis around the tuples:\n(x, y) = (x + 3 * y, 4 * x + 1 * y)\n\nThe value x + 3 * y is assigned to x and the value 4 * x + 1 * y is assigned to y.\nIt is equivalent to this:\nx_new = x + 3 * y\ny_new = 4 * x + 1 * y\nx = x_new\ny = y_new\n\n",
"I also recently saw this referred to as \"simultaneous assignment\", which seems to capture the spirit of several of the answers.\n"
] |
[
16,
12,
0
] |
[] |
[] |
[
"math",
"python",
"syntax"
] |
stackoverflow_0001370604_math_python_syntax.txt
|
Q:
How can I call the svn.client.svn_client_list2 with python SVN API SWIG bindings?
The question
How do I call svn_client_list2 C API function from python via SVN API SWIG bindings?
Problem description
I can find that function from the svn.client module, but calling it is the problem, because the callback function it uses is a typedef svn_client_list_func_t and I don't know how to use that typedef in python.
I can find a class for it at svn.client.svn_client_list_func_t, along with svn.client.svn_client_list_func_tPtr, but I can't find an example of how to use it.
Incorrect usage of svn.client.svn_client_list2
If you call the svn.client.svn_client_list2 function with a normal python function as callback parameter it gives you an error.
import svn.core, svn.client
path = svn.core.svn_path_canonicalize("/path/to/a/working_copy/")
pool = svn.core.Pool()
ctx = svn.client.svn_client_create_context(pool)
revision = svn.core.svn_opt_revision_t()
SVN_DIRENT_ALL = 0xffffffffl
def _handle_list(path, dirent, abs_path, pool):
print(path, dirent, abs_path, pool)
svn.client.svn_client_list2(path,
revision,
revision,
svn.core.svn_depth_infinity,
SVN_DIRENT_ALL,
True,
_handle_list,
ctx,
pool)
TypeError: argument number 7: a 'svn_client_list_func_t *' is expected, 'function(<function _handle_list at 0x01365270>)' is received
Incorrect usage of svn.client.svn_client_list_func_t
Trying to initialize the svn.client.svn_client_list_func_t will result to an exception.
callback_function = svn.client.svn_client_list_func_t()
RuntimeError: No constructor defined
Ideas how I can proceed?
A:
It looks like you can't really do this at the moment. When I dug a bit into the SWIG bindings code and documentation, it turns out that when you're using target-language functions as the callback, you need a typemap for it, as the SWIG documentation says:
Although SWIG does not normally allow callback functions to be written in the target language, this can be accomplished with the use of typemaps and other advanced SWIG features.
That typemap looked to be missing for Python...
|
How can I call the svn.client.svn_client_list2 with python SVN API SWIG bindings?
|
The question
How do I call svn_client_list2 C API function from python via SVN API SWIG bindings?
Problem description
I can find that function from the svn.client module, but calling it is the problem, because the callback function it uses is a typedef svn_client_list_func_t and I don't know how to use that typedef in python.
I can find a class for it at svn.client.svn_client_list_func_t, along with svn.client.svn_client_list_func_tPtr, but I can't find an example of how to use it.
Incorrect usage of svn.client.svn_client_list2
If you call the svn.client.svn_client_list2 function with a normal python function as callback parameter it gives you an error.
import svn.core, svn.client
path = svn.core.svn_path_canonicalize("/path/to/a/working_copy/")
pool = svn.core.Pool()
ctx = svn.client.svn_client_create_context(pool)
revision = svn.core.svn_opt_revision_t()
SVN_DIRENT_ALL = 0xffffffffl
def _handle_list(path, dirent, abs_path, pool):
print(path, dirent, abs_path, pool)
svn.client.svn_client_list2(path,
revision,
revision,
svn.core.svn_depth_infinity,
SVN_DIRENT_ALL,
True,
_handle_list,
ctx,
pool)
TypeError: argument number 7: a 'svn_client_list_func_t *' is expected, 'function(<function _handle_list at 0x01365270>)' is received
Incorrect usage of svn.client.svn_client_list_func_t
Trying to initialize the svn.client.svn_client_list_func_t will result to an exception.
callback_function = svn.client.svn_client_list_func_t()
RuntimeError: No constructor defined
Ideas how I can proceed?
|
[
"It looks like you can't really do this at the moment. When I dug into bit into the SWIG bindings code and documentation it says that when you're using target language functions as the callback function, you need a typemap for it as it says in the SWIG documentation:\nAlthough SWIG does not normally allow callback functions to be written in the target language, this can be accomplished with the use of typemaps and other advanced SWIG features.\nIt looked like it was missing for Python...\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001298869_python.txt
|
Q:
pywikipedia login.py socket.error: (10060, 'Operation timed out')
I'm totally new to python, so hopefully someone can help if I'm doing something obviously wrong. I'm trying to create and run a simple pywikipedia bot on vocabularies.referata.com, a semantic mediawiki site. I downloaded the pywikipedia distro and created a family file:
import config, family, urllib # REQUIRED
class Family(family.Family): # REQUIRED
def __init__(self): # REQUIRED
family.Family.__init__(self) # REQUIRED
self.name = 'explicator' # REQUIRED; replace with actual name
self.langs = { # REQUIRED
'en': 'vocabularies.referata.com', # Include one line for each wiki in family
}
I've created a user, wikibot, and run:
python generate_user_files.py
as per instructions on:
http://meta.wikimedia.org/wiki/Using_the_python_wikipediabot
When I try to run:
python login.py
I'm getting the following error:
C:\pywikipedia>python login.py
Password for user wikibot on explicator:en:
Logging in to explicator:en as wikibot
Traceback (most recent call last):
File "login.py", line 376, in <module>
main()
File "login.py", line 372, in main
loginMan.login()
File "login.py", line 261, in login
cookiedata = self.getCookie(api)
File "login.py", line 178, in getCookie
response, data = self.site.postData(address, self.site.urlEncode(predata))
File "C:\pywikipedia\wikipedia.py", line 4915, in postData
conn.endheaders()
File "C:\Python25\lib\httplib.py", line 860, in endheaders
self._send_output()
File "C:\Python25\lib\httplib.py", line 732, in _send_output
self.send(msg)
File "C:\Python25\lib\httplib.py", line 699, in send
self.connect()
File "C:\Python25\lib\httplib.py", line 683, in connect
raise socket.error, msg
socket.error: (10060, 'Operation timed out')
Is there something stupid/apparent that I need to check or am doing wrong? I'm behind a firewall; would this be the problem? (And if so, what steps do I need to take to fix it?)
thanks for any help
Stuart
A:
I'm not familiar w/ pywikipedia =p, but the problem is at least about the connection rather than Python: the socket connection fails to be established at the beginning.
Is the POST url (the address in login.py, line 178) correct? Any typo or misconfiguration?
Is the url accessible? You could try to directly visit the url in a browser and see if there is any http response. If the server is reachable, you could check whether it's listening on a certain port like 80, by netstat -ant, netstat -anptcp or similar. On Windows, a firewall with default settings may block communication; you could see whether there is any warning dialog waiting for confirmation, or check the firewall log. Also, you need to have Administrator privileges to use port 80.
A:
Works for me, sorry. I just created an account, and used your family file. It seems to be on your side.
$ python login.py -v -v -family:explicator -lang:en
Pywikipediabot [http] trunk/pywikipedia (r6858, May 08 2009, 15:23:29)
Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41)
[GCC 4.3.3]
WARNING: Using -v -v on login.py might leak private data. When sharing, please double check your password is not readable and log out your bots session.
Password for user NicDumZ on explicator:en:
Logging in to explicator:en as NicDumZ
self.site.postData(/w/index.php?title=Special:Userlogin&useskin=monobook&action=submit, wpSkipCookieCheck=1&wpPassword=XXXXX&wpDomain=&wpRemember=1&wpLoginattempt=Aanmelden%20%26%20Inschrijven&wpName=NicDumZ)
302/Found
Date: Thu, 03 Sep 2009 19:46:47 GMT
Server: Apache
Cache-Control: private, must-revalidate, max-age=0
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: referata_session=XXXXXXXXXXdab8c53151d27046d68473; path=/; HttpOnly
Set-Cookie: referataUserID=4; expires=Sat, 03-Oct-2009 19:46:48 GMT; path=/; httponly
Set-Cookie: referataUserName=NicDumZ; expires=Sat, 03-Oct-2009 19:46:48 GMT; path=/; httponly
Set-Cookie: referatasession=XXXXXXXXXX270504613b1d26dfef82e6; expires=Sat, 03-Oct-2009 19:46:48 GMT; path=/; httponly
Vary: Accept-Encoding,Cookie
X-Vary-Options: Accept-Encoding;list-contains=gzip,Cookie;string-contains=referataToken;string-contains=referataLoggedOut;string-contains=referata_session
Location: http://vocabularies.referata.com/wiki/Main_Page
Content-Encoding: gzip
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Should be logged in now
Can you try the same, with -v -v options so that I can help you debug that issue? Please comment back so I can get your updates.
|
pywikipedia login.py socket.error: (10060, 'Operation timed out')
|
I'm totally new to python, so hopefully someone can help if I'm doing something obviously wrong. I'm trying to create and run a simple pywikipedia bot on vocabularies.referata.com, a semantic mediawiki site. I downloaded the pywikipedia distro and created a family file:
import config, family, urllib # REQUIRED
class Family(family.Family): # REQUIRED
def __init__(self): # REQUIRED
family.Family.__init__(self) # REQUIRED
self.name = 'explicator' # REQUIRED; replace with actual name
self.langs = { # REQUIRED
'en': 'vocabularies.referata.com', # Include one line for each wiki in family
}
I've created a user, wikibot, and run:
python generate_user_files.py
as per instructions on:
http://meta.wikimedia.org/wiki/Using_the_python_wikipediabot
When I try to run:
python login.py
I'm getting the following error:
C:\pywikipedia>python login.py
Password for user wikibot on explicator:en:
Logging in to explicator:en as wikibot
Traceback (most recent call last):
File "login.py", line 376, in <module>
main()
File "login.py", line 372, in main
loginMan.login()
File "login.py", line 261, in login
cookiedata = self.getCookie(api)
File "login.py", line 178, in getCookie
response, data = self.site.postData(address, self.site.urlEncode(predata))
File "C:\pywikipedia\wikipedia.py", line 4915, in postData
conn.endheaders()
File "C:\Python25\lib\httplib.py", line 860, in endheaders
self._send_output()
File "C:\Python25\lib\httplib.py", line 732, in _send_output
self.send(msg)
File "C:\Python25\lib\httplib.py", line 699, in send
self.connect()
File "C:\Python25\lib\httplib.py", line 683, in connect
raise socket.error, msg
socket.error: (10060, 'Operation timed out')
Is there something stupid/apparent that I need to check or am doing wrong? I'm behind a firewall; would this be the problem? (And if so, what steps do I need to take to fix it?)
thanks for any help
Stuart
|
[
"I'm not familiar w/ pywikipedia =p, but the problem is at least about connection rather than python: the socket connection fails to be established at the beginning.\n\nIs the post url, address in the login.py L178 , correct? Any typo or misconfiguration? \nIs the url accessible? You could try to directly visit the url in browser and see if there is any http response. If the server is reachable, you could check whether it's listening on certain port like 80, by netstat -ant, netstat -anptcp or similar. On windows, a firewall w/ default settings may block communication, you could see whether there is any warning dialog waiting for confirm, or check the fireware log. Also, you need to have Administrator privilege to use port 80.\n\n",
"Works for me, sorry. I just created an account, and used your family file. It seems to be on your side.\n$ python login.py -v -v -family:explicator -lang:en\nPywikipediabot [http] trunk/pywikipedia (r6858, May 08 2009, 15:23:29)\nPython 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) \n[GCC 4.3.3]\nWARNING: Using -v -v on login.py might leak private data. When sharing, please double check your password is not readable and log out your bots session.\nPassword for user NicDumZ on explicator:en: \nLogging in to explicator:en as NicDumZ\nself.site.postData(/w/index.php?title=Special:Userlogin&useskin=monobook&action=submit, wpSkipCookieCheck=1&wpPassword=XXXXX&wpDomain=&wpRemember=1&wpLoginattempt=Aanmelden%20%26%20Inschrijven&wpName=NicDumZ)\n302/Found\nDate: Thu, 03 Sep 2009 19:46:47 GMT\nServer: Apache\nCache-Control: private, must-revalidate, max-age=0\nExpires: Thu, 01 Jan 1970 00:00:00 GMT\nSet-Cookie: referata_session=XXXXXXXXXXdab8c53151d27046d68473; path=/; HttpOnly\nSet-Cookie: referataUserID=4; expires=Sat, 03-Oct-2009 19:46:48 GMT; path=/; httponly\nSet-Cookie: referataUserName=NicDumZ; expires=Sat, 03-Oct-2009 19:46:48 GMT; path=/; httponly\nSet-Cookie: referatasession=XXXXXXXXXX270504613b1d26dfef82e6; expires=Sat, 03-Oct-2009 19:46:48 GMT; path=/; httponly\nVary: Accept-Encoding,Cookie\nX-Vary-Options: Accept-Encoding;list-contains=gzip,Cookie;string-contains=referataToken;string-contains=referataLoggedOut;string-contains=referata_session\nLocation: http://vocabularies.referata.com/wiki/Main_Page\nContent-Encoding: gzip\nTransfer-Encoding: chunked\nContent-Type: text/html; charset=utf-8\n\n\nShould be logged in now\n\nCan you try the same, with -v -v options so that I can help you debug that issue? Please comment back so I can get your updates.\n"
] |
[
0,
0
] |
[] |
[] |
[
"mediawiki",
"python",
"pywikibot"
] |
stackoverflow_0001368266_mediawiki_python_pywikibot.txt
|
Q:
XML parsing in Python
I'd like to parse a simple, small XML file using Python; however, work on PyXML seems to have ceased. I'd like to use Python 2.6 if possible. Can anyone recommend an XML parser that will work with 2.6?
Thanks
A:
If it's small and simple then just use the standard library:
from xml.dom.minidom import parse
doc = parse("filename.xml")
This will return a DOM tree implementing the standard Document Object Model API
If you later need to do complex things like schema validation or XPath querying then I recommend the third-party lxml module, which is a wrapper around the popular libxml2 C library.
A:
For most of my tasks I have used the Minidom Lightweight DOM implementation, from the official page:
from xml.dom.minidom import parse, parseString
dom1 = parse('c:\\temp\\mydata.xml') # parse an XML file by name
datasource = open('c:\\temp\\mydata.xml')
dom2 = parse(datasource) # parse an open file
dom3 = parseString('<myxml>Some data<empty/> some more data</myxml>')
A:
Here is also a very good example on how to use minidom along with explanations.
A:
Would lxml suit your needs? Its the first tool I turn to for xml parsing.
A:
A few years ago, I wrote a library for working with structured XML. It makes XML simpler by making some limiting assumptions.
You could use XML for something like a word processor document, in which case you have a complicated soup of stuff with XML tags embedded all over the place; in which case my library would not be good.
But if you are using XML for something like a config file, my library is rather convenient. You define classes that describe the structure of the XML you want, and once you have the classes done, there is a method to slurp in XML and parse it. The actual parsing is done by xml.dom.minidom, but then my library extracts the data and puts it in the classes.
The best part: you can declare a "Collection" type that will be a Python list with zero or more other XML elements inside it. This is great for things like Atom or RSS feeds (which was the original reason I designed the library).
Here's the URL: http://home.avvanta.com/~steveha/xe.html
I'd be happy to answer questions if you have any.
|
XML parsing in Python
|
I'd like to parse a simple, small XML file using Python; however, work on PyXML seems to have ceased. I'd like to use Python 2.6 if possible. Can anyone recommend an XML parser that will work with 2.6?
Thanks
|
[
"If it's small and simple then just use the standard library:\nfrom xml.dom.minidom import parse\ndoc = parse(\"filename.xml\")\n\nThis will return a DOM tree implementing the standard Document Object Model API\nIf you later need to do complex things like schema validation or XPath querying then I recommend the third-party lxml module, which is a wrapper around the popular libxml2 C library.\n",
"For most of my tasks I have used the Minidom Lightweight DOM implementation, from the official page:\nfrom xml.dom.minidom import parse, parseString\n\ndom1 = parse('c:\\\\temp\\\\mydata.xml') # parse an XML file by name\n\ndatasource = open('c:\\\\temp\\\\mydata.xml')\ndom2 = parse(datasource) # parse an open file\n\ndom3 = parseString('<myxml>Some data<empty/> some more data</myxml>')\n\n",
"Here is also a very good example on how to use minidom along with explanations.\n",
"Would lxml suit your needs? Its the first tool I turn to for xml parsing.\n",
"A few years ago, I wrote a library for working with structured XML. It makes XML simpler by making some limiting assumptions.\nYou could use XML for something like a word processor document, in which case you have a complicated soup of stuff with XML tags embedded all over the place; in which case my library would not be good.\nBut if you are using XML for something like a config file, my library is rather convenient. You define classes that describe the structure of the XML you want, and once you have the classes done, there is a method to slurp in XML and parse it. The actual parsing is done by xml.dom.minidom, but then my library extracts the data and puts it in the classes.\nThe best part: you can declare a \"Collection\" type that will be a Python list with zero or more other XML elements inside it. This is great for things like Atom or RSS feeds (which was the original reason I designed the library).\nHere's the URL: http://home.avvanta.com/~steveha/xe.html\nI'd be happy to answer questions if you have any.\n"
] |
[
19,
6,
5,
3,
1
] |
[] |
[] |
[
"parsing",
"python",
"python_2.6",
"xml"
] |
stackoverflow_0001373707_parsing_python_python_2.6_xml.txt
|
Q:
Error importing a python module in Django
In my Django project, the following line throws an ImportError: "No module named elementtree".
from elementtree import ElementTree
However, the module is installed (i.e., I can run an interactive Python shell and type that exact line without any ImportError), and the directory containing the module is on the PYTHONPATH. But when I access any page in a browser, it somehow can't find the module and throws the ImportError. What could be causing this?
A:
Can you import elementtree within the django shell:
python manage.py shell
Assuming you have multiple Python versions and do not know which one is being used to run your site, add the following to your view and push python_ver to your template; it will show you the Python version you are using:
import sys
python_ver = sys.version
You can also explicitly add the path to elementtree programmatically in your settings.py:
import sys
sys.path.append('path to where elementtree resides')
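A minimal sketch of such a diagnostic view (hedged: the view name and URL wiring are hypothetical; Django of this era takes mimetype rather than content_type):
import sys
from django.http import HttpResponse

def debug_env(request):
    # Show which interpreter and import path the web server process uses,
    # so you can compare them against your interactive shell.
    body = sys.version + '\n\n' + '\n'.join(sys.path)
    return HttpResponse(body, mimetype='text/plain')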
A:
I've also run into cross-platform issues where ElementTree was available from different modules on different systems... this ended up working for me:
try:
    import elementtree.ElementTree as ET
except:
    import xml.etree.ElementTree as ET
May or may not help for you...
A:
Go into your installation directory
Example:
C:\Python26\Lib\site-packages
And check if both elementtree and django are in there.
If they are not both there, then you probably have multiple installation directories for different versions of Python.
In any case, you can solve your problem by running this command:
python setup.py install
Run it twice, once inside the download for django and once inside the download for elementtree. It will install both of the downloads into whatever your current default python is.
References:
Installation documentation for ElementTree
Installation documentation for Django
|
Error importing a python module in Django
|
In my Django project, the following line throws an ImportError: "No module named elementtree".
from elementtree import ElementTree
However, the module is installed (i.e., I can run an interactive Python shell and type that exact line without any ImportError), and the directory containing the module is on the PYTHONPATH. But when I access any page in a browser, it somehow can't find the module and throws the ImportError. What could be causing this?
|
[
"Can you import elementtree within the django shell:\npython manage.py shell\n\nAssuming you have multiple python versions and do not know which one is being used to run your site, add the following to your view and push python_ver to your template, it will show you the Python version you are using:\nimport sys\npython_ver = sys.version\n\nYou can also explicitly add the path to elementtree programatically in your settings.py:\nimport sys\nsys.path.append('path to where elementtree resides')\n\n",
"I've also run into cross-platform issues where ElementTree was available from different modules on different systems... this ended up working for me:\ntry:\n import elementtree.ElementTree as ET\nexcept:\n import xml.etree.ElementTree as ET\n\nMay or may not help for you...\n",
"Go into your installation directory\nExample: \n\nC:\\Python26\\Lib\\site-packages\n\nAnd check if both elementtree and django are in there. \nIf they are both not there, then you probably have multiple installation directories for different versions of Python. \n\nIn any case, you can solve your problem by running this command:\n\npython setup.py install\n\nRun it twice, once inside the download for django and once inside the download for elementtree. It will install both of the downloads into whatever your current default python is. \nReferences: \n\nInstallation documentation for ElementTree\nInstallation documentation for Django\n\n"
] |
[
7,
1,
0
] |
[] |
[] |
[
"django",
"elementtree",
"python",
"python_module"
] |
stackoverflow_0001375382_django_elementtree_python_python_module.txt
|
Q:
Problem with polling sockets in python
After I begin the polling loop, all messages printed after the first iteration require me to press enter in the terminal for them to be displayed.
#!/usr/bin/python
import socket, select, os, pty, sys
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 5007))
s.listen(5)
mypoll = select.poll()
mypoll.register(s.fileno() )
while True:
    print "poll time"
    subr = mypoll.poll()
    for x in subr[0]:
        if x == s.fileno():
            conn, addr = s.accept()
            pid, fd = pty.fork()
            if pid != 0:
                mypoll.register(fd)
                print "done. go back to poll now"
            else:
                print "forked"
                #handles new connection
        else:
            data = os.read(x,1024)
            print data
A:
After the first iteration, haven't you registered the pty fd, and are then polling it? And its fd will never be equal to the socket fd, so you will then os.read the pty fd. And isn't that now reading from your terminal? And so won't typing a return cause it to "print data"?
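A sketch of the loop reworked along those lines (hedged: untested, Python 2 as in the question) -- unpack the (fd, event) pairs that poll() returns instead of indexing subr[0], and only read descriptors you actually registered:
while True:
    for fd, event in mypoll.poll():
        if fd == s.fileno():
            conn, addr = s.accept()
            pid, child_fd = pty.fork()
            if pid != 0:
                # parent: watch the child's pty master for output
                mypoll.register(child_fd)
        else:
            # a registered pty master became readable
            print os.read(fd, 1024)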
|
Problem with polling sockets in python
|
After I begin the polling loop, all messages printed after the first iteration require me to press enter in the terminal for them to be displayed.
#!/usr/bin/python
import socket, select, os, pty, sys
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 5007))
s.listen(5)
mypoll = select.poll()
mypoll.register(s.fileno() )
while True:
    print "poll time"
    subr = mypoll.poll()
    for x in subr[0]:
        if x == s.fileno():
            conn, addr = s.accept()
            pid, fd = pty.fork()
            if pid != 0:
                mypoll.register(fd)
                print "done. go back to poll now"
            else:
                print "forked"
                #handles new connection
        else:
            data = os.read(x,1024)
            print data
|
[
"After the first iteration, haven't you registered the pty fd, and are then polling it? And its fd will never be equal to the socket fd, so you will then os.read the pty fd. And isn't that now reading from your terminal? And so won't typing a return cause it to \"print data\"?\n"
] |
[
1
] |
[] |
[] |
[
"fork",
"polling",
"python",
"sockets"
] |
stackoverflow_0001375772_fork_polling_python_sockets.txt
|
Q:
Wrapping Mutually Dependent Structs in Pyrex
I am attempting to wrap some C code in Python using Pyrex. I've run into an issue with defining two structs. In this case, the structures have been defined in terms of one another, and Pyrex cannot seem to handle the conflict. The structures look something like so:
typedef struct a {
    b * b_pointer;
} a;
typedef struct b {
    a a_obj;
} b;
They are placed in different files. The code I am using to wrap the structures looks like this:
cdef extern from "file.c":
    ctypedef struct a:
        b * b_pointer
    ctypedef struct b:
        a a_obj
File.c is a separate file containing function definitions, as opposed to the structure definitions, but it includes the source files that define these structures. Is there some way I can wrap both of these structures?
A:
You can use an incomplete type (you do need the corresponding C typedefs to be in a .h file, not just a .c file):
cdef extern from "some.h":
    ctypedef struct b
    ctypedef struct a:
        b * b_pointer
    ctypedef struct b:
        a a_obj
|
Wrapping Mutually Dependent Structs in Pyrex
|
I am attempting to wrap some C code in Python using Pyrex. I've run into an issue with defining two structs. In this case, the structures have been defined in terms of one another, and Pyrex cannot seem to handle the conflict. The structures look something like so:
typedef struct a {
    b * b_pointer;
} a;
typedef struct b {
    a a_obj;
} b;
They are placed in different files. The code I am using to wrap the structures looks like this:
cdef extern from "file.c":
    ctypedef struct a:
        b * b_pointer
    ctypedef struct b:
        a a_obj
File.c is a separate file containing function definitions, as opposed to the structure definitions, but it includes the source files that define these structures. Is there some way I can wrap both of these structures?
|
[
"You can use an incomplete type (you do need the corresponding C typedefs in to be in a .h file, not just a .c file):\ncdef extern from \"some.h\":\n ctypedef struct b\n ctypedef struct a:\n b * b_pointer\n ctypedef struct b:\n a a_obj\n\n"
] |
[
3
] |
[] |
[] |
[
"c",
"python",
"struct",
"wrapper"
] |
stackoverflow_0001375293_c_python_struct_wrapper.txt
|
Q:
In python, how does one test if a string-like object is mutable?
I have a function that takes a string-like argument.
I want to decide if I can safely store the argument and be sure that it won't change. So I'd like to test if it's mutable, e.g. the result of a buffer() built from an array.array(), or not.
Currently I use:
type(s) == str
Is there a better way to do it?
(copying the argument is too costly, that's why I want to avoid it)
A:
It would be better to use
isinstance(s, basestring)
It works for Unicode strings too.
A:
If it's just a heuristic for your caching, just use whatever works. isinstance(x, str), for example, almost exactly like now. (Given you want to decide whether to cache or not; a False-bearing test just means a cache miss, you don't do anything wrong.)
(Remark: It turns out that buffer objects are hashable, even though their string representation may change under your feet; The hash discussion below is interesting, but is not the pure solution it was intended to be.)
However, well implemented classes should have instances being hashable if they are immutable and not if they are mutable. A general test would be to hash your object and test for success.
>>> hash({})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: dict objects are unhashable
This will give false positives I'm sure, but mutable objects being hashable is strictly an interface break; I would expect python library types to obey this interface, a test of a small sample gives correct answers:
hashable: str (Immutable), buffer (Warning, immutable slice of (possibly) mutable object!)
unhashable: list, array.array
A:
Just use duck typing -- remember, it's "Easier to Ask for Forgiveness Than Permission". Try to mutate the string-like object, and be prepared to catch an exception if you can't.
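A tiny sketch of that approach (hedged: Python 2, and the write probe only makes sense for non-empty sequence-like objects):
def seems_mutable(s):
    # Attempt an in-place write of an existing item; immutable and
    # read-only types (str, unicode, buffer) raise TypeError here.
    try:
        s[0] = s[0]
    except TypeError:
        return False
    except IndexError:
        return False  # empty object: this probe cannot tell
    return True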
A:
I'd just convert it to an immutable string:
>>> s1 = "possibly mutable"
>>>
>>> s2 = str(s1)
>>> s1 is s2
True
In case s1 is immutable, the same object is given back, resulting in no memory overhead. If it's mutable, a copy is made.
|
In python, how does one test if a string-like object is mutable?
|
I have a function that takes a string-like argument.
I want to decide if I can safely store the argument and be sure that it won't change. So I'd like to test if it's mutable, e.g. the result of a buffer() built from an array.array(), or not.
Currently I use:
type(s) == str
Is there a better way to do it?
(copying the argument is too costly, that's why I want to avoid it)
|
[
"It would be better to use\nisinstance(s, basestring)\n\nIt works for Unicode strings too.\n",
"If it's just a heuristic for your caching, just use whatever works. isinstance(x, str), for example, almost exactly like now. (Given you want to decide whether to cache or not; a False-bearing test just means a cache miss, you don't do anything wrong.)\n\n(Remark: It turns out that buffer objects are hashable, even though their string representation may change under your feet; The hash discussion below is interesting, but is not the pure solution it was intended to be.)\nHowever, well implemented classes should have instances being hashable if they are immutable and not if they are mutable. A general test would be to hash your object and test for success.\n>>> hash({})\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: dict objects are unhashable\n\nThis will give false positives I'm sure, but mutable objects being hashable is strictly an interface break; I would expect python library types to obey this interface, a test of a small sample gives correct answers:\nhashabe: str (Immutable), buffer (Warning, immutable slice of (possibly) mutable object!)\nunhashable: list, array.array\n\n",
"Just use duck typing -- remember, it's \"Easier to Ask for Forgiveness Than Permission\". Try to mutate the string-like object, and be prepared to catch an exception if you can't.\n",
"I'd just convert it to an immutable string:\n>>> s1 = \"possibly mutable\"\n>>> \n>>> s2 = str(s1)\n>>> s1 is s2\nTrue\n\nIn case s1 is immutable the same object is given back, resulting in no memory overhead. If it's mutable a copy is being made.\n"
] |
[
5,
4,
3,
3
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001375936_python.txt
|
Q:
Web/Screen Scraping with Google App Engine - Code works in python interpreter but not GAE
I want to do some web scraping with GAE (Infinite Campus Student Information Portal, FYI). This service requires you to log in to access the website.
I had some code that worked using mechanize in normal python. When I learned that I couldn't use mechanize in Google App Engine I ended up using urllib2 + ClientForm. I couldn't get it to login to the server, so after a few hours of fiddling with cookie handling I ran the exact same code in a normal python interpreter, and it worked. I found the log file and saw a ton of messages about stripping out the 'host' header in my request... I found the source file on Google Code and the host header was in an 'untrusted' list and removed from all requests by user code.
Apparently GAE strips out the host header, which is required by I.C. to determine which school system to log you in, which is why it appeared like I couldn't login.
How would I get around this problem? I can't specify anything else in my fake form submission to the target site. Why would this be a "security hole" in the first place?
A:
App Engine does not strip out the Host header: it forces it to be an accurate value based on the URI you are requesting. Assuming that URI's absolute, the server isn't even allowed to consider the Host header anyway, per RFC2616:
If Request-URI is an absoluteURI, the host is part of the Request-URI.
Any Host header field value in the
request MUST be ignored.
...so I suspect you're misdiagnosing the cause of your problem. Try directing the request to a "dummy" server that you control (e.g. another very simple app engine app of yours) so you can look at all the headers and body of the request as it comes from your GAE app, vs, how it comes from your "normal python interpreter". What do you observe this way?
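A minimal sketch of such a dummy endpoint (hedged: uses the webapp framework of that era; the route and handler names are made up):
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class EchoHandler(webapp.RequestHandler):
    def post(self):
        # Echo every request header back so the calling app can be inspected.
        self.response.headers['Content-Type'] = 'text/plain'
        for name, value in self.request.headers.items():
            self.response.out.write('%s: %s\n' % (name, value))

application = webapp.WSGIApplication([('/echo', EchoHandler)])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()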
|
Web/Screen Scraping with Google App Engine - Code works in python interpreter but not GAE
|
I want to do some web scraping with GAE (Infinite Campus Student Information Portal, FYI). This service requires you to log in to access the website.
I had some code that worked using mechanize in normal python. When I learned that I couldn't use mechanize in Google App Engine I ended up using urllib2 + ClientForm. I couldn't get it to login to the server, so after a few hours of fiddling with cookie handling I ran the exact same code in a normal python interpreter, and it worked. I found the log file and saw a ton of messages about stripping out the 'host' header in my request... I found the source file on Google Code and the host header was in an 'untrusted' list and removed from all requests by user code.
Apparently GAE strips out the host header, which is required by I.C. to determine which school system to log you in, which is why it appeared like I couldn't login.
How would I get around this problem? I can't specify anything else in my fake form submission to the target site. Why would this be a "security hole" in the first place?
|
[
"App Engine does not strip out the Host header: it forces it to be an accurate value based on the URI you are requesting. Assuming that URI's absolute, the server isn't even allowed to consider the Host header anyway, per RFC2616:\n\n\nIf Request-URI is an absoluteURI, the host is part of the Request-URI.\n Any Host header field value in the\n request MUST be ignored.\n\n\n...so I suspect you're misdiagnosing the cause of your problem. Try directing the request to a \"dummy\" server that you control (e.g. another very simple app engine app of yours) so you can look at all the headers and body of the request as it comes from your GAE app, vs, how it comes from your \"normal python interpreter\". What do you observe this way?\n"
] |
[
2
] |
[] |
[] |
[
"google_app_engine",
"python",
"screen_scraping"
] |
stackoverflow_0001376377_google_app_engine_python_screen_scraping.txt
|
Q:
Django: remove GROUP BY added with extra() method?
Hi (excuse me for my bad English)!
When I make this:
gallery_qs = Gallery.objects.all()\
    .annotate(Count('photos'))\
    .extra(select={'photo_id': 'photologue_photo.id'})
The SQL query is:
SELECT (photologue_photo.id) AS `photo`, `photologue_gallery`.*
FROM `photologue_gallery`
LEFT OUTER JOIN `photologue_gallery_photos`
ON (`photologue_gallery`.`id` = `photologue_gallery_photos`.`gallery_id`)
LEFT OUTER JOIN `photologue_photo`
ON (`photologue_gallery_photos`.`photo_id` = `photologue_photo`.`id`)
GROUP BY `photologue_gallery`.`id`, photologue_photo.id
ORDER BY `photologue_gallery`.`publication_date` DESC
The problem is that the extra method automatically adds photologue_photo.id to the GROUP BY clause. And I need to remove it, because it duplicates galleries, for example:
[<Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>]
So I need to make this query with Django; is it possible?
SELECT (photologue_photo.id) AS `photo`, `photologue_gallery`.*
FROM `photologue_gallery`
LEFT OUTER JOIN `photologue_gallery_photos`
ON (`photologue_gallery`.`id` = `photologue_gallery_photos`.`gallery_id`)
LEFT OUTER JOIN `photologue_photo`
ON (`photologue_gallery_photos`.`photo_id` = `photologue_photo`.`id`)
GROUP BY `photologue_gallery`
ORDER BY `photologue_gallery`.`publication_date` DESC
Thank you ! :)
A:
I don't think you really need the extra. From Django's point of view, you don't need to cherry-pick specific columns while running a Django QuerySet. That logic can be done on the template side.
I assume you know how to push gallery_qs to your template from your view:
# views.py
gallery_qs = Gallery.objects.all()\
    .annotate(Count('photos'))
In your template/html:
{% for gallery in gallery_qs %}
    {% for photo in gallery.photos.all %}

    {% endfor %}
{% endfor %}
photos is your ManyToManyField in your gallery model.
A:
Why are you trying to get distinct gallery records with a photo_id annotated on them when gallery-to-photo is a many-to-many relationship? From what I can tell, the query you are doing would only get a single photo id for each gallery.
If you really do need to do the above, I think you could use distinct() to get distinct gallery records (btw, you don't need the "all()" in there).
Gallery.objects.distinct()\
    .annotate(Count('photos'))\
    .extra(select={'photo_id': 'photologue_photo.id'})
Or you could just simply access the photo id directly:
g = Gallery.objects.annotate(Count('photos'))
# Get the photo
photo = g.photos[0]
|
Django: remove GROUP BY added with extra() method?
|
Hi (excuse me for my bad English)!
When I make this:
gallery_qs = Gallery.objects.all()\
    .annotate(Count('photos'))\
    .extra(select={'photo_id': 'photologue_photo.id'})
The SQL query is:
SELECT (photologue_photo.id) AS `photo`, `photologue_gallery`.*
FROM `photologue_gallery`
LEFT OUTER JOIN `photologue_gallery_photos`
ON (`photologue_gallery`.`id` = `photologue_gallery_photos`.`gallery_id`)
LEFT OUTER JOIN `photologue_photo`
ON (`photologue_gallery_photos`.`photo_id` = `photologue_photo`.`id`)
GROUP BY `photologue_gallery`.`id`, photologue_photo.id
ORDER BY `photologue_gallery`.`publication_date` DESC
The problem is that the extra method automatically adds photologue_photo.id to the GROUP BY clause. And I need to remove it, because it duplicates galleries, for example:
[<Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>, <Gallery: Lorem ipsum dolor sit amet>]
So I need to make this query with Django; is it possible?
SELECT (photologue_photo.id) AS `photo`, `photologue_gallery`.*
FROM `photologue_gallery`
LEFT OUTER JOIN `photologue_gallery_photos`
ON (`photologue_gallery`.`id` = `photologue_gallery_photos`.`gallery_id`)
LEFT OUTER JOIN `photologue_photo`
ON (`photologue_gallery_photos`.`photo_id` = `photologue_photo`.`id`)
GROUP BY `photologue_gallery`
ORDER BY `photologue_gallery`.`publication_date` DESC
Thank you ! :)
|
[
"I don't think you really need the extra. From Django's concept, you don't need to cherry pick specific columns while running a Django QuerySet. That logic can be done in the template side.\nI assume you know how to push galley_qs to your template from your view:\n# views.py\ngallery_qs = Gallery.objects.all()\\\n .annotate(Count('photos'))\n\nIn your template/html:\n{% for gallery in gallery_qs %}\n {% for photo in gallery.photos %}\n\n {% endfor %}\n{% endfor %}\n\nphotos is your ManyToManyField in your gallery model.\n",
"Why are you trying to get distinct gallery records with a photo_id annotated on them when gallery to photo ids is a many to many relationship? From what I can tell, the query you are doing would only get a single photo id for each gallery.\nIf you really do need to do the above, I think you could use distinct() to get distinct gallery records (btw, you don't need the \"all()\" in there).\nGallery.objects.distinct()\\\n .annotate(Count('photos'))\\\n .extra(select={'photo_id': 'photologue_photo.id'})\n\nOr you could just simply access the photo id directly,\ng = Gallery.objects.annotate(Count('photos'))\n# Get the photo\nphoto = g.photos[0]\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"django_models",
"photologue",
"python",
"sql"
] |
stackoverflow_0001372890_django_django_models_photologue_python_sql.txt
|
Q:
How do I create a Python module for MySQL Workbench?
I am trying to create a simple Python module for MySQL Workbench 5.1.17 SE however I cannot seem to register the module, that is, it is not displaying under the Plugins->Catalog menu.
The documentation appears to be rather weak at this time, the best I have found is Python Scripting in Workbench. There isn't much in the way of instructions here.
How do I create and install a python module with MySQL Workbench?
A:
I chatted with one of the developers of MySQL Workbench via IRC and it turns out there were two problems:
I had to make sure my python script ended with *_grt.py so that it was recognized as a module.
There is a bug at least in version 5.1.17+ that prevents more than one python script module from being loaded. This has been fixed in a branch but has not yet made it into the stable release. The current workaround is to remove / delete any *.py modules you have in the following folders:
\modules
C:\Documents and Settings\\Application Data\MySQL\Workbench\modules
|
How do I create a Python module for MySQL Workbench?
|
I am trying to create a simple Python module for MySQL Workbench 5.1.17 SE however I cannot seem to register the module, that is, it is not displaying under the Plugins->Catalog menu.
The documentation appears to be rather weak at this time, the best I have found is Python Scripting in Workbench. There isn't much in the way of instructions here.
How do I create and install a python module with MySQL Workbench?
|
[
"I chatted with one of the developers of MySQL Workbench via IRC and it turns out there were two problems:\n\nI had to make sure my python script ended with *_grt.py so that it was recognized as a module.\nThere is a bug at least in version 5.1.17+ that prevents more than one python script module from being loaded. This has been fixed in a branch but has not yet made it into the stable release. The current workaround is to remove / delete any *.py modules you have in the following folders:\n\n\\modules\nC:\\Documents and Settings\\\\Application Data\\MySQL\\Workbench\\modules\n\n\n"
] |
[
1
] |
[] |
[] |
[
"mysql_workbench",
"plugins",
"python"
] |
stackoverflow_0001375881_mysql_workbench_plugins_python.txt
|
Q:
calculate user inputted time with Python
I need to calculate (using Python) how much time a user has inputted, whether they input something like 3:30 or 3.5. I'm not really sure what the best way to go about this is, and I thought I'd ask for advice from the experts.
=== Edit ==================
To specify more clearly, I want the user to input hours and minutes or just minutes. I want them to be able to input the time in two formats, either in hh:mm (3:30 or 03:30) or as a float (3.5) hours.
The overall goal is to keep track of the time they have worked. So, I will be adding the time they enter up to get a total.
A:
Can you precisely define the syntax of the strings that the user is allowed to input? Once you do that, if it's simple enough it can be matched by simple Python string expressions, else you may be better off with pyparsing or the like. Also, a precise syntax will make it easier to identify any ambiguities so you can either change the rules (so that no input string is ever ambiguous) or at least decide precisely how to interpret them (AND document the fact for the user's benefit!-).
edit: given the OP's clarification (hh:mm or just minutes as a float) it seems simple:
while True:
    s = raw_input('Please enter amount of time (hh:mm or just minutes):')
    try:
        if ':' in s:
            h, m = s.split(':')
        else:
            h = '0'  # '0', not '', so int(h) does not raise for plain-minutes input
            m = s
        t = int(h)*3600 + float(m)* 60
    except ValueError, e:
        print "Problems with your input (%r): %s" % (s, e)
        print "please try again!"
    else:
        break
You may want to get finer-grained in diagnosing exactly what problem the user input may have (when you accept and parse user input, 99% of the effort goes into identifying incredibly [[expletive deleted]] mistakes: it's VERY hard to make your code foolproof, because fools are so deucedly INGENIOUS!-), but this should help you get started.
A:
There are a few possible solutions, but at some point you're gonna run into ambiguous cases that will result in arbitrary conversions.
Overall I'd suggest taking any input and parsing the separators (whether : or . or something else) and then converting to seconds based on some schema of units you've defined.
Alternatively you could do a series of try/except statements to test it against different time formatting schemes to see if it matches.
I'm not sure what will be best in your case...
A:
First of all, you'll need some conventions. Is 3.55 five minutes to four hours, five milliseconds to four seconds, or 3 and 55/100 of a minute/hour/second? The same applies to 3:55. At least have a distinction between dot and colon, specifying that a dot means a fraction and a colon, a separator of hour/minute/second.
Although you haven't specified what "time" is (since or o'clock?), you'll need that too.
Then, it's simply a matter of having a final representation of a time that you want to work with, and keep converting the input until your final representation is achieved. Let's say you decide that ultimately time should be represented as MM:SS (two digits for minutes, a colon, two digits for seconds), you'll need to search the string for allowed occurrences of characters, and act accordingly. For example, having a colon and a dot at the same time is not allowed. If there's a single colon, you have a fraction, therefore you'll treat the second part as a fraction of 60.
Keep doing this until you have your final representation, and then just do what you gotta do with said "time".
I don't know on what constraints you're working with, but the problem could be narrowed if instead of a single "time" input, you had two: The first, where people type the hours, and the second, where they type the minutes. Of course, that would only work if you can divide the input...
A:
This is the code that we have in one of our internal web applications that we use for time-tracking purposes. When the user enters a time, the string value is passed through this function, which returns a structure of time data.
It's written in JavaScript, and the code could be directly ported to Python.
I hope it helps a bit.
var ParseTime_NOW_MATCH = /^ *= *$/
var ParseTime_PLUS_MATCH = /^ *\+ *([0-9]{0,2}(\.[0-9]{0,3})?) *$/
var ParseTime_12_MATCH = /^ *([0-9]{1,2}):?([0-9]{2}) *([aApP])[mM]? *$/
var ParseTime_24_MATCH = /^ *([0-9]{1,2}):?([0-9]{2}) *$/

// ########################################################################################
// Returns either:
// {
//     Error: false,
//     HourDecimal: NN.NN,
//     HourInt: NN,
//     MinuteInt: NN,
//     Format12: "SS:SS SS",
//     Format24: "SS:SS"
// }
// or
// {
//     Error: true,
//     Message: "Error Message"
// }
function ParseTime(sTime)
{
    var match;

    var HH12;
    var HH24;
    var MM60;
    var AMPM;

    ///////////////////////////////////////////////////////////////////////////////////////
    if((match = ParseTime_NOW_MATCH.exec(sTime)) != null)
    {
        // console.log(match);
        return {Error: true, Message: "Unsupported format"};
    }
    ///////////////////////////////////////////////////////////////////////////////////////
    else if((match = ParseTime_PLUS_MATCH.exec(sTime)) != null)
    {
        // console.log(match);
        return {Error: true, Message: "Unsupported format"};
    }
    ///////////////////////////////////////////////////////////////////////////////////////
    else if((match = ParseTime_24_MATCH.exec(sTime)) != null)
    {
        // console.log("24");
        // console.log(match);
        HH24 = parseInt(match[1], 10);
        MM60 = parseInt(match[2], 10);

        if(HH24 > 23 || MM60 > 59)
        {
            return {Error: true, Message: "Invalid Hour or Minute (24)."};
        }
        else if(HH24 == 0)
        {
            HH12 = 12;
            AMPM = 'AM';
        }
        else if(HH24 <= 11)
        {
            HH12 = HH24;
            AMPM = 'AM';
        }
        else if(HH24 == 12)
        {
            HH12 = HH24;
            AMPM = 'PM';
        }
        else
        {
            HH12 = HH24 - 12;
            AMPM = 'PM';
        }
    }
    ///////////////////////////////////////////////////////////////////////////////////////
    else if((match = ParseTime_12_MATCH.exec(sTime)) != null)
    {
        // console.log(match);
        AMPM = ((match[3] == 'A' || match[3] == 'a') ? 'AM' : 'PM');
        HH12 = parseInt(match[1], 10);
        MM60 = parseInt(match[2], 10);

        if(HH12 > 12 || HH12 < 1 || MM60 > 59)
        {
            return {Error: true, Message: "Invalid Hour or Minute (12)."};
        }
        else if(HH12 == 12 && AMPM == 'AM')
        {
            HH24 = 0;
        }
        else if(AMPM == 'AM')
        {
            HH24 = HH12;
        }
        else if(AMPM == 'PM')
        {
            HH24 = HH12 + 12;
        }
    }
    ///////////////////////////////////////////////////////////////////////////////////////
    else
    {
        return {Error: true, Message: "Invalid Time Format."};
    }

    return {
        Error : false,
        HourDecimal : HH24 + (MM60 / 60),
        HourInt : HH24,
        MinuteInt : MM60,
        Format12 : HH12 + ':' + (MM60 < 10 ? "0"+MM60 : MM60) + ' ' + AMPM,
        Format24 : (HH24 < 10 ? "0"+HH24 : HH24) + ':' + (MM60 < 10 ? "0"+MM60 : MM60)
    }
}
A:
Can you do this with a GUI and restrict the user input? Processing the text seems super error prone otherwise (on the part of the user, not to mention the programmer), and for hours worked... you sort-of want to get that right.
|
calculate user inputted time with Python
|
I need to calculate (using Python) how much time a user has inputted, whether they input something like 3:30 or 3.5. I'm not really sure what the best way to go about this is, and I thought I'd ask for advice from the experts.
=== Edit ==================
To specify more clearly, I want the user to input hours and minutes or just minutes. I want them to be able to input the time in two formats, either in hh:mm (3:30 or 03:30) or as a float (3.5) hours.
The overall goal is to keep track of the time they have worked. So, I will be adding the time they enter up to get a total.
|
[
"Can you precisely define the syntax of the strings that the user is allowed to input? Once you do that, if it's simple enough it can be matched by simple Python string expressions, else you may be better off with pyparsing or the like. Also, a precise syntax will make it easier to identify any ambiguities so you can either change the rules (so that no input string is ever ambiguous) or at least decide precisely how to interpret them (AND document the fact for the user's benefit!-).\nedit: given the OP's clarification (hh:mm or just minutes as a float) it seems simple:\n while True:\n s = raw_input('Please enter amount of time (hh:mm or just minutes):')\n try:\n if ':' in s:\n h, m = s.split(':')\n else:\n h = ''\n m = s\n t = int(h)*3600 + float(m)* 60\n except ValueError, e:\n print \"Problems with your input (%r): %s\" % (s, e)\n print \"please try again!\"\n else:\n break\n\nYou may want to get finer-grained in diagnosing exactly what problem the user input may have (when you accept and parse user input, 99% of the effort goes into identifying incredibly [[expletive deleted]] mistakes: it's VERY hard to make your code foolproof, because fools are do deucedly INGENUOUS!-), but this should help you get started.\n",
"There are a few possible solutions, but at some point you're gonna run into ambiguous cases that will result in arbitrary conversions.\nOverall I'd suggest taking any input and parsing the separators (whether : or . or something else) and then converting to seconds based on some schema of units you've defined.\nAlternatively you could do a series of try/except statements to test it against different time formatting schemes to see if it matches.\nI'm not sure what will be best in your case...\n",
"First of all, you'll need some conventions. Is 3.55 five minutes to four hours, five milliseconds to four seconds, or 3 and 55/100 of a minute/hour/second? The same applies to 3:55. At least have a distinction between dot and colon, specifying that a dot means a fraction and a colon, a separator of hour/minute/second.\nAlthough you haven't specified what \"time\" is (since or o'clock?), you'll need that too.\nThen, it's simple a matter of having a final representation of a time that you want to work with, and keep converting the input until your final representation is achieved. Let's say you decide that ultimately time should be represented as MM:SS (two digits for minutes, a colon, two digits for seconds), you'll need to search the string for allowed occurrences of characters, and act accordingly. For example, having a colon and a dot at the same time is not allowed. If there's a single colon, you have a fraction, therefore you'll treat the second part as a fraction of 60.\nKeep doing this until you have your final representation, and then just do what you gotta do with said \"time\".\nI don't know on what constraints you're working with, but the problem could be narrowed if instead of a single \"time\" input, you had two: The first, where people type the hours, and the second, where they type the minutes. Of course, that would only work if you can divide the input...\n",
"This is the code that we have in one of our internal web applications that we use for time-tracking purposes. When the user enters a time, the string value is passed through this function, which returns a structure of time data. \nIt's written in javascript, and the code could be directly ported to python.\nI hope it helps a bit.\nvar ParseTime_NOW_MATCH = /^ *= *$/\nvar ParseTime_PLUS_MATCH = /^ *\\+ *([0-9]{0,2}(\\.[0-9]{0,3})?) *$/\nvar ParseTime_12_MATCH = /^ *([0-9]{1,2}):?([0-9]{2}) *([aApP])[mM]? *$/\nvar ParseTime_24_MATCH = /^ *([0-9]{1,2}):?([0-9]{2}) *$/\n\n\n// ########################################################################################\n// Returns either:\n// {\n// Error: false,\n// HourDecimal: NN.NN,\n// HourInt: NN,\n// MinuteInt: NN,\n// Format12: \"SS:SS SS\",\n// Format24: \"SS:SS\"\n// }\n// or\n// {\n// Error: true,\n// Message: \"Error Message\"\n// }\nfunction ParseTime(sTime)\n{\n var match;\n\n var HH12;\n var HH24;\n var MM60;\n var AMPM;\n\n ///////////////////////////////////////////////////////////////////////////////////////\n if((match = ParseTime_NOW_MATCH.exec(sTime)) != null)\n {\n// console.log(match);\n return {Error: true, Message: \"Unsupported format\"};\n }\n ///////////////////////////////////////////////////////////////////////////////////////\n else if((match = ParseTime_PLUS_MATCH.exec(sTime)) != null)\n {\n// console.log(match);\n return {Error: true, Message: \"Unsupported format\"};\n }\n ///////////////////////////////////////////////////////////////////////////////////////\n else if((match = ParseTime_24_MATCH.exec(sTime)) != null)\n {\n// console.log(\"24\");\n// console.log(match);\n HH24 = parseInt(match[1], 10);\n MM60 = parseInt(match[2], 10);\n\n if(HH24 > 23 || MM60 > 59)\n {\n return {Error: true, Message: \"Invalid Hour or Minute (24).\"};\n }\n else if(HH24 == 0)\n {\n HH12 = 12;\n AMPM = 'AM';\n }\n else if(HH24 <= 11)\n {\n HH12 = HH24;\n AMPM = 'AM';\n }\n else if(HH24 == 12)\n {\n HH12 = HH24;\n AMPM = 'PM';\n }\n else\n {\n HH12 = HH24 - 12;\n AMPM = 'PM';\n }\n\n }\n ///////////////////////////////////////////////////////////////////////////////////////\n else if((match = ParseTime_12_MATCH.exec(sTime)) != null)\n {\n// console.log(match);\n AMPM = ((match[3] == 'A' || match[3] == 'a') ? 'AM' : 'PM');\n HH12 = parseInt(match[1], 10);\n MM60 = parseInt(match[2], 10);\n\n if(HH12 > 12 || HH12 < 1 || MM60 > 59)\n {\n return {Error: true, Message: \"Invalid Hour or Minute (12).\"};\n }\n else if(HH12 == 12 && AMPM == 'AM')\n {\n HH24 = 0;\n }\n else if(AMPM == 'AM')\n {\n HH24 = HH12;\n }\n else if(AMPM == 'PM')\n {\n HH24 = HH12 + 12;\n }\n }\n ///////////////////////////////////////////////////////////////////////////////////////\n else\n {\n return {Error: true, Message: \"Invalid Time Format.\"};\n }\n\n return {\n Error : false,\n HourDecimal : HH24 + (MM60 / 60),\n HourInt : HH24,\n MinuteInt : MM60,\n Format12 : HH12 + ':' + (MM60 < 10 ? \"0\"+MM60 : MM60) + ' ' + AMPM,\n Format24 : (HH24 < 10 ? \"0\"+HH24 : HH24) + ':' + (MM60 < 10 ? \"0\"+MM60 : MM60)\n }\n\n}\n\n",
"Can you do this with a GUI and restrict the user input? Processing the text seems super error prone otherwise (on the part of the user, not to mention the programmer), and for hours worked... you sort-of want to get that right.\n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001376835_python_regex.txt
|
Q:
Python: cannot read / write in another commandline application by using subprocess module
I am using Python 3.0 in Windows and trying to automate the testing of a command-line application. The user can type commands in the Application Under Test and it returns the output as 2 XML packets. One is a Sent packet and the other one is a Recv packet. By analyzing these packets I can verify the result. I have the code below:
p = subprocess.Popen(SomeCmdAppl, stdout=subprocess.PIPE,
                     shell=True, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
p.stdin.write((command + '\r\n').encode())
time.sleep(2.5)
testresult = p.stdout.readline()
testresult = testresult.decode()
print(testresult)
I cannot get any output back. It gets stuck at the point where I try to read the output using readline(). I tried read() and it got stuck too.
When I run the command-line application manually and type the command, I get the output back correctly as two XML packets, as below:
Sent: <PivotNetMessage>
<MessageId>16f8addf-d366-4031-b3d3-5593efb9f7dd</MessageId>
<ConversationId>373323be-31dd-4858-a7f9-37d97e36eb36</ConversationId>
<SageId>4e1e7c04-4cea-49b2-8af1-64d0f348e621</SagaId>
<SourcePath>C:\Python30\PyNTEST</SourcePath>
<Command>echo</Command>
<Content>Hello</Content>
<Time>7/4/2009 11:16:41 PM</Time>
<ErrorCode>0</ErrorCode>
<ErrorInfo></ErrorInfo>
</PivotNetMessagSent>
Recv: <PivotNetMessage>
<MessageId>16f8addf-d366-4031-b3d3-5593efb9f7dd</MessageId>
<ConversationId>373323be-31dd-4858-a7f9-37d97e36eb36</ConversationId>
<SageId>4e1e7c04-4cea-49b2-8af1-64d0f348e621</SagaId>
<SourcePath>C:\PivotNet\Endpoints\Pipeline\Pipeline_2.0.0.202</SourcePath>
<Command>echo</Command>
<Content>Hello</Content>
<Time>7/4/2009 11:16:41 PM</Time>
<ErrorCode>0</ErrorCode>
<ErrorInfo></ErrorInfo>
</PivotNetMessage>
But when I use communicate() as below, I get the Sent packet but never the Recv packet. Why am I missing the Recv packet? communicate() is supposed to bring back everything from stdout, right?
p = subprocess.Popen(SomeCmdAppl, stdout=subprocess.PIPE,
                     shell=True, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
p.stdin.write((command + '\r\n').encode())
time.sleep(2.5)
result = p.communicate()[0]
print(result)
Can anybody help me with sample code that should work? I don't know if it is necessary to read and write in separate threads. Please help me. I need to do repeated reads/writes. Is there any advanced-level module in Python I can use? I think the Pexpect module doesn't work on Windows.
A:
This is a popular problem, e.g. see:
Interact with a Windows console application via Python
How do I get 'real-time' information back from a subprocess.Popen in python (2.5)
how do I read everything currently in a subprocess.stdout pipe and then return?
(Actually, you should have seen these during creation of your question...?!).
I have two things of interest:
p.stdin.write((command + '\r\n').encode()) is also buffered so your child process might not even have seen its input. You can try flushing this pipe.
In one of the other questions one suggested doing a stdout.read() on the child instead of readline(), with a suitable amount of characters to read. You might want to experiment with this.
Post your results.
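A quick sketch combining both points (hedged: the 512-byte read size is arbitrary, and read() can still block until that many bytes arrive or the pipe closes):
p.stdin.write((command + '\r\n').encode())
p.stdin.flush()             # 1. make sure the child actually sees the input
chunk = p.stdout.read(512)  # 2. read a bounded amount instead of a full line
print(chunk.decode())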
A:
Try sending your input using communicate instead of using write:
result = p.communicate((command + '\r\n').encode())[0]
A:
Have you considered using pexpect instead of subprocess? It handles the details which are probably preventing your code from working (like flushing buffers, etc). It may not be available for Py3k yet, but it works well in 2.x.
See: http://pexpect.sourceforge.net/pexpect.html
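A minimal pexpect sketch for comparison (hedged: POSIX-only, Python 2.x, and the expected pattern is just a guess based on the packets above):
import pexpect

child = pexpect.spawn(SomeCmdAppl)
child.sendline(command)              # pexpect handles flushing for you
child.expect('</PivotNetMessage>')   # wait for the closing tag of a packet
print child.before + child.after     # everything up to and including the match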
|
Python: cannot read / write in another commandline application by using subprocess module
|
I am using Python 3.0 in Windows and trying to automate the testing of a command-line application. The user can type commands in the Application Under Test and it returns the output as 2 XML packets. One is a Sent packet and the other one is a Recv packet. By analyzing these packets I can verify the result. I have the code below:
p = subprocess.Popen(SomeCmdAppl, stdout=subprocess.PIPE,
                     shell=True, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
p.stdin.write((command + '\r\n').encode())
time.sleep(2.5)
testresult = p.stdout.readline()
testresult = testresult.decode()
print(testresult)
I cannot get any output back. It gets stuck at the point where I try to read the output using readline(). I tried read() and it got stuck too.
When I run the command-line application manually and type the command, I get the output back correctly as two XML packets, as below:
Sent: <PivotNetMessage>
<MessageId>16f8addf-d366-4031-b3d3-5593efb9f7dd</MessageId>
<ConversationId>373323be-31dd-4858-a7f9-37d97e36eb36</ConversationId>
<SageId>4e1e7c04-4cea-49b2-8af1-64d0f348e621</SagaId>
<SourcePath>C:\Python30\PyNTEST</SourcePath>
<Command>echo</Command>
<Content>Hello</Content>
<Time>7/4/2009 11:16:41 PM</Time>
<ErrorCode>0</ErrorCode>
<ErrorInfo></ErrorInfo>
</PivotNetMessagSent>
Recv: <PivotNetMessage>
<MessageId>16f8addf-d366-4031-b3d3-5593efb9f7dd</MessageId>
<ConversationId>373323be-31dd-4858-a7f9-37d97e36eb36</ConversationId>
<SageId>4e1e7c04-4cea-49b2-8af1-64d0f348e621</SagaId>
<SourcePath>C:\PivotNet\Endpoints\Pipeline\Pipeline_2.0.0.202</SourcePath>
<Command>echo</Command>
<Content>Hello</Content>
<Time>7/4/2009 11:16:41 PM</Time>
<ErrorCode>0</ErrorCode>
<ErrorInfo></ErrorInfo>
</PivotNetMessage>
But when I use communicate() as below, I get the Sent packet but never the Recv packet. Why am I missing the Recv packet? communicate() is supposed to bring back everything from stdout, right?
p = subprocess.Popen(SomeCmdAppl, stdout=subprocess.PIPE,
                     shell=True, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
p.stdin.write((command + '\r\n').encode())
time.sleep(2.5)
result = p.communicate()[0]
print(result)
Can anybody help me with sample code that should work? I don't know if it is necessary to read and write in separate threads. Please help me. I need to do repeated reads/writes. Is there any advanced-level module in Python I can use? I think the Pexpect module doesn't work on Windows.
|
[
"This is a popular problem, e.g. see:\n\nInteract with a Windows console application via Python\nHow do I get 'real-time' information back from a subprocess.Popen in python (2.5)\nhow do I read everything currently in a subprocess.stdout pipe and then return?\n\n(Actually, you should have seen these during creation of your question...?!).\nI have two things of interest:\n\np.stdin.write((command + '\\r\\n').encode()) is also buffered so your child process might not even have seen its input. You can try flushing this pipe.\nIn one of the other questions one suggested doing a stdout.read() on the child instead of readline(), with a suitable amount of characters to read. You might want to experiment with this. \n\nPost your results.\n",
"Try sending your input using communicate instead of using write:\nresult = p.communicate((command + '\\r\\n').encode())[0]\n\n",
"Have you considered using pexpect instead of subprocess? It handles the details which are probably preventing your code from working (like flushing buffers, etc). It may not be available for Py3k yet, but it works well in 2.x.\nSee: http://pexpect.sourceforge.net/pexpect.html\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"python",
"readline",
"subprocess"
] |
stackoverflow_0001165064_python_readline_subprocess.txt
|
Q:
Porting from Python to C#
I'm trying to learn C#, coming from a Python/PHP background, and I'm trying to port a script from Python to C# to get started.
The script reads a text file line by line (about 150K lines), applies a list of regexes until one matches, gets the named-group results and adds the values as properties of a class.
Here's how the data looks (each line starting with 'No.' is the beginning of a new record):
No.813177294 09/01/1987 150
Tit.INCAL INDÚSTRIA DE CALÇADOS LTDA (BR/PE)
*PARÁGRAFO ÚNICO DO ART. 162 DA LPI.
Procurador: ROBERTO C. FREIRE
No.901699870 02/06/2009 LD6
*Exigência Formal não respondida, Pedido de Registro de Marca considerado inexistente, de acordo com o Art. 157 da LPI
No.830009817 12/12/2008 003
Tit.BIOLAB SANUS FARMACÊUTICA LTDA. (BR/SP)
C.N.P.J./C.I.C./NºINPI : 49475833000106
Apres.: Nominativa ; Nat.: De Produto
Marca: ENXUG
NCL(9) 05 medicamentos para uso humano; preparações farmacêuticas; diuréticos, analgésicos;
anestésicos; anti-helmínticos; antibióticos; hormônios para uso medicinal.
Procurador: CRUZEIRO/NEWMARC PATENTES E MARCAS LTDA
And here is how the regexes look:
regexp = {
    # No.123456789 13/12/2008 560
    # No.123456789 13/12/2008 560
    # No.123456789 13/12/2008 560
    # No.123456789 560
    'number': re.compile(r'No.(?P<Number>[\d]{9}) +((?P<Date>[\d]{2}/[\d]{2}/[\d]{4}) +)?(?P<Code>.*)'),
    # NCL(7) 25 no no no no no ; no no no no no no; *nonono no non o nono
    # NCL(9) 25 no no no no no ; no no no no no no; *nonono no non o nono
    'ncl': re.compile(r'NCL\([\d]{1}\) (?P<Ncl>[\d]{2})( (?P<Especification>.*))?'),
    'doc': re.compile(r'C.N.P.J./C.I.C./NºINPI : (?P<Document>.*)'),
    'description': re.compile(r'\*(?P<Description>.*)'),
    ...
}
Now my questions:
1) Can I use the same concept, applying each Regex of a Dictionary<string, Regex> to each line until one is matched?
2) If I do, is there a way to get a Dictionary<string, string> of the named-group results? (At this stage I can treat everything as a string.)
3) Supposing I have a class like this...
class Record
{
    public string Number { get; set; }
    public string Date { get; set; }
    public string Code { get; set; }
    public string Ncl { get; set; }
    public string Especification { get; set; }
    public string Document { get; set; }
    public string Description { get; set; }
}
...is there a way to set the properties with the values of the named groups?
4) Am I totally missing the point here, trying to code in a statically typed language while still thinking in a dynamically typed one? If this is the case, what can I do?
Sorry for this somewhat lengthy question. I really tried to summarize it to make it shorter :-)
Thanks in advance.
A:
1., sure
2., see e.g. here
3., yep, same basic concept as 2
4., nah, C# is flexible enough to allow you to port your architecture over
Also consider studying this book as the best intro to .NET for Python programmers AND vice versa (I'm biased, having been a tech editor and being a friend of the author, but I think this is objectively defensible;-).
A:
1. Yes you can.
2. .Net has support for named groups. So for (?<first>group)(?'second'group), the returned Match object will support named retrieval like this. You can build yourself a dictionary from this object or directly pass the Match object:
var match = Regex.Match("subject", "regex");
var matchedText = match.Groups["first"].Value;
See Named Groups in .Net and Regex support in .Net.
3. I think writing a Record Record.Parse(namedValueCollection) would be a way to do it.
4. You write code... you learn. I find the reverse direction a bit disorienting. Moving from dynamic to static should be relatively easier... just that you might have to write relatively more code for some routine tasks like iteration, map or select.
A:
Sorry this is not a specific answer, but could you use IronPython to convert your scripts to run under the CLR and then step to C#?
A:
If you really want to learn C#, you should demand only references and not full answers, like this one (RegEx class), but I'm sure you can find much more information with a quick Google search too.
A:
What you're looking for sounds do-able. Of course you'll want to look at System.Text.RegularExpressions, specifically the Regex type there.
Additionally, I'm really fond of the iterator pattern for reading lines from a file:
public static IEnumerable<string> ReadLines(string path)
{
    using(var sr = new StreamReader(path))
    {
        string line;
        while ( (line = sr.ReadLine()) != null)
        {
            yield return line;
        }
    }
}
You start with that base code (which you can re-use almost everywhere) and call it in this method:
public static IEnumerable<Record> ReadRecords(string path)
{
    // Note: .NET named groups use (?<name>...) rather than Python's (?P<name>...).
    IEnumerable<Regex> expressions = new List<Regex>
    {
        new Regex( @"No.(?<Number>[\d]{9}) +((?<Date>[\d]{2}/[\d]{2}/[\d]{4}) +)?(?<Code>.*)" ),
        new Regex( @"NCL\([\d]{1}\) (?<Ncl>[\d]{2})( (?<Especification>.*))?" ),
        new Regex( @"C.N.P.J./C.I.C./NºINPI : (?<Document>.*)" )
    };

    foreach ( MatchCollection matches
        in ReadLines(path)
            .Select(s => expressions.First(e => e.IsMatch(s)).Matches(s))
            .Where(m => m.Count > 0)
        )
    {
        yield return Record.FromExpressionMatches(matches);
    }
}
Finish it up by adding a static factory method to your Record class that accepts a MatchCollection parameter. The one thing it looks like you're missing here is that you expect to hit each of the expressions once before completing a single record. That will work a little differently. But hopefully this gives you enough to get you really going.
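A hedged sketch of what that factory method might look like (the group names are taken from the question's patterns; groups an expression didn't capture are simply skipped):
public static Record FromExpressionMatches(MatchCollection matches)
{
    var record = new Record();
    foreach (Match match in matches)
    {
        // Copy each successfully captured named group onto the record.
        if (match.Groups["Number"].Success)   record.Number   = match.Groups["Number"].Value;
        if (match.Groups["Date"].Success)     record.Date     = match.Groups["Date"].Value;
        if (match.Groups["Code"].Success)     record.Code     = match.Groups["Code"].Value;
        if (match.Groups["Ncl"].Success)      record.Ncl      = match.Groups["Ncl"].Value;
        if (match.Groups["Document"].Success) record.Document = match.Groups["Document"].Value;
    }
    return record;
}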
A:
Dictionary<string, string> dic_test = new Dictionary<string, string>();

dic_test.Add(key, value);
|
Porting from Python to C#
|
I'm trying to learn C#, coming from a Python/PHP background, and I'm trying to port a script from Python to C# to get started.
The script reads a text file line by line (about 150K lines), applies a list of regexes until one matches, gets the named-group results and adds the values as properties of a class.
Here's how the data looks (each line starting with 'No.' is the beginning of a new record):
No.813177294 09/01/1987 150
Tit.INCAL INDÚSTRIA DE CALÇADOS LTDA (BR/PE)
*PARÁGRAFO ÚNICO DO ART. 162 DA LPI.
Procurador: ROBERTO C. FREIRE
No.901699870 02/06/2009 LD6
*Exigência Formal não respondida, Pedido de Registro de Marca considerado inexistente, de acordo com o Art. 157 da LPI
No.830009817 12/12/2008 003
Tit.BIOLAB SANUS FARMACÊUTICA LTDA. (BR/SP)
C.N.P.J./C.I.C./NºINPI : 49475833000106
Apres.: Nominativa ; Nat.: De Produto
Marca: ENXUG
NCL(9) 05 medicamentos para uso humano; preparações farmacêuticas; diuréticos, analgésicos;
anestésicos; anti-helmínticos; antibióticos; hormônios para uso medicinal.
Procurador: CRUZEIRO/NEWMARC PATENTES E MARCAS LTDA
And here is how the regexes look:
regexp = {
    # No.123456789 13/12/2008 560
    # No.123456789 13/12/2008 560
    # No.123456789 13/12/2008 560
    # No.123456789 560
    'number': re.compile(r'No.(?P<Number>[\d]{9}) +((?P<Date>[\d]{2}/[\d]{2}/[\d]{4}) +)?(?P<Code>.*)'),
    # NCL(7) 25 no no no no no ; no no no no no no; *nonono no non o nono
    # NCL(9) 25 no no no no no ; no no no no no no; *nonono no non o nono
    'ncl': re.compile(r'NCL\([\d]{1}\) (?P<Ncl>[\d]{2})( (?P<Especification>.*))?'),
    'doc': re.compile(r'C.N.P.J./C.I.C./NºINPI : (?P<Document>.*)'),
    'description': re.compile(r'\*(?P<Description>.*)'),
    ...
}
Now my questions:
1) Can I use the same concept, applying each Regex of a Dictionary<string, Regex> to each line until one is matched?
2) If I do, is there a way to get a Dictionary<string, string> of the named-group results? (At this stage I can treat everything as a string.)
3) Supposing I have a class like this...
class Record
{
    public string Number { get; set; }
    public string Date { get; set; }
    public string Code { get; set; }
    public string Ncl { get; set; }
    public string Especification { get; set; }
    public string Document { get; set; }
    public string Description { get; set; }
}
...is there a way to set the properties with the values of the named groups?
4) Am I totally missing the point here, trying to code in a statically typed language while still thinking in a dynamically typed one? If this is the case, what can I do?
Sorry for this somewhat lengthy question. I really tried to summarize it to make it shorter :-)
Thanks in advance.
|
[
"1., sure\n2., see e.g. here\n3., yep, same basic concept as 2\n4., nah, C# is flexible enough to allow you to port your architecture over\nAlso consider studying this book as the best intro to .NET for Python programmers AND vice versa (I'm biased, having been a tech editor and being a friend of the author, but I think this is objectively defensible;-).\n",
"\nYes you can. \n.Net has support for named groups. So for (?<first>group)(?'second'group), the returned Match object will support named retrieval like this. You can build youself a dictionary from this object or directly pass the Match object\nvar match = Regex.Match(\"subject\", \"regex\");\nvar matchedText = match.Groups(\"first\")\nSee Named Groups in .Net and Regex support in .Net\nI think writing a Record Record.Parse(namedValueCollection) would be a way to do it\nYou write code... You learn. I find the reverse direction a bit disorienting.. Moving from dynamic to static should be relatively easier... just that you might have to write relatively more code for some routine tasks like iteration or map or select etc.\n\n",
"Sorry this is not a specific answer, but could you use IronPython to convert your scripts to run under the CLR and then step to C#?\n",
"If you really want to learn C#, you should demand only references and not full answers, like this one (RegEx class), but I'm sure you can find much more information with a quick Google search too.\n",
"What you're looking for sounds do-able. Of course you'll want to look at System.Text.RegularExpressions, specifically the Regex type there. \nAdditionally, I'm really fond of the iterator pattern for reading lines from a file:\npublic static IEnumerable<string> ReadLines(string path)\n{\n using(var sr = new StreamReader(path))\n {\n string line;\n while ( (line = sr.ReadLine()) != null)\n {\n yield return line;\n }\n }\n}\n\nYou start with that base code (which you can re-use almost everywhere) and call it in this method:\npublic static IEnumerable<Record> ReadRecords(string path)\n{\n IEnumerable<Regex> expresssions = new List<Regex>\n {\n new Regex( @\"No.(?P<Number>[\\d]{9}) +((?P<Date>[\\d]{2}/[\\d]{2}/[\\d]{4}) +)?(?P<Code>.*)\" ),\n new Regex( @\"NCL\\([\\d]{1}\\) (?P<Ncl>[\\d]{2})( (?P<Especification>\"), \n new Regex( @\"C.N.P.J./C.I.C./NºINPI : (?P<Document>.*)\")\n };\n\n foreach ( MatchCollection matches \n in ReadLines(path)\n .Select(s => expressions.First(e => e.IsMatch(s)).Matches(s)))\n .Where(m => m.Count > 0) \n ) \n {\n yield return Record.FromExpressionMatches(matches);\n }\n}\n\nFinish it up by adding a static factory method to your Record class that accepts a MatchCollection parameter. The one thing it looks like you're missing here is that you expect to hit each of the expressions once before completing a single record. That will work a little differently. But hopefully this gives you enough to get you really going.\n",
"dictionary<string,string> dic_test = new dictionary<string,string>();\n\ndic_test.add(key,value);\n\n"
] |
[
3,
2,
1,
1,
1,
0
] |
[] |
[] |
[
"c#",
"python"
] |
stackoverflow_0001376976_c#_python.txt
|
Q:
PyParsing simple language expressions
I'm trying to write something that will parse some code. I'm able to successfully parse foo(spam) and spam+eggs, but foo(spam+eggs) (recursive descent? my terminology from compilers is a bit rusty) fails.
I have the following code:
from pyparsing_py3 import *
myVal = Word(alphas+nums+'_')
myFunction = myVal + '(' + delimitedList( myVal ) + ')'
myExpr = Forward()
mySubExpr = ( \
myVal \
| (Suppress('(') + Group(myExpr) + Suppress(')')) \
| myFunction \
)
myExpr << Group( mySubExpr + ZeroOrMore( oneOf('+ - / * =') + mySubExpr ) )
# SHOULD return: [blah, [foo, +, bar]]
# but actually returns: [blah]
print(myExpr.parseString('blah(foo+bar)'))
A:
Several issues: delimitedList is looking for a comma-delimited list of myVal, i.e. identifiers, as the only acceptable form of argument list, so of course it can't match 'foo+bar' (not a comma-delimited list of myVal!); fixing that reveals another -- myVal and myFunction start the same way so their order in mySubExpr matters; fixing that reveals yet another -- TWO levels of nesting instead of one. This version seems OK...:
myVal = Word(alphas+nums+'_')
myExpr = Forward()
mySubExpr = (
(Suppress('(') + Group(myExpr) + Suppress(')'))
| myVal + Suppress('(') + Group(delimitedList(myExpr)) + Suppress(')')
| myVal
)
myExpr << mySubExpr + ZeroOrMore( oneOf('+ - / * =') + mySubExpr )
print(myExpr.parseString('blah(foo+bar)'))
emits ['blah', ['foo', '+', 'bar']] as desired. I also removed the redundant backslashes, since logical line continuation occurs anyway within parentheses; they were innocuous but did hamper readability.
A:
I've found that a good habit to get into when using the '<<' operator with Forwards is to always enclose the RHS in parentheses. That is:
myExpr << mySubExpr + ZeroOrMore( oneOf('+ - / * =') + mySubExpr )
is better as:
myExpr << ( mySubExpr + ZeroOrMore( oneOf('+ - / * =') + mySubExpr ) )
This is a result of my unfortunate choice of '<<' as the "insertion" operator for inserting the expression into a Forward. The parentheses are unnecessary in this particular case, but in this one:
integer = Word(nums)
myExpr << mySubExpr + ZeroOrMore( oneOf('+ - / * =') + mySubExpr ) | integer
we see why I say "unfortunate". If I simplify this to "A << B | C", we easily see that the precedence of operations causes evaluation to be performed as "(A << B) | C", since '<<' has higher precedence than '|'. The result is that the Forward A only gets the expression B inserted in it. The "| C" part does get executed, but what happens is that you get "A | C" which creates a MatchFirst object, which is then immediately discarded since it is not assigned to any variable name. The solution would be to group the statement within parentheses as "A << (B | C)". In expressions composed only using '+' operations, there is no actual need for the parentheses, since '+' has a higher precedence than '<<'. But this is just lucky coding, and causes problems when someone later adds an alternative expression using '|' and doesn't realize the precedence implications. So I suggest just adopting the style "A << (expression)" to help avoid this confusion.
(Someday I will write pyparsing 2.0 - which will allow me to break compatibility with existing code - and change this to use the '<<=' operator, which fixes all of these precedence issues, since '<<=' has lower precedence than any of the other operators used by pyparsing.)
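To make the trap concrete, here is a small self-contained demo (a sketch; import from pyparsing_py3 instead, as in the question, if you are on the Python 3 port):
from pyparsing import Forward, Word, alphas, nums, ParseException

good = Forward()
good << (Word(alphas) | Word(nums))   # parentheses: the Forward gets the whole alternation
print(good.parseString('42'))         # -> ['42']

bad = Forward()
bad << Word(alphas) | Word(nums)      # parsed as (bad << Word(alphas)) | Word(nums)
try:
    print(bad.parseString('42'))
except ParseException:
    print('only Word(alphas) was inserted into bad')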
|
PyParsing simple language expressions
|
I'm trying to write something that will parse some code. I'm able to successfully parse foo(spam) and spam+eggs, but foo(spam+eggs) (recursive descent? my terminology from compilers is a bit rusty) fails.
I have the following code:
from pyparsing_py3 import *
myVal = Word(alphas+nums+'_')
myFunction = myVal + '(' + delimitedList( myVal ) + ')'
myExpr = Forward()
mySubExpr = ( \
myVal \
| (Suppress('(') + Group(myExpr) + Suppress(')')) \
| myFunction \
)
myExpr << Group( mySubExpr + ZeroOrMore( oneOf('+ - / * =') + mySubExpr ) )
# SHOULD return: [blah, [foo, +, bar]]
# but actually returns: [blah]
print(myExpr.parseString('blah(foo+bar)'))
|
[
"Several issues: delimitedList is looking for a comma-delimited list of myVal, i.e. identifiers, as the only acceptable form of argument list, so of course it can't match 'foo+bar' (not a comma-delimited list of myVal!); fixing that reveals another -- myVal and myFunction start the same way so their order in mySubExpr matters; fixing that reveals yet another -- TWO levels of nesting instead of one. This versions seems ok...:\nmyVal = Word(alphas+nums+'_') \n\nmyExpr = Forward()\nmySubExpr = (\n (Suppress('(') + Group(myExpr) + Suppress(')'))\n | myVal + Suppress('(') + Group(delimitedList(myExpr)) + Suppress(')')\n | myVal\n )\nmyExpr << mySubExpr + ZeroOrMore( oneOf('+ - / * =') + mySubExpr ) \n\nprint(myExpr.parseString('blah(foo+bar)'))\n\nemits ['blah', ['foo', '+', 'bar']] as desired. I also removed the redundant backslashes, since logical line continuation occurs anyway within parentheses; they were innocuous but did hamper readability.\n",
"I've found that a good habit to get into when using the '<<' operator with Forwards is to always enclose the RHS in parentheses. That is:\nmyExpr << mySubExpr + ZeroOrMore( oneOf('+ - / * =') + mySubExpr )\n\nis better as:\nmyExpr << ( mySubExpr + ZeroOrMore( oneOf('+ - / * =') + mySubExpr ) )\n\nThis is a result of my unfortunate choice of '<<' as the \"insertion\" operator for inserting the expression into a Forward. The parentheses are unnecessary in this particular case, but in this one:\ninteger = Word(nums)\nmyExpr << mySubExpr + ZeroOrMore( oneOf('+ - / * =') + mySubExpr ) | integer\n\nwe see why I say \"unfortunate\". If I simplify this to \"A << B | C\", we easily see that the precedence of operations causes evaluation to be performed as \"(A << B) | C\", since '<<' has higher precedence than '|'. The result is that the Forward A only gets the expression B inserted in it. The \"| C\" part does get executed, but what happens is that you get \"A | C\" which creates a MatchFirst object, which is then immediately discarded since it is not assigned to any variable name. The solution would be to group the statement within parentheses as \"A << (B | C)\". In expressions composed only using '+' operations, there is no actual need for the parentheses, since '+' has a higher precedence than '<<'. But this is just lucky coding, and causes problem when someone later adds an alternative expression using '|' and doesn't realize the precedence implications. So I suggest just adopting the style \"A << (expression)\" to help avoid this confusion.\n(Someday I will write pyparsing 2.0 - which will allow me to break compatibilty with existing code - and change this to use the '<<=' operator, which fixes all of these precedence issues, since '<<=' has lower precedence than any of the other operators used by pyparsing.)\n"
] |
[
4,
4
] |
[] |
[] |
[
"parsing",
"pyparsing",
"python"
] |
stackoverflow_0001376716_parsing_pyparsing_python.txt
|
Q:
Python dictionary to store socket objects
Can we store socket objects in a Python dictionary?
I want to create a socket, store the socket object, do some other work, and then read from the socket (looking it up in the dictionary to get the socket object back).
A:
Yes:
>>> import socket
>>> s = socket.socket()
>>> d = {"key" : s}
>>> d
{'key': <socket._socketobject object at 0x00CEB5A8>}
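To round out the example, here is a minimal sketch of the store/look-up round trip (the host and port are just placeholders):
import socket

sockets = {}
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('example.com', 80))            # placeholder peer
sockets[('example.com', 80)] = s          # key by whatever identifies the peer

# ... do some other work ...

conn = sockets[('example.com', 80)]       # look the socket object back up
conn.sendall('GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
data = conn.recv(4096)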
|
Python dictionary to store socket objects
|
Can we store socket objects in a Python dictionary?
I want to create a socket, store the socket object, do some other work, and then read from the socket (looking it up in the dictionary to get the socket object back).
|
[
"Yes:\n>>> import socket\n>>> s = socket.socket()\n>>> d = {\"key\" : s}\n>>> d\n{'key': <socket._socketobject object at 0x00CEB5A8>}\n\n"
] |
[
10
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001378079_python.txt
|
Q:
Creating GUI with Python in Linux
Quick question.
I'm using Linux and I want to try making a GUI with Python. I've heard about something like Qt, GTK+ and PyGTK but I don't know what they are exactly and what the difference between them is.
Is there any difference on how they work with different DEs like GNOME, KDE, XFCE etc.? Is there any IDE that allows you to create GUI like Microsoft Visual Studio does (for C#, C, Visual Basic etc.)?
Or should I maybe use another language other than Python to make GUI applications?
A:
Your first step should be http://wiki.python.org/moin/GuiProgramming
Some toolkits integrate better in one environment than another. For example, PyQt, PyKDE (and the brand new PySide) will play nicer in a KDE environment, while the GTK versions (including the WX widgets) will blend better into GNOME/XFCE desktops.
You should look at the environment you want to target. You can go for a basic portable GUI kit, or for deeper integration with your DE, like use of the integrated password manager and configuration-file parsers that come with a specific DE like KDE or GNOME.
You should also consider the dependencies that your selection dictates, and what comes by default with a basic DE. For example, PyKDE in the KDE 3.X branch had a non-trivial set of dependencies, while in the 4.X branch the Plasma bindings made the Python GUI programming dependencies less of an issue.
There are several IDE tools, at different levels of completeness and maturity. The best thing is to try one or more, and see what best fits your needs.
A:
I would avoid using another language to make a GUI for Python.
I've had very good luck with wxwidgets, which is the Python binding for WX, a cross-platform development system. It's pretty easy to learn and quite powerful. The problem with wxwidgets is that it is not installed by default, so your users will need to install it on every platform where they wish to run your application. Find more information about it at http://wxwidgets.org/.
If you want people to be able to use your program without installing anything else, use Tkinter, the GUI system that comes with Python.
I would avoid the Python bindings for GTK or KDE unless you already know those systems. They also need to be downloaded, and they do not seem to have as much adoption as wxwidgets.
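For reference, a minimal Tkinter window looks roughly like this (Python 2 module name; it is tkinter on Python 3):
import Tkinter as tk

root = tk.Tk()
root.title('Hello')
tk.Label(root, text='Hello from Tkinter').pack(padx=20, pady=10)
tk.Button(root, text='Quit', command=root.quit).pack(pady=10)
root.mainloop()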
A:
Each desktop environment uses a specific toolkit to build its components. For example, KDE uses Qt and GNOME uses Gtk.
Your choice of toolkit will depend upon what type of desktop environment you're targeting, and if you want to target a wide range of desktops then use a toolkit which will work on many desktop environments, like Wx widgets, which will work on Linux, Mac OS and Windows. For building simple GUI applications, Tkinter will do.
A:
Use the Glade UI designer and PyGTK bindings... that was my first ever experience with Python, and there are lots of blog posts and tutorials to help you get started
A:
Use PyGTK. As important as the toolkit is its underpinnings, with PyGTK you use GLib as well, with its filesystem abstractions (python module gio) that are very important for the Linux desktop, its high-level cross-desktop functions such as glib.get_user_data_dir() and its other application framework tools, and GObject and its property and signals model.
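For comparison with the Tkinter sketch above, a minimal PyGTK 2.x window is roughly:
import gtk

win = gtk.Window(gtk.WINDOW_TOPLEVEL)
win.set_title('Hello')
win.connect('destroy', gtk.main_quit)
win.add(gtk.Label('Hello from PyGTK'))
win.show_all()
gtk.main()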
|
Creating GUI with Python in Linux
|
Quick question.
I'm using Linux and I want to try making a GUI with Python. I've heard about something like Qt, GTK+ and PyGTK but I don't know what they are exactly and what the difference between them is.
Is there any difference on how they work with different DEs like GNOME, KDE, XFCE etc.? Is there any IDE that allows you to create GUI like Microsoft Visual Studio does (for C#, C, Visual Basic etc.)?
Or should I maybe use another language other than Python to make GUI applications?
|
[
"Your first step should be http://wiki.python.org/moin/GuiProgramming\nSome tool-kits integrate better in one environment over the other. For example PyQt, PyKDE (and the brand new PySide) will play nicer in a KDE environment, while the GTK versions (including the WX-widgets) will blend better into a GNOME/XFCE desktops.\nYou should look at the environment you want to target. You can go for basic portable GUI kit, or you can to a deeper integration with tour DE, like use of integrated password manager, and configuration file parsers, that are integrated in a specific DE like KDE or GNOME.\nYou should also consider the dependency that your selection dictates, and what is come by default with a basic DE. For example, PyKDE in the KDE 3.X branch had a non trivial set of dependencies, while at the 4.X branch the plasma binding made the Python GUI programming dependency less of an issue.\nThere are several IDE tools, in different levels of completeness and maturity. The best thing is to try one ore more, and see what best fit your needs.\n",
"I would avoid using another language to make a GUI for Python. \nI've had every good luck with wxwidgets, which is the python binding for WX, a cross-platform development system. It's pretty easy to learn and quite powerful. The problem with wxwidgets is that it is not installed by default, so your users will need to install it on every platform that they wish to run your application. Find more information about it at http://wxwidgets.org/.\nIf you want people to be able to use your program without installing anything else, use Tkinter, the GUI system that comes with Python.\nI would avoid the Python bindings for GTK or KDE unless you already know those systems. They also need to be downloaded, and they do not seem to have as much adoption as wxwidgets.\n",
"Each desktop environment uses a specific toolkit to build it's components. For example, KDE uses Qt and GNOME uses Gtk. \nYour use of a toolkit will be dependent upon what type of desktop environment you're targeting at, and if you want to target a wide range of desktops then use a toolkit which will work on many desktop environments, like Wx widgets which will work on Linux, Mac OS and Windows. For building simple GUI applications, Tkinter will do.\n",
"Use the glade UI designer and pyGtk bindings... that was my first ever experience with python and there are lots of blog posts and tutorials to help you get started\n",
"Use PyGTK. As important as the toolkit is its underpinnings, with PyGTK you use GLib as well, with its filesystem abstractions (python module gio) that are very important for the Linux desktop, its high-level cross-desktop functions such as glib.get_user_data_dir() and its other application framework tools, and GObject and its property and signals model.\n"
] |
[
14,
4,
2,
1,
0
] |
[] |
[] |
[
"gtk",
"linux",
"pygtk",
"python",
"user_interface"
] |
stackoverflow_0001355918_gtk_linux_pygtk_python_user_interface.txt
|
Q:
CheckedListBox used from Python(pywin32)
Does anyone know how to get the list of items and check/uncheck items in a CheckedListBox from Python?
I've found this to help me partly on the way. I think I've found the handle for the CheckedListBox (listed as a SysTreeView32 by WinGuiAuto.py).
One use, for my part, will be to create an autoinstaller that unchecks all the checkboxes that install bloatware.
A:
By using pywinauto I've managed to check items in a checkedlistbox by selecting them twice.
from pywinauto import application
app = application.Application()
app.Form1.CheckedListBox1.Select('item1')
app.Form1.CheckedListBox1.Select('item1')
|
CheckedListBox used from Python(pywin32)
|
Does anyone know how to get the list of items and check/uncheck items in a CheckedListBox from Python?
I've found this to help me partly on the way. I think I've found the handle for the CheckedListBox (listed as a SysTreeView32 by WinGuiAuto.py).
One use, for my part, will be to create an autoinstaller that unchecks all the checkboxes that install bloatware.
|
[
"By using pywinauto I've managed to check items in a checkedlistbox by selecting them twice.\nfrom pywinauto import application\napp = application.Application()\napp.Form1.CheckedListBox1.Select('item1')\napp.Form1.CheckedListBox1.Select('item1')\n\n"
] |
[
1
] |
[] |
[] |
[
"checkedlistbox",
"message",
"python",
"pywin32",
"windows"
] |
stackoverflow_0001305703_checkedlistbox_message_python_pywin32_windows.txt
|
Q:
Python : email sending failing on SSL read
I keep getting this intermittent error when trying to send through 'smtp.gmail.com'.
Traceback (most recent call last):
File "/var/home/ptarjan/django/mysite/django/core/handlers/base.py", line 92, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/var/home/ptarjan/django/mysite/registration/views.py", line 137, in register
new_user = form.save()
File "/var/home/ptarjan/django/mysite/registration/forms.py", line 79, in save
email=self.cleaned_data['email'])
File "/var/home/ptarjan/django/mysite/django/db/transaction.py", line 240, in _commit_on_success
res = func(*args, **kw)
File "/var/home/ptarjan/django/mysite/registration/models.py", line 120, in create_inactive_user
registration_profile.send_registration_mail()
File "/var/home/ptarjan/django/mysite/registration/models.py", line 259, in send_registration_mail
send_mail(subject, message, settings.DEFAULT_FROM_EMAIL, [self.user.email])
File "/var/home/ptarjan/django/mysite/django/core/mail.py", line 390, in send_mail
connection=connection).send()
File "/var/home/ptarjan/django/mysite/django/core/mail.py", line 266, in send
return self.get_connection(fail_silently).send_messages([self])
File "/var/home/ptarjan/django/mysite/django/core/mail.py", line 172, in send_messages
sent = self._send(message)
File "/var/home/ptarjan/django/mysite/django/core/mail.py", line 186, in _send
email_message.message().as_string())
File "/usr/lib/python2.5/smtplib.py", line 704, in sendmail
(code,resp) = self.data(msg)
File "/usr/lib/python2.5/smtplib.py", line 484, in data
(code,repl)=self.getreply()
File "/usr/lib/python2.5/smtplib.py", line 352, in getreply
line = self.file.readline()
File "/usr/lib/python2.5/smtplib.py", line 160, in readline
chr = self.sslobj.read(1)
sslerror: The read operation timed out
I'm running Django 1.1 with the django-registration app. With these in my settings.py
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = "**SECRET**"
EMAIL_HOST_PASSWORD = "**SECRET**"
EMAIL_PORT = 587
EMAIL_USE_TLS = True
A:
Although I don't know why, I have been through this, and it works when you have the settings variables in a particular order:
EMAIL_HOST
EMAIL_PORT
EMAIL_HOST_USER
EMAIL_HOST_PASSWORD
EMAIL_USE_TLS
A:
Looks like gmail may simply be occasionally slow to respond, so your operation times out. Perhaps you can use a try/except to catch such issues and retry a few times (maybe waiting a while between attempts). BTW, this task seems well suited to a dedicated or pooled thread which encapsulates the whole "send mail with retries" operation.
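A minimal sketch of that retry idea (the helper name, attempt count and delay are arbitrary placeholders):
import time
from django.core.mail import send_mail

def send_mail_with_retries(subject, message, from_email, recipient_list,
                           attempts=3, delay=5):
    for attempt in range(attempts):
        try:
            send_mail(subject, message, from_email, recipient_list)
            return
        except Exception:            # e.g. the sslerror timeout in the traceback
            if attempt == attempts - 1:
                raise
            time.sleep(delay)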
|
Python : email sending failing on SSL read
|
I keep getting this intermittent error when trying to send through 'smtp.gmail.com'.
Traceback (most recent call last):
File "/var/home/ptarjan/django/mysite/django/core/handlers/base.py", line 92, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/var/home/ptarjan/django/mysite/registration/views.py", line 137, in register
new_user = form.save()
File "/var/home/ptarjan/django/mysite/registration/forms.py", line 79, in save
email=self.cleaned_data['email'])
File "/var/home/ptarjan/django/mysite/django/db/transaction.py", line 240, in _commit_on_success
res = func(*args, **kw)
File "/var/home/ptarjan/django/mysite/registration/models.py", line 120, in create_inactive_user
registration_profile.send_registration_mail()
File "/var/home/ptarjan/django/mysite/registration/models.py", line 259, in send_registration_mail
send_mail(subject, message, settings.DEFAULT_FROM_EMAIL, [self.user.email])
File "/var/home/ptarjan/django/mysite/django/core/mail.py", line 390, in send_mail
connection=connection).send()
File "/var/home/ptarjan/django/mysite/django/core/mail.py", line 266, in send
return self.get_connection(fail_silently).send_messages([self])
File "/var/home/ptarjan/django/mysite/django/core/mail.py", line 172, in send_messages
sent = self._send(message)
File "/var/home/ptarjan/django/mysite/django/core/mail.py", line 186, in _send
email_message.message().as_string())
File "/usr/lib/python2.5/smtplib.py", line 704, in sendmail
(code,resp) = self.data(msg)
File "/usr/lib/python2.5/smtplib.py", line 484, in data
(code,repl)=self.getreply()
File "/usr/lib/python2.5/smtplib.py", line 352, in getreply
line = self.file.readline()
File "/usr/lib/python2.5/smtplib.py", line 160, in readline
chr = self.sslobj.read(1)
sslerror: The read operation timed out
I'm running Django 1.1 with the django-registration app. With these in my settings.py
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = "**SECRET**"
EMAIL_HOST_PASSWORD = "**SECRET**"
EMAIL_PORT = 587
EMAIL_USE_TLS = True
|
[
"Altho' I don't know why, I have been thro' this, and it works when you have settings variables ordered in a particular order:\n\nEMAIL_HOST\nEMAIL_PORT\nEMAIL_HOST_USER\nEMAIL_HOST_PASSWORD\nEMAIL_USE_TLS\n\n",
"Looks like gmail may simply be occasionally slow to respond, so your operation times out. Perhaps you can use a try/except to catch such issues and retry a few times (maybe waiting a while between attempts). BTW, this task seems well suited to a dedicated or pooled thread which encapsulates the whole \"send mail with retries\" operation.\n"
] |
[
3,
0
] |
[] |
[] |
[
"django",
"email",
"python",
"smtp",
"ssl"
] |
stackoverflow_0001376450_django_email_python_smtp_ssl.txt
|
Q:
How to make this code handle big inputs more efficiently?
Hey. I know this is not a 'refactor my code' site but I made this little piece of code which works perfectly fine with moderately sized input but is problematic with strings of size, say, over 2000.
What it does - it takes a string of numbers as a parameter, and it returns the number of ways it can be interpreted as a string of letters, where each letter in the English alphabet is assigned a numeric value according to its lexical position: A -> 1, B-> 2, Z-> 26 etc.
Since some letters are represented as two numbers the suffix tree is not unique so there can be multiple interpretations. For example '111' could be 'AAA', 'KA' or 'AK'.
This is my code. It's fairly readable and straightforward but it's problematic because:
It has to copy part of the string every time to call it as argument to the recursive part.
It has to store huge strings in the cache so it's very expensive, memory-wise.
... it's recursive.
Help much appreciated :)
cache = dict()
def alpha_code(numbers):
"""
Returns the number of ways a string of numbers
can be interpreted as an alphabetic sequence.
"""
global cache
if numbers in cache: return cache[numbers]
## check the basic cases
if numbers.startswith('0'): return 0
if len(numbers) <= 1: return 1
## dynamic programming part
## obviously we can treat the first (non-zero)
## digit as a single letter and continue -
## '342...' -> C + '42...'
total = alpha_code(numbers[1:])
## the first two digits make for a legal letter
## iff this condition holds
## '2511...' -> Y + '11...'
## '3711...' -> illegal
if numbers[:2] <= '26':
total += alpha_code(numbers[2:])
cache[numbers] = total
return total
A:
Try using a dynamic programming approach instead:
Create an array (call it 'P') with 1 element per character in the string.
Initialize P[0] = 1 (unless the first character is 0, in which case just return 0 for the result).
Initialize P[1] = 2 if the first two characters can be interpreted as a letter and the current (second) character can stand alone as well; otherwise 1 if the current character is non-zero; otherwise return 0 for the result.
Fill in the rest of the array from left to right, via the following rule (pseudocode):
P[x] =
(if current character is '0' then 0, else P[x-1])
+
(if previous character + current character can be interpreted as a letter
then P[x-2] else 0)
(Note that if P[x] is ever 0 you should return zero, since that means there were two 0's in a row which your rules don't seem to allow.)
The first portion of the sum is to deal with the case where the current character is interpreted as a letter; the second part of the sum is to deal with the case where the 2 most recent characters are interpreted as a letter.
Essentially, P[x] will be equal to the number of ways that the entirety of the string from the start up to position x can be interpreted as letters. Since you can determine this from looking at previous results, you only need to loop through the contents of the string once - O(N) time instead of O(2^N), which is a huge improvement. Your final result is simply P[len(input)-1] since "everything from the start up to the end" is the same as just "the entire string".
Example run for your very basic input case of '111':
P[0] = 1 (Since 1 is non-zero)
P[1] = 2 (Since 11 is a valid letter, and 1 is also a valid letter)
P[2] = 3 (Since the most recent two characters together are a valid letter, and the current character is nonzero, so P[0]+P[1] = 1+2 = 3)
Since P[2] is our last result, and it's 3, our answer is 3.
If the string were '1111' instead, we'd continue another step:
P[3] = 5 (Since the most recent two characters are a valid letter, and current character is non-zero, so P[1]+P[2] = 2+3 = 5)
The answer is indeed 5 - valid interpretations being AAAA, KK, AKA, AAK, KAA. Notice how those 5 potential answers are built up from the potential interpretations of '11' and '111':
'11': AA or K
'111': AAA or KA or AK
'111'+A: AAA+A or KA+A or AK+A
'11'+K: AA+K or K+K
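A direct implementation of the table described above might look like this (a sketch that folds the P[1] boundary case into the loop by treating the value before the start as 1):
def alpha_code(numbers):
    # O(N) dynamic programming over the digit string
    if not numbers or numbers[0] == '0':
        return 0
    P = [0] * len(numbers)
    P[0] = 1
    for x in range(1, len(numbers)):
        if numbers[x] != '0':                      # current digit read alone
            P[x] += P[x - 1]
        if '10' <= numbers[x - 1:x + 1] <= '26':   # previous+current as one letter
            P[x] += P[x - 2] if x >= 2 else 1
        if P[x] == 0:                              # no legal reading at all
            return 0
    return P[-1]

print alpha_code('111')    # 3
print alpha_code('1111')   # 5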
A:
Recursion elimination is always a fun task. Here, I'd focus on ensuring the cache is correctly populated, then just use it, as follows...:
import collections
def alpha_code(numbers):
# populate cache with all needed pieces
cache = dict()
pending_work = collections.deque([numbers])
while pending_work:
work = pending_work.popleft()
# if cache[work] is known or easy, just go for it
if work in cache:
continue
if work[:1] == '0':
cache[work] = 0
continue
elif len(work) <= 1:
cache[work] = 1
continue
# are there missing pieces? If so queue up the pieces
# on the left (shorter first), the current work piece
# on the right, and keep churning
n1 = work[1:]
t1 = cache.get(n1)
if t1 is None:
pending_work.appendleft(n1)
if work[:2] <= '26':
n2 = work[2:]
t2 = cache.get(n2)
if t2 is None:
pending_work.appendleft(n2)
else:
t2 = 0
if t1 is None or t2 is None:
pending_work.append(work)
continue
# we have all pieces needed to add this one
total = t1 + t2
cache[work] = total
# cache fully populated, so we know the answer
return cache[numbers]
A:
A non-recursive algorithm can be written but I don't think it will be faster. I'm no python expert so I'll just give you an algorithm:
Convert the array of numbers to an array of letters using just A through I, leaving the zeros in place.
Create two nested loops where you search and replace all the known pairs that represent larger letters. (AA -> K)
The nice thing about this algorithm is you can optimize the search/replace by first searching for and indexing all the As and Bs in the array.
Since you are using Python, regardless of what you do, you should convert the string into a list of numbers. The numbers 0-9 are static objects in Python, so they are free to allocate. You could also create reusable character objects for A through Z. The other benefit of a list is that the replace operation to remove two elements and insert a single element is probably much faster than copying the string over and over.
A:
You can largely reduce the memory footprint by not copying the string, and passing instead the original string and the index of the first character to study:
def alpha_code(numbers, start_from = 0)
....
You would then call recursively as:
alpha_code(numbers, start_from + 1) # or start_from + 2, etc.
This way, you preserve the simplicity of a recursive algorithm and you save a lot of memory.
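A fuller sketch of that idea, with the cache keyed by index instead of by substring (note that for very long inputs the recursion depth may still need sys.setrecursionlimit):
def alpha_code(numbers, start_from=0, cache=None):
    if cache is None:
        cache = {}
    if start_from in cache:
        return cache[start_from]
    if start_from >= len(numbers):       # consumed the whole string: one way
        return 1
    if numbers[start_from] == '0':       # no letter starts with 0
        return 0
    total = alpha_code(numbers, start_from + 1, cache)
    if start_from + 1 < len(numbers) and numbers[start_from:start_from + 2] <= '26':
        total += alpha_code(numbers, start_from + 2, cache)
    cache[start_from] = total
    return total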
|
How to make this code handle big inputs more efficiently?
|
Hey. I know this is not a 'refactor my code' site but I made this little piece of code which works perfectly fine with moderately sized input but is problematic with strings of size, say, over 2000.
What it does - it takes a string of numbers as a parameter, and it returns the number of ways it can be interpreted as a string of letters, where each letter in the English alphabet is assigned a numeric value according to its lexical position: A -> 1, B-> 2, Z-> 26 etc.
Since some letters are represented as two numbers the suffix tree is not unique so there can be multiple interpretations. For example '111' could be 'AAA', 'KA' or 'AK'.
This is my code. It's fairly readable and straightforward but it's problematic because:
It has to copy part of the string every time to call it as argument to the recursive part.
It has to store huge strings in the cache so it's very expensive, memory-wise.
... it's recursive.
Help much appreciated :)
cache = dict()
def alpha_code(numbers):
"""
Returns the number of ways a string of numbers
can be interpreted as an alphabetic sequence.
"""
global cache
if numbers in cache: return cache[numbers]
## check the basic cases
if numbers.startswith('0'): return 0
if len(numbers) <= 1: return 1
## dynamic programming part
## obviously we can treat the first (non-zero)
## digit as a single letter and continue -
## '342...' -> C + '42...'
total = alpha_code(numbers[1:])
## the first two digits make for a legal letter
## iff this condition holds
## '2511...' -> Y + '11...'
## '3711...' -> illegal
if numbers[:2] <= '26':
total += alpha_code(numbers[2:])
cache[numbers] = total
return total
|
[
"Try using a dynamic programming approach instead:\n\nCreate an array (call it 'P') with 1 element per character in the string.\nInitialize P[0] = 1 (unless the first character is 0, in which case just return 0 for the result).\nInitialize P[1] = 2 if the first two characters can be interpreted as a letter as can the current; otherwise 1 if the current character is non-zero, otherwise return 0 for the result).\nFill in the rest of the array from left to right, via the following rule (pseudocode):\nP[x] =\n (if current character is '0' then 0, else P[x-1])\n +\n (if previous character + current character can be interpreted as a letter\n then P[x-2] else 0)\n\n(Note that if P[x] is ever 0 you should return zero, since that means there were two 0's in a row which your rules don't seem to allow.)\nThe first portion of the sum is to deal with the case where the current character is interpreted as a letter; the second part of the sum is to deal with the case where the 2 most recent characters are interpreted as a letter.\nEssentially, P[x] will be equal to the number of ways that the entirety of the string from the start up to position x can be interpreted as letters. Since you can determine this from looking at previous results, you only need to loop through the contents of the string once - an O(N) time instead of a O(2N) which is a huge improvement. Your final result is simply P[len(input)-1] since \"everything from the start up to the end\" is the same as just \"the entire string\".\nExample run for your very basic input case of '111':\n\nP[0] = 1 (Since 1 is non-zero)\nP[1] = 2 (Since 11 is a valid letter, and 1 is also a valid letter)\nP[2] = 3 (Since the most recent two characters together are a valid letter, and the current character is nonzero, so P[0]+P[1] = 1+2 = 3)\n\nSince P[2] is our last result, and it's 3, our answer is 3.\nIf the string were '1111' instead, we'd continue another step:\n\nP[3] = 5 (Since the most recent two characters are a valid letter, and current character is non-zero, so P[1]+P[2] = 2+3 = 5)\n\nThe answer is indeed 5 - valid interpretations being AAAA, KK, AKA, AAK, KAA. Notice how those 5 potential answers are built up from the potential interpretations of '11' and '111':\n'11': AA or K\n'111': AAA or KA or AK\n'111'+A: AAA+A or KA+A or AK+A\n'11'+K: AA+K or K+K\n",
"Recursion elimination is always a fun task. Here, I'd focus on ensuring the cache is correctly populated, then just use it, as follows...:\nimport collections\n\ndef alpha_code(numbers):\n # populate cache with all needed pieces\n cache = dict()\n pending_work = collections.deque([numbers])\n while pending_work:\n work = pending_work.popleft()\n # if cache[work] is known or easy, just go for it\n if work in cache:\n continue\n if work[:1] == '0':\n cache[work] = 0\n continue\n elif len(work) <= 1:\n cache[work] = 1\n continue\n # are there missing pieces? If so queue up the pieces\n # on the left (shorter first), the current work piece\n # on the right, and keep churning\n n1 = work[1:]\n t1 = cache.get(n1)\n if t1 is None:\n pending_work.appendleft(n1)\n if work[:2] <= '26':\n n2 = work[2:]\n t2 = cache.get(n2)\n if t2 is None:\n pending_work.appendleft(n2)\n else:\n t2 = 0\n if t1 is None or t2 is None:\n pending_work.append(work)\n continue\n # we have all pieces needed to add this one\n total = t1 + t2\n cache[work] = total\n\n # cache fully populated, so we know the answer\n return cache[numbers]\n\n",
"A non-recursive algorithm can be written but I don't think it will be faster. I'm no python expert so I'll just give you an algorithm:\nConvert the array on numbers to an array of letters using just A thru I and leaving the zeros in place. \nCreate two nested loops where you search and replace all the known pairs that represent larger letters. (AA -> K)\n\nThe nice thing about this algorithm is you can optimize the search/replace by first searching for and indexing all the As and Bs in the array.\nSince you are using Python, regardless of what you do, you should convert the string into a list of numbers. The numbers 0-9 are static objects in Python so that means they are free to allocate. You could also create reusable character objects of A thru Z. The other benefit of a list is the replace operation to remove two elements and insert a single element is problem much faster than copying the string over and over.\n",
"You can largely reduce the memory footprint by not copying the string, and passing instead the original string and the index of the first character to study:\ndef alpha_code(numbers, start_from = 0)\n ....\n\nYou would then call recursively as:\nalpha_code(numbers, start_from + 1) # or start_from + 2, etc.\n\nThis way, your preserve the simplicity of a recursive algorithm and you save a lot of memory.\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"python",
"recursion"
] |
stackoverflow_0001377335_python_recursion.txt
|
Q:
How to get/set data of(into) visual components (of windows programs) programmatically?
I am talking about Windows GUI programs. Say a program window has a dialog box (or a confirm button) asking for user input. How can I provide input to that program using my program (written in, say, C#, Java or Python)? Or say a program window is showing some image in one of its panels. How can I grab that from another (my) program? It is a kind of impersonation (or inter-win-program messaging?). Someone told me it can be done using Spy++. But how? Can you explain? What code to write? What to do with Spy++?
A:
spy++ is listening for all win32 messages. It is very useful for debugging an application but I don't think that it is a good idea to use it as an inter-process communication mechanism.
You can use Win32 APIs to send input to your program. As an example, you can modify the content of an edit control by using the SetWindowText function.
You need to get the handle of the window. You can use the FindWindow and GetDlgItem functions to get it.
It will work for C++ and Python thanks to the Python win32 extensions. I don't know whether it is possible to use the Win32 API from Java.
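A minimal pywin32 sketch of that flow (the window title and control class here are just placeholders for whatever Spy++ shows you):
import win32gui

# find a top-level window by its title
hwnd = win32gui.FindWindow(None, 'Untitled - Notepad')

# find a child edit control inside it and set its text
edit = win32gui.FindWindowEx(hwnd, 0, 'Edit', None)
win32gui.SetWindowText(edit, 'Hello from another process')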
A:
To programmatically generate key presses, see Key Presses in Python, and search for "generate", "key", "emulate", "fake".
To programmatically capture the screen, see Capture screenshot of active window?, and search for terms like "capture", "window", "screen".
A:
If you want to write win32 code to do it, start here:
http://msdn.microsoft.com/en-us/library/ms632589(VS.85).aspx
If you just want to hook the gui as easily as possible, investigate:
http://www.winbatch.com/
http://www.autoitscript.com/
http://www.autohotkey.com/
|
How to get/set data of(into) visual components (of windows programs) programmatically?
|
I am talking about Windows GUI programs. Say a program window has a dialog box (or a confirm button) asking for user input. How can I provide input to that program using my program (written in, say, C#, Java or Python)? Or say a program window is showing some image in one of its panels. How can I grab that from another (my) program? It is a kind of impersonation (or inter-win-program messaging?). Someone told me it can be done using Spy++. But how? Can you explain? What code to write? What to do with Spy++?
|
[
"spy++ is listening for all win32 messages. It is very useful for debugging an application but I don't think that it is a good idea to use it as an inter-process communication mechanism.\nYou can use win32 apis to send input to your program. As an example, You can modify the content of an edit text by using the SetWindowText function. \nYou need to get the handle of the window. You can use the FindWindow and the GetDlgItem to get it.\nIt will work for c++ and python thanks to win32 python extension. I don't know if there it is possible to use win32 api from java.\n",
"To programmatically generate key presses, see Key Presses in Python, and search for \"generate\", \"key\", \"emulate\", \"fake\".\nTo programmatically capture the screen, see Capture screenshot of active window?, and search for terms like \"capture\", \"window\", \"screen\".\n",
"If you want to write win32 code to do it, start here:\nhttp://msdn.microsoft.com/en-us/library/ms632589(VS.85).aspx\nIf you just want to hook the gui as easily as possible, investigate:\nhttp://www.winbatch.com/\nhttp://www.autoitscript.com/\nhttp://www.autohotkey.com/\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"c#",
"impersonation",
"java",
"python",
"windows"
] |
stackoverflow_0001378803_c#_impersonation_java_python_windows.txt
|
Q:
Python multiprocessing with twisted's reactor
I am working on an xmlrpc server which has to perform certain tasks cyclically. I am using Twisted as the core of the xmlrpc service but I am running into a little problem:
class cemeteryRPC(xmlrpc.XMLRPC):
def __init__(self, dic):
xmlrpc.XMLRPC.__init__(self)
def xmlrpc_foo(self):
return 1
def cycle(self):
print "Hello"
time.sleep(3)
class cemeteryM( base ):
def __init__(self, dic): # dic is for cemetery
multiprocessing.Process.__init__(self)
self.cemRPC = cemeteryRPC()
def run(self):
# Start reactor on a second process
reactor.listenTCP( c.PORT_XMLRPC, server.Site( self.cemRPC ) )
p = multiprocessing.Process( target=reactor.run )
p.start()
while not self.exit.is_set():
self.cemRPC.cycle()
#p.join()
if __name__ == "__main__":
import errno
test = cemeteryM()
test.start()
# trying new method
notintr = False
while not notintr:
try:
test.join()
notintr = True
except OSError, ose:
if ose.errno != errno.EINTR:
raise ose
except KeyboardInterrupt:
notintr = True
How should I go about joining these two processes so that their respective joins don't block?
(I am pretty confused by "join". Why would it block? I have googled but can't find much helpful explanation of the usage of join. Can someone explain this to me?)
Regards
A:
Do you really need to run Twisted in a separate process? That looks pretty unusual to me.
Try to think of Twisted's Reactor as your main loop - and hang everything you need off that - rather than trying to run Twisted as a background task.
The more normal way of performing this sort of operation would be to use Twisted's .callLater or to add a LoopingCall object to the Reactor.
e.g.
from twisted.web import xmlrpc, server
from twisted.internet import task
from twisted.internet import reactor
class Example(xmlrpc.XMLRPC):
def xmlrpc_add(self, a, b):
return a + b
def timer_event(self):
print "one second"
r = Example()
m = task.LoopingCall(r.timer_event)
m.start(1.0)
reactor.listenTCP(7080, server.Site(r))
reactor.run()
A:
Hey asdvawev - .join() in multiprocessing works just like .join() in threading - it's a blocking call the main thread runs to wait for the worker to shut down. If the worker never shuts down, then .join() will never return. For example:
class myproc(Process):
def run(self):
while True:
time.sleep(1)
Calling run on this means that join() will never, ever return. Typically to prevent this I'll use an Event() object passed into the child process to allow me to signal the child when to exit:
class myproc(Process):
def __init__(self, event):
self.event = event
Process.__init__(self)
def run(self):
while not self.event.is_set():
time.sleep(1)
Alternatively, if your work is encapsulated in a queue - you can simply have the child process work off of the queue until it encounters a sentinel (typically a None entry in the queue) and then shut down.
Both of these suggestions mean that prior to calling .join() you can set the event, or insert the sentinel, and when join() is called, the process will finish its current task and then exit properly.
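The queue-plus-sentinel variant, sketched out (class and item names are just placeholders):
from multiprocessing import Process, Queue

class MyProc(Process):
    def __init__(self, queue):
        Process.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            item = self.queue.get()
            if item is None:          # sentinel: shut down cleanly
                break
            # ... process item here ...

if __name__ == '__main__':
    q = Queue()
    worker = MyProc(q)
    worker.start()
    q.put('some work')
    q.put(None)                       # send the sentinel ...
    worker.join()                     # ... so join() returns promptly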
|
Python multiprocessing with twisted's reactor
|
I am working on an xmlrpc server which has to perform certain tasks cyclically. I am using Twisted as the core of the xmlrpc service but I am running into a little problem:
class cemeteryRPC(xmlrpc.XMLRPC):
def __init__(self, dic):
xmlrpc.XMLRPC.__init__(self)
def xmlrpc_foo(self):
return 1
def cycle(self):
print "Hello"
time.sleep(3)
class cemeteryM( base ):
def __init__(self, dic): # dic is for cemetery
multiprocessing.Process.__init__(self)
self.cemRPC = cemeteryRPC()
def run(self):
# Start reactor on a second process
reactor.listenTCP( c.PORT_XMLRPC, server.Site( self.cemRPC ) )
p = multiprocessing.Process( target=reactor.run )
p.start()
while not self.exit.is_set():
self.cemRPC.cycle()
#p.join()
if __name__ == "__main__":
import errno
test = cemeteryM()
test.start()
# trying new method
notintr = False
while not notintr:
try:
test.join()
notintr = True
except OSError, ose:
if ose.errno != errno.EINTR:
raise ose
except KeyboardInterrupt:
notintr = True
How should I go about joining these two processes so that their respective joins don't block?
(I am pretty confused by "join". Why would it block? I have googled but can't find much helpful explanation of the usage of join. Can someone explain this to me?)
Regards
|
[
"Do you really need to run Twisted in a separate process? That looks pretty unusual to me.\nTry to think of Twisted's Reactor as your main loop - and hang everything you need off that - rather than trying to run Twisted as a background task.\nThe more normal way of performing this sort of operation would be to use Twisted's .callLater or to add a LoopingCall object to the Reactor.\ne.g.\nfrom twisted.web import xmlrpc, server\nfrom twisted.internet import task\nfrom twisted.internet import reactor\n\nclass Example(xmlrpc.XMLRPC): \n def xmlrpc_add(self, a, b):\n return a + b\n\n def timer_event(self):\n print \"one second\"\n\nr = Example()\nm = task.LoopingCall(r.timer_event)\nm.start(1.0)\n\nreactor.listenTCP(7080, server.Site(r))\nreactor.run()\n\n",
"Hey asdvawev - .join() in multiprocessing works just like .join() in threading - it's a blocking call the main thread runs to wait for the worker to shut down. If the worker never shuts down, then .join() will never return. For example:\nclass myproc(Process):\n def run(self):\n while True:\n time.sleep(1)\n\nCalling run on this means that join() will never, ever return. Typically to prevent this I'll use an Event() object passed into the child process to allow me to signal the child when to exit:\nclass myproc(Process):\n def __init__(self, event):\n self.event = event\n Process.__init__(self)\n def run(self):\n while not self.event.is_set():\n time.sleep(1)\n\nAlternatively, if your work is encapsulated in a queue - you can simply have the child process work off of the queue until it encounters a sentinel (typically a None entry in the queue) and then shut down.\nBoth of these suggestions means that prior to calling .join() you can send set the event, or insert the sentinel and when join() is called, the process will finish it's current task and then exit properly.\n"
] |
[
11,
3
] |
[] |
[] |
[
"multiprocessing",
"python",
"twisted"
] |
stackoverflow_0001377494_multiprocessing_python_twisted.txt
|
Q:
Closures in Python
I've been trying to learn Python, and while I'm enthusiastic about using closures in Python, I've been having trouble getting some code to work properly:
def memoize(fn):
def get(key):
return (False,)
def vset(key, value):
global get
oldget = get
def newget(ky):
if key==ky: return (True, value)
return oldget(ky)
get = newget
def mfun(*args):
cache = get(args)
if (cache[0]): return cache[1]
val = apply(fn, args)
vset(args, val)
return val
return mfun
def fib(x):
if x<2: return x
return fib(x-1)+fib(x-2)
def fibm(x):
if x<2: return x
return fibm(x-1)+fibm(x-2)
fibm = memoize(fibm)
Basically, what this is supposed to do is use closures to maintain the memoized state of the function. I realize there are probably many faster, easier to read, and in general more 'Pythonic' ways to implement this; however, my goal is to understand exactly how closures work in Python, and how they differ from Lisp, so I'm not interested in alternative solutions, just why my code doesn't work and what I can do (if anything) to fix it.
The problem I'm running into is when I try to use fibm - Python insists that get isn't defined:
Python 2.6.1 (r261:67515, Feb 1 2009, 11:39:55)
[GCC 4.0.1 (Apple Inc. build 5488)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import memoize
>>> memoize.fibm(35)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "memoize.py", line 14, in mfun
cache = get(args)
NameError: global name 'get' is not defined
>>>
Seeing as I'm new to Python, I don't know if I've done something wrong, or if this is just a limitation of the language. I'm hoping it's the former. :-)
A:
The problem is in your scoping, not in your closures. If you're up for some heavy reading, then you can try http://www.python.org/dev/peps/pep-3104/.
If that's not the case, here's the simple explanation:
The problem is in the statement global get. global refers to the outermost scope, and since there isn't any global function get, it throws.
What you need, is an access specifier for variables in the enclosing scope, and not the global scope.
In python 3.0, as I've tested, the nonlocal keyword is exactly what you need, in the place of global.
nonlocal get
...
In python 2.x, I just removed the global get and the oldget references and it works properly.
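For reference, the question's memoize under Python 3, with nonlocal in place of global (and fn(*args) in place of apply, which is gone in 3.x):
def memoize(fn):
    def get(key):
        return (False,)
    def vset(key, value):
        nonlocal get              # rebind the enclosing get, not a global
        oldget = get
        def newget(ky):
            if key == ky:
                return (True, value)
            return oldget(ky)
        get = newget
    def mfun(*args):
        cache = get(args)
        if cache[0]:
            return cache[1]
        val = fn(*args)
        vset(args, val)
        return val
    return mfun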
A:
def memoize(fn):
get = [lambda key: (False, None)]
def vset(args):
value = fn(*args)
oldget = get[0]
def newget(key):
if args == key:
return (True, value)
return oldget(key)
get[0] = newget
return value
def mfun(*args):
found, value = get[0](args)
if found:
return value
return vset(args)
return mfun
CALLS = 0
def fib(x):
global CALLS
CALLS += 1
if x<2: return x
return fib(x-1)+fib(x-2)
@memoize
def fibm(x):
global CALLS
CALLS += 1
if x<2: return x
return fibm(x-1)+fibm(x-2)
CALLS = 0
print "fib(35) is", fib(35), "and took", CALLS, "calls"
CALLS = 0
print "fibm(35) is", fibm(35), "and took", CALLS, "calls"
Output is:
fib(35) is 9227465 and took 29860703 calls
fibm(35) is 9227465 and took 36 calls
Similar to other answers, however this one works. :)
The important change from the code in the question is assigning to a non-global non-local (get); however, I also made some improvements while trying to maintain your *cough*broken*cough* closure use. Usually the cache is a dict instead of a linked list of closures.
A:
You want to put global get at the beginning of every function (except get itself).
The def get is an assignment to the name get, so you want get to be declared global before that.
Putting global get in mfun and vset makes them work. I can't point to the scoping rules that makes this necessary, but it works ;-)
Your conses are quite lispy too... :)
A:
get is not global, but local to the surrounding function; that's why the global declaration fails.
If you remove the global, it still fails, because you can't assign to the captured variable name. To work around that, you can use an object as the variable captured by your closures and then just change properties of that object:
class Memo(object):
pass
def memoize(fn):
def defaultget(key):
return (False,)
memo = Memo()
memo.get = defaultget
def vset(key, value):
oldget = memo.get
def newget(ky):
if key==ky: return (True, value)
return oldget(ky)
memo.get = newget
def mfun(*args):
cache = memo.get(args)
if cache[0]: return cache[1]
val = apply(fn, args)
vset(args, val)
return val
return mfun
This way you don't need to assign to the captured variable names but still get what you wanted.
A:
Probably because you want the global get while it isn't a global?
By the way, apply is deprecated, use fn(*args) instead.
def memoize(fn):
def get(key):
return (False,)
def vset(key, value):
def newget(ky):
if key==ky: return (True, value)
return get(ky)
get = newget
def mfun(*args):
cache = get(args)
if (cache[0]): return cache[1]
val = fn(*args)
vset(args, val)
return val
return mfun
def fib(x):
if x<2: return x
return fib(x-1)+fib(x-2)
def fibm(x):
if x<2: return x
return fibm(x-1)+fibm(x-2)
fibm = memoize(fibm)
A:
I think the best way would be:
class Memoized(object):
def __init__(self,func):
self.cache = {}
self.func = func
def __call__(self,*args):
        if args in self.cache: return self.cache[args]
else:
self.cache[args] = self.func(*args)
return self.cache[args]
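Used as a decorator (and with the self.cache fix above), it would look like:
@Memoized
def fib(x):
    if x < 2: return x
    return fib(x - 1) + fib(x - 2)

print fib(35)   # the recursive calls all hit the cache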
|
Closures in Python
|
I've been trying to learn Python, and while I'm enthusiastic about using closures in Python, I've been having trouble getting some code to work properly:
def memoize(fn):
def get(key):
return (False,)
def vset(key, value):
global get
oldget = get
def newget(ky):
if key==ky: return (True, value)
return oldget(ky)
get = newget
def mfun(*args):
cache = get(args)
if (cache[0]): return cache[1]
val = apply(fn, args)
vset(args, val)
return val
return mfun
def fib(x):
if x<2: return x
return fib(x-1)+fib(x-2)
def fibm(x):
if x<2: return x
return fibm(x-1)+fibm(x-2)
fibm = memoize(fibm)
Basically, what this is supposed to do is use closures to maintain the memoized state of the function. I realize there are probably many faster, easier to read, and in general more 'Pythonic' ways to implement this; however, my goal is to understand exactly how closures work in Python, and how they differ from Lisp, so I'm not interested in alternative solutions, just why my code doesn't work and what I can do (if anything) to fix it.
The problem I'm running into is when I try to use fibm - Python insists that get isn't defined:
Python 2.6.1 (r261:67515, Feb 1 2009, 11:39:55)
[GCC 4.0.1 (Apple Inc. build 5488)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import memoize
>>> memoize.fibm(35)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "memoize.py", line 14, in mfun
cache = get(args)
NameError: global name 'get' is not defined
>>>
Seeing as I'm new to Python, I don't know if I've done something wrong, or if this is just a limitation of the language. I'm hoping it's the former. :-)
|
[
"The problem is in your scoping, not in your closures. If you're up for some heavy reading, then you can try http://www.python.org/dev/peps/pep-3104/.\nIf that's not the case, here's the simple explanation:\nThe problem is in the statement global get . global refers to the outermost scope, and since there isn't any global function get, it throws.\nWhat you need, is an access specifier for variables in the enclosing scope, and not the global scope.\nIn python 3.0, as I've tested, the nonlocal keyword is exactly what you need, in the place of global.\nnonlocal get\n...\n\nIn python 2.x, I just removed the global get and the oldget references and it works properly.\n",
"def memoize(fn):\n get = [lambda key: (False, None)]\n\n def vset(args):\n value = fn(*args)\n oldget = get[0]\n def newget(key):\n if args == key:\n return (True, value)\n return oldget(key)\n get[0] = newget\n return value\n\n def mfun(*args):\n found, value = get[0](args)\n if found:\n return value\n return vset(args)\n\n return mfun\n\nCALLS = 0\n\ndef fib(x):\n global CALLS\n CALLS += 1\n if x<2: return x\n return fib(x-1)+fib(x-2)\n\n@memoize\ndef fibm(x):\n global CALLS\n CALLS += 1\n if x<2: return x\n return fibm(x-1)+fibm(x-2)\n\nCALLS = 0\nprint \"fib(35) is\", fib(35), \"and took\", CALLS, \"calls\"\nCALLS = 0\nprint \"fibm(35) is\", fibm(35), \"and took\", CALLS, \"calls\"\n\nOutput is:\nfib(35) is 9227465 and took 29860703 calls\nfibm(35) is 9227465 and took 36 calls\n\nSimilar to other answers, however this one works. :)\nThe important change from the code in the question is assigning to a non-global non-local (get); however, I also made some improvements while trying to maintain your *cough*broken*cough* closure use. Usually the cache is a dict instead of a linked list of closures.\n",
"You want to put global get at the beginning of every function (except get itself).\nthe def get is an assignment to the name get, so you want get to be declared global before that.\nPutting global get in mfun and vset makes them work. I can't point to the scoping rules that makes this necessary, but it works ;-)\nYour conses are quite lispy too... :)\n",
"Get is not global, but local to the surrounding function, that's why the global declaration fails.\nIf you remove the global, it still fails, because you can't assign to the captured variable name. To work around that, you can use an object as the variable captured by your closures and than just change properties of that object:\nclass Memo(object):\n pass\n\ndef memoize(fn):\n def defaultget(key):\n return (False,)\n\n memo = Memo()\n memo.get = defaultget\n\n def vset(key, value):\n oldget = memo.get\n def newget(ky):\n if key==ky: return (True, value)\n return oldget(ky)\n memo.get = newget\n\n def mfun(*args):\n cache = memo.get(args)\n if cache[0]: return cache[1]\n\n val = apply(fn, args)\n vset(args, val)\n return val\n\n return mfun\n\nThis way you don't need to assign to the captured variable names but still get what you wanted.\n",
"Probably because you want the global get while it isn't a global?\nBy the way, apply is deprecated, use fn(*args) instead.\ndef memoize(fn):\n def get(key):\n return (False,)\n\n def vset(key, value):\n def newget(ky):\n if key==ky: return (True, value)\n return get(ky)\n get = newget\n\n def mfun(*args):\n cache = get(args)\n if (cache[0]): return cache[1]\n\n val = fn(*args)\n vset(args, val)\n return val\n\n return mfun\n\ndef fib(x):\n if x<2: return x\n return fib(x-1)+fib(x-2)\n\ndef fibm(x):\n if x<2: return x\n return fibm(x-1)+fibm(x-2)\n\nfibm = memoize(fibm)\n\n",
"I think the best way would be:\nclass Memoized(object):\n def __init__(self,func):\n self.cache = {}\n self.func = func\n def __call__(self,*args):\n if args in self.cache: return cache[args]\n else:\n self.cache[args] = self.func(*args)\n return self.cache[args]\n\n"
] |
[
8,
8,
1,
1,
0,
0
] |
[] |
[] |
[
"closures",
"lexical_scope",
"python"
] |
stackoverflow_0000505559_closures_lexical_scope_python.txt
|
Q:
What is faster in Python, "while" or "for xrange"
We can do numeric iteration like:
for i in xrange(10):
print i,
and in C-style:
i = 0
while i < 10:
print i,
i = i + 1
Yes, I know, the first one is less error-prone and more pythonic, but is it as fast as the C-style version?
PS. I'm from the C++ planet and pretty new on the Python one.
A:
I am sure the while version is slower. Python will have to lookup the add operation for the integer object on each turn of the loop etc, it is not pure C just because it looks like it!
And if you want a pythonic version of exactly the above, use:
print " ".join(str(i) for i in xrange(10))
Edit: My timings look like this. This is just a silly running loop without printing, just to show you what writing out "i += 1" etc costs in Python.
$ python -mtimeit "i=0" "while i < 1000: i+=1"
1000 loops, best of 3: 303 usec per loop
$ python -mtimeit "for i in xrange(1000): pass"
10000 loops, best of 3: 120 usec per loop
A:
Who cares? Seriously. If you want to know, use timeit package (you can invoke it from command line with -m).
But it doesn't matter at all, because the difference is negligible. And in general, Python is not a language that you choose if you want speed.
A:
The first one.
You mean, faster to develop, right?
PS: It doesn't matter, machines these days are so fast that it is meaningless to ponder on micro optimizations, prior to identifying the bottlenecks using a thorough profiler.
A:
They are both to avoid :-)
Generally speaking, each time I see an iteration over numbers, I see some non-pythonic code, that could be expressed in a better way using iterations over lists or generators.
Actually, I've said "pythonic", but it is all about readability. Using idiomatic code will increase readability, and ultimately also performance, because the compiler will better know how to optimize it.
A:
If your program is too slow, try using psyco.
Don't worry about the kind of micro-optimisation in your question. Write your program to be maintainable (which includes following standard Python style so other programmers can read it easier).
A:
Well, if you are after efficiency in numerical code, you ought to use numpy and scipy. Your integration can be quickly written as numpy.sum( numpy.arange( 10 ) )
A:
In Python, the shorter and clearer version is always better. If I am not mistaken, the range and xrange functions are not native; if you try xrange(sys.maxint+1) you will get an OverflowError.
Besides, what the hell could this be useful for? If you are just printing 10 numbers, then surely readability counts a thousand times more - and I don't think you're going to print over a million numbers...
|
What is faster in Python, "while" or "for xrange"
|
We can do numeric iteration like:
for i in xrange(10):
print i,
and in C-style:
i = 0
while i < 10:
print i,
i = i + 1
Yes, I know, the first one is less error-prone and more pythonic, but is it as fast as the C-style version?
PS. I'm from the C++ planet and pretty new on the Python one.
|
[
"I am sure the while version is slower. Python will have to lookup the add operation for the integer object on each turn of the loop etc, it is not pure C just because it looks like it!\nAnd if you want a pythonic version of exactly the above, use:\nprint \" \".join(str(i) for i in xrange(10))\n\n\nEdit: My timings look like this. This is just a silly running loop without printing, just to show you what writing out \"i += 1\" etc costs in Python.\n$ python -mtimeit \"i=0\" \"while i < 1000: i+=1\"\n1000 loops, best of 3: 303 usec per loop\n$ python -mtimeit \"for i in xrange(1000): pass\"\n10000 loops, best of 3: 120 usec per loop\n\n",
"Who cares? Seriously. If you want to know, use timeit package (you can invoke it from command line with -m).\nBut it doesn't matter at all, because the difference is negligible. And in general, Python is not a language that you choose if you want speed.\n",
"The first one.\nYou mean, faster to develop, right?\nPS: It doesn't matter, machines these days are so fast that it is meaningless to ponder on micro optimizations, prior to identifying the bottlenecks using a thorough profiler.\n",
"They are both to avoid :-)\nGenerally speaking, each time I see an iteration over numbers, I see some non-pythonic code, that could be expressed in a better way using iterations over lists or generators.\nActually, I've said \"pythonic\", but it is all about readability. Using idiomatic code will increase readability, and ultimately also performance, because the compiler will better know how to optimize it.\n",
"If your program is too slow, try using psyco.\nDon't worry about the kind of micro-optimisation in your question. Write your program to be maintainable (which includes following standard Python style so other programmers can read it easier).\n",
"Well, if you are after efficiency in numerical code, you ought to use numpy and scipy. Your integration can be quickly written as numpy.sum( numpy.arange( 10 ) )\n",
"In Python, the shorter and clearer version is always better. If I am not mistaken the range and xrange functions are not native, if you try xrange(sys.maxint+1) you will get an overflow error.\nBesides, what the hell could this be useful for? If you are just printing 10 numbers, then surely readability counts a thousand times more - and I don't think you're going to print over a million numbers...\n"
] |
[
16,
15,
3,
1,
1,
0,
0
] |
[] |
[] |
[
"micro_optimization",
"python"
] |
stackoverflow_0001377429_micro_optimization_python.txt
|
Q:
Where is Python used? I read about it a lot on Reddit
I have downloaded PyScripter and am learning Python. But I have no idea if it has any job value, especially in India. I am learning Python as a hobby. But it would be comforting to know if Python programmers are in demand in India.
A:
Everywhere. It's used extensively by google for one.
See list of python software for more info, and also who uses python on the web?
A:
In many large companies it is a primary scripting language.
Google is using it along with Java and C++ and almost nothing else.
Also many web pages are built on top of python and Django.
Another place is game development. Many games have their engines written in C++ but all the logic in Python.
In other words it is one of the most valuable tools.
This might be of interest for you as well:
Is Python good for big software projects (not web based)?
Are there any good reasons why I should not use Python?
What did you use to teach yourself python?
A:
It definitely has job value. For instance Google requires it. Have a look at Google openings in India:
Excellent programming skills in at
least one of the following languages:
C, C++, Java or Python (C++/Python
preferred)
A:
Not sure about India, but you can get a decent overview of available Python jobs on the python.org jobs page here.
A:
Try looking at Mark Pilgrim's excellent book "Dive Into Python" which is available for download under GNU Free Documentation License.
HTH
cheers,
Rob
A:
In 10 years of web development I've had 1 client have me write an email parsing app with it. Not that it doesn't get used, but I've seen Ruby/php/.net way more often in the wild.
Edit:
From the other posts if you plan on working at Google, it sounds like the language to learn - LOL!
A:
It's just one example but I know it is widely used in large scientific institutions with high tech machinery where non-programmers (typically physicists) need quick prototypes or tools to cover their data collection/processing needs. The easy-to-access scripting language aspect clearly plays its role here. So I don't know about building a career out of that only, but I'd definitely say that knowing Python is a very valuable asset on your resume; it'll strengthen your "smell of usefulness".
A:
The google app engine lets you use python (or Java). I HIGHLY recommend that you check it out. If you want to have a FREE website with a database (actually a datastore but it works much like a database) using python, THIS IS IT. It scales up too. If you start to get enough traffic you would have to start paying for the usage it requires.
http://code.google.com/appengine/docs/python/overview.html
You could make your own python based site and run some ads. Voila, make some money. Also, I'm sure google could be impressed by some good python because I hear they use it for much of their own sites.
|
Where is Python used? I read about it a lot on Reddit
|
I have downloaded PyScripter and am learning Python. But I have no idea if it has any job value, especially in India. I am learning Python as a hobby. But it would be comforting to know if Python programmers are in demand in India.
|
[
"Everywhere. It's used extensively by google for one.\nSee list of python software for more info, and also who uses python on the web?\n",
"In many large companies it is a primary scripting language.\nGoogle is using it along with Java and C++ and almost nothing else.\nAlso many web pages are built on top of python and Django. \nAnother place is game development. Many games have their engines written in C++ but all the logic in Python.\nIn other words it is one of the most valuable tools.\nThis might be of interest for you as well:\n\nIs Python good for big software projects (not web based)?\nAre there any good reasons why I should not use Python?\nWhat did you use to teach yourself python?\n\n",
"It definitely has job value. For instance Google requires it. Have a look at Google openings in India:\n\nExcellent programming skills in at\n least one of the following languages:\n C, C++, Java or Python (C++/Python\n preferred)\n\n",
"Not sure about India, but you can get a decent overview of available Python jobs on the python.org jobs page here. \n",
"Try looking at Mark Pilgrim's excellent book \"Dive Into Python\" which is available for download under GNU Free Documentation License.\nHTH\ncheers,\nRob\n",
"In 10 years of web development I've had 1 client have me write an email parsing app with it. Not that it doesn't get used, but I've seen Ruby/php/.net way more often in the wild.\nEdit:\nFrom the other posts if you plan on working at Google, it sounds like the language to learn - LOL!\n",
"It's juste one example but I know it is widely used in large scientific institutions with high tech machinery where non-programmers (typically physicists) need quick prototypes or tools to cover their data collection/processing needs. The easy-to access scripting language aspect clearly plays its role here. So I don't know about building a career out of that only but I'd definitely say that knowing Python is a very valuable asset on your resume, it'll strengthen your \"smell of usefulness\".\n",
"The google app engine lets you use python (or Java). I HIGHLY recommend that you check it out. If you want to have a FREE website with a database (actually a datastore but it works much like a database) using python, THIS IS IT. It scales up too. If you start to get enough traffic you would have to start paying for the usage it requires.\nhttp://code.google.com/appengine/docs/python/overview.html\nYou could make your own python based site and run some ads. Voila, make some money. Also, I'm sure google could be impressed by some good python because I hear they use it for much of their own sites.\n"
] |
[
17,
10,
4,
1,
1,
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000520210_python.txt
|
Q:
Set Snow Leopard to use python 2.5 rather than 2.6
I just upgraded to Snow Leopard and I'm trying to get it to use the old Python 2.5 install I had, with all my modules in it. Does anyone know how to set the default Python install to 2.5?
A:
I worked this out - if you have this problem open a terminal and type:
defaults write com.apple.versioner.python Version 2.5
A:
You want python_select.
Description: Switch the default python interpreter
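A quick sketch of the usage, assuming a MacPorts install where the 2.5 interpreter is registered under the label python25 (the label is an assumption; check the list first):
python_select -l            # list the interpreters it knows about
sudo python_select python25 # make 2.5 the default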
|
Set Snow Leopard to use python 2.5 rather than 2.6
|
I just upgraded to Snow Leopard and I'm trying to get it to use the old Python 2.5 install I had, with all my modules in it. Does anyone know how to set the default Python install to 2.5?
|
[
"I worked this out - if you have this problem open a terminal and type:\ndefaults write com.apple.versioner.python Version 2.5\n\n",
"You want python_select.\n\nDescription: Switch the default python interpreter\n\n"
] |
[
8,
0
] |
[] |
[] |
[
"macos",
"osx_snow_leopard",
"python"
] |
stackoverflow_0001380281_macos_osx_snow_leopard_python.txt
|
Q:
pagination with the python cmd module
I'm prototyping a Python app with the cmd module.
Some messages to the user will be quite long and I'd like to paginate them.
The first 10 (or a configurable number) lines of the message would appear, and pressing the SPACE bar would display the next page, until the end of the message.
I don't want to reinvent something here; is there a simple means of implementing this feature?
A:
The simple thing would just be to pipe your script through "less" or a similar command at runtime.
Here's a simple method that does approximately what you want, though:
def print_and_wait(some_long_message):
lines = some_long_message.split('\n')
i=0
while i < len(lines):
print '\n'.join(lines[i:i+10])
raw_input("press enter to read more...")
i += 10
You could also look into using curses.
A:
As Yoni said above, the right way to do this is to provide a print method that pages automatically inside your running cmd instance. The constructor of Cmd takes stdin and stdout arguments. So simply provide an object that works like stdout and supports your paging print method.
class PagingStdOut(object):
def write(self, buffer, lines_before_pause=40):
# do magic paging here...
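A minimal sketch of that "magic", assuming a plain raw_input pause and that all output really flows through this object (the names are illustrative, not part of cmd itself):
import sys

class PagingStdOut(object):
    def __init__(self, lines_before_pause=40):
        self.lines_before_pause = lines_before_pause
        self.count = 0

    def write(self, text):
        # echo to the real stdout, pausing every N complete lines
        for line in text.splitlines(True):
            sys.__stdout__.write(line)
            if line.endswith("\n"):
                self.count += 1
            if self.count >= self.lines_before_pause:
                raw_input("-- more --")
                self.count = 0

You would then pass an instance as the stdout argument when constructing your Cmd subclass.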
A:
I had the same question. There is a pager built in to the pydoc module. I incorporated it thusly (which I find hackish and unsatisfying... I'm open to better ideas though).
I like the idea that it would autopage if there are more than x results and paging is on, which is possible to implement, but not done here.
import cmd
from pydoc import pager
from cStringIO import StringIO
import sys
PAGER = True
class Commander(cmd.Cmd):
prompt = "> "
def do_pager(self,line):
global PAGER
line = line + " 1"
tokens = line.lower().split()
if tokens[0] in ("on","true","t", "1"):
PAGER = True
print "# setting PAGER True"
elif tokens[0] in ("off","false","f","0"):
PAGER = False
print "# setting PAGER False"
else:
print "# can't set pager: don't know -> %s" % tokens[0]
def do_demo(self,line):
results = dict(a=1,b=2,c=3)
self.format_commandline_results(results)
def format_commandline_results(self,results):
if PAGER:
ofh = StringIO()
else:
ofh = sys.stdout
for (k,v) in sorted(results.items()):
print >> ofh, "%s -> %s" % (k,v)
if PAGER:
ofh.seek(0)
pager(ofh.read())
return None
def do_EOF(self,line):
print "",
return True
if __name__ == "__main__":
Commander().cmdloop("# try: \n> pager off \n> demo \n> pager on \n> demo \n\n")
A:
Paging subroutines can be found in the genutils.py file of IPython (see page, or page_dumb for a simpler one). The code is a little complicated, but that's probably unavoidable if you are trying to be portable to systems including Windows and the various kinds of terminal emulators.
|
pagination with the python cmd module
|
I'm prototyping a Python app with the cmd module.
Some messages to the user will be quite long and I'd like to paginate them.
The first 10 (or a configurable number) lines of the message would appear, and pressing the SPACE bar would display the next page, until the end of the message.
I don't want to reinvent something here; is there a simple means of implementing this feature?
|
[
"The simple thing would just be to pipe your script through \"less\" or a similar command at runtime.\nHere's a simple method that does approximately what you want, though:\ndef print_and_wait(some_long_message):\n lines = some_long_message.split('\\n')\n i=0\n while i < len(lines):\n print '\\n'.join(lines[i:i+10])\n raw_input(\"press enter to read more...\")\n i += 10\n\nYou could also look into using curses.\n",
"As Yoni said above the right way to do this is to provide a print method that pages automatically inside your running cmd instance. The constructor of Cmd takes stdin and stdout arguments. So simple provide an object that works like stdout and supports your paging print method.\nclass PagingStdOut(object):\n def write(self, buffer, lines_before_pause=40):\n # do magic paging here...\n\n",
"I had the same question. There is a pager built in to the pydoc module. I incorporated it thusly (which I find hackish and unsatisfying... I'm open to better ideas though).\nI like the idea that it would autopage if there are more than x results and paging is on, which is possible to implement, but not done here. \nimport cmd\nfrom pydoc import pager\nfrom cStringIO import StringIO\nimport sys\n\nPAGER = True\nclass Commander(cmd.Cmd):\n prompt = \"> \"\n def do_pager(self,line):\n global PAGER\n line = line + \" 1\"\n tokens = line.lower().split()\n if tokens[0] in (\"on\",\"true\",\"t\", \"1\"):\n PAGER = True\n print \"# setting PAGER True\"\n elif tokens[0] in (\"off\",\"false\",\"f\",\"0\"):\n PAGER = False\n print \"# setting PAGER False\"\n else:\n print \"# can't set pager: don't know -> %s\" % tokens[0]\n\n def do_demo(self,line):\n results = dict(a=1,b=2,c=3)\n self.format_commandline_results(results)\n\n def format_commandline_results(self,results):\n if PAGER:\n ofh = StringIO()\n else:\n ofh = sys.stdout\n\n for (k,v) in sorted(results.items()):\n print >> ofh, \"%s -> %s\" % (k,v)\n\n if PAGER:\n ofh.seek(0)\n pager(ofh.read())\n\n return None\n\n def do_EOF(self,line):\n print \"\",\n return True\n\nif __name__ == \"__main__\":\n Commander().cmdloop(\"# try: \\n> pager off \\n> demo \\n> pager on \\n> demo \\n\\n\")\n\n",
"Paging subroutines can be found in the genutils.py file of IPython (see page, or page_dumb for a simpler one). The code is a little complicated, but that's probably unavoidable if you are trying to be portable to systems including Windows and the various kinds of terminal emulators.\n"
] |
[
4,
3,
1,
0
] |
[] |
[] |
[
"cmd",
"pagination",
"python"
] |
stackoverflow_0000520963_cmd_pagination_python.txt
|
Q:
Is it possible to reference the output of an IronPython project from within a c# project?
I would like to build a code library in IronPython and have another C# project reference it. Can I do this? How?
Is this just as simple as building the project and referencing the dll? Is there any conflict with the dynamic aspect of it?
A:
There is currently no way to build CLS-compliant assemblies from IronPython. The pyc tool will generate a DLL from Python code, but it's really only useful from IronPython.
If you want to use IronPython from a C# app, you'll have to use the hosting interfaces (gory details). You could also check out IronPython in Action, which describes the hosting process quite well.
|
Is it possible to reference the output of an IronPython project from within a c# project?
|
I would like to build a code library in IronPython and have another C# project reference it. Can I do this? How?
Is this just as simple as building the project and referencing the dll? Is there any conflict with the dynamic aspect of it?
|
[
"There is currently no way to build CLS-compliant assemblies from IronPython. The pyc tool will generate a DLL from Python code, but it's really only useful from IronPython.\nIf you want to use IronPython from a C# app, you'll have to use the hosting interfaces (gory details). You could also check out IronPython in Action, which describes the hosting process quite well.\n"
] |
[
1
] |
[] |
[] |
[
"assemblies",
"c#",
"ironpython",
"python"
] |
stackoverflow_0001342645_assemblies_c#_ironpython_python.txt
|
Q:
Django and weird legacy database tables
I'm trying to integrate a legacy database in Django.
I'm running into problems with some weird tables that result from horribly bad database design, but I'm not free to change it.
The problem is that there are tables that don't have a primary-key ID, only a product ID, and, here comes the problem, rows are duplicated whenever a certain column needs to hold multiple values, for example:
ID | ... | namestring
2 | ... | name1
2 | ... | name2
Is there a way to circumvent the usual primary-key behavior and write a function that returns an object for such an ID with multiple rows? The column namestring could then become a list.
There is no manual editing required, as this is exported data from another system; I just have to access it.
A:
Django's ORM will have trouble working with this table unless you add a unique primary key column.
If you do add a primary key, then it would be trivial to write a method to query for a given product ID and return a list of the values corresponding to that product ID. Something like:
def names_for(product_id):
return [row.namestring for row in ProductName.objects.filter(product_id=product_id)]
This function could also be a custom manager method, or a method on your Product model, or whatever makes most sense to you.
EDIT: Assuming you have a Product model that the product_id in this table refers to, and the only use you'll have for this table is to look up these names for a given product, your other option is to leave this table out of the ORM altogether, and just write a method on Product that uses raw SQL and cursor.execute to fetch the names for that product. This is nice and clean and doesn't require adding a unique PK to the table. The main thing you lose is the ability to administer this table via Django modelforms or the admin.
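A rough sketch of that raw-SQL method, assuming a Product model and a legacy table literally named legacy_names (both names are placeholders for whatever your real schema uses):
from django.db import connection, models

class Product(models.Model):
    # ... existing fields ...

    def names(self):
        # raw SQL keeps the keyless legacy table out of the ORM entirely
        cursor = connection.cursor()
        cursor.execute(
            "SELECT namestring FROM legacy_names WHERE product_id = %s",
            [self.id])
        return [row[0] for row in cursor.fetchall()]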
|
Django and weird legacy database tables
|
I'm trying to integrate a legacy database in Django.
I'm running into problems with some weird tables that result from horribly bad database design, but I'm not free to change it.
The problem is that there are tables that don't have a primary-key ID, only a product ID, and, here comes the problem, rows are duplicated whenever a certain column needs to hold multiple values, for example:
ID | ... | namestring
2 | ... | name1
2 | ... | name2
Is there a way to circumvent the usual primary-key behavior and write a function that returns an object for such an ID with multiple rows? The column namestring could then become a list.
There is no manual editing required, as this is exported data from another system; I just have to access it.
|
[
"Django's ORM will have trouble working with this table unless you add a unique primary key column.\nIf you do add a primary key, then it would be trivial to write a method to query for a given product ID and return a list of the values corresponding to that product ID. Something like:\ndef names_for(product_id):\n return [row.namestring for row in ProductName.objects.filter(product_id=product_id)]\n\nThis function could also be a custom manager method, or a method on your Product model, or whatever makes most sense to you.\nEDIT: Assuming you have a Product model that the product_id in this table refers to, and the only use you'll have for this table is to look up these names for a given product, your other option is to leave this table out of the ORM altogether, and just write a method on Product that uses raw SQL and cursor.execute to fetch the names for that product. This is nice and clean and doesn't require adding a unique PK to the table. The main thing you lose is the ability to administer this table via Django modelforms or the admin.\n"
] |
[
2
] |
[] |
[] |
[
"database",
"django",
"legacy",
"python"
] |
stackoverflow_0001379905_database_django_legacy_python.txt
|
Q:
Slickest REPL console in any language
Many languages have REPL consoles with additional features like autocomplete and IntelliSense. For instance, IPython, Mathematica, and PyCrust all make some effort to go beyond a basic read-eval loop. REPLs are particularly useful in languages where interactive exploration is very important, such as Matlab or R.
I'm looking for inspiration. What application provides the slickest REPL? Or what features do you always wish existed in your REPL of choice?
A:
Common Lisp and emacs with SLIME. All you could really want, think of, dream of, and then some.
A:
I really like Safari's Web Inspector Javascript console. Specifically:
Collapsible interactive object hierarchies
sprintf-style logging
Pretty-printing of closures, allowing you to peer into the internals of anonymous functions
Auto-complete / hinting of object properties on the command-line
A:
The command window in MATLAB 7.8.0 (R2009a) has a few nice features:
Tabbed-completion: You can type part of a variable or function name, then hit tab to get a selectable list of all possible variable or function completions. This also works when typing part of a string, which will bring up a selectable list of file names in the current directory that complete the string.
Argument-list format display: If you type a function name and an open parenthesis, you will automatically get a list of possible formats for the argument list along with a link to the help documentation for that function.
Here's a screenshot of the two options:
A:
I think PowerShell wins without any doubt.
|
Slickest REPL console in any language
|
Many languages have REPL consoles with additional features like autocomplete and IntelliSense. For instance, IPython, Mathematica, and PyCrust all make some effort to go beyond a basic read-eval loop. REPLs are particularly useful in languages where interactive exploration is very important, such as Matlab or R.
I'm looking for inspiration. What application provides the slickest REPL? Or what features do you always wish existed in your REPL of choice?
|
[
"Common Lisp and emacs with SLIME. All you could really want, think of, dream of, and then some.\n",
"I really like Safari's Web Inspector Javascript console. Specifically:\n\nCollapsible interactive object hierarchies\nsprintf-style logging\nPretty-printing of closures, allowing you to peer into the internals of anonymous functions\nAuto-complete / hinting of object properties on the command-line\n\n",
"The command window in MATLAB 7.8.0 (R2009a) has a few nice features:\n\nTabbed-completion: You can type part of a variable or function name, then hit tab to get a selectable list of all possible variable or function completions. This also works when typing part of a string, which will bring up a selectable list of file names in the current directory that complete the string.\nArgument-list format display: If you type a function name and an open parenthesis, you will automatically get a list of possible formats for the argument list along with a link to the help documentation for that function.\n\nHere's a screenshot of the two options:\n\n",
"I think PowerShell wins without any doubt.\n"
] |
[
5,
4,
3,
0
] |
[] |
[] |
[
"console",
"matlab",
"python"
] |
stackoverflow_0001380592_console_matlab_python.txt
|
Q:
What lightweight python library for simple scientific visualization in 3D
I am writing a program in python to experiment with an academic idea. Look at a resultant image the program generates:
The thick skeleton lines in the middle of the leaf are what needs to be visualized. Every segment of the skeleton lines has a value associated with it; in the above image (drawn by pycairo), different shades of gray are used for visualization: the lighter the color, the higher the value, while black lines indicate lines with value 0. The problem is that visualization using colors is very unintuitive for human eyes in this case; it would be much better to visualize the values in 3D like the following (taken from a paper):
The left image is a 3D visualization of the right one; the values associated with the lines are visualized as the height of consecutive walls in 3D.
What is the best library to do this? I don't want to invest much time into doing this, so a lightweight library is preferred.
A:
If you want lightweight, then you can use PyOpenGL to just wrap OpenGL calls in python directly. This is probably the lightest-weight option.
If you want lots of features, I'd recommend using VTK. It's a very powerful visualization toolkit with Python wrappers (included). There are other packages built on top of this (such as Mayavi and Paraview), but the VTK wrappers alone are often easier to use. This would probably be my first choice, since they have some good samples you can use - all you'd need to do is make a vtkPolyData instance, and throw it in a renderer.
That being said, for ease of development, you might want something that simplifies this for you, such as wrappers for a light-weight rendering engine like Irrlicht via Pyrr. This makes it much easier to generate the picture.
A:
Have you looked at mayavi? Don't know if it meets your definition of "lightweight", but it does seem popular and reasonably easy to use for its power.
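To give a feel for how little code mayavi needs, here is a rough sketch that encodes a per-point value as height along a line using mlab.plot3d; the leaf data is faked with numpy, and on older Enthought installs the import would be "from enthought.mayavi import mlab":
import numpy as np
from mayavi import mlab

# fake skeleton segment: x/y trace the line, v is the value per point
x = np.linspace(0.0, 1.0, 50)
y = np.sin(3 * x)
v = np.abs(np.cos(5 * x))

mlab.plot3d(x, y, np.zeros_like(x), color=(0, 0, 0), tube_radius=0.005)  # flat skeleton
mlab.plot3d(x, y, v, v, tube_radius=0.01)  # value drawn as height (and color)
mlab.show()

This is only the height-encoding idea, not the extruded walls from the paper; walls would need a surface built between the flat line and its lifted copy.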
|
What lightweight python library for simple scientific visualization in 3D
|
I am writing a program in python to experiment with an academic idea. Look at a resultant image the program generates:
The thick skeleton lines in the middle of the leaf are what needs to be visualized. Every segment of the skeleton lines has a value associated with it; in the above image (drawn by pycairo), different shades of gray are used for visualization: the lighter the color, the higher the value, while black lines indicate lines with value 0. The problem is that visualization using colors is very unintuitive for human eyes in this case; it would be much better to visualize the values in 3D like the following (taken from a paper):
The left image is a 3D visualization of the right one; the values associated with the lines are visualized as the height of consecutive walls in 3D.
What is the best library to do this? I don't want to invest much time into doing this, so a lightweight library is preferred.
|
[
"If you want lightweight, then you can use PyOpenGL to just wrap OpenGL calls in python directly. This is probably the lightest-weight option.\nIf you want lots of features, I'd recommend using VTK. It's a very powerful visualization toolkit with Python wrappers (included). There are other packages built on top of this (such as Mayavi and Paraview), but the VTK wrappers alone are often easier to use. This would probably be my first choice, since they have some good samples you can use - all you'd need to do is make a VtkPolyData instance, and throw it in a renderer.\nThat being said, for ease of development, you might want something that simplifies this for you, such as wrappers for a light-weight rendering engine like Irrlicht via Pyrr. This makes it much easier to generate the picture.\n",
"Have you looked at mayavi? Don't know if it meets your definition of \"lightweight\", but it does seem popular and reasonably easy to use for its power.\n"
] |
[
4,
3
] |
[] |
[] |
[
"3d",
"geometry",
"python",
"visualization"
] |
stackoverflow_0001380861_3d_geometry_python_visualization.txt
|
Q:
How do I extract a date range from a csv using perl/php/grep/etc?
Is there a way to take text like below (if it was already in an array or a file) and have it strip the lines with a specified date range?
For instance, if I wanted every line from 2009-09-04 until 2009-09-09 to be pulled out (maybe this can be done with grep?), how would I go about doing so?
date,test,time,avail
2009-09-01,JS,0.119,99.90
2009-09-02,JS,0.154,99.89
2009-09-03,SWF,0.177,99.90
2009-09-04,SWF,0.177,99.90
2009-09-05,SWF,0.177,99.90
2009-09-06,SWF,0.177,99.90
2009-09-07,SWF,0.177,99.90
2009-09-08,SWF,0.177,99.90
2009-09-09,SWF,0.177,99.90
2009-09-10,SWF,0.177,99.90
Thanks!
A:
Python
import csv
import datetime
start= datetime.datetime(2009,9,4)
end= datetime.datetime(2009,9,9)
source= csv.DictReader( open("someFile","rb") )
for row in source:
dt = datetime.datetime.strptime(row['date'],"%Y-%m-%d")
if start <= dt <= end:
print row # depends on what "pulled out" means
A:
Well, you could probably somehow make it work with grep, but sed is more suited for the task:
sort < file.csv | sed -ne /^2009-09-04/,/^2009-09-09/p
A:
(This solution is in PHP -- but you can probably do that directly from the command-line, I suppose, with somekind of grep or anything)
Considering your dates are in the YYYY-MM-DD format, and that they are at the beginning of each line, you just have to compare the lines alphabetically to compare the dates.
One solution would be to :
load the string
explode it by lines
remove the first line
iterate over the lines, keeping only those that interest you
For the first parts :
$str = <<<STR
date,test,time,avail
2009-09-01,JS,0.119,99.90
2009-09-02,JS,0.154,99.89
2009-09-03,SWF,0.177,99.90
2009-09-04,SWF,0.177,99.90
2009-09-05,SWF,0.177,99.90
2009-09-06,SWF,0.177,99.90
2009-09-07,SWF,0.177,99.90
2009-09-08,SWF,0.177,99.90
2009-09-09,SWF,0.177,99.90
2009-09-10,SWF,0.177,99.90
STR;
$lines = explode(PHP_EOL, $str);
unset($lines[0]); // first line is useless
And, to iterate over the lines, filtering in/out those you want / don't want, you could use a foreach loop... Or use the array_filter function, which exists just for this ;-)
For instance, you could use something like this :
$new_lines = array_filter($lines, 'my_filter');
var_dump($new_lines);
And your callback function would be :
function my_filter($line) {
$min = '2009-09-04';
$max = '2009-09-09';
if ($line >= $min && $line <= $max) {
return true;
} else {
return false;
}
}
And, the result :
array
4 => string '2009-09-04,SWF,0.177,99.90' (length=26)
5 => string '2009-09-05,SWF,0.177,99.90' (length=26)
6 => string '2009-09-06,SWF,0.177,99.90' (length=26)
7 => string '2009-09-07,SWF,0.177,99.90' (length=26)
8 => string '2009-09-08,SWF,0.177,99.90' (length=26)
Hope this helps ;-)
If your dates were not in the YYYY-MM-DD format, or not at the beginning of each line, you'd have to explode the lines, and use strtotime (or do some custom parsing, depending on the format), and, then, compare timestamps.
But, in your case... No need for all that ;-)
A:
awk solution is similar to sed:
awk '/^2009-09-04/,/^2009-09-09/ {next} {print}' filename
Without hardcoding the dates:
awk -v start='^2009-09-04' -v stop='^2009-09-09' '
$0 ~ start, $0 ~ stop {next}
{print}
' date.data
A:
You can use perl's flip flop to extract a line range.
A:
Using R
> d <- read.csv("http://dpaste.com/88980/plain/", sep=",", header=T)
> r1 <- rownames(d[d$date == "2009-09-04",])
> r2 <- rownames(d[d$date == "2009-09-09",])
> d[rownames(d) %in% r1:r2,]
date test time avail
4 2009-09-04 SWF 0.177 99.9
5 2009-09-05 SWF 0.177 99.9
6 2009-09-06 SWF 0.177 99.9
7 2009-09-07 SWF 0.177 99.9
8 2009-09-08 SWF 0.177 99.9
9 2009-09-09 SWF 0.177 99.9
>
A:
Perl:
perl -F/,/ -ane '
print if $F[0] ge "2009-09-04"
&& $F[0] le "2009-09-09"' filename
|
How do I extract a date range from a csv using perl/php/grep/etc?
|
Is there a way to take text like below (if it was already in an array or a file) and have it strip the lines with a specified date range?
For instance, if I wanted every line from 2009-09-04 until 2009-09-09 to be pulled out (maybe this can be done with grep?), how would I go about doing so?
date,test,time,avail
2009-09-01,JS,0.119,99.90
2009-09-02,JS,0.154,99.89
2009-09-03,SWF,0.177,99.90
2009-09-04,SWF,0.177,99.90
2009-09-05,SWF,0.177,99.90
2009-09-06,SWF,0.177,99.90
2009-09-07,SWF,0.177,99.90
2009-09-08,SWF,0.177,99.90
2009-09-09,SWF,0.177,99.90
2009-09-10,SWF,0.177,99.90
Thanks!
|
[
"Python\nimport csv\nimport datetime\n\nstart= datetime.datetime(2009,9,4)\nend= datetime.datetime(2009,9,9)\n\nsource= csv.DictReader( open(\"someFile\",\"rb\") )\nfor row in source:\n dt = datetime.datetime.strptime(row['date'],\"%Y-%m-%d\")\n if start <= dt <= end:\n print row # depends on what \"pulled out\" means\n\n",
"Well, you could probably somehow make it work with grep, but sed is more suited for the task:\nsort < file.csv | sed -ne /^2009-09-04/,/^2009-09-09/p\n\n",
"(This solution is in PHP -- but you can probably do that directly from the command-line, I suppose, with somekind of grep or anything)\nConsidering your dates are in the YYYY-MM-DD format, and that they are at the beginning of each line, you just have to compare the lines alphabetically to compare the dates.\nOne solution would be to :\n\nload the string\nexplode it by lines\nremove the first line\niterate over the lines, keeping only those that interest you\n\nFor the first parts :\n$str = <<<STR\ndate,test,time,avail\n2009-09-01,JS,0.119,99.90\n2009-09-02,JS,0.154,99.89\n2009-09-03,SWF,0.177,99.90\n2009-09-04,SWF,0.177,99.90\n2009-09-05,SWF,0.177,99.90\n2009-09-06,SWF,0.177,99.90\n2009-09-07,SWF,0.177,99.90\n2009-09-08,SWF,0.177,99.90\n2009-09-09,SWF,0.177,99.90\n2009-09-10,SWF,0.177,99.90\nSTR;\n$lines = explode(PHP_EOL, $str);\nunset($lines[0]); // first line is useless\n\nAnd, to iterate over the lines, filtering in/out those you want / don't want, you could use a foreach loop... Or use the array_filter function, which exists just for this ;-)\nFor instance, you could use something like this :\n$new_lines = array_filter($lines, 'my_filter');\nvar_dump($new_lines);\n\nAnd your callback function would be :\nfunction my_filter($line) {\n $min = '2009-09-04';\n $max = '2009-09-09';\n if ($line >= $min && $line <= $max) {\n return true;\n } else {\n return false;\n }\n}\n\nAnd, the result :\narray\n 4 => string '2009-09-04,SWF,0.177,99.90' (length=26)\n 5 => string '2009-09-05,SWF,0.177,99.90' (length=26)\n 6 => string '2009-09-06,SWF,0.177,99.90' (length=26)\n 7 => string '2009-09-07,SWF,0.177,99.90' (length=26)\n 8 => string '2009-09-08,SWF,0.177,99.90' (length=26)\n\nHope this helps ;-)\n\nIf your dates where not in the YYYY-MM-DD format, or not at the beginning of each line, you'd have to explode the lines, and use strtotime (or do some custom parsing, depending on the format), and, then, compare timestamps.\nBut, in your case... No need for all that ;-)\n",
"awk solution is similar to sed:\nawk '/^2009-09-04/,/^2009-09-09/ {next} {print}' filename\n\nWithout hardcoding the dates:\nawk -v start='^2009-09-04' -v stop='^2009-09-09' '\n $0 ~ start, $0 ~ stop {next}\n {print}\n' date.data\n\n",
"You can use perl's flip flop to extract a line range.\n",
"Using R\n> d <- read.csv(\"http://dpaste.com/88980/plain/\", sep=\",\", header=T)\n> r1 <- rownames(d[d$date == \"2009-09-04\",])\n> r2 <- rownames(d[d$date == \"2009-09-09\",])\n> d[rownames(d) %in% r1:r2,]\n date test time avail\n4 2009-09-04 SWF 0.177 99.9\n5 2009-09-05 SWF 0.177 99.9\n6 2009-09-06 SWF 0.177 99.9\n7 2009-09-07 SWF 0.177 99.9\n8 2009-09-08 SWF 0.177 99.9\n9 2009-09-09 SWF 0.177 99.9\n>\n\n",
"Perl:\n\nperl -F/,/ -ane '\n print if $F[0] ge \"2009-09-04\"\n && $F[0] le \"2009-09-09\"' filename\n\n\n"
] |
[
4,
3,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"grep",
"perl",
"php",
"python",
"ruby"
] |
stackoverflow_0001369287_grep_perl_php_python_ruby.txt
|
Q:
Can I add parameters to a python property to reduce code duplication?
I have the following class:
class Vector(object):
def __init__(self, x=0, y=0, z=0):
self.x = x
self.y = y
self.z = z
def _getx(self):
return self._x
def _setx(self, value):
self._x = float(value)
x = property(_getx, _setx)
def _gety(self):
return self._y
def _sety(self, value):
self._y = float(value)
y = property(_gety, _sety)
def _getz(self):
return self._z
def _setz(self, value):
self._z = float(value)
z = property(_getz, _setz)
The three getters and setters are all identical except for the object property they are modifying (x, y, z). Is there a way that I can write one get and one set and then pass the variable to modify so that I don't repeat myself?
A:
Sure, make a custom descriptor as per the concepts clearly explained in this doc:
class JonProperty(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, objtype):
return getattr(obj, self.name)
def __set__(self, obj, val):
setattr(obj, self.name, float(val))
and then just use it:
class Vector(object):
def __init__(self, x=0, y=0, z=0):
self.x = x
self.y = y
self.z = z
x = JonProperty('_x')
y = JonProperty('_y')
z = JonProperty('_z')
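A quick sanity check that the descriptor coerces as intended (expected output in the comments):
v = Vector(1, 2, 3)
print v.x, type(v.x)   # 1.0 <type 'float'>
v.y = "4"
print v.y              # 4.0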
A:
Why not just write this?
class Vector(object):
def __init__(self, x=0, y=0, z=0):
self.x = x
self.y = y
self.z = z
If your getters and setters are a carryover in practice from something like Java, then DO NOT WRITE THEM. Just expose the attributes x, y, and z. You can change them to properties later if necessary, and, unlike Java (which would require some kind of recompile), all of the client code will still work just fine.
On the theme of getters and setters in Python, the general consensus as I understand it is "don't write 'em til you need 'em." Only write setters and getters if they actually add value to your program (like protecting an internal data structure, perhaps). Otherwise, leave them out, and spend your time doing actual productive coding.
A:
Not tested, but this should work:
def force_float(name):
def get(self):
return getattr(self, name)
def set(self, x):
setattr(self, name, float(x))
return property(get, set)
class Vector(object):
x = force_float("_x")
y = force_float("_y")
# etc
|
Can I add parameters to a python property to reduce code duplication?
|
I have the following class:
class Vector(object):
def __init__(self, x=0, y=0, z=0):
self.x = x
self.y = y
self.z = z
def _getx(self):
return self._x
def _setx(self, value):
self._x = float(value)
x = property(_getx, _setx)
def _gety(self):
return self._y
def _sety(self, value):
self._y = float(value)
y = property(_gety, _sety)
def _getz(self):
return self._z
def _setz(self, value):
self._z = float(value)
z = property(_getz, _setz)
The three getters and setters are all identical except for the object property they are modifying (x, y, z). Is there a way that I can write one get and one set and then pass the variable to modify so that I don't repeat myself?
|
[
"Sure, make a custom descriptor as per the concepts clearly explained in this doc:\nclass JonProperty(object):\n def __init__(self, name):\n self.name = name\n\n def __get__(self, obj, objtype):\n return getattr(obj, self.name)\n\n def __set__(self, obj, val):\n setattr(obj, self.name, float(val))\n\nand then just use it:\nclass Vector(object):\n def __init__(self, x=0, y=0, z=0):\n self.x = x\n self.y = y\n self.z = z\n x = JonProperty('_x')\n y = JonProperty('_y')\n z = JonProperty('_z')\n\n",
"Why not just write this?\nclass Vector(object):\n def __init__(self, x=0, y=0, z=0):\n self.x = x\n self.y = y\n self.z = z\n\nIf your getters and setters are a carryover in practice from something like Java, then DO NOT WRITE THEM. Just expose the attributes x, y, and z. You can change them to properties later if necessary, and, unlike Java (which would require some kind of recompile), all of the client code will still work just fine.\nOn the theme of getters and setters in Python, the general consensus as I understand it is \"don't write 'em til you need 'em.\" Only write setters and getters if they actually add value to your program (like protecting an internal data structure, perhaps). Otherwise, leave them out, and spend your time doing actual productive coding.\n",
"Not tested, but this should work:\ndef force_float(name):\n def get(self):\n return getattr(self, name)\n def set(self, x):\n setattr(self, name, float(x))\n return property(get, set)\n\nclass Vector(object):\n x = force_float(\"_x\")\n y = force_float(\"_y\")\n # etc\n\n"
] |
[
16,
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001380566_python.txt
|
Q:
Python list vs. MySQL Select performance
I have a large MySQL table with 15k entries from which I need to select a few items, many times. For example, I might want all entries with a number field between 1 and 10.
In SQL this would be easy:
SELECT text FROM table WHERE number>=1 AND number<10;
If I extract the entire table to a Python list:
PyList = [[text1, number1], [text2, number2], ...]
I could then extract those same text values I want by running through the entire list
for item in PyList:
if item[1] >=1 and item[1]<10:
result.append(item[0])
Now, the performance question between the two is that I have to do this for a sliding window. I want to get those between 1 and 10, then 2 and 11, 3 and 12, ... 14990 and 15000
What approach is faster for a list this big?
An improvement in Python I'm thinking about is to pre-order the Python list by number. When the window moves I could remove the lowest value from result and append all elements verifying the next condition to get the new result. I would also keep track of index in the PyList so I would know where to start from in the next iteration. This would spare me from running through the entire list again.
I don't know how to speed up the MySQL for successive Selects that are very similar and I don't know how it works internally to understand performance differences between the two approaches.
How would you implement this?
A:
Simply define an index over number in your database, then the database can generate the result sets instantly. Plus it can do some calculations on these sets too, if that is your next step.
Databases are actually great at such queries, I'd let it do its job before trying something else.
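Concretely, that is a single statement; assuming the table really is named table as in the question (backquoted here since it is a reserved word in MySQL):
CREATE INDEX number_idx ON `table` (number);
After that, each windowed SELECT becomes an index range scan rather than a full table scan.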
A:
It's certainly going to be much faster to pull the data into memory than run ~15,000 queries.
My advice is to make sure the SQL query sorts the data by number. If the data is sorted, you can use the very fast lookup methods in the bisect standard library module to find indexes.
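A sketch of that lookup, assuming the rows arrive as the [[text, number], ...] list from the question (sorted here defensively in case the query didn't sort):
import bisect

rows = sorted(PyList, key=lambda item: item[1])   # sort once, by number
numbers = [item[1] for item in rows]              # parallel list of keys

def window(low, high):
    # texts with low <= number < high, via two binary searches
    lo = bisect.bisect_left(numbers, low)
    hi = bisect.bisect_left(numbers, high)
    return [item[0] for item in rows[lo:hi]]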
A:
Read all the data into Python (from the numbers you mention it should handily fit in memory), say into a variable pylist as you say, then prep an auxiliary data structure as follows:
import collections
d = collections.defaultdict(list)
for text, number in pylist:
d[number].append(text)
Now, to get all texts for numbers between low included and high excluded,
def slidingwindow(d, low, high):
result = []
for x in xrange(low, high):
result.extend(d.get(x, ()))
return result
A:
It is difficult to answer without actual performance, but my gut feeling is that it would be better to go for the SQL with bind variables (I am not MySQL expert, but in this case query syntax should be something like %varname).
The reason is that you would return data only when needed (thus the user interface would be responsive much earlier) and you would rely on a system highly optimized for that kind of operation. On the other hand, retrieving a larger chunk of data is usually faster than retrieving smaller ones, so the "full python" approach could have its edge.
However, unless you have serious performance issues, I would still stick with SQL, because it would lead to much simpler code to read and understand.
|
Python list vs. MySQL Select performance
|
I have a large MySQL table with 15k entries from which I need to select a few items, many times. For example, I might want all entries with a number field between 1 and 10.
In SQL this would be easy:
SELECT text FROM table WHERE number>=1 AND number<10;
If I extract the entire table to a Python list:
PyList = [[text1, number1], [text2, number2], ...]
I could then extract those same text values I want by running through the entire list
for item in PyList:
if item[1] >=1 and item[1]<10:
result.append(item[0])
Now, the performance question between the two is that I have to do this for a sliding window. I want to get those between 1 and 10, then 2 and 11, 3 and 12, ... 14990 and 15000
What approach is faster for a list this big?
An improvement in Python I'm thinking about is to pre-order the Python list by number. When the window moves I could remove the lowest value from result and append all elements verifying the next condition to get the new result. I would also keep track of index in the PyList so I would know where to start from in the next iteration. This would spare me from running through the entire list again.
I don't know how to speed up the MySQL for successive Selects that are very similar and I don't know how it works internally to understand performance differences between the two approaches.
How would you implement this?
|
[
"Simply define an index over number in your database, then the database can generate the result sets instantly. Plus it can do some calculations on these sets too, if that is your next step. \nDatabases are actually great at such queries, I'd let it do its job before trying something else.\n",
"It's certainly going to be much faster to pull the data into memory than run ~15,000 queries.\nMy advice is to make sure the SQL query sorts the data by number. If the data is sorted, you can use the very fast lookup methods in the bisect standard library module to find indexes.\n",
"Read all the data into Python (from the numbers you mention it should handily fit in memory), say into a variable pylist as you say, then prep an auxiliary data structure as follows:\nimport collections\nd = collections.defaultdict(list)\nfor text, number in pylist:\n d[number].append(text)\n\nNow, to get all texts for numbers between low included and high excluded,\ndef slidingwindow(d, low, high):\n result = []\n for x in xrange(low, high):\n result.extend(d.get(x, ()))\n return result\n\n",
"It is difficult to answer without actual performance, but my gut feeling is that it would be better to go for the SQL with bind variables (I am not MySQL expert, but in this case query syntax should be something like %varname). \nThe reason is that you would return data only when needed (thus user interface would be responsive much in advance) and you would rely on a system highly optimized for that kind of operation. On the other hand, retrieving a larger chunk of data i usually faster than retrieving smaller ones, so the \"full python\" approach could have its edge.\nHowever, unless you have serious performance issues, I would still stick in using SQL, because it would lead to much simpler code, to read and understand.\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0001380917_mysql_python.txt
|
Q:
How to aggregate all attributes of a hierarchy of classes?
There is a hierarchy of classes. Each class may define a class variable (to be specific, it's a dictionary), all of which have the same variable name. I'd like the very root class to be able to somehow access all of these variables (i.e. all the dictionaries joined together), given an instance of a child class. I can't seem to find a way to do so. No matter what I try, I always get stuck on the fact that I cannot retrieve the direct parent class given the child class. How can this be accomplished?
A:
As long as you're using new-style classes (i.e., object or some other built-in type is the "deepest ancestor"), __mro__ is what you're looking for. For example, given:
>>> class Root(object):
... d = {'za': 23}
...
>>> class Trunk(Root):
... d = {'ki': 45}
...
>>> class Branch(Root):
... d = {'fu': 67}
...
>>> class Leaf(Trunk, Branch):
... d = {'po': 89}
now,
>>> def getem(x):
... d = {}
... for x in x.__class__.__mro__:
... d.update(x.__dict__.get('d', ()))
... return d
...
>>> x = Leaf()
>>> getem(x)
{'za': 23, 'ki': 45, 'po': 89, 'fu': 67}
|
How to aggregate all attributes of a hierarchy of classes?
|
There is a hierarchy of classes. Each class may define a class variable (to be specific, it's a dictionary), all of which have the same variable name. I'd like the very root class to be able to somehow access all of these variables (i.e. all the dictionaries joined together), given an instance of a child class. I can't seem to find a way to do so. No matter what I try, I always get stuck on the fact that I cannot retrieve the direct parent class given the child class. How can this be accomplished?
|
[
"As long as you're using new-style classes (i.e., object or some other built-in type is the \"deepest ancestor\"), __mro__ is what you're looking for. For example, given:\n>>> class Root(object):\n... d = {'za': 23}\n... \n>>> class Trunk(Root):\n... d = {'ki': 45}\n... \n>>> class Branch(Root):\n... d = {'fu': 67}\n... \n>>> class Leaf(Trunk, Branch):\n... d = {'po': 89}\n\nnow,\n>>> def getem(x):\n... d = {}\n... for x in x.__class__.__mro__:\n... d.update(x.__dict__.get('d', ()))\n... return d\n... \n>>> x = Leaf()\n>>> getem(x)\n{'za': 23, 'ki': 45, 'po': 89, 'fu': 67}\n\n"
] |
[
1
] |
[] |
[] |
[
"class",
"hierarchy",
"python"
] |
stackoverflow_0001381333_class_hierarchy_python.txt
|
Q:
Logging multithreaded processes in python
I was thinking of using the logging module to log all events to one file. The number of threads should be constant from start to finish, but if one thread fails, I'd like to just log that and continue on. What's a simple way of accomplishing this? Thanks!
A:
Not entirely sure what you mean by "one thread fails", but if by "fail" you mean that an exception propagates all the way up to the top function of the thread, then you can wrap every thread's top function (e.g. in a decorator) to catch any exception, log whatever you wish, and re-raise. The logging module should ensure thread-safety of logging actions without further precautions being needed on your part on that score.
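A sketch of such a wrapper, assuming the standard logging module writes to one shared file (the file name and format string are placeholders):
import functools
import logging
import threading

logging.basicConfig(filename="events.log",
                    format="%(asctime)s %(threadName)s %(levelname)s %(message)s")

def log_thread_failure(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception:
            # record the failure (with traceback), then let the thread die
            logging.exception("thread died in %s", fn.__name__)
            raise
    return wrapper

# e.g. threading.Thread(target=log_thread_failure(work)).start()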
|
Logging multithreaded processes in python
|
I was thinking of using the logging module to log all events to one file. The number of threads should be constant from start to finish, but if one thread fails, I'd like to just log that and continue on. What's a simple way of accomplishing this? Thanks!
|
[
"Not entirely sure what you mean by \"one thread fails\", but if by \"fail\" you mean that an exception propagates all the way up to the top function of the thread, then you can wrap every thread's top function (e.g. in a decorator) to catch any exception, log whatever you wish, and re-raise. The logging module should ensure thread-safety of logging actions without further precautions being needed on your part on that score.\n"
] |
[
7
] |
[] |
[] |
[
"logging",
"multithreading",
"python"
] |
stackoverflow_0001380985_logging_multithreading_python.txt
|
Q:
How do I wrap this C function, with multiple arguments, with ctypes?
I have the function prototype here:
extern "C" void __stdcall__declspec(dllexport) ReturnPulse(double*,double*,double*,double*,double*);
I need to write some python to access this function that is in a DLL.
I have loaded the DLL, but
each of the double* is actually pointing to a variable number of doubles (an array), and
I'm having trouble getting it to function properly.
Thanks all!
A:
I haven't looked at ctypes too much, but try using a numpy array of the right type. If that doesn't just automatically work, they also have a ctypes attribute that should contain a pointer to the data.
A:
To make an array with, say, n doubles:
arr7 = ctypes.c_double * n
x = arr7()
and pass x to your function where it wants a double*. Or if you need to initialize x as you make it (note the unpacking; a ctypes array takes individual values, not a single generator):
x = arr7(*(i*0.1 for i in xrange(7)))
and the like. You can loop over x, index it, and so on.
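Putting the pieces together for the five-pointer prototype above (the DLL name and the array length are assumptions; windll matches the __stdcall in the prototype):
import ctypes

n = 1024                                      # however many doubles each array holds
ArrType = ctypes.c_double * n
a, b, c, d, e = (ArrType() for _ in range(5))

dll = ctypes.windll.LoadLibrary("pulse.dll")  # hypothetical DLL name
dll.ReturnPulse(a, b, c, d, e)                # array instances decay to double*
print list(a)[:5]                             # first few values written by the DLL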
|
How do I wrap this C function, with multiple arguments, with ctypes?
|
I have the function prototype here:
extern "C" void __stdcall__declspec(dllexport) ReturnPulse(double*,double*,double*,double*,double*);
I need to write some python to access this function that is in a DLL.
I have loaded the DLL, but
each of the double* is actually pointing to a variable number of doubles (an array), and
I'm having trouble getting it to function properly.
Thanks all!
|
[
"I haven't looked at ctypes too much, but try using a numpy array of the right type. If that doesn't just automatically work, they also have a ctypes attribute that should contain a pointer to the data.\n",
"To make an array with, say, n doubles:\narr7 = ctypes.c_double * `n` \nx = arr7()\n\nand pass x to your function where it wants a double*. Or if you need to initialize x as you make it:\nx = arr7(i*0.1 for i in xrange(7))\n\nand the like. You can loop over x, index it, and so on.\n"
] |
[
1,
1
] |
[] |
[] |
[
"arrays",
"ctypes",
"pointers",
"python",
"return"
] |
stackoverflow_0001381016_arrays_ctypes_pointers_python_return.txt
|
Q:
Is packaging scripts as executables a solution for commercial applications?
What happens when you package a script as an executable? Is this a good way to distribute commercial applications? I remember I read something a long time ago that when you package scripts as executables, at runtime the exe decompresses the scripts to a temporary directory where they get run.
If it's like that, then I don't think this can be viewed as a good solution, because a skilled user could find out where that directory is located and find the source code. I'm interested in Python & Ruby.
A:
With Python (e.g. pyinstaller -- be sure to get the SVN version, the "released" one is WAY out of date -- or py2exe) you can package bytecode. Sure, it can be "reverse compiled", just like Java bytecode or .NET assemblies (or for that matter, machine code), but I think it's a decent level of "obscurity" despite the existence of disassemblers. I guess you could also use something like pyobfuscate on the sources before you compile them, to make the names as unreadable as you can, but I have no personal experience doing that.
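For reference, a py2exe build script is typically this small (the script name myapp.py here is hypothetical):
# setup.py
from distutils.core import setup
import py2exe    # importing py2exe registers its distutils command

setup(console=['myapp.py'])    # or windows=[...] for a GUI app
Running python setup.py py2exe then leaves the bundled executable in the dist/ directory.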
A:
Creating an application and creating a commercial software business are two distinct things.
At my job we have a commercial application developed in Ruby on Rails that is installed at client sites; no obfuscation or encryption applied.
There is just so much more going on at a business level: support, customization, training that it's almost ludicrous to think that they'd do something with the source code. Most sites are lucky if they can spare the cycles to manage the application, much less start wallowing in someone else's codebase.
Now that being said, we aren't publicly distributing the code or anything, just that we've made the choice to invest in making the application better and doing better by our customers than to burn the time trying to restrict access to our application.
A:
py2exe is great, I've used it before and it works just fine for encapsulating a program so it can run on other computers, even those lacking Python. Like Alex said, it can be disassembled, but then so can C++ binaries. It's just a question of how much work the person has to put into it. If someone has physical access to the computer where data is stored, they have access to the data (and there's no difference between code and data). Compress it, encrypt it, compile it -- those will only slow the determined.
Realistically, if you want to make the source code to your program as inaccessible as possible, you probably shouldn't look at Python or Ruby. C++ and some of the others are far more difficult to decompile, thereby providing more obscurity. You can practice the fine art of obfuscation, but even that won't stop someone trying to steal it (It's not like you have to understand the code to put it up on the Pirate Bay).
A:
This has been discussed before here for the Python case and some answers are valid for Ruby too. I highly recommend you to read a previous question in this site titled: How do I protect python code?. Pay attention to the first three answers.
|
Is packaging scripts as executables a solution for commercial applications?
|
What happens when you package a script as an executable? Is this a good way to distribute commercial applications? I remember reading, a long time ago, that when you package scripts as executables, at runtime the exe decompresses the scripts to a temporary directory where they get run.
If it's like that, then I don't think this can be viewed as a good solution, because a skilled user could find out where that directory is located and find the source code. I'm interested in Python & Ruby.
|
[
"With Python (e.g. pyinstaller -- be sure to get the SVN version, the \"released\" one is WAY out of date -- or py2exe) you can package bytecode. Sure, it can be \"reverse compiled\", just like Java bytecode or .NET assemblies (or for that matter, machine code), but I think it's a decent level of \"obscurity\" despite the existence of disassemblers. I guess you could also use something like pyobfuscate on the sources before you compile them, to make the names as unreadable as you can, but I have no personal experience doing that.\n",
"Creating an application and creating a commercial software business are two distinct things. \nAt my job we have a commercial application developed in Ruby on Rails that is installed at client sites; no obfuscation or encryption applied. \nThere is just so much more going on at a business level: support, customization, training that it's almost ludicrous to think that they'd do something with the source code. Most sites are lucky if they can spare the cycles to manage the application, much less start wallowing in someone else's codebase. \nNow that being said, we aren't publicly distributing the code or anything, just that we've made the choice to invest in making the application better and doing better by our customers than to burn the time trying to restrict access to our application.\n",
"py2exe is great, I've used it before and it works just fine for encapsulating a program so it can run on other computers, even those lacking python. Like Alex said, it can be disassembled, but then so can C++ binaries. It's just a question of how much work the person is has to put into it. If someone has physical access to a the computer where data is stored, they have access to the data (And there's no difference between code and data). Compress it, encrypt it, compile it, those will only slow the determined.\nRealistically, if you want to make the source code to your program as inaccessible as possible, you probably shouldn't look at Python or Ruby. C++ and some of the others are far more difficult to decompile, thereby providing more obscurity. You can practice the fine art of obfuscation, but even that won't stop someone trying to steal it (It's not like you have to understand the code to put it up on the Pirate Bay).\n",
"This has been discussed before here for the Python case and some answers are valid for Ruby too. I highly recommend you to read a previous question in this site titled: How do I protect python code?. Pay attention to the first three answers.\n"
] |
[
3,
3,
2,
1
] |
[] |
[] |
[
"executable",
"python",
"ruby"
] |
stackoverflow_0001380852_executable_python_ruby.txt
|
Q:
Wrong Mac OS X framework gets loaded
I've compiled a Python module using my own Qt4 library located in ~/opt/qt-4.6.0/,
but when I try to import that module, the dynamic libraries that get loaded are from my MacPorts Qt4 installation.
$ /opt/local/bin/python2.6
>>> import vtk
objc[58041]: Class QMacSoundDelegate is implemented in both /Users/luis/opt/qt-4.6.0/lib/QtGui.framework/Versions/4/QtGui and /opt/local/libexec/qt4-mac/lib/QtGui.framework/Versions/4/QtGui. Using implementation from /opt/local/libexec/qt4-mac/lib/QtGui.framework/Versions/4/QtGui.
objc[58045]: Class QCocoaColorPanelDelegate is implemented in both /Users/luis/opt/qt-4.6.0/lib/QtGui.framework/Versions/4/QtGui and /opt/local/libexec/qt4-mac/lib/QtGui.framework/Versions/4/QtGui. Using implementation from /opt/local/libexec/qt4-mac/lib/QtGui.framework/Versions/4/QtGui.
[... more output like above ...]
>>>
Is there a way of telling Python (also installed from MacPorts) to load the frameworks located in my ~/opt/qt-4.6.0/lib/ directory? I'm not sure what environment variables to change.
A:
Ok, after Barry Wark pointed me to dyld(1), the man page described a number of variables that I could set.
The first hint came from setting the environment variable DYLD_PRINT_LIBRARIES, so I could see what libraries were being loaded.
$ DYLD_PRINT_LIBRARIES=1 python -c 'import vtk'
[... snip ...]
dyld: loaded: /opt/local/libexec/qt4-mac/lib/QtGui.framework/Versions/4/QtGui
dyld: loaded: /opt/local/lib/libpng12.0.dylib
dyld: loaded: /opt/local/libexec/qt4-mac/lib/QtSql.framework/Versions/4/QtSql
dyld: loaded: /opt/local/libexec/qt4-mac/lib/QtCore.framework/Versions/4/QtCore
[... snip ...]
dyld: loaded: /Users/luis/opt/qt-4.6.0/lib/QtGui.framework/Versions/4/QtGui
dyld: loaded: /Users/luis/opt/qt-4.6.0/lib/QtSql.framework/Versions/4/QtSql
dyld: loaded: /Users/luis/opt/qt-4.6.0/lib/QtCore.framework/Versions/4/QtCore
[... snip ...]
$
Ah, so the frameworks for qt4-mac were indeed being loaded first, just as we suspected. Rereading the man page, the next thing we can try is changing the DYLD_FRAMEWORK_PATH so that it knows where to look. I now added this line to the end of my ~/.bash_profile
export DYLD_FRAMEWORK_PATH="${HOME}/opt/qt-4.6.0/lib:${DYLD_FRAMEWORK_PATH}"
and after logging back in, we try importing the vtk python module again:
$ python -c 'import vtk'
$
There's no output this time. Issue fixed!
A:
Try setting DYLD_LIBRARY_PATH so that your libraries in ~/opt/qt/... come before the MacPorts libraries, and do so before invoking python (take a look at ~/.profile for an example of how to do this if you don't know; MacPorts does the exact same thing to put its libraries on the DYLD_LIBRARY_PATH). dyld, the OS X dynamic linker, uses DYLD_LIBRARY_PATH to find libraries at load time (among other methods); see man dyld for more info.
|
Wrong Mac OS X framework gets loaded
|
I've compiled a Python module using my own Qt4 library located in ~/opt/qt-4.6.0/,
but when I try to import that module, the dynamic libraries that get loaded are from my MacPorts Qt4 installation.
$ /opt/local/bin/python2.6
>>> import vtk
objc[58041]: Class QMacSoundDelegate is implemented in both /Users/luis/opt/qt-4.6.0/lib/QtGui.framework/Versions/4/QtGui and /opt/local/libexec/qt4-mac/lib/QtGui.framework/Versions/4/QtGui. Using implementation from /opt/local/libexec/qt4-mac/lib/QtGui.framework/Versions/4/QtGui.
objc[58045]: Class QCocoaColorPanelDelegate is implemented in both /Users/luis/opt/qt-4.6.0/lib/QtGui.framework/Versions/4/QtGui and /opt/local/libexec/qt4-mac/lib/QtGui.framework/Versions/4/QtGui. Using implementation from /opt/local/libexec/qt4-mac/lib/QtGui.framework/Versions/4/QtGui.
[... more output like above ...]
>>>
Is there a way of telling Python (also installed from MacPorts) to load the frameworks located in my ~/opt/qt-4.6.0/lib/ directory? I'm not sure what environment variables to change.
|
[
"Ok, after Barry Wark pointed me to dyld(1), the man page described a number of variables that I could set.\nThe first hint came from setting the environment variable DYLD_PRINT_LIBRARIES, so I could see what libraries were being loaded.\n$ DYLD_PRINT_LIBRARIES=1 python -c 'import vtk'\n[... snip ...]\ndyld: loaded: /opt/local/libexec/qt4-mac/lib/QtGui.framework/Versions/4/QtGui\ndyld: loaded: /opt/local/lib/libpng12.0.dylib\ndyld: loaded: /opt/local/libexec/qt4-mac/lib/QtSql.framework/Versions/4/QtSql\ndyld: loaded: /opt/local/libexec/qt4-mac/lib/QtCore.framework/Versions/4/QtCore\n[... snip ...]\ndyld: loaded: /Users/luis/opt/qt-4.6.0/lib/QtGui.framework/Versions/4/QtGui\ndyld: loaded: /Users/luis/opt/qt-4.6.0/lib/QtSql.framework/Versions/4/QtSql\ndyld: loaded: /Users/luis/opt/qt-4.6.0/lib/QtCore.framework/Versions/4/QtCore\n[... snip ...]\n$\n\nAh, so the frameworks for qt4-mac were indeed being loaded first, just as we suspected. Rereading the man page, the next thing we can try is changing the DYLD_FRAMEWORK_PATH so that it knows where to look. I now added this line to the end of my ~/.bash_profile\nexport DYLD_FRAMEWORK_PATH=\"${HOME}/opt/qt-4.6.0/lib:${DYLD_FRAMEWORK_PATH}\"\n\nand after logging back in, we try importing the vtk python module again:\n$ python -c 'import vtk'\n$\n\nThere's no output this time. Issue fixed!\n",
"Try setting the DYLD_LIBRARY_PATH to put your libraries in ~/opt/qt/... before the MacPorts' libraries before invoking python (take a look at ~/.profile for an example of how to do this if you don't know; MacPorts does the exact same thing to put its libraries on the DYLD_LIBRARY_PATH). dyld, the OS X dynamic linker uses DYLD_LIBRARY_PATH to find libraries at load time (among other methods); See man dyld for more info.\n"
] |
[
3,
2
] |
[] |
[] |
[
"macos",
"macports",
"python",
"qt4",
"vtk"
] |
stackoverflow_0001381177_macos_macports_python_qt4_vtk.txt
|
Q:
Model inheritance approach with Django's ORM
I want to store events in a web application I am fooling around with and I feel quite unsure about the pros and cons of each respective approach - using inheritance extensively or in a more modest manner.
Example:
class Event(models.Model):
moment = models.DateTimeField()
class UserEvent(Event):
user = models.ForeignKey(User)
class Meta:
abstract = True
class UserRegistrationEvent(UserEvent):
pass # Nothing to add really, the name of the class indicates it's type
class UserCancellationEvent(UserEvent):
reason = models.CharField(max_length=200)
It feels like I'm creating database tables like crazy. It would require a lot of joins to select things out and might complicate querying. But its design feels nice, I think.
Would it be more reasonable to use a "flatter" model that just has more fields?
class Event(models.Model):
moment = models.DateTimeField()
user = models.ForeignKey(User, blank=True, null=True)
type = models.CharField(max_length=50) # 'Registration', 'Cancellation' ...
reason = models.CharField(max_length=200, blank=True, null=True)
Thanks for your comments on this, anyone.
Philip
A:
Flat is better than nested. I don't see that the "deep inheritance" is really buying you anything in this case: I'd go for the flatter model as a simpler, plainer design, with likely better performance characteristics and ease of access.
A:
You might want to try Abstract base models. This implements inheritance without joins, by only creating tables for the derived classes, containing fields from both the parent and the derived model. This would allow you to keep your nested design, without the performance penalty of all those joins.
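Applied to the models in the question, that means marking every non-leaf model abstract, roughly like this (the max_length values are placeholders):
class Event(models.Model):
    moment = models.DateTimeField()

    class Meta:
        abstract = True

class UserEvent(Event):
    user = models.ForeignKey(User)

    class Meta:
        abstract = True

class UserCancellationEvent(UserEvent):
    reason = models.CharField(max_length=200)
Only the leaf classes get tables, each containing moment, user and its own fields, so lookups need no joins.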
|
Model inheritance approach with Django's ORM
|
I want to store events in a web application I am fooling around with and I feel quite unsure about the pros and cons of each respective approach - using inheritance extensively or in a more modest manner.
Example:
class Event(models.Model):
moment = models.DateTimeField()
class UserEvent(Event):
user = models.ForeignKey(User)
class Meta:
abstract = True
class UserRegistrationEvent(UserEvent):
pass # Nothing to add really, the name of the class indicates it's type
class UserCancellationEvent(UserEvent):
reason = models.CharField(max_length=200)
It feels like I'm creating database tables like crazy. It would require a lot of joins to select things out and might complicate querying. But its design feels nice, I think.
Would it be more reasonable to use a "flatter" model that just has more fields?
class Event(models.Model):
moment = models.DateTimeField()
user = models.ForeignKey(User, blank=True, null=True)
type = models.CharField(max_length=50) # 'Registration', 'Cancellation' ...
reason = models.CharField(max_length=200, blank=True, null=True)
Thanks for your comments on this, anyone.
Philip
|
[
"Flat is better than nested. I don't see that the \"deep inheritance\" is really buying you anything in this case: I'd go for the flatter model as a simpler, plainer design, with likely better performance characteristics and ease of access.\n",
"You might want to try Abstract base models. This implements inheritance without joins, by only creating tables for the derived classes, containing fields from both the parent and the derived model. This would allow you to keep your nested design, without the performance penalty of all those joins.\n"
] |
[
6,
2
] |
[] |
[] |
[
"django",
"django_models",
"model_inheritance",
"python"
] |
stackoverflow_0001381423_django_django_models_model_inheritance_python.txt
|
Q:
MAC OS X Custom Application Keeps Bouncing in the Dock
First of all, thank you for taking the time to read this. I am new to developing applications for the Mac and I am having some problems. My application works fine, and that is not the focus of my question. Rather, I have a python program which essentially does this:
for i in values:
os.system("java " + program_and_options[i])
However, every time my program executes the java program, a java window is created in my dock (with an annoying animation) and most importantly steals the focus of my mouse and keyboard. Then it goes away a second later, to be replaced by another Java instance. This means that my batch program cannot be used while I am interacting with my Mac, because I get a hiccup every second or more often and cannot get anything done. My problem is that the act of displaying something in the dock takes my focus, and I would like it not to. Is there a setting on OS X to never display something in the dock (such as Java or python)?
Is there a Mac setting or term that I should use to properly describe this problem I am having? I completely lack the vocabulary to describe this problem and I hope I make sense. I appreciate any help.
I am running Mac OS X, Version 10.5.7 with a 1.66 GHz Intel Core Duo, 2 GB memory, Macintosh HD. I am running Python 2.5.1, java version "1.5.0_16" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b06-284) Java HotSpot(TM) Client VM (build 1.5.0_16-133, mixed mode, sharing).
Thanks again,
-Brian J. Stinar-
A:
Does running Java with headless mode = true fix it?
http://zzamboni.org/brt/2007/12/07/disable-dock-icon-for-java-programs-in-mac-osx-howto/
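In the Python driver from the question, that would amount to something like this (values and program_and_options come from the question's script):
import os

for i in values:
    os.system("java -Djava.awt.headless=true " + program_and_options[i])
Note this only helps if the Java program never actually needs a display; code that touches AWT/Swing components will raise a HeadlessException instead.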
A:
As far as I am aware there is no way to disable the annoying double Java bounce without making your Java application a first class citizen on Mac OS X (much like NetBeans, or Eclipse). As for making certain programs not show in the dock, there are .plist modifications that can be made so that the program does not show up in the dock. See http://www.macosxhints.com/article.php?story=20010701191518268
A:
It's certainly possible to write a Java application which doesn't display in the Dock... in fact, it's the default. If your application is showing up, it must be doing something which triggers window server access -- your best bet is to try and figure out what that is.
|
MAC OS X Custom Application Keeps Bouncing in the Dock
|
First of all, thank you for taking the time to read this. I am new to developing applications for the Mac and I am having some problems. My application works fine, and that is not the focus of my question. Rather, I have a python program which essentially does this:
for i in values:
os.system("java " + program_and_options[i])
However, every time my program executes the java program, a java window is created in my dock (with an annoying animation) and most importantly steals the focus of my mouse and keyboard. Then it goes away a second later, to be replaced by another Java instance. This means that my batch program cannot be used while I am interacting with my Mac, because I get a hiccup every second or more often and cannot get anything done. My problem is that the act of displaying something in the dock takes my focus, and I would like it not to. Is there a setting on OS X to never display something in the dock (such as Java or python)?
Is there a Mac setting or term that I should use to properly describe this problem I am having? I completely lack the vocabulary to describe this problem and I hope I make sense. I appreciate any help.
I am running Mac OS X, Version 10.5.7 with a 1.66 GHz Intel Core Duo, 2 GB memory, Macintosh HD. I am running Python 2.5.1, java version "1.5.0_16" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b06-284) Java HotSpot(TM) Client VM (build 1.5.0_16-133, mixed mode, sharing).
Thanks again,
-Brian J. Stinar-
|
[
"Does running Java with headless mode = true fix it?\nhttp://zzamboni.org/brt/2007/12/07/disable-dock-icon-for-java-programs-in-mac-osx-howto/\n",
"As far as I am aware there is no way to disable the annoying double Java bounce without making your Java application a first class citizen on Mac OS X (much like NetBeans, or Eclipse). As for making certain programs not show in the dock, there are .plist modifications that can be made so that the program does not show up in the dock. See http://www.macosxhints.com/article.php?story=20010701191518268\n",
"It's certainly possible to write a Java application which doesn't display in the Dock... in fact, it's the default. If your application is showing up, it must be doing something which triggers window server access -- your best bet is to try and figure out what that is.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"java",
"macos",
"python"
] |
stackoverflow_0001381739_java_macos_python.txt
|
Q:
Beginner graphics program in Python giving 'out of stack space' error
I'm currently learning Python using Zelle's Introductory text, and I'm trying to recreate one of the example programs which uses an accompanying file graphics.py. Because I'm using Python 3.1 and the text was written for 2.x though, I'm using the GraphicsPy3.py file found at http://mcsp.wartburg.edu/zelle/python and renaming it graphics.py on my computer.
The file named futval_graph.py is as follows:
from graphics import *
def main():
print("This program plots the growth of a 10-year investment.")
principal = eval(input("Enter the initial principal: "))
apr = eval(input("Enter the annualized interest rate: "))
win = GraphWin("Investment Grown Chart", 320, 420)
win.setBackground("white")
Text(Point(20, 230), ' 0.0K').draw(win)
Text(Point(20, 180), ' 2.5K').draw(win)
Text(Point(20, 130), ' 5.0K').draw(win)
Text(Point(20, 80), ' 7.5K').draw(win)
Text(Point(20, 30), '10.0K').draw(win)
# Rest of code is here but I've commented it out to isolate the problem.
main()
When I run 'import futval_graph' on a fresh IDLE session the program simply runs and then hangs after inputting 'apr' without opening the new graphics window. When I run the program from the command line I get the following result:
C:\Python31>futval_graph.py
This program plots the growth of a 10-year investment.
Enter the initial principal: error in background error handler:
out of stack space (infinite loop?)
while executing
"::tcl::Bgerror {out of stack space (infinite loop?)} {-code 1 -level 0 -errorco de NONE -errorinfo {out of stack space (infinite loop?)
while execu..."
Especially frustrating is the fact that this series of commands works when entered into a fresh session of IDLE. And then when running 'import futval_graph' from IDLE after all of the commands have been run on their own, futval_graph works properly.
So my question is: how can I get futval_graph.py to run properly both from the command line and IDLE? Sorry if my explanation of the problem is a bit scattered. Let me know if any further info would help clarify.
A:
There appears to be a problem with the Python 3 version of graphics.py.
I downloaded the Python 3 version, renamed it to graphics.py, then ran the following.
PS C:\Users\jaraco\Desktop> python
Python 3.1.1 (r311:74483, Aug 17 2009, 17:02:12) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from graphics import *
>>> dir()
['BAD_OPTION', 'Circle', 'DEAD_THREAD', 'DEFAULT_CONFIG', 'Entry', 'GraphWin', 'GraphicsError', 'GraphicsObject', 'Image', 'Line', 'OBJ_ALREADY_DRAWN', 'Oval',
'Pixmap', 'Point', 'Polygon', 'Queue', 'Rectangle', 'Text', 'Transform', 'UNSUPPORTED_METHOD', '__builtins__', '__doc__', '__name__', '__package__', 'atexit', 'color_rgb', 'copy', 'os', 'sys', 'test', 'time', 'tk']
>>> error in background error handler:
out of stack space (infinite loop?)
while executing
"::tcl::Bgerror {out of stack space (infinite loop?)} {-code 1 -level 0 -errorcode NONE -errorinfo {out of stack space (infinite loop?)
while execu..."
As you can see, I get the same error, and I haven't even executed anything in the module. There appears to be a problem with the library itself, and not something you're doing in your code.
I would report this to the author, as he suggests.
I did find that I did not get the error if I simply imported the graphics module.
>>> import graphics
>>> dir(graphics)
I found that if I did this to your code, and then changed the references from GraphWin to graphics.GraphWin, Text to graphics.Text, and Point to graphics.Point, the problem seemed to go away, and I could run it from the command line.
import graphics
def main():
print("This program plots the growth of a 10-year investment.")
principal = eval(input("Enter the initial principal: "))
apr = eval(input("Enter the annualized interest rate: "))
win = graphics.GraphWin("Investment Grown Chart", 320, 420)
win.setBackground("white")
graphics.Text(graphics.Point(20, 230), ' 0.0K').draw(win)
graphics.Text(graphics.Point(20, 180), ' 2.5K').draw(win)
graphics.Text(graphics.Point(20, 130), ' 5.0K').draw(win)
graphics.Text(graphics.Point(20, 80), ' 7.5K').draw(win)
graphics.Text(graphics.Point(20, 30), '10.0K').draw(win)
# Rest of code is here but I've commented it out to isolate the problem.
main()
Why should this be? It shouldn't. It appears the graphics.py module has some side-effect that's not behaving properly.
I suspect you would not be running into these errors under the Python 2.x version.
A:
Your code has issues with the built-in input when it's called with a non-empty string as an argument. I suspect it might have something to do with the thread setup that graphics does.
If you use Tkinter widgets to read these inputs instead, it may solve your problem.
To be honest, when you download graphicsPy3.py it says:
Graphics library ported to Python 3.x. Still experimental, please report any issues.
so, I suppose, you'd better follow this recommendation.
A:
After some additional research, it does appear that the call to input() does have some impact on triggering the unwanted behavior.
I re-wrote the program to not import the graphics module until after the input() calls were complete. In this case, I was unable to reproduce the error, and the code seemed to behave normally even when run from the command-line. I was able to get the parameters from the user, then it would begin to draw a graph (although with the sample code, only very little was drawn before the app closed). Perhaps this technique is a suitable workaround for your problem.
The underlying problem seems to have something to do with the way the tkinter module is initialized in a separate thread, and some undesirable interactions between threads. My guess is that the input() method, when run from a command-line, either locks a resource or otherwise triggers behavior that causes the Tk thread to go into an infinite loop.
Doing some searches around the Internet, I see that other users have gotten the same error, but for other reasons. Some were getting it when tkinter was built without thread support, but I don't think that applies here.
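A sketch of that workaround, for anyone who wants to try it -- this only reorders the question's own code:
def main():
    print("This program plots the growth of a 10-year investment.")
    principal = eval(input("Enter the initial principal: "))
    apr = eval(input("Enter the annualized interest rate: "))

    # import graphics only after all console input is finished
    import graphics
    win = graphics.GraphWin("Investment Grown Chart", 320, 420)
    win.setBackground("white")
    graphics.Text(graphics.Point(20, 230), ' 0.0K').draw(win)
    # ... remaining drawing code as before ...

main()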
|
Beginner graphics program in Python giving 'out of stack space' error
|
I'm currently learning Python using Zelle's Introductory text, and I'm trying to recreate one of the example programs which uses an accompanying file graphics.py. Because I'm using Python 3.1 and the text was written for 2.x though, I'm using the GraphicsPy3.py file found at http://mcsp.wartburg.edu/zelle/python and renaming it graphics.py on my computer.
The file named futval_graph.py is as follows:
from graphics import *
def main():
print("This program plots the growth of a 10-year investment.")
principal = eval(input("Enter the initial principal: "))
apr = eval(input("Enter the annualized interest rate: "))
win = GraphWin("Investment Grown Chart", 320, 420)
win.setBackground("white")
Text(Point(20, 230), ' 0.0K').draw(win)
Text(Point(20, 180), ' 2.5K').draw(win)
Text(Point(20, 130), ' 5.0K').draw(win)
Text(Point(20, 80), ' 7.5K').draw(win)
Text(Point(20, 30), '10.0K').draw(win)
# Rest of code is here but I've commented it out to isolate the problem.
main()
When I run 'import futval_graph' on a fresh IDLE session the program simply runs and then hangs after inputting 'apr' without opening the new graphics window. When I run the program from the command line I get the following result:
C:\Python31>futval_graph.py
This program plots the growth of a 10-year investment.
Enter the initial principal: error in background error handler:
out of stack space (infinite loop?)
while executing
"::tcl::Bgerror {out of stack space (infinite loop?)} {-code 1 -level 0 -errorco de NONE -errorinfo {out of stack space (infinite loop?)
while execu..."
Especially frustrating is the fact that this series of commands works when entered into a fresh session of IDLE. And then when running 'import futval_graph' from IDLE after all of the commands have been run on their own, futval_graph works properly.
So my question is: how can I get futval_graph.py to run properly both from the command line and IDLE? Sorry if my explanation of the problem is a bit scattered. Let me know if any further info would help clarify.
|
[
"There appears to be a problem with the Python 3 version of graphics.py.\nI downloaded the Python 3 version, renamed it to graphics.py, then ran the following.\nPS C:\\Users\\jaraco\\Desktop> python\nPython 3.1.1 (r311:74483, Aug 17 2009, 17:02:12) [MSC v.1500 32 bit (Intel)] on\nwin32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from graphics import *\n>>> dir()\n['BAD_OPTION', 'Circle', 'DEAD_THREAD', 'DEFAULT_CONFIG', 'Entry', 'GraphWin', 'GraphicsError', 'GraphicsObject', 'Image', 'Line', 'OBJ_ALREADY_DRAWN', 'Oval',\n'Pixmap', 'Point', 'Polygon', 'Queue', 'Rectangle', 'Text', 'Transform', 'UNSUPPORTED_METHOD', '\\_\\_builtins\\_\\_', '\\_\\_doc\\_\\_', '\\_\\_name\\_\\_', '\\_\\_package\\_\\_', 'atexit', 'color_rgb', 'copy', 'os', 'sys', 'test', 'time', 'tk']\n>>> error in background error handler:\nout of stack space (infinite loop?)\n while executing\n\"::tcl::Bgerror {out of stack space (infinite loop?)} {-code 1 -level 0 -errorcode NONE -errorinfo {out of stack space (infinite loop?)\n while execu...\"\n\nAs you can see, I get the same error, and I haven't even executed anything in the module. There appears to be a problem with the library itself, and not something you're doing in your code.\nI would report this to the author, as he suggests.\nI did find that I did not get the error if I simply imported the graphics module.\n>>> import graphics\n>>> dir(graphics)\n\nI found that if I did this to your code, and then changed references GraphWin to graphics.GraphWin, Text to graphics.Text, and Point to graphics.Point, the problem seemed to go away, and I could run it from the command line.\nimport graphics\n\ndef main():\n print(\"This program plots the growth of a 10-year investment.\")\n\n principal = eval(input(\"Enter the initial principal: \"))\n apr = eval(input(\"Enter the annualized interest rate: \"))\n\n win = graphics.GraphWin(\"Investment Grown Chart\", 320, 420)\n win.setBackground(\"white\")\n graphics.Text(graphics.Point(20, 230), ' 0.0K').draw(win)\n graphics.Text(graphics.Point(20, 180), ' 2.5K').draw(win)\n graphics.Text(graphics.Point(20, 130), ' 5.0K').draw(win)\n graphics.Text(graphics.Point(20, 80), ' 7.5K').draw(win)\n graphics.Text(graphics.Point(20, 30), '10.0K').draw(win)\n\n # Rest of code is here but I've commented it out to isolate the problem.\n\nmain()\n\nWhy should this be? It shouldn't. It appears the graphics.py module has some side-effect that's not behaving properly.\nI suspect you would not be running into these errors under the Python 2.x version.\n",
"your code has issues with buil-in input, when it's called with non-empty string as argument. I suspect it might have something to do with the thread setup that graphics does.\nIf you make Tkinter widgets to read these inputs, may be it'll solve your problem.\nTo be honest, when you download graphicsPy3.py it says:\n\nGraphics library ported to Python 3.x. Still experimental, please report any issues.\n\nso, I suppose, you better follow this recommendation.\n",
"After some additional research, it does appear that the call to input() does have some impact on triggering the unwanted behavior.\nI re-wrote the program to not import the graphics module until after the input() calls were complete. In this case, I was unable to reproduce the error, and the code seemed to behave normally even when run from the command-line. I was able to get the parameters from the user, then it would begin to draw a graph (although with the sample code, only very little was drawn before the app closed). Perhaps this technique is a suitable workaround for your problem.\nThe underlying problem seems to have something to do with the way the tkinter module is initialized in a separate thread, and some undesirable interactions between threads. My guess is that the input() method, when run from a command-line, either locks a resource or otherwise triggers behavior that causes the Tk thread to go into an infinite loop.\nDoing some searches around the Internet, I see that other users have gotten the same error, but for other reasons. Some were getting it when tkinter was built without thread support, but I don't think that applies here.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001372949_python.txt
|
Q:
inserting a tuple into a mysql db
Can anyone show me the syntax for inserting a Python tuple/list into a MySQL database?
I also need to know if it is possible for the user to pass certain rows without inserting anything...
for example: a function is returning this tuple:
return job(jcardnum, jreg, jcarddate, jcardtime, jcardserve, jdeliver)
Suppose the user didn't enter anything in jreg:
would Python itself enter NULL into the related row in the DB, or would I run into trouble?
A:
As a comment says, the job(...) part is a function (or class) call -- whatever is returned from that call also gets returned from this return statement.
Let's assume it's a tuple. What if "the user didn't enter anything in jreg" -- well then, depending on a lot of code you're not showing us, that could be a runtime error (name jreg being undefined), an empty string or other initial default value never altered, or None; in the latter case that would indeed eventually become a NULL in the DB (if acceptable per the DB schema, of course -- otherwise, the DB would reject the insert attempt).
Once you DO finally have a correct and proper tuple T that you want to insert,
mycursor.execute('INSERT INTO sometable VALUES(?, ?, ?, ?, ?, ?)', T)
is going to be close to the syntax you want -- if T has six items (and sometable has six columns, of course). Each ? is a placeholder and gets replaced with the corresponding item. mycursor will need to be an instance of Cursor, presumably obtained by some earlier call to myconnection.cursor where myconnection is an instance of Connection, built by the proper call to connect from the DB API module you're using with the right arguments.
A:
Maybe, cursor.executemany is what you need?
suppose the user didnt enter anything
in jreg would python itself enter null
in to the related row in the DB or
would i run in to trouble?
I think you should just try it; you'll probably see a warning or error if NULL is not allowed.
|
inserting a tuple into a mysql db
|
Can anyone show me the syntax for inserting a Python tuple/list into a MySQL database?
I also need to know if it is possible for the user to pass certain rows without inserting anything...
for example: a function is returning this tuple:
return job(jcardnum, jreg, jcarddate, jcardtime, jcardserve, jdeliver)
Suppose the user didn't enter anything in jreg:
would Python itself enter NULL into the related row in the DB, or would I run into trouble?
|
[
"As a comment says, the job(...) part is a function (or class) call -- whatever is returned from that call also gets returned from this return statement.\nLet's assume it's a tuple. What if \"the user didn't enter anything in jreg\" -- well then, depending on a lot of code you're not showing us, that could be a runtime error (name jreg being undefined), an empty string or other initial default value never altered, or None; in the latter case that would indeed eventually become a NULL in the DB (if acceptable per the DB schema, of course -- otherwise, the DB would reject the insert attempt).\nOnce you DO finally have a correct and proper tuple T that you want to insert,\n`mycursor.execute('INSERT INTO sometable VALUES(?, ?, ?, ?, ?, ?)', T)\n\nis going to be close to the syntax you want -- if T has six items (and sometable has six columns, of course). Each ? is a placeholder and gets replaced with the corresponding item. mycursor will need to be an instance of Cursor, presumably obtained by some earlier call to myconnection.cursor where myconnection is an instance of Connection, built by the proper call to connect from the DB API module you're using with the right argumentrs.\nIf you show us about 100 times more code and DB schemas and tell us exactly WHAT you're trying to accomplish, we, collectively speaking, could no doubt be MUCH more useful and specific -- but based on the sub-epsilon amount of info you supply that's about as much as we, collectively speaking, can offer;-).\n",
"Maybe, cursor.executemany is what you need?\n\nsuppose the user didnt enter anything\n in jreg would python itself enter null\n in to the related row in the DB or\n would i run in to trouble?\n\nI think you should just try, you probably see the warning or error if NULL is not allowed.\n"
] |
[
7,
0
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0001381840_mysql_python.txt
|
Q:
You don't have permission to access /index.py on this server
I am setting up a simple test page in Python. I only have two files: .htaccess and index.py. I get a 403 Forbidden error when trying to view the page - how can I fix this?
.htaccess:
RewriteEngine On
AddHandler application/x-httpd-cgi .py
DirectoryIndex index.py
index.py:
#!/usr/bin/python
print "Content-type: text/html\n\n"
print "test"
A:
What permissions have you set on index.py (e.g. what does ls -l index.py say, if in Linux or other Unix variants)?
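If the permissions turn out to be the culprit, the usual fix is to make the script readable and executable by the web server:
chmod 755 index.py
and to make sure CGI execution is allowed for that directory at all, e.g. by adding Options +ExecCGI to the .htaccess (which only works if the server's AllowOverride setting permits Options).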
|
You don't have permission to access /index.py on this server
|
I am setting up a simple test page in Python. I only have two files: .htaccess and index.py. I get a 403 Forbidden error when trying to view the page - how can I fix this?
.htaccess:
RewriteEngine On
AddHandler application/x-httpd-cgi .py
DirectoryIndex index.py
index.py:
#!/usr/bin/python
print "Content-type: text/html\n\n"
print "test"
|
[
"What permissions have you set on index.py (e.g. what does ls -l index.py say, if in Linux or other Unix variants)?\n"
] |
[
1
] |
[] |
[] |
[
".htaccess",
"python"
] |
stackoverflow_0001383632_.htaccess_python.txt
|
Q:
Dynamically attaching a method to an existing Python object generated with swig?
I am working with a Python class, and I don't have write access to its declaration.
How can I attach a custom method (such as __str__) to the objects created from that class without modifying the class declaration?
EDIT:
Thank you for all your answers. I tried them all but they haven't resolved my problem. Here is a minimal example that I hope will clarify the issue. I am using swig to wrap a C++ class, and the purpose is to override the __str__ function of an object returned by the swig module. I use cmake to build the example:
test.py
import example
ex = example.generate_example(2)
def prnt(self):
return str(self.x)
#How can I replace the __str__ function of object ex with prnt?
print ex
print prnt(ex)
example.hpp
struct example
{
int x;
};
example generate_example(int x);
example.cpp
#include "example.hpp"
#include <iostream>
example generate_example(int x)
{
example ex;
ex.x = x;
return ex;
}
int main()
{
example ex = generate_example(2);
std::cout << ex.x << "\n";
return 1;
}
example.i
%module example
%{
#include "example.hpp"
%}
%include "example.hpp"
CMakeLists.txt
cmake_minimum_required(VERSION 2.6)
find_package(SWIG REQUIRED)
include(${SWIG_USE_FILE})
find_package(PythonLibs)
include_directories(${PYTHON_INCLUDE_PATH})
include_directories(${CMAKE_CURRENT_SOURCE_DIR})
set_source_files_properties(example.i PROPERTIES CPLUSPLUS ON)
swig_add_module(example python example.i example)
swig_link_libraries(example ${PYTHON_LIBRARIES})
if(APPLE)
set(CMAKE_SHARED_MODULE_CREATE_CXX_FLAGS "${CMAKE_SHARED_MODULE_CREATE_CXX_FLAGS} -flat_namespace")
endif(APPLE)
To build and run test.py, copy all the files into a directory, and in that directory run
cmake .
make
python test.py
This results in the following output:
<example.example; proxy of <Swig Object of type 'example *' at 0x10021cc40> >
2
As you can see, the swig object has its own __str__ implementation, and that is what I am trying to override.
A:
If you create a wrapper class, this will work with any other class, either built-in or not. This is called "containment and delegation", and it is a common alternative to inheritance:
class SuperDuperWrapper(object):
def __init__(self, origobj):
self.myobj = origobj
def __str__(self):
return "SUPER DUPER " + str(self.myobj)
def __getattr__(self,attr):
return getattr(self.myobj, attr)
The __getattr__ method will delegate all undefined attribute requests on your SuperDuperWrapper object to the contained myobj object. In fact, given Python's dynamic typing, you could use this class to SuperDuper'ly wrap just about anything:
s = "hey ho!"
sds = SuperDuperWrapper(s)
print sds
i = 100
sdi = SuperDuperWrapper(i)
print sdi
Prints:
SUPER DUPER hey ho!
SUPER DUPER 100
In your case, you would take the returned object from the function you cannot modify, and wrap it in your own SuperDuperWrapper, but you could still otherwise access it just as if it were the base object.
print sds.split()
['hey', 'ho!']
A:
>>> class C(object):
... pass
...
>>> def spam(self):
... return 'spam'
...
>>> C.__str__ = spam
>>> print C()
spam
It won't work on classes which use __slots__.
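The same class-level assignment works on the SWIG-generated proxy class from the question, since it is an ordinary Python class -- a sketch, assuming the proxy class is exposed as example.example:
import example

def prnt(self):
    return str(self.x)

example.example.__str__ = prnt    # patch the proxy class, not the instance

ex = example.generate_example(2)
print ex                          # now prints 2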
A:
Create a subclass. Example:
>>> import datetime
>>> t = datetime.datetime.now()
>>> datetime.datetime.__str__ = lambda self: 'spam'
...
TypeError: cant set attributes of built-in/extension type 'datetime.datetime'
>>> t.__str__ = lambda self: 'spam'
...
AttributeError: 'datetime.datetime' object attribute '__str__' is read-only
>>> class mydate(datetime.datetime):
def __str__(self):
return 'spam'
>>> myt = mydate.now()
>>> print t
2009-09-05 13:11:34.600000
>>> print myt
spam
A:
This is my answer from another question:
import types
class someclass(object):
val = "Value"
def some_method(self):
print self.val
def some_method_upper(self):
print self.val.upper()
obj = someclass()
obj.some_method()
obj.some_method = types.MethodType(some_method_upper, obj)
obj.some_method()
A:
Note that using Alex's subclass idea, you can help yourself a little bit more using "from ... import ... as":
from datetime import datetime as datetime_original
class datetime(datetime_original):
def __str__(self):
return 'spam'
so now the class has the standard name, but different behaviour.
>>> print datetime.now()
'spam'
Of course, this can be dangerous...
|
Dynamically attaching a method to an existing Python object generated with swig?
|
I am working with a Python class, and I don't have write access to its declaration.
How can I attach a custom method (such as __str__) to the objects created from that class without modifying the class declaration?
EDIT:
Thank you for all your answers. I tried them all but they haven't resolved my problem. Here is a minimal example that I hope will clarify the issue. I am using swig to wrap a C++ class, and the purpose is to override the __str__ function of an object returned by the swig module. I use cmake to build the example:
test.py
import example
ex = example.generate_example(2)
def prnt(self):
return str(self.x)
#How can I replace the __str__ function of object ex with prnt?
print ex
print prnt(ex)
example.hpp
struct example
{
int x;
};
example generate_example(int x);
example.cpp
#include "example.hpp"
#include <iostream>
example generate_example(int x)
{
example ex;
ex.x = x;
return ex;
}
int main()
{
example ex = generate_example(2);
std::cout << ex.x << "\n";
return 1;
}
example.i
%module example
%{
#include "example.hpp"
%}
%include "example.hpp"
CMakeLists.txt
cmake_minimum_required(VERSION 2.6)
find_package(SWIG REQUIRED)
include(${SWIG_USE_FILE})
find_package(PythonLibs)
include_directories(${PYTHON_INCLUDE_PATH})
include_directories(${CMAKE_CURRENT_SOURCE_DIR})
set_source_files_properties(example.i PROPERTIES CPLUSPLUS ON)
swig_add_module(example python example.i example)
swig_link_libraries(example ${PYTHON_LIBRARIES})
if(APPLE)
set(CMAKE_SHARED_MODULE_CREATE_CXX_FLAGS "${CMAKE_SHARED_MODULE_CREATE_CXX_FLAGS} -flat_namespace")
endif(APPLE)
To build and run test.py, copy all the files into a directory, and in that directory run
cmake .
make
python test.py
This results in the following output:
<example.example; proxy of <Swig Object of type 'example *' at 0x10021cc40> >
2
As you can see, the swig object has its own __str__ implementation, and that is what I am trying to override.
|
[
"If you create a wrapper class, this will work with any other class, either built-in or not. This is called \"containment and delegation\", and it is a common alternative to inheritance:\nclass SuperDuperWrapper(object):\n def __init__(self, origobj):\n self.myobj = origobj\n def __str__(self):\n return \"SUPER DUPER \" + str(self.myobj)\n def __getattr__(self,attr):\n return getattr(self.myobj, attr)\n\nThe __getattr__ method will delegate all undefined attribute requests on your SuperDuperWrapper object to the contained myobj object. In fact, given Python's dynamic typing, you could use this class to SuperDuper'ly wrap just about anything:\ns = \"hey ho!\"\nsds = SuperDuperWrapper(s)\nprint sds\n\ni = 100\nsdi = SuperDuperWrapper(i)\nprint sdi\n\nPrints:\nSUPER DUPER hey ho!\nSUPER DUPER 100\n\nIn your case, you would take the returned object from the function you cannot modify, and wrap it in your own SuperDuperWrapper, but you could still otherwise access it just as if it were the base object.\nprint sds.split()\n['hey', 'ho!']\n\n",
">>> class C(object):\n... pass\n... \n>>> def spam(self):\n... return 'spam'\n... \n>>> C.__str__ = spam\n>>> print C()\nspam\n\nIt won't work on classes which use __slots__.\n",
"Create a subclass. Example:\n>>> import datetime\n>>> t = datetime.datetime.now()\n>>> datetime.datetime.__str__ = lambda self: 'spam'\n...\nTypeError: cant set attributes of built-in/extension type 'datetime.datetime'\n>>> t.__str__ = lambda self: 'spam'\n...\nAttributeError: 'datetime.datetime' object attribute '__str__' is read-only\n>>> class mydate(datetime.datetime):\n def __str__(self):\n return 'spam'\n\n>>> myt = mydate.now()\n>>> print t\n2009-09-05 13:11:34.600000\n>>> print myt\nspam\n\n",
"This is my answer from another question:\nimport types\nclass someclass(object):\n val = \"Value\"\n def some_method(self):\n print self.val\n\ndef some_method_upper(self):\n print self.val.upper()\n\nobj = someclass()\nobj.some_method()\n\nobj.some_method = types.MethodType(some_method_upper, obj)\nobj.some_method()\n\n",
"Note that using Alex's subclass idea, you can help yourself a little bit more using \"from ... import ... as\":\nfrom datetime import datetime as datetime_original\n\nclass datetime(datetime_original):\n def __str__(self):\n return 'spam'\n\nso now the class has the standard name, but different behaviour. \n>>> print datetime.now()\n 'spam'\n\nOf course, this can be dangerous...\n"
] |
[
24,
3,
3,
2,
1
] |
[] |
[] |
[
"class",
"dynamic",
"methods",
"python"
] |
stackoverflow_0001382871_class_dynamic_methods_python.txt
|
Q:
Python data structure for a collection of objects with random access based on an attribute
I need a collection of objects which can be looked up by a certain (unique) attribute common to each of the objects. Right now I am using a dictionary, assigning the attribute's value as the dictionary key.
Here is an example of what I have now:
class Item():
def __init__(self, uniq_key, title=None):
self.key = uniq_key
self.title = title
item_instance_1 = Item("unique_key1", title="foo")
item_instance_2 = Item("unique_key3", title="foo")
item_instance_3 = Item("unique_key2", title="foo")
item_collection = {
item_instance_1.key: item_instance_1,
item_instance_2.key: item_instance_2,
item_instance_3.key: item_instance_3
}
item_instance_1.key = "new_key"
Now this seems a rather cumbersome solution, as the key is not a reference to the attribute but takes the value of the key-attribute on assignment, meaning that:
the keys of the dictionary duplicate information already present in form of the object attribute and
when the object attribute is changed the dictionary key is not updated.
Using a list and iterating through the objects seems even more inefficient.
So, is there more fitting data structure than dict for this particular case, a collection of objects giving me random access based on a certain object attribute?
This would need to work with Python 2.4 as that's what I am stuck with (at work).
If it hasn't been obvious, I'm new to Python.
A:
There is actually no duplication of information as you fear: the dict's key, and the object's .key attribute, are just two references to exactly the same object.
The only real problem is "what if the .key gets reassigned". Well then, clearly you must use a property that updates all the relevant dicts as well as the instance's attribute; so each object must know all the dicts in which it may be enregistered. Ideally one would want to use weak references for the purpose, to avoid circular dependencies, but, alas, you can't take a weakref.ref (or proxy) to a dict. So, I'm using normal references here, instead (the alternative is not to use dict instances but e.g. some special subclass -- not handy).
def enregister(d, obj):
obj.ds.append(d)
d[obj.key] = obj
class Item(object):
def __init__(self, uniq_key, title=None):
self._key = uniq_key
self.title = title
self.ds = []
def adjust_key(self, newkey):
newds = [d for d in self.ds if self._key in d]
for d in newds:
del d[self._key]
d[newkey] = self
self.ds = newds
self._key = newkey
def get_key(self):
return self._key
key = property(get_key, adjust_key)
Edit: if you want a single collection with ALL the instances of Item, that's even easier, as you can make the collection a class-level attribute; indeed it can be a WeakValueDictionary to avoid erroneously keeping items alive, if that's what you need. I.e.:
class Item(object):
all = weakref.WeakValueDictionary()
def __init__(self, uniq_key, title=None):
self._key = uniq_key
self.title = title
# here, if needed, you could check that the key
# is not ALREADY present in self.all
self.all[self._key] = self
def adjust_key(self, newkey):
# "key non-uniqueness" could be checked here too
del self.all[self._key]
self.all[newkey] = self
self._key = newkey
def get_key(self):
return self._key
key = property(get_key, adjust_key)
Now you can use Item.all['akey'], Item.all.get('akey'), for akey in Item.all:, and so forth -- all the rich functionality of dicts.
A:
There are a number of great things you can do here. One example would be to let the class keep track of everything:
class Item():
_member_dict = {}
@classmethod
def get_by_key(cls,key):
return cls._member_dict[key]
def __init__(self, uniq_key, title=None):
self.key = uniq_key
self.__class__._member_dict[uniq_key] = self
self.title = title
>>> i = Item('foo')
>>> i == Item.get_by_key('foo')
True
Note you will retain the update problem: if key changes, the _member_dict falls out of sync. This is where encapsulation will come in handy: make it (practically) impossible to change key without updating the dictionary. For a good tutorial on how to do that, see this tutorial.
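A minimal sketch of that encapsulation, keeping _member_dict in sync through a property (the helper names are mine):
class Item(object):
    _member_dict = {}

    def __init__(self, uniq_key, title=None):
        self._key = uniq_key
        Item._member_dict[uniq_key] = self
        self.title = title

    def _get_key(self):
        return self._key

    def _set_key(self, new_key):
        # move the entry so the dictionary key tracks the attribute
        del Item._member_dict[self._key]
        Item._member_dict[new_key] = self
        self._key = new_key

    key = property(_get_key, _set_key)
Reassigning item.key now moves the entry in _member_dict automatically.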
A:
Well, dict really is what you want. What may be cumbersome is not the dict itself, but the way you are building it. Here is a slight enhancement to your example, showing how to use a list expression and the dict constructor to easily create your lookup dict. This also shows how to create a multimap kind of dict, to look up matching items given a field value that might be duplicated across items:
class Item(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def __str__(self):
return str(self.__dict__)
def __repr__(self):
return str(self)
allitems = [
Item(key="red", title="foo"),
Item(key="green", title="foo"),
Item(key="blue", title="foofoo"),
]
# if fields are unique
itemByKey = dict([(i.key,i) for i in allitems])
# if field value can be duplicated across items
# (for Python 2.5 and higher, you could use a defaultdict from
# the collections module)
itemsByTitle = {}
for i in allitems:
if i.title in itemsByTitle:
itemsByTitle[i.title].append(i)
else:
itemsByTitle[i.title] = [i]
print itemByKey["red"]
print itemsByTitle["foo"]
Prints:
{'key': 'red', 'title': 'foo'}
[{'key': 'red', 'title': 'foo'}, {'key': 'green', 'title': 'foo'}]
A:
Editing to correct the problem I had - which was due to my "collection = dict()" default parameter (*bonk*). Now, each call to the function will return a class with its own collection as intended - this for convenience in case more than one such collection should be needed. Also am putting the collection in the class and just returning the class instead of the two separately in a tuple as before. (Leaving the default container here as dict(), but that could be changed to Alex's WeakValueDictionary, which is of course very cool.)
def make_item_collection(container = None):
''' Create a class designed to be collected in a specific collection. '''
if container is None: container = dict()  # written this way since Python 2.4 has no conditional expressions
class CollectedItem(object):
collection = container
def __init__(self, key, title=None):
self.key = key
CollectedItem.collection[key] = self
self.title = title
def update_key(self, new_key):
CollectedItem.collection[
new_key] = CollectedItem.collection.pop(self.key)
self.key = new_key
return CollectedItem
# Usage Demo...
Item = make_item_collection()
my_collection = Item.collection
item_instance_1 = Item("unique_key1", title="foo1")
item_instance_2 = Item("unique_key2", title="foo2")
item_instance_3 = Item("unique_key3", title="foo3")
for k,v in my_collection.iteritems():
print k, v.title
item_instance_1.update_key("new_unique_key")
print '****'
for k,v in my_collection.iteritems():
print k, v.title
And here's the output in Python 2.5.2:
unique_key1 foo1
unique_key2 foo2
unique_key3 foo3
****
new_unique_key foo1
unique_key2 foo2
unique_key3 foo3
|
Python data structure for a collection of objects with random access based on an attribute
|
I need a collection of objects which can be looked up by a certain (unique) attribute common to each of the objects. Right now I am using a dictionary, assigning the attribute's value as the dictionary key.
Here is an example of what I have now:
class Item():
def __init__(self, uniq_key, title=None):
self.key = uniq_key
self.title = title
item_instance_1 = Item("unique_key1", title="foo")
item_instance_2 = Item("unique_key3", title="foo")
item_instance_3 = Item("unique_key2", title="foo")
item_collection = {
item_instance_1.key: item_instance_1,
item_instance_2.key: item_instance_2,
item_instance_3.key: item_instance_3
}
item_instance_1.key = "new_key"
Now this seems a rather cumbersome solution, as the key is not a reference to the attribute but takes the value of the key-attribute on assignment, meaning that:
the keys of the dictionary duplicate information already present in form of the object attribute and
when the object attribute is changed the dictionary key is not updated.
Using a list and iterating through the objects seems even more inefficient.
So, is there more fitting data structure than dict for this particular case, a collection of objects giving me random access based on a certain object attribute?
This would need to work with Python 2.4 as that's what I am stuck with (at work).
If it hasn't been obvious, I'm new to Python.
|
[
"There is actually no duplication of information as you fear: the dict's key, and the object's .key attribute, are just two references to exactly the same object.\nThe only real problem is \"what if the .key gets reassigned\". Well then, clearly you must use a property that updates all the relevant dicts as well as the instance's attribute; so each object must know all the dicts in which it may be enregistered. Ideally one would want to use weak references for the purpose, to avoid circular dependencies, but, alas, you can't take a weakref.ref (or proxy) to a dict. So, I'm using normal references here, instead (the alternative is not to use dict instances but e.g. some special subclass -- not handy).\ndef enregister(d, obj):\n obj.ds.append(d)\n d[obj.key] = obj\n\nclass Item(object):\n def __init__(self, uniq_key, title=None):\n self._key = uniq_key\n self.title = title\n self.ds = []\n\n def adjust_key(self, newkey):\n newds = [d for d in self.ds if self._key in d]\n for d in newds:\n del d[self._key]\n d[newkey] = self\n self.ds = newds\n self._key = newkey\n\n def get_key(self):\n return self._key\n\n key = property(get_key, adjust_key)\n\nEdit: if you want a single collection with ALL the instances of Item, that's even easier, as you can make the collection a class-level attribute; indeed it can be a WeakValueDictionary to avoid erroneously keeping items alive, if that's what you need. I.e.:\nclass Item(object):\n\n all = weakref.WeakValueDictionary()\n\n def __init__(self, uniq_key, title=None):\n self._key = uniq_key\n self.title = title\n # here, if needed, you could check that the key\n # is not ALREADY present in self.all\n self.all[self._key] = self\n\n def adjust_key(self, newkey):\n # \"key non-uniqueness\" could be checked here too\n del self.all[self._key]\n self.all[newkey] = self\n self._key = newkey\n\n def get_key(self):\n return self._key\n\n key = property(get_key, adjust_key)\n\nNow you can use Item.all['akey'], Item.all.get('akey'), for akey in Item.all:, and so forth -- all the rich functionality of dicts.\n",
"There are a number of great things you can do here. One example would be to let the class keep track of everything:\nclass Item():\n _member_dict = {}\n @classmethod\n def get_by_key(cls,key):\n return cls._member_dict[key]\n def __init__(self, uniq_key, title=None):\n self.key = uniq_key\n self.__class__._member_dict[key] = self\n self.title = title\n\n>>> i = Item('foo')\n>>> i == Item.get_by_key('foo')\nTrue\n\nNote you will retain the update problem: if key changes, the _member_dict falls out of sync. This is where encapsulation will come in handy: make it (practically) impossible to change key without updating the dictionary. For a good tutorial on how to do that, see this tutorial.\n",
"Well, dict really is what you want. What may be cumbersome is not the dict itself, but the way you are building it. Here is a slight enhancement to your example, showing how to use a list expression and the dict constructor to easily create your lookup dict. This also shows how to create a multimap kind of dict, to look up matching items given a field value that might be duplicated across items:\nclass Item(object):\n def __init__(self, **kwargs):\n self.__dict__.update(kwargs)\n def __str__(self):\n return str(self.__dict__)\n def __repr__(self):\n return str(self)\n\nallitems = [\n Item(key=\"red\", title=\"foo\"),\n Item(key=\"green\", title=\"foo\"),\n Item(key=\"blue\", title=\"foofoo\"),\n ]\n\n# if fields are unique\nitemByKey = dict([(i.key,i) for i in allitems])\n\n# if field value can be duplicated across items\n# (for Python 2.5 and higher, you could use a defaultdict from \n# the collections module)\nitemsByTitle = {}\nfor i in allitems:\n if i.title in itemsByTitle:\n itemsByTitle[i.title].append(i)\n else:\n itemsByTitle[i.title] = [i]\n\n\n\nprint itemByKey[\"red\"]\nprint itemsByTitle[\"foo\"]\n\nPrints:\n{'key': 'red', 'title': 'foo'}\n[{'key': 'red', 'title': 'foo'}, {'key': 'green', 'title': 'foo'}]\n\n",
"Editing to correct the problem I had - which was due to my \"collection = dict()\" default parameter (*bonk*). Now, each call to the function will return a class with its own collection as intended - this for convenience in case more than one such collection should be needed. Also am putting the collection in the class and just returning the class instead of the two separately in a tuple as before. (Leaving the default container here as dict(), but that could be changed to Alex's WeakValueDictionary, which is of course very cool.)\ndef make_item_collection(container = None):\n ''' Create a class designed to be collected in a specific collection. '''\n container = dict() if container is None else container\n class CollectedItem(object):\n collection = container\n def __init__(self, key, title=None):\n self.key = key\n CollectedItem.collection[key] = self\n self.title = title\n def update_key(self, new_key):\n CollectedItem.collection[\n new_key] = CollectedItem.collection.pop(self.key)\n self.key = new_key\n return CollectedItem\n\n# Usage Demo...\n\nItem = make_item_collection()\nmy_collection = Item.collection\n\nitem_instance_1 = Item(\"unique_key1\", title=\"foo1\")\nitem_instance_2 = Item(\"unique_key2\", title=\"foo2\")\nitem_instance_3 = Item(\"unique_key3\", title=\"foo3\")\n\nfor k,v in my_collection.iteritems():\n print k, v.title\n\nitem_instance_1.update_key(\"new_unique_key\")\n\nprint '****'\nfor k,v in my_collection.iteritems():\n print k, v.title\n\nAnd here's the output in Python 2.5.2:\nunique_key1 foo1\nunique_key2 foo2\nunique_key3 foo3\n****\nnew_unique_key foo1\nunique_key2 foo2\nunique_key3 foo3\n\n"
] |
[
5,
2,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001383693_python.txt
|
Q:
Framework/CMS suggestions for enterprise website & intranet (I've got to convince the president it's solid!)
Dear stack overflow community,
I've been given the task of overhauling a couple of websites for a large corporation I'm working for, as well as developing an internal intranet site for content management and document storage within the organization.
My "problem" is this: They want me to use a framework/set of languages/technologies that I can prove to them are "stable, enterprise-ready technologies with a proven track record."
The spec's "big picture" really isn't too complicated: Implement an enterprise-class CMS for management of each division's web pages that deal mostly with product information and documentation (i.e. a simpler version of www.linksys.com).
As an open-source programmer, I'd like to use Python with TurboGears and build it from scratch, but I can't really find a way to prove to the president that TurboGears has a huge enterprise track record. Zope seems to have a lot of enterprise usage, but it looks a bit bloated to me. Django could be an option, but doesn't seem as flexible as TurboGears.
I'd rather not use PHP, but Drupal has a very nice resume with the "right" names under it (AOL, Sony, MTV); plus it could save me building many of the CMS components from scratch.
Rails might be another option, but I'm not too familiar with it (and as a Python/PHP programmer, Ruby's syntax drives me crazy).
What would the S.O. community suggest for a project like this? I'm sure many of you have faced the same dilemma. What ended up working/not working for you? As I said before, my first choice would be Python, second would be PHP, third would be Rails.
Thank you,
Seth
A:
This is a contradictory statement: "The spec's "big picture" really isn't too complicated: Implement an enterprise-class CMS for management of each division's web pages".
"Enterprise Class" and "isn't too complicated" do not belong in the same sentence. Seriously.
"Enterprise Class" stuff is complicated because "enterprise class" tasks and environments are complicated.
Mind, just because something is deployed within an enterprise doesn't mean it requires an "enterprise class" tool. But those that DO have "enterprise class" requirements ARE complicated because the problem domain and deployment environment are complicated.
So, you need to be more clear on your specs than "buzzword compliant", "my boss has heard of it", "never breaks", etc.
CMS seems deceptively simple, but it's not. If it's geeks managing stuff for geeks, that's one thing, but CMSs tend to have great impact on non-technical end users which can dramatically complicate user interfaces, security, workflows, support, etc. Think "marketing wants to maintain the website", and that they're going to let their junior intern do it.
So, seriously, without REAL requirements it's hard to suggest anything. And without REAL requirements, and a solid understanding of your user base, you most certainly should NOT just "roll your own".
A:
If you like Python, and you want a web framework, I wouldn't go past Django. It's simple, powerful, and runs plenty of enterprise-level sites.
A few of the bigger sites using Django are Lawrence.com, Curse Gaming and some Washington Post sites. It just went to version 1.0 recently as well, so you have a solid code base to work from.
You can always throw in a list of companies that use Python if you want to, it includes people like Google, Yahoo and NASA.
A:
If you want an enterprise CMS, you don't build it from scratch with a framework. An enterprise CMS requires the work of thousands of people, like Plone. Here's Plone in the enterprise:
http://plone.net
A:
If you're looking for a enterprise-class CMS, why implement from scratch? There's a well-established, mature, Python-based enterprise-class CMS already available called Plone.
It was recently reviewed by a major IT publication:
"Plone does one thing -- Web content management -- and does it with aplomb. That's why you'll find well-known U.S. and international organizations in most industries running their Web sites, internets, and extranets with Plone." Inforworld, "Open source CMSes prove well worth the price" Oct 2007
Out of the box, Plone provides most if not all of the features you'll need, and with hundreds of free add-ons available to cover anything else, you may not need to do any coding to get your site up and running.
It is being used by government, non-profit, education and businesses: names such as Novell, gnome.org, Discover Magazine, and thousands more. And you can be pretty certain it's secure; the CIA uses it to run its public-facing site.
The Plone community is very strong; it is one of the largest Open Source projects on the planet. There are hundreds of Plone service providers around the world to support your deployment.
You can read up on the project itself on the Plone.org website. There's also Plone.net which provides case studies and success stories, global list of service providers and more media coverage.
A:
I agree with Will's comments. Building a CMS, an intranet, and a document management system sounds like a ton of work. My company would probably spend 6 months on the requirements for one of those systems and still hand off vague/incomplete requirements.
Here are a few questions:
Who will be maintaining the CMS and Doc Management systems when you're done? The odds of the apps being a success go down if you drop a custom Python app in the midst of a bunch of salaried Java developers. I'm not saying that it can't work, just that the odds skew against it.
Are you looking for a single app/framework to create the CMS for the external sites, the CMS for the intranet, and the document management system? If so, that should narrow the field of possible CMSs considerably. For example, I don't think that Drupal handles Document Management well (if it handles it at all).
Who are the users of the systems? Will the folks using the document management system be the same ones managing the websites and intranet? Will the systems share workflow? (Will document management system content stay in its silo, or can documents migrate to the Web CMS or intranet? Are there different "approvers" in each area of the system or one set of overlords?)
Good luck!
A:
The first thing that comes to mind here is that you're approaching this all wrong. It seems like you're looking for a pet project for yourself and trying to decide what you'd like to do best. You didn't specify the scope of who is going to be managing this site, which is the real question. Is it just you? Is it the management team? Is it each division?
Making a huge decision like this takes a lot of time and thought. We spend a lot of time just helping our clients choose the right CMS for their needs. There are a lot out there and a decision like this is not something to be taken lightly. A lot are good in the right situation and HORRIBLE in others. Also, what is right for you as the developer isn't necessarily right for your end user.
As someone up there suggested, you need a lot more research into what the requirements are before anyone (including the developer community) can make any suggestions about what is best to use.
A:
I agree with Will's, braveterry's and Divamatrix's comments. Fully.
There are tons of questions/issues/risks/considerations to weigh in order to successfully launch a CMS solution for a medium/big enterprise. I will not repeat what Will and braveterry have said; instead I will offer a different point of view:
CMS for a medium-big company is not about software. It is about processes and policies.
Which framework/tool to use must depend on exact requirements: what kind of content, the sources for content, who will be responsible for capturing and creating content, what their abilities are, who will approve content updates, which departments will have a voice on what goes onto the home page, under which policies the home-page content will be selected, and what the purpose of the home page will be (marketing? sales? technical? branding?).
If the answers to these questions (there are a lot more) are not clear to you, or if you do not get why they are so important, then I think you need to contract a seasoned consulting firm.
PS: This gives me the idea to publish some sort of paper about this topic but that would take some days as I currently do not have the time to prepare it.
A:
"They want me to use a framework/set of languages/technologies that I can prove to them are "stable, enterprise-ready technologies with a proven track record.""
There's no proof of those features. None.
Is there some incumbent technology that they want you to use? If so, you might be swimming upstream.
If you're fighting for your preferred technology, you probably can't win their hearts and minds without a serious proof of concept or pilot project or something.
If they're willing to listen, they'd be more willing to listen if you had a demo that showed how rock-solid your preferred approach is.
if there is no incumbent, then they're just wringing their hands. In this case, you'll need some evidence they actually believe -- a pilot project or a proof of concept.
There's no Proof in this industry. For every technology you can find a proponent and an opponent. Even crap technology has proponents. Forget proof.
Just pick something that you can use very rapidly. Get something up and running so quickly, with such high quality that you're obviously right and the rest of your opinions must be equally right.
For this reason, flexibility has no value. Go with Django and get something to run ASAP.
A:
You'd like to build an Enterprise Class CMS from scratch? Just for one project? Are you crazy? Unless you plan to go into the CMS business and have thousands and thousands of hours of development time, there is absolutely no point in creating a new one. There are excellent CMSs already out there. Drupal and Plone are the best in my opinion. I like Plone because it's delightful to use. It's used by the CIA, NASA, Akamai, Novell and eBay.
Best wishes,
Tony
A:
"Enterprise" is a marketing term. It has pretty much zero technical meaning. If your boss wants to hear Enterprise, then he will, but this won't mean that a given system is suitable for your needs.
Beware of lists of companies that use a given suite of software. "Ebay uses Plone", and "Ebay runs on Plone" are two very different statements.
Mostly, if you're doing "Enterprise" CMS (for whatever that term is worth) you should expect to have a learning curve that will only just begin to flatten out by the end of a significant project.
For your project, I'd suggest you try to figure out what you really need. If you think TurboGears (or any other framework) is a good fit, discuss some risk management strategies with your boss. Maybe a small pilot to start with. Adopting a new technology is risky. Many "large corporation" web sites are mission-critical these days.
For what it's worth. I like Plone, but I've only ever used it for non-corporate stuff. I don't personally know of any "Enterprise" implementations. At work I use Tridion, and I know of numerous implementations at that level. (If you're looking for a choice that will let you work in Python, Tridion isn't a good fit.)
A:
No matter what you choose, don't use Typo3. It is a huge, unhackable mess with its own idiotic template "script" language, near impossible to learn quickly, hard to teach to your enterprise users, and damn ugly. No wonder there are shops that earn a living just doing Typo3 consulting. It is somewhat popular, but don't expect any decent documentation.
A:
Seth, if you really want an E-CMS, don't try to reinvent the wheel. There are plenty of tested E-CMSs around, for example some Zope/Python-based solutions like Plone. It is enterprise-tested, easy to use, and extremely extensible (as you have a complete application server in the backend), and there are books around explaining it for authors/editors, webmasters and developers. Evolve it where it doesn't fit. If you need more info, ask on IRC (OPN/freenode, #plone), or if one of the 59 World Plone Day [1] locations is not too far away, go there on November 7th 2008 and get in touch with Plone and its huge and helpful community. [1] http://plone.org/wpd
A:
CMS for a medium-big company is not about software. It is about processes and policies.
Very true!
Association with prestigious names is not necessarily an indicator of pleasing end results.
I like Sony products, yes, but on the various occasions on which I have sought support from Sony sites I have felt like banging my head against a brick wall! Those head-cracking sites may not have been Drupal-oriented, I have no idea, but the point is: don't be sucked in by big names alone.
An issue you should expect is: preconceptions of what may be achieved (or constrained) by a system.
Allow yourself some learning time with Plone — ideally, for a large project such as this, invest in expert advice — and you'll realise that traditional-ish ideas of what a system can or should accomplish are mostly exceeded by Plone's capabilities.
Gauge user requirements with a very open mind (not based on simplicities such as "I'd like a system that's equal to system x") then come to plone.org | Support | Chat Room to further discuss your requirements.
A:
Keep an eye on Flossquality - Open source quality research
http://flossquality.eu/
Concerning Flossquality and the three quality-related projects under that heading, at http://n2.nabble.com/Plone-and-QUALOSS---QUALity-in-Open-Source-Software-tp1402419p1446439.html I imagined some questions that people in the open source communities (not just in Plone) might ask about the whole caboodle.
Very recently I received, off-list, some responses to those questions. As soon as I find time to read the relevant e-mails I'll aim to either share or at least abstract the responses.
|
Framework/CMS suggestions for enterprise website & intranet (I've got to convince the president it's solid!)
|
Dear stack overflow community,
I've been given the task of overhauling a couple of websites for a large corporation I'm working for, as well as developing an internal intranet site for content management and document storage within the organization.
My "problem" is this: They want me to use a framework/set of languages/technologies that I can prove to them are "stable, enterprise-ready technologies with a proven track record."
The spec's "big picture" really isn't too complicated: Implement an enterprise-class CMS for management of each division's web pages that deal mostly with product information and documentation (i.e. a simpler version of www.linksys.com).
As an open-source programmer, I'd like to use Python with TurboGears and build it from scratch, but I can't really find a way to prove to the president that TurboGears has a huge enterprise track record. Zope seems to have a lot of enterprise usage, but it looks a bit bloated to me. Django could be an option, but doesn't seem as flexible as TurboGears.
I'd rather not use PHP, but Drupal has a very nice resume with the "right" names under it (AOL, Sony, MTV); plus it could save me building many of the CMS components from scratch.
Rails might be another option, but I'm not too familiar with it (and as a Python/PHP programmer, Ruby's syntax drives me crazy).
What would the S.O. community suggest for a project like this? I'm sure many of you have faced the same dilemma. What ended up working/not working for you? As I said before, my first choice would be Python, second would be PHP, third would be Rails.
Thank you,
Seth
|
[
"This is a contradictory statement: \"The spec's \"big picture\" really isn't too complicated: Implement an enterprise-class CMS for management of each division's web pages\".\n\"Enterprise Class\" and \"isn't too complicated\" do not belong in the same sentence. Seriously.\n\"Enterprise Class\" stuff is complicated because \"enterprise class\" tasks and environments are complicated.\nMind, just because something is deployed within an enterprise doesn't mean it requires an \"enterprise class\" tool. But those that DO have \"enterprise class\" requirements ARE complicated because the problem domain and deployment environment are complicated.\nSo, you need to be more clear on your specs than \"buzzword compliant\", \"my boss has heard of it\", \"never breaks\", etc.\nCMS seems deceptively simple, but it's not. If it's geeks managing stuff for geeks, that's one thing, but CMSs tend to have great impact on non-technical end users which can dramatically complicate user interfaces, security, workflows, support, etc. Think \"marketing wants to maintain the website\", and that they're going to let their junior intern do it.\nSo, seriously, without REAL requirements it's hard to suggest anything. And without REAL requirements, and a solid understanding of your user base, you most certainly should NOT just \"roll your own\".\n",
"If you like Python, and you want a web framework, I wouldn't go past Django. It's simple, powerful, and runs plenty of enterprise-level sites.\nA few of the bigger sites using Django are Lawrence.com, Curse Gaming and some Washington Post sites. It just went to version 1.0 recently as well, so you have a solid code base to work from.\nYou can always throw in a list of companies that use Python if you want to, it includes people like Google, Yahoo and NASA.\n",
"If you want an enterprise CMS, you don't build it from scratch with a framework. An enterprise CMS requires the work of thousands of people, like Plone. Here's Plone in the enterprise:\nhttp://plone.net\n",
"If you're looking for a enterprise-class CMS, why implement from scratch? There's a well-established, mature, Python-based enterprise-class CMS already available called Plone.\nIt was recently reviewed by a major IT publication:\n\"Plone does one thing -- Web content management -- and does it with aplomb. That's why you'll find well-known U.S. and international organizations in most industries running their Web sites, internets, and extranets with Plone.\" Inforworld, \"Open source CMSes prove well worth the price\" Oct 2007\nOut of the box Plone provides most if not all of the features you'll need, and with hundreds of free add-ons available to implement any other features you'll need you may not need to do any coding to get your site up and running.\nIt is being used by government, non-profit, education and businesses. Names such as Novell, gnome.org, Discover Magazine, and thousands more. And you can be pretty certain its secure, the CIA is using it to run its public facing site.\nThe Plone community is very strong, it is one of the largest Open Source projects in the planet. There are hundreds of Plone service providers around the world to provide support for your deployment.\nYou can read up on the project itself on the Plone.org website. There's also Plone.net which provides case studies and success stories, global list of service providers and more media coverage.\n",
"I agree with Will's comments. Building a CMS, an intranet, and a document management system sounds like a ton of work. My company would probably spend 6 months on the requirements for one of those systems and still hand off vague/incomplete requirements.\nHere are a few questions:\n\nWho will be maintaining the CMS and\nDoc Management systems when you're\ndone? The odds of the apps being a\nsuccess go down if you drop a custom\nPython app in the midst of a bunch\nof salaried Java developers. I'm\nnot saying that it can't work, just\nthat the odds skew against it.\nAre you looking for a single\napp/framework to create the CMS for the\nexternal sites, the CMS for the\nintranet, and the document\nmanagement system? If so, that\nshould narrow the field of possible\nCMSs considerably. For example, I\ndon't think that Drupal handles\nDocument Management well (if it\nhandles it at all.)\nWho are the users of the systems? \nWill the folks using the document\nmanagement system be the same ones\nmanaging the websites and intranet?\nWill the systems share workflow?\n(Will document management system\ncontent stay in its silo or can documents\nmigrate to the Web CMS or intranet? \nAre there different \"approvers\" in\neach area of the system or one set of overlords?)\n\nGood luck!\n",
"The first thing that comes to mind here is that you're approaching this all wrong. It seems like you're looking for a pet project for yourself and trying to decide what you'd like to do best. You didn't specify the scope of who is going to be managing this site.. which is the real question. Is it just you? Is it the management team? Is it each division? \nMaking a huge decision like this takes a lot of time and thought. We spend a lot of time just helping our clients choose the right CMS for their needs. There are a lot out there and a decision like this is not something to be taken lightly. A lot are good in the right situation and HORRIBLE in others. Also, what is right for you as the developer isn't necessarily right for your end user.\nAs someone up there suggested, you need a lot more research into what the requirements are before anyone (including the developer community) can make any suggestions about what is best to use. \n",
"I agree with Will's, braveterry's and Divamatrix's comments. Fully.\nThere are tons of questions/issues/risks/considerations to take in order to succesfully launch a CMS solution for a medium/big enterprise. I will not repeat what Will and braveterry had said, instead of it I will offer a different point of view:\nCMS for a medium-big company is not about Software. It is about proccesses and policies.\nWhich framework/tool to use must be dependent on exact requirements (kind of content, sources for content, who will be responsible to capture and create content, what are their abilities, who will be approving content updates, which departments will have a voice on what goes into the home page?, under which policies will be selected the content for the home page?, what will be the puropose for the home page? (marketing? sales? technical? branding?).\nIf the answers to these questions (there are lot more) are not clear to you or even if you do not get why are SO important. Then I think you need to contract a seasoned consulting firm.\nPS: This gives me the idea to publish some sort of paper about this topic but that would take some days as I currently do not have the time to prepare it.\n",
"\"They want me to use a framework/set of languages/technologies that I can prove to them are \"stable, enterprise-ready technologies with a proven track record.\"\"\nThere's no proof of those features. None.\nIs there some incumbent technology that they want you to use? If so, you might be swimming upstream.\n\nIf you're fighting for your preferred technology, you probably can't win their hearts and minds without a serious proof of concept or pilot project or something.\nIf they're willing to listen, they'd be more willing to listen if you had a demo that showed how rock-solid your preferred approach is.\n\nif there is no incumbent, then they're just wringing their hands. In this case, you'll need some evidence they actually believe -- a pilot project or a proof of concept.\nThere's no Proof in this industry. For every technology you can find a proponent and an opponent. Even crap technology has proponents. Forget proof. \nJust pick something that you can use very rapidly. Get something up and running so quickly, with such high quality that you're obviously right and the rest of your opinions must be equally right.\nFor this reason, flexibility has no value. Go with Django and get something to run ASAP.\n",
"You'd like to build an Enterprise Class CMS from scratch? Just for one project? Are you crazy? Unless plan to go into the CMS business and have thousands and thousands of hours of development time there absolutely is no point to create a new one. There are excellent CMS's already out there. Drupal and Plone are the best in my opinion. I like Plone because its delightful to use. It's used by CIA, NASA, Akami, Novell and Ebay. \nBest wishes, \nTony\n",
"\"Enterprise\" is a marketing term. It has pretty much zero technical meaning. If your boss wants to hear Enterprise, then he will, but this won't mean that a given system is suitable for your needs. \nBeware of lists of companies that use a given suite of software. \"Ebay uses Plone\", and \"Ebay runs on Plone\" are two very different statements. \nMostly, if you're doing \"Enterprise\" CMS (for whatever that term is worth) you should expect to have a learning curve that will only just begin to flatten out by the end of a significant project. \nFor your project, I'd suggest you try to figure out what you really need. If you think TurboGears (or any other framework) is a good fit, discuss some risk management strategies with your boss. Maybe a small pilot to start with. Adopting a new technology is risky. Many \"large corporation\" web sites are mission-critical these days. \nFor what it's worth. I like Plone, but I've only ever used it for non-corporate stuff. I don't personally know of any \"Enterprise\" implementations. At work I use Tridion, and I know of numerous implementations at that level. (If you're looking for a choice that will let you work in Python, Tridion isn't a good fit.)\n",
"No matter what you choose, don't use Typo3. It is a huge unhackable mess with its own idiotic template \"script\" language, near impossible to learn quickly, hard to teach to your enterprise users and damn ugly. No wonder there are shops which earn a living just doing Typo3 consulting. It is somewhat popular but don't think there is any decent documentation.\n",
"Seth, if you really want a E-CMS, dont try to reinvent the wheel. There are plenty tested E-CMS around. For example some Zope/Python based solutions like Plone. It is Enterprise tested, so easy to use, extremly extensible (as you have a complete applicationserver in the backend), there are books around explaining it for authors/editors, webmasters and developers. Evolve it where it doesnt fit. If you need more info ask at IRC (OPN/freenode, #plone) or if one of the 59 World Plone Day [1] locations is not too far away go there at November 7th 2008 and get in touch with Plone and its huge and helpful community. [1] http://plone.org/wpd\n",
"\nCMS for a medium-big company is not about Software. It is about proccesses and policies.\n\nVery true! \nAssociation with prestigious names is not necessarily an indicator of pleasing end results. \nI like Sony products, yes, but on the various occasions on which I have sought support from Sony sites I have felt like banging my head against a brick wall! Those head-cracking sites may not have been Drupal-oriented, I have no idea, but the point is: don't be sucked in by big names alone.\nAn issue you should expect is: preconceptions of what may be achieved (or constrained) by a system. \nAllow yourself some learning time with Plone — ideally, for a large project such as this, invest in expert advice — and you'll realise that traditional-ish ideas of what a system can or should accomplish are mostly exceeded by Plone's capabilities. \nGauge user requirements with a very open mind (not based on simplicities such as \"I'd like a system that's equal to system x\") then come to plone.org | Support | Chat Room to further discuss your requirements.\n",
"Keep an eye on Flossquality - Open source quality research\nhttp://flossquality.eu/\nConcerning Flossquality and the three quality-related projects under that heading, at http://n2.nabble.com/Plone-and-QUALOSS---QUALity-in-Open-Source-Software-tp1402419p1446439.html I imagined some questions that people in the open source communities (not just in Plone) might ask about the whole caboodle. \nVery recently I received, off-list, some responses to those questions. As soon as I find time to read the relevant e-mails I'll aim to either share or at least abstract the responses. \n"
] |
[
9,
8,
5,
4,
3,
3,
3,
2,
2,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"content_management_system",
"enterprise",
"frameworks",
"python"
] |
stackoverflow_0000241575_content_management_system_enterprise_frameworks_python.txt
|
Q:
OS X - multiple python versions, PATH and /usr/local
If you install multiple versions of python (I currently have the default 2.5, installed 3.0.1 and now installed 2.6.2), it automatically puts stuff in /usr/local, and it also adjusts the path to include /Library/Frameworks/Python/Versions/theVersion/bin, but what's the point of that when /usr/local is already on the PATH, and all installed versions (except the default 2.5, which is in /usr/bin) are in there? I removed the python framework paths from my PATH in .bash_profile, and I can still type "python -V" => "Python 2.5.1", "python2.6 -V" => "Python 2.6.2", "python3 -V" => "Python 3.0.1". Just wondering why it puts it in /usr/local, and also changes the PATH. And is what I did fine? Thanks.
Also, the 2.6 installation made it the 'current' one, having .../Python.framework/Versions/Current point to 2.6. So plain 'python' things in /usr/local/bin point to 2.6, but it doesn't matter because /usr/bin comes first and things with the same name in there point to 2.5 stuff. Anyway, 2.5 comes with Leopard, I installed 3.0.1 just to have the latest version (that has a dmg file), and now I installed 2.6.2 for use with pygame.
EDIT: OK, here's how I understand it. When you install, say, Python 2.6.2:
A bunch of symlinks are added to /usr/local/bin, so a script with a #!/usr/local/bin/python shebang will run; in /Applications/Python 2.6, Python Launcher is made the default application for .py files, and it uses /usr/local/bin/pythonw; and /Library/Frameworks/Python.framework/Versions/2.6/bin is created and added to the front of the PATH, so which python finds the python in there and #!/usr/bin/env python shebangs will run correctly.
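A quick way to check how each lookup resolves on a given machine is to ask the interpreter itself. This is only a diagnostic sketch; nothing in it is specific to the installer:
import os
import sys

# the interpreter actually executing this script, and its version
print(sys.executable)
print(sys.version.split()[0])

# walk PATH the way "#!/usr/bin/env python" would, stopping at the first hit
for d in os.environ["PATH"].split(os.pathsep):
    candidate = os.path.join(d, "python")
    if os.path.isfile(candidate):
        print("env would pick: " + candidate)
        break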
A:
There's no a priori guarantee that /usr/local/bin will stay on the PATH (especially it will not necessarily stay "in front of" /usr/bin!-), so it's perfectly reasonable for an installer to ensure the specifically needed /Library/.../bin directory does get on the PATH. Plus, it may be the case that the /Library/.../bin has supplementary stuff that doesn't get symlinked into /usr/local/bin, although I believe that's not currently the case with recent Mac standard distributions of Python.
If you know that the way you'll arrange your path, and the exact set of executables you'll be using, are entirely satisfied from /usr/local/bin, then it's quite OK for you to remove the /Library/etc directories from your own path, of course.
A:
I just noticed/encountered this issue on my Mac. I have Python 2.5.4, 2.6.2, and 3.1.1 on my machine, and was looking for a way to easily change between them at will. That is when I noticed all the symlinks for the executables, which I found in both '/usr/bin' and '/usr/local/bin'. I ripped all the non-version-specific symlinks out, leaving python2.5, python2.6, etc., and wrote a bash shell script that I can run as root to change the one symlink I use to direct the path to the version of my choice:
'/Library/Frameworks/Python.framework/Versions/Current'
The only bad thing about ripping the symlinks out, is if some other application needed them for some reason. My opinion as to why these symlinks are created is similar to Alex's assessment, the installer is trying to cover all of the bases. All of my versions have been installed by an installer, though I've been trying to compile my own to enable full 64-bit support, and when compiling and installing your own you can choose to not have the symlinks created or the PATH modified during installation.
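For reference, the symlink flip that bash script performs takes only a few lines of Python. This is a sketch assuming the standard python.org framework layout, and it must run as root:
import os

VERSIONS = "/Library/Frameworks/Python.framework/Versions"

def switch_current(version):
    # refuse to point "Current" at a version that isn't installed
    target = os.path.join(VERSIONS, version)
    if not os.path.isdir(target):
        raise ValueError("no such version installed: " + version)
    current = os.path.join(VERSIONS, "Current")
    if os.path.islink(current):
        os.remove(current)
    # create a relative link, matching what the installer makes
    os.symlink(version, current)

switch_current("2.6")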
|
OS X - multiple python versions, PATH and /usr/local
|
If you install multiple versions of python (I currently have the default 2.5, installed 3.0.1 and now installed 2.6.2), it automatically puts stuff in /usr/local, and it also adjusts the path to include /Library/Frameworks/Python/Versions/theVersion/bin, but what's the point of that when /usr/local is already on the PATH, and all installed versions (except the default 2.5, which is in /usr/bin) are in there? I removed the python framework paths from my PATH in .bash_profile, and I can still type "python -V" => "Python 2.5.1", "python2.6 -V" => "Python 2.6.2", "python3 -V" => "Python 3.0.1". Just wondering why it puts it in /usr/local, and also changes the PATH. And is what I did fine? Thanks.
Also, the 2.6 installation made it the 'current' one, having .../Python.framework/Versions/Current point to 2.6. So plain 'python' things in /usr/local/bin point to 2.6, but it doesn't matter because /usr/bin comes first and things with the same name in there point to 2.5 stuff. Anyway, 2.5 comes with Leopard, I installed 3.0.1 just to have the latest version (that has a dmg file), and now I installed 2.6.2 for use with pygame.
EDIT: OK, here's how I understand it. When you install, say, Python 2.6.2:
A bunch of symlinks are added to /usr/local/bin, so a script with a #!/usr/local/bin/python shebang will run; in /Applications/Python 2.6, Python Launcher is made the default application for .py files, and it uses /usr/local/bin/pythonw; and /Library/Frameworks/Python.framework/Versions/2.6/bin is created and added to the front of the PATH, so which python finds the python in there and #!/usr/bin/env python shebangs will run correctly.
|
[
"There's no a priori guarantee that /usr/local/bin will stay on the PATH (especially it will not necessarily stay \"in front of\" /usr/bin!-), so it's perfectly reasonable for an installer to ensure the specifically needed /Library/.../bin directory does get on the PATH. Plus, it may be the case that the /Library/.../bin has supplementary stuff that doesn't get symlinked into /usr/local/bin, although I believe that's not currently the case with recent Mac standard distributions of Python.\nIf you know that the way you'll arrange your path, and the exact set of executables you'll be using, are entirely satisfied from /usr/local/bin, then it's quite OK for you to remove the /Library/etc directories from your own path, of course.\n",
"I just noticed/encountered this issue on my Mac. I have Python 2.5.4, 2.6.2, and 3.1.1 on my machine, and was looking for a way to easily change between them at will. That is when I noticed all the symlinks for the executables, which I found in both '/usr/bin' and '/usr/local/bin'. I ripped all the non-version specific symlinks out, leaving python2.5, python2.6, etc, and wrote a bash shell script that I can run as root to change one symlink I use to direct the path to the version of my choice\n'/Library/Frameworks/Python.framework/Versions/Current'\nThe only bad thing about ripping the symlinks out, is if some other application needed them for some reason. My opinion as to why these symlinks are created is similar to Alex's assessment, the installer is trying to cover all of the bases. All of my versions have been installed by an installer, though I've been trying to compile my own to enable full 64-bit support, and when compiling and installing your own you can choose to not have the symlinks created or the PATH modified during installation.\n"
] |
[
5,
0
] |
[] |
[] |
[
"macos",
"multiple_versions",
"path",
"python"
] |
stackoverflow_0001383863_macos_multiple_versions_path_python.txt
|
Q:
Locking PC in Python on Ubuntu
I'm writing an application that locks the PC using PyGTK, but I have a problem: when I click the OK button, the handler should get the time from the textbox, hide the window, sleep for a while, and finally lock the PC using a bash command. But the window just doesn't hide.
and here is the complete program
A:
Provided you are using Gnome on Ubuntu
import os
os.system('gnome-screensaver-command --lock')
A:
Is there any reason for the main class to be a thread? I would make it just a normal class, which would be a lot easier to debug. The reason it's not working is that all GTK-related stuff must happen in the GTK thread, so make all widget method calls like this: gobject.idle_add(widget.method_name). So to hide the password window: gobject.idle_add(self.pwdWindow.hide)
You'll have to import gobject first of course (You might need to install it first).
EDIT: I don't think that that was your problem, either way I edited your program a lot, here is the modified code.
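To make the pattern concrete, here is a minimal sketch of a worker thread hiding a window through the GTK main loop. It assumes the PyGTK-era gtk and gobject modules; the window and delay are illustrative:
import threading
import time

import gobject
import gtk

gobject.threads_init()  # required before mixing Python threads with PyGTK

def hide_later(window, delay):
    # runs off the GTK thread, so never call widget methods directly here
    time.sleep(delay)
    gobject.idle_add(window.hide)  # hand the widget call back to the GTK loop

win = gtk.Window()
win.show_all()
threading.Thread(target=hide_later, args=(win, 2.0)).start()
gtk.main()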
|
Locking PC in Python on Ubuntu
|
I'm writing an application that locks the PC using PyGTK, but I have a problem: when I click the OK button, the handler should get the time from the textbox, hide the window, sleep for a while, and finally lock the PC using a bash command. But the window just doesn't hide.
and here is the complete program
|
[
"Provided you are using Gnome on Ubuntu \nimport os\n\nos.system('gnome-screensaver-command –-lock')\n\n",
"Is there any reason for the main class to be a thread? I would make it just a normal class, which would be a lot easier to debug. The reason its not working is that all gtk related stuff must happen in the gtk thread, so do all widget method calls like this: gobject.idle_add(widget.method_name). So to hide the password window: gobject.idle_add(self.pwdWindow.hide)\nYou'll have to import gobject first of course (You might need to install it first).\nEDIT: I don't think that that was your problem, either way I edited your program a lot, here is the modified code.\n"
] |
[
3,
1
] |
[] |
[] |
[
"pygtk",
"python"
] |
stackoverflow_0001376232_pygtk_python.txt
|
Q:
Serving file download with python
Hey gang, I'm trying to convert a legacy php script over to python and not having much luck.
The intent of the script is to serve up a file while concealing its origin. Here's what's working in php:
<?php
$filepath = "foo.mp3";
$filesize = filesize($filepath);
header("Pragma: no-cache");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
// force download dialog
//header("Content-Type: application/force-download");
header("Content-Type: application/octet-stream");
header("Content-Type: application/download");
header('Content-Disposition: attachment;filename="'.$filepath.'"');
header("Content-Transfer-Encoding: binary");
#header('Content-Type: audio/mpeg3');
header('Content-Length: '.$filesize);
@readfile($filepath);
exit(0);
?>
When I do the equivalent in Python, I get a download that is zero bytes. Here's what I'm trying:
#!/usr/bin/env python
# encoding: utf-8
import sys
import os
import cgitb; cgitb.enable()
filepath = "foo.mp3"
filesize = os.path.getsize(filepath)
print "Prama: no-cache"
print "Expires: 0"
print "Cache-Control: must-revalidate, post-check=0, pre-check=0"
print "Content-Type: application/octet-stream"
print "Content-Type: application/download"
print 'Content-Disposition: attachment;filename="'+filepath+'"'
print "Content-Transfer-Encoding: binary"
print 'Content-Length: '+str(filesize)
print #required blank line
open(filepath,"rb").read()
Can anyone please help me?
A:
Well, maybe it's just me missing something, but... You are actually not writing the contents of the file to stdout. You are just reading it into memory, so it will never show up on the other side of the TCP connection...
Try:
sys.stdout.write(open(filepath,"rb").read())
sys.stdout.flush()
Depending on the file size, it might be better to read the file in chunks, like so:
chunk_size = 4096
handle = open(filepath, "rb")
while True:
buffer = handle.read(chunk_size)
if buffer:
sys.stdout.write(buffer)
else:
break
Another thing to be aware of: writing binary data to stdout may cause Python to choke due to encoding issues. This depends on the Python version you are using.
A:
I don't know if this is the only issue but the python print statement terminates lines with "\n" whereas HTTP headers need to be terminated with "\r\n"
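To illustrate, here is a minimal sketch of emitting headers with explicit CRLF terminators instead of print; the header names are the ones from the question, and the length is a placeholder:
import sys

def send_header(name, value):
    # HTTP wants each header line terminated by CRLF, not print's bare "\n"
    sys.stdout.write("%s: %s\r\n" % (name, value))

send_header("Content-Type", "application/octet-stream")
send_header("Content-Length", "1234")  # hypothetical size
sys.stdout.write("\r\n")  # a lone CRLF ends the header block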
|
Serving file download with python
|
Hey gang, I'm trying to convert a legacy php script over to python and not having much luck.
The intent of the script is to serve up a file while concealing its origin. Here's what's working in php:
<?php
$filepath = "foo.mp3";
$filesize = filesize($filepath);
header("Pragma: no-cache");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
// force download dialog
//header("Content-Type: application/force-download");
header("Content-Type: application/octet-stream");
header("Content-Type: application/download");
header('Content-Disposition: attachment;filename="'.$filepath.'"');
header("Content-Transfer-Encoding: binary");
#header('Content-Type: audio/mpeg3');
header('Content-Length: '.$filesize);
@readfile($filepath);
exit(0);
?>
When I do the equivalent in Python, I get a download that is zero bytes. Here's what I'm trying:
#!/usr/bin/env python
# encoding: utf-8
import sys
import os
import cgitb; cgitb.enable()
filepath = "foo.mp3"
filesize = os.path.getsize(filepath)
print "Prama: no-cache"
print "Expires: 0"
print "Cache-Control: must-revalidate, post-check=0, pre-check=0"
print "Content-Type: application/octet-stream"
print "Content-Type: application/download"
print 'Content-Disposition: attachment;filename="'+filepath+'"'
print "Content-Transfer-Encoding: binary"
print 'Content-Length: '+str(filesize)
print #required blank line
open(filepath,"rb").read()
Can anyone please help me?
|
[
"Well, maybe it's just me missing something, but... You are actually not writing the contents of the file to stdout. You are just reading it into memory, so it will never show up on the other side of the TCP connection...\nTry:\nsys.stdout.write(open(filepath,\"rb\").read())\nsys.stdout.flush()\n\nDepending on the file size, it might be better to read the file in chunks, like so:\nchunk_size = 4096\nhandle = open(filepath, \"rb\")\n\nwhile True:\n buffer = handle.read(chunk_size)\n if buffer:\n sys.stdout.write(buffer)\n else:\n break\n\nAnother thing to be aware of: writing binary data to stdout may cause Python to choke due to encoding issues. This depends on the Python version you are using.\n",
"I don't know if this is the only issue but the python print statement terminates lines with \"\\n\" whereas HTTP headers need to be terminated with \"\\r\\n\"\n"
] |
[
5,
1
] |
[
"You should check out urllib to set and work with headers. Here's a small example that does this.\n"
] |
[
-1
] |
[
"binary",
"php",
"python"
] |
stackoverflow_0001384320_binary_php_python.txt
|
Q:
python variable scope issue
I am stuck on scope resolution in Python.
Let me explain the code first:
class serv_db:
def __init__(self, db):
self.db = db
self.dbc = self.db.cursor()
def menudisp (self):
print"Welcome to Tata Motors"
print"Please select one of the options to continue:"
print"1. Insert Car Info"
print"2. Display Car Info"
print"3. Update Car Info"
print"4. Exit"
menu_choice = raw_input("Enter what you want to do: ")
if menu_choice==1: additem()
elif menu_choice==2: getitem()
elif menu_choice==3: edititem()
elif menu_choice==4: sys.exit()
def additem (self):
reg = raw_input("\n\nTo continue, please enter the Registration # of car: ")
print"There are 3 books in our database:"
print"1. Job Card"
print"2. Car"
print"3. Customer"
ch = raw_input("\nEnter your choice: ")
if ch==1: adnewjob()
elif ch==2: adnewcar(self, reg)
elif ch==3: adnewcust()
def adnewcar ( self, reg ):
print "adding info to database: car"
carreg = reg #error here
mftr = raw_input("Enter the Manufacturer of your car: ")
model = raw_input("Enter the Model of your car: ")
car_tb = (carreg,mftr,model)
#writing to DB
self.dbc.execute("insert into car(reg, mftr, model) values(%s,%s,%s)", car_tb)
def main():
db = MySQLdb.connect(user="root", passwd="", db="tatamotors")
service = serv_db(db)
service.menudisp()
if __name__ == '__main__':
main()
I am inputting a registration number into the variable reg; based upon the user's choice, one of three functions is performed. I haven't yet created the adnewjob() and adnewcust() functions, but adnewcar() is ready. When I try to pass the value down to the adnewcar() function, it gives an error saying:
This is the entire traceback:
Traceback (most recent call last):
  File "tatamotors.py", line 5, in <module>
    class serv_db:
  File "tatamotors.py", line 38, in serv_db
    carreg = reg
NameError: name 'reg' is not defined
I am pretty sure I am making some mistake. n00b here, go easy. Thanks :)
EDIT: I have now included all the relevant functions and classes.
A:
It's a mistake to explicitly pass self when calling a method on your class. It's another mistake to compare ch to integers, since raw_input returns a string.
Try
elif ch=='2': self.adnewcar(reg)
instead
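To see why the integer comparison silently fails, a quick check (Python 2):
ch = raw_input("Enter your choice: ")  # suppose the user types 2
print(type(ch))   # <type 'str'>
print(ch == 2)    # False: a str never compares equal to an int
print(ch == '2')  # True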
You also have a print misindented in adnewcar.
But even then, after fixing all this I cannot reproduce your NameError. You really need to edit your question with
More code (the whole class at least.)
Full traceback of the error.
EDIT: I really don't know how you even get that traceback. The code you pasted is filled with the errors I illustrate, no use of self and no use of quotes around the integer.
Per chance are you using Python 3.0? What's your environment?
For the record, this works for me, using Python 2.5.2
class serv_db:
def __init__(self, db):
self.db = db
self.dbc = self.db.cursor()
def menudisp (self):
print"Welcome to Tata Motors"
print"Please select one of the options to continue:"
print"1. Insert Car Info"
print"2. Display Car Info"
print"3. Update Car Info"
print"4. Exit"
menu_choice = raw_input("Enter what you want to do: ")
if menu_choice=='1': self.additem()
elif menu_choice=='2': self.getitem()
elif menu_choice=='3': self.edititem()
elif menu_choice=='4': sys.exit()
def additem (self):
reg = raw_input("\n\nTo continue, please enter the Registration # of car: ")
print"There are 3 books in our database:"
print"1. Job Card"
print"2. Car"
print"3. Customer"
ch = raw_input("\nEnter your choice: ")
if ch=='1': self.adnewjob()
elif ch=='2': self.adnewcar(reg)
elif ch=='3': self.adnewcust()
def adnewcar ( self, reg ):
print "adding info to database: car"
carreg = reg #error here
mftr = raw_input("Enter the Manufacturer of your car: ")
model = raw_input("Enter the Model of your car: ")
car_tb = (carreg,mftr,model)
#writing to DB
self.dbc.execute("insert into car(reg, mftr, model) values(%s,%s,%s)", car_tb)
def main():
db = MySQLdb.connect(user="root", passwd="", db="tatamotors")
service = serv_db(db)
service.menudisp()
if __name__ == '__main__':
main()
A:
You need all these three:
if menu_choice==1: self.additem() # 1: self.
elif ch=='2': self.adnewcar(reg) # 2: self. instead of (self, reg)
print "adding info to database: car" # 3: indented.
Always remember to keep indents consistent throughout a .py, otherwise the interpreter will have a hard time keeping track of your scopes.
A:
is it a copy error or do you have your indentation wrong? The (I suppose) methods align with the class definition instead of being indented one level.
Perhaps you are mixing tabs and spaces?
btw, if you define your class correctly you should be calling self.additem() instead of additem() (and the same goes for adnewcar(self, reg)); at the moment it works because it is in fact not a method, but a module-level function.
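A tiny sketch of that difference; the class and method names are made up for illustration:
class C(object):
    def hello(self):
        print("hi")

    def greet_broken(self):
        hello()  # NameError: a bare name is not looked up on the instance

    def greet(self):
        self.hello()  # attribute lookup through the instance: works

C().greet()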
|
python variable scope issue
|
I am stuck on scope resolution in Python.
Let me explain the code first:
class serv_db:
def __init__(self, db):
self.db = db
self.dbc = self.db.cursor()
def menudisp (self):
print"Welcome to Tata Motors"
print"Please select one of the options to continue:"
print"1. Insert Car Info"
print"2. Display Car Info"
print"3. Update Car Info"
print"4. Exit"
menu_choice = raw_input("Enter what you want to do: ")
if menu_choice==1: additem()
elif menu_choice==2: getitem()
elif menu_choice==3: edititem()
elif menu_choice==4: sys.exit()
def additem (self):
reg = raw_input("\n\nTo continue, please enter the Registration # of car: ")
print"There are 3 books in our database:"
print"1. Job Card"
print"2. Car"
print"3. Customer"
ch = raw_input("\nEnter your choice: ")
if ch==1: adnewjob()
elif ch==2: adnewcar(self, reg)
elif ch==3: adnewcust()
def adnewcar ( self, reg ):
print "adding info to database: car"
carreg = reg #error here
mftr = raw_input("Enter the Manufacturer of your car: ")
model = raw_input("Enter the Model of your car: ")
car_tb = (carreg,mftr,model)
#writing to DB
self.dbc.execute("insert into car(reg, mftr, model) values(%s,%s,%s)", car_tb)
def main():
db = MySQLdb.connect(user="root", passwd="", db="tatamotors")
service = serv_db(db)
service.menudisp()
if __name__ == '__main__':
main()
I am inputting a registration number into the variable reg; based upon the user's choice, one of three functions is performed. I haven't yet created the adnewjob() and adnewcust() functions, but adnewcar() is ready. When I try to pass the value down to the adnewcar() function, it gives an error saying:
This is the entire traceback:
Traceback (most recent call last):
  File "tatamotors.py", line 5, in <module>
    class serv_db:
  File "tatamotors.py", line 38, in serv_db
    carreg = reg
NameError: name 'reg' is not defined
I am pretty sure I am making some mistake. n00b here, go easy. Thanks :)
EDIT: I have now included all the relevant functions and classes.
|
[
"It's a mistake to explicitly pass self when calling a method on your class. It's another mistake comparing ch to integers, when raw_input returns a string\nTry\nelif ch=='2': self.adnewcar(reg)\n\ninstead\nYou also have a print misindented in adnewcar.\nBut even then, after fixing all this I cannot reproduce your NameError. You really need to edit your question with \n\nMore code (the whole class at least.)\nFull traceback of the error.\n\nEDIT: I really don't know how you even get that traceback. The code you pasted is filled with the errors I illustrate, no use of self and no use of quotes around the integer. \nPer chance are you using Python 3.0? What's your environment?\nFor the record, this works for me, using Python 2.5.2\nclass serv_db:\n def __init__(self, db):\n self.db = db\n self.dbc = self.db.cursor()\n\n def menudisp (self):\n print\"Welcome to Tata Motors\"\n print\"Please select one of the options to continue:\"\n print\"1. Insert Car Info\"\n print\"2. Display Car Info\"\n print\"3. Update Car Info\"\n print\"4. Exit\"\n menu_choice = raw_input(\"Enter what you want to do: \")\n if menu_choice=='1': self.additem()\n elif menu_choice=='2': self.getitem()\n elif menu_choice=='3': self.edititem()\n elif menu_choice=='4': sys.exit()\n\n def additem (self):\n reg = raw_input(\"\\n\\nTo continue, please enter the Registration # of car: \")\n print\"There are 3 books in our database:\"\n print\"1. Job Card\"\n print\"2. Car\"\n print\"3. Customer\"\n ch = raw_input(\"\\nEnter your choice: \")\n if ch=='1': self.adnewjob()\n elif ch=='2': self.adnewcar(reg)\n elif ch=='3': self.adnewcust()\n\n def adnewcar ( self, reg ):\n print \"adding info to database: car\"\n carreg = reg #error here\n mftr = raw_input(\"Enter the Manufacturer of your car: \")\n model = raw_input(\"Enter the Model of your car: \")\n car_tb = (carreg,mftr,model)\n #writing to DB\n self.dbc.execute(\"insert into car(reg, mftr, model) values(%s,%s,%s)\", car_tb)\n\ndef main():\n db = MySQLdb.connect(user=\"root\", passwd=\"\", db=\"tatamotors\")\n service = serv_db(db)\n service.menudisp()\n\nif __name__ == '__main__':\n main()\n\n",
"You need all these three:\nif menu_choice==1: self.additem() # 1: self.\nelif ch=='2': self.adnewcar(reg) # 2: self. instead of (self, reg)\n print \"adding info to database: car\" # 3: indented.\n\nAlways remember to keep indents consistent throughout a .py, otherwise the interpreter will have a hard time keeping track of your scopes.\n",
"is it a copy error or do you have your indentation wrong? The (I suppose) methods align with the class definition instead of being indented one level. \nPerhaps you are mixing tabs and spaces?\nbtw if you define your class correctly you should be calling self.additem() instead of additem() (and the same goes for adnewcar(self,reg, at the moment it works because it is in fact not a method, but a module level function.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001384301_python.txt
|
Q:
Python : Revert to base __str__ behavior
How can I revert to the default function that Python uses if there is no __str__ method?
class A :
def __str__(self) :
return "Something useless"
class B(A) :
def __str__(self) :
return some_magic_base_function(self)
A:
You can use object.__str__():
class A:
def __str__(self):
return "Something useless"
class B(A):
def __str__(self):
return object.__str__(self)
This gives you the default output for instances of B:
>>> b = B()
>>> str(b)
'<__main__.B instance at 0x7fb34c4f09e0>'
A:
"the default function that python uses if there is no __str__ method" is repr, so:
class B(A) :
def __str__(self) :
return repr(self)
This holds whether __repr__ has been overridden in the inheritance chain or not. IOW, if you ALSO need to bypass possible overrides of __repr__ (as opposed to using them if they exist, as this approach would do), you will need explicit calls to object.__repr__(self) (or to object.__str__ as another answer suggested -- same thing).
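To make the distinction visible, a small sketch (the class names are illustrative):
class A(object):
    def __repr__(self):
        return "custom repr"

class B(A):
    def __str__(self):
        return repr(self)  # honours A's override

class C(A):
    def __str__(self):
        return object.__repr__(self)  # bypasses the override

print(str(B()))  # custom repr
print(str(C()))  # <__main__.C object at 0x...>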
|
Python : Revert to base __str__ behavior
|
How can I revert to the default function that Python uses if there is no __str__ method?
class A :
def __str__(self) :
return "Something useless"
class B(A) :
def __str__(self) :
return some_magic_base_function(self)
|
[
"You can use object.__str__():\nclass A:\n def __str__(self):\n return \"Something useless\"\n\nclass B(A):\n def __str__(self):\n return object.__str__(self)\n\nThis gives you the default output for instances of B:\n>>> b = B()\n>>> str(b)\n'<__main__.B instance at 0x7fb34c4f09e0>'\n\n",
"\"the default function that python uses if there is no __str__ method\" is repr, so:\nclass B(A) :\n def __str__(self) :\n return repr(self)\n\nThis holds whether __repr__ has been overridden in the inheritance chain or not. IOW, if you ALSO need to bypass possible overrides of __repr__ (as opposed to using them if they exist, as this approach would do), you will need explicit calls to object.__repr__(self) (or to object.__str__ as another answer suggested -- same thing).\n"
] |
[
12,
2
] |
[] |
[] |
[
"python",
"string"
] |
stackoverflow_0001384542_python_string.txt
|
Q:
Show *only* docstring in Sphinx documentation?
Sphinx has a feature called automethod that extracts the documentation from a method's docstring and embeds that into the documentation. But it not only embeds the docstring, but also the method signature (name + arguments). How do I embed only the docstring (excluding the method signature)?
ref: http://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html
A:
I think what you're looking for is:
from sphinx.ext import autodoc
class DocsonlyMethodDocumenter(autodoc.MethodDocumenter):
def format_args(self):
return None
autodoc.add_documenter(DocsonlyMethodDocumenter)
per the current sources this should allow overriding what class is responsible for documenting methods (older versions of add_documenter forbade such overrides, but now they're explicitly allowed). Having format_args return None, of course, is THE documented way in autodoc to say "don't bother with the signature".
I think this is the clean, architected way to perform this task, and, as such, preferable to monkeypatching alternatives. If you need to live with some old versions of sphinx however you may indeed have to monkeypatch (autodoc.MethodDocumenter.format_args=lambda _:None -- eek!-) though I would recommend upgrading sphinx to the current version as a better approach if at all feasible in your specific deployment.
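In more recent Sphinx versions the same idea is usually wired up through conf.py's setup() hook; a minimal sketch (app.add_autodocumenter() is the newer registration call, so double-check it against your Sphinx version):
from sphinx.ext import autodoc

class DocsonlyMethodDocumenter(autodoc.MethodDocumenter):
    def format_args(self):
        # returning None suppresses the signature in the generated docs
        return None

def setup(app):
    app.add_autodocumenter(DocsonlyMethodDocumenter)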
|
Show *only* docstring in Sphinx documentation?
|
Sphinx has a feature called automethod that extracts the documentation from a method's docstring and embeds that into the documentation. But it not only embeds the docstring, but also the method signature (name + arguments). How do I embed only the docstring (excluding the method signature)?
ref: http://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html
|
[
"I think what you're looking for is:\nfrom sphinx.ext import autodoc\n\nclass DocsonlyMethodDocumenter(autodoc.MethodDocumenter):\n def format_args(self):\n return None\n\nautodoc.add_documenter(DocsonlyMethodDocumenter)\n\nper the current sources this should allow overriding what class is responsible for documenting methods (older versions of add_documenter forbade such overrides, but now they're explicitly allowed). Having format_args return None, of course, is THE documented way in autodoc to say \"don't bother with the signature\".\nI think this is the clean, architected way to perform this task, and, as such, preferable to monkeypatching alternatives. If you need to live with some old versions of sphinx however you may indeed have to monkeypatch (autodoc.MethodDocumenter.format_args=lambda _:None -- eek!-) though I would recommend upgrading sphinx to the current version as a better approach if at all feasible in your specific deployment.\n"
] |
[
17
] |
[] |
[] |
[
"autodoc",
"python",
"python_sphinx"
] |
stackoverflow_0001370283_autodoc_python_python_sphinx.txt
|
Q:
Is concurrent computing important for web development?
Let's say I have a web application running on S servers with an average of C cores each. My application is processing an average of R requests at any instant. Assuming R is around 10 times larger than S * C, won't benefits from spreading the work of a request across multiple cores be minimal since each core is processing around 10 requests already?
If I'm correct, why does this guy say concurrency is so important to the future of Python as a language for web development?
I can see many reasons why my argument would be incorrect. Perhaps the application receives a few very-difficult-to-process requests that are outnumbered by available cores. Or perhaps there is a large variation in the difficulty of requests, so it's possible for one core to be unlucky and be given 10 consecutive difficult requests, with the result being that some of them take much longer than is reasonable. Given that the guy who wrote the above essay is so much more experienced than I am, I think there's a significant chance I'm wrong about this, but I'd like to know why.
A:
In the hypothetical circumstances you design, with about 10 requests "in play" per core, as long as the request-to-core assignment is handled sensibly (probably even the simplest round-robin load balancing will do), it's just fine if each request lives throughout its lifetime on a single core.
Point is, that scenario's just ONE possibility -- heavy requests that could really benefit (in terms of lower latency) from marshaling multiple cores per request are surely an alternative possibility. I suspect that on today's web your scenario is more prevalent, but it sure would be nice to handle both kinds, AND "batch-like" background processing ones too.... especially since the number of cores (as opposed to each core's speed) is what's increasing, and what's going to keep increasing, these days.
Far be it from me to argue against Jacob Kaplan-Moss's wisdom, but I'm used to getting pretty good concurrency, at my employer, in nicer and more explicit AND transparent ways than he seems to advocate -- mapreduce for batch-like jobs, distributed-hashing based sharding for enrolling N backends to shard the work for 1 query, and the like.
Maybe I just don't have enough real-life experience with (say) Erlang, Scala, or Haskell's relatively-new software transactional memory, to see how wonderfully they scale to high utilization of tens or hundreds of thousands of cores on low-QPS, high-work-per-Q workloads... but it seems to me that the silver bullet for this scenario (net of the relatively limited subset of cases where you can turn to mapreduce, pregel, sharding, etc) has not yet been invented in ANY language. With explicit, carefully crafted architecture Python is surely no worse than Java, C# or C++ at handling such scenarios, in my working experience at least.
A:
Not anytime soon, in my estimation. The lifespan of most single web requests is well under a second. In light of this it makes little sense to split up the web request task itself; rather, distribute the web request tasks across the cores, something web servers are capable of and most already do.
A:
Caveat: I've only skimmed the "Concurrency" section, which seems to be what you're referring to. The issue seems to be (and this isn't new, of course):
Python threads don't run in parallel due to the GIL.
A system with many cores will need as many backends (in practice, you probably want at least 2xN threads).
Systems are moving towards having more cores; typical PCs have four cores, and affordable server systems with 128 or more cores probably aren't far off.
Running 256 separate Python processes means no data is shared; the entire application and any loaded data is replicated in each process, leading to massive memory waste.
The last bit is where this logic fails. Indeed, if you start 256 Python backends in the naive way, there's no data shared. However, that has no design forethought: that's the wrong way to start lots of backend processes.
The correct way is to load your entire application (all of the Python modules you depend on, etc.) in a single master process. Then that master process forks off backend processes to handle requests. These become separate processes, but standard copy-on-write memory management means that all fixed data already loaded is shared with the master. All of the code that was loaded in advance by the master is now shared among all of the workers, despite the fact that they're all separate processes.
(Of course, COW means that if you write to it, it makes a new copy of the data--but things like compiled Python bytecode should not be changed after loading.)
I don't know if there are Python-related problems which prevent this, but if so, those are implementation details to be fixed. This approach is far easier to implement than trying to eliminate the GIL. It also eliminates any chance of traditional locking and threading problems. Those aren't as bad as they are in some languages and other use cases--there's almost no interaction or locking between the threads--but they don't disappear completely and race conditions in Python are just as much of a pain to track down as they are in any other language.
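To make the preload-then-fork idea concrete, here is a minimal sketch; myapp and its handle_requests() entry point are hypothetical stand-ins for your actual application:
import os

import myapp  # hypothetical: loading it here puts it in the master process

NUM_WORKERS = 4
pids = []
for _ in range(NUM_WORKERS):
    pid = os.fork()
    if pid == 0:
        # child: everything myapp loaded above is shared copy-on-write
        myapp.handle_requests()  # hypothetical request loop
        os._exit(0)
    pids.append(pid)

# master: wait for all workers to exit
for pid in pids:
    os.waitpid(pid, 0)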
A:
One thing you're omitting is that a web request isn't a single sequential series of instructions that involve only the CPU.
A typical web request handler might need to do some computation with the CPU, then read some config data off the disk, then ask the database server for some records that have to get transferred to it over ethernet, and so on. The CPU usage might be low, but it could still take a nontrivial amount of time due to waiting on all that I/O between each step.
I think that, even with the GIL, Python can run other threads while one thread waits on I/O. (Other processes certainly can.) Still, Python threads aren't like Erlang threads: start enough of them and it'll start to hurt.
Another issue is memory. C libraries are shared between processes, but (AFAIK) Python libraries aren't. So starting up 10x as many Python processes may reduce I/O waiting, but now you've got 10 copies of each Python module loaded, per core.
I don't know how significant these are, but they do complicate things well beyond "R > 10 * S * C". There's lots still to be done in the Python world to solve them, because these aren't easy problems.
A:
In the article, he seems to single out the GIL as the cause of holding back concurrent processing in web applications in Python, which I simply don't understand. As you get larger, eventually you're going to have another server, and, GIL or not GIL, it won't matter - you have multiple machines.
If he's talking about being able to squeeze more out of a single computer, then I don't think that's as relevant, especially to large-scale distributed computing - different machines don't share a GIL. And, really, if you're going to have lots of computers in a cluster, it's better to have more mid-range servers instead of a single super server, for a lot of reasons.
If he means it as a way of better supporting functional and asynchronous approaches, then I somewhat agree, but it seems tangential to his "we need better concurrency" point. Python has it now (which he acknowledges), but, apparently, it's not good enough (all because of the GIL, naturally). To be honest, it seems more like bashing on the GIL than a justification of the importance of concurrency in web development.
One important point, with regards to concurrency and web development, is that concurrency is hard. The beauty of something like PHP is that there is no concurrency. You have a process, and you are stuck in that process. It's so simple and easy. You don't have to worry about any sort of concurrency problems - suddenly programming is much easier.
|
Is concurrent computing important for web development?
|
Let's say I have a web application running on S servers with an average of C cores each. My application is processing an average of R requests at any instant. Assuming R is around 10 times larger than S * C, won't benefits from spreading the work of a request across multiple cores be minimal since each core is processing around 10 requests already?
If I'm correct, why does this guy say concurrency is so important to the future of Python as a language for web development?
I can see many reasons why my argument would be incorrect. Perhaps the application receives a few very-difficult-to-process requests that are outnumbered by available cores. Or perhaps there is a large variation in the difficulty of requests, so it's possible for one core to be unlucky and be given 10 consecutive difficult requests, with the result being that some of them take much longer than is reasonable. Given that the guy who wrote the above essay is so much more experienced than I am, I think there's a significant chance I'm wrong about this, but I'd like to know why.
|
[
"In the hypothetical circumstances you design, with about 10 requests \"in play\" per core, as long as the request-to-core assignment is handled sensibly (probably even the simplest round-robin load balancing will do), it's just fine if each request lives throughout its lifetime on a single core.\nPoint is, that scenario's just ONE possibility -- heavy requests that could really benefit (in terms of lower latency) from marshaling multiple cores per request are surely an alternative possibility. I suspect that on today's web your scenario is more prevalent, but it sure would be nice to handle both kinds, AND \"batch-like\" background processing ones too.... especially since the number of cores (as opposed to each core's speed) is what's increasing, and what's going to keep increasing, these days.\nFar be it from me to argue against Jacob Kaplan-Moss's wisdom, but I'm used to getting pretty good concurrency, at my employer, in nicer and more explicit AND trasparent ways than he seems to advocate -- mapreduce for batch-like jobs, distributed-hashing based sharding for enrolling N backends to shard the work for 1 query, and the like.\nMaybe I just don't have enough real-life experience with (say) Erlang, Scala, or Haskell's relatively-new software transactional memory, to see how wonderfully they scale to high utilization of tens or hundrends of thousands of cores on low-QPS, high-work-per-Q workloads... but it seems to me that the silver bullet for this scenario (net of the relatively limited subset of cases where you can turn to mapreduce, pregel, sharding, etc) has not yet been invented in ANY language. With explicit, carefully crafted architecture Python is surely no worse than Java, C# or C++ at handling such scenarios, in my working experience at least.\n",
"Not anytime soon in my estimation. The lifespan of most single web requests are well under a second. In light of this it makes little sense to split up the web request task itself and rather distribute the web request tasks across the cores. Something web servers are capable of and most already do. \n",
"Caveat: I've only skimmed the \"Concurrency\" section, which seems to be what you're referring to.The issue seems to be (and this isn't new, of course):\n\nPython threads don't run in parallel due to the GIL.\nA system with many cores will need as many backends (in practice, you probably want at least 2xN threads).\nSystems are moving towards having more cores; typical PCs have four cores, and affordable server systems with 128 or more cores probably aren't far off.\nRunning 256 separate Python processes means no data is shared; the entire application and any loaded data is replicated in each process, leading to massive memory waste.\n\nThe last bit is where this logic fails. Indeed, if you start 256 Python backends in the naive way, there's no data shared. However, that has no design forethought: that's the wrong way to start lots of backend processes.\nThe correct way is to load your entire application (all of the Python modules you depend on, etc.) in a single master process. Then that master process forks off backend processes to handle requests. These become separate processes, but standard copy-on-write memory management means that all fixed data already loaded is shared with the master. All of the code that was loaded in advance by the master is now shared among all of the workers, despite the fact that they're all separate processes.\n(Of course, COW means that if you write to it, it makes a new copy of the data--but things like compiled Python bytecode should not be changed after loading.)\nI don't know if there are Python-related problems which prevent this, but if so, those are implementation details to be fixed. This approach is far easier to implement than trying to eliminate the GIL. It also eliminates any chance of traditional locking and threading problems. Those aren't as bad as they are in some languages and other use cases--there's almost no interaction or locking between the threads--but they don't disappear completely and race conditions in Python are just as much of a pain to track down as they are in any other language.\n",
"One thing you're omitting is that a web request isn't a single sequential series of instructions that involve only the CPU.\nA typical web request handler might need to do some computation with the CPU, then read some config data off the disk, then ask the database server for some records that have to get transferred to it over ethernet, and so on. The CPU usage might be low, but it could still take a nontrivial amount of time due to waiting on all that I/O between each step.\nI think that, even with the GIL, Python can run other threads while one thread waits on I/O. (Other processes certainly can.) Still, Python threads aren't like Erlang threads: start enough of them and it'll start to hurt.\nAnother issue is memory. C libraries are shared between processes, but (AFAIK) Python libraries aren't. So starting up 10x as many Python processes may reduce I/O waiting, but now you've got 10 copies of each Python module loaded, per core.\nI don't know how significant these are, but they do complicate things well beyond \"R > 10 * S * C\". There's lots still to be done in the Python world to solve them, because these aren't easy problems.\n",
"In the article, he seems to single out the GIL as the cause of holding back concurrent processing in web applications in Python, which I simply don't understand. As you get larger, eventually you're going to have another server, and, GIL or not GIL, it won't matter - you have multiple machines.\nIf he's talking about being able to squeeze more out of a single computer, then I don't think thats as relevant, especially to large-scale distributed computing - different machines don't share a GIL. And, really, if you going to have lots of computers in a cluster, it's better to have a more mid-range servers instead of a single super server for a lot of reasons.\nIf he means as a way for better supporting functional and asynchronous approaches, then I somewhat agree, but it seems tangential to his \"we need better concurrency\" point. Python can has it now (which he acknowledges), but, apparently, its not good enough (all because of the GIL, naturally). To be honest, it seems more like bashing on the GIL than a justification of the importance of concurrency in web development.\nOne important point, with regards to concurrency and web development, is that concurrency is hard. The beauty of something like PHP is that there is no concurrency. You have a process, and you are stuck in that process. Its so simple and easy. You don't have to worry about any sort of concurrency problems - suddenly programming is much easier.\n"
] |
[
5,
4,
1,
1,
0
] |
[] |
[] |
[
"concurrency",
"python",
"web_applications"
] |
stackoverflow_0001384715_concurrency_python_web_applications.txt
|
Q:
Unpickling classes from Python 3 in Python 2
If a Python 3 class is pickled using protocol 2, it is supposed to work in Python 2, but unfortunately, this fails because the names of some classes have changed.
Assume we have code called as follows.
Sender
pickle.dumps(obj,2)
Receiver
pickle.loads(atom)
To give a specific case, if obj={}, then the error given is:
ImportError: No module named builtins
This is because Python 2 uses __builtin__ instead.
The question is the best way to fix this problem.
A:
This problem is Python issue 3675. This bug is actually fixed in Python 3.1.1.
If we import:
from lib2to3.fixes.fix_imports import MAPPING
MAPPING maps Python 2 names to Python 3 names. We want this in reverse.
REVERSE_MAPPING={}
for key,val in MAPPING.items():
REVERSE_MAPPING[val]=key
We can override the Unpickler and loads
import sys
import pickle

class Python_3_Unpickler(pickle.Unpickler):
    """Unpickler for objects pickled by Python 3"""
def find_class(self,module,name):
if module in REVERSE_MAPPING:
module=REVERSE_MAPPING[module]
__import__(module)
mod = sys.modules[module]
klass = getattr(mod, name)
return klass
def loads(data):
    # pickle.StringIO is an implementation detail of Python 2's pickle;
    # import StringIO explicitly instead
    from StringIO import StringIO
    return Python_3_Unpickler(StringIO(data)).load()
We then call this loads instead of pickle.loads.
This should solve the problem.
|
Unpickling classes from Python 3 in Python 2
|
If a Python 3 class is pickled using protocol 2, it is supposed to work in Python 2, but unfortunately, this fails because the names of some classes have changed.
Assume we have code called as follows.
Sender
pickle.dumps(obj,2)
Receiver
pickle.loads(atom)
To give a specific case, if obj={}, then the error given is:
ImportError: No module named builtins
This is because Python 2 uses __builtin__ instead.
The question is the best way to fix this problem.
|
[
"This problem is Python issue 3675. This bug is actually fixed in Python 3.11.\nIf we import:\nfrom lib2to3.fixes.fix_imports import MAPPING\n\nMAPPING maps Python 2 names to Python 3 names. We want this in reverse.\nREVERSE_MAPPING={}\nfor key,val in MAPPING.items():\n REVERSE_MAPPING[val]=key\n\nWe can override the Unpickler and loads\nclass Python_3_Unpickler(pickle.Unpickler):\n \"\"\"Class for pickling objects from Python 3\"\"\"\n def find_class(self,module,name):\n if module in REVERSE_MAPPING:\n module=REVERSE_MAPPING[module]\n __import__(module)\n mod = sys.modules[module]\n klass = getattr(mod, name)\n return klass\n\ndef loads(str):\n file = pickle.StringIO(str)\n return Python_3_Unpickler(file).load() \n\nWe then call this loads instead of pickle.loads.\nThis should solve the problem.\n"
] |
[
14
] |
[] |
[] |
[
"pickle",
"python",
"python_3.x"
] |
stackoverflow_0001385096_pickle_python_python_3.x.txt
|
Q:
How to set timeout detection on a RabbitMQ server?
I am trying out RabbitMQ with this python binding.
One thing I noticed is that if I kill a consumer uncleanly (emulating a crashed program), the server will think that this consumer is still there for a long time. The result of this is that every other message will be ignored.
For example if you kill a consumer 1 time and reconnect, then 1/2 messages will be ignored. If you kill another consumer, then 2/3 messages will be ignored. If you kill a 3rd, then 3/4 messages will be ignored and so on.
I've tried turning on acknowledgments, but it doesn't seem to be helping. The only solution I have found is to manually stop the server and reset it.
Is there a better way?
How to recreate this scenario
Run rabbitmq.
Unarchive this library.
Download the consumer and publisher here.
Run amqp_consumer.py twice. Run amqp_publisher.py, feeding in some data and observe that it works as expected. Messages are received round robin style.
Kill one of the consumer processes with kill -9 or task manager.
Now when you publish a message, 50% of the messages will be lost.
A:
I don't see amqp_consumer.py or amqp_publisher.py in the tarball, so reproducing the fault is tricky.
RabbitMQ terminates connections, releasing their unacknowledged messages for redelivery to other clients, whenever it is told by the operating system that a socket has closed. Your symptoms are very strange, in that even a kill -9 ought to cause the TCP socket to be cleaned up properly.
Some people have noticed problems with sockets surviving longer than they should when running with a firewall or NAT device between the AMQP clients and the server. Could that be an issue here, or are you running everything on localhost? Also, what operating system are you running the various components of the system on?
ETA: From your comment below, I am guessing that while you are running the server on Linux, you may be running the clients on Windows. If this is the case, then it could be that the Windows TCP driver is not closing the sockets correctly, which is different from the kill-9 behaviour on Unix. (On Unix, the kernel will properly close the TCP connections on any killed process.)
If that's the case, then the bad news is that RabbitMQ can only release resources when the socket is closed, so if the client operating system doesn't do that, there's nothing it can do. This is the same as almost every other TCP-based service out there.
The good news, though, is that AMQP supports a "heartbeat" option for exactly these cases, where the networking fabric is untrustworthy. You could try enabling heartbeats. When they're enabled, if the server doesn't receive any traffic within a configurable interval, it decides that the connection must be dead.
The bad news, however, is that I don't think py-amqplib supports heartbeats at the moment. Worth a try, though!
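For what it's worth, the pika client does expose a heartbeat setting; a minimal sketch (the keyword is heartbeat in pika 1.x, while older releases called it heartbeat_interval, so check your version):
import pika

# with heartbeat=30 the broker treats ~30s of silence as a dead connection
params = pika.ConnectionParameters(host="localhost", heartbeat=30)
connection = pika.BlockingConnection(params)
channel = connection.channel()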
A:
RabbitMQ doesn't have a timeout on acknowledgements from the client that a message has been processed: see this post (the whole thread might be of interest). Some salient points from the post:
The AMQP ack model for subscriptions
and "pull" are identical. In both
cases the message is kept on the
server but is unavailable to other
consumers until it either has been
ack'ed (and gets removed), nack'ed
(with basic.reject; though RabbitMQ
does not implement that) or the
channel/connection is closed (at which
point the message becomes available
to other consumers).
and (my emphases)
There is no timeout on waiting for
acks. Usually that is not a problem
since the common cases of a missing
ack - network or client failure -
will result in the connection getting
dropped (and thus trigger the
behaviour described above). Still,
a timeout could be useful to, say,
deal with alive but unresponsive
consumers. That has come up in
discussion before. Is there a specific
use case you have in mind that
requires such functionality?
The problem might well be occurring because in a client pull model, it's harder for the server to detect a broken connection (as opposed to an alive but unresponsive consumer), particularly as the server seems happy to wait forever for an ack.
Update: On Linux, you can attach signal handlers for SIGTERM and/or SIGINT (SIGKILL cannot be caught) and hopefully close down the connection in an orderly way from the client. On Windows, I believe closing from Task Manager invokes the Win32 TerminateProcess API, about which MSDN says:
If a process is terminated by
TerminateProcess, all threads of the
process are terminated immediately
with no chance to run additional code.
This means that the thread does not
execute code in termination handler
blocks. In addition, no attached DLLs
are notified that the process is
detaching.
This means it might be difficult to catch termination and close down in an orderly way.
It might be worth pursuing on the RabbitMQ list with your own use case for an ack timeout.
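On the client side, the orderly-shutdown idea might look like this minimal sketch; connection is a hypothetical stand-in for your open AMQP connection, and remember SIGKILL cannot be intercepted at all:
import signal
import sys

def shutdown(signum, frame):
    connection.close()  # 'connection': hypothetical open AMQP connection
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown)
signal.signal(signal.SIGINT, shutdown)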
A:
Please provide a few more specifics regarding the components you've declared. Usually (and independent of the client implementation) a queue with the properties
exclusive and
auto-delete
should get removed as soon as the connection between the declaring client and the broker breaks up. This won't help you with shared queues, though. Please detail a bit what exactly you are trying to model.
|
How to set timeout detection on a RabbitMQ server?
|
I am trying out RabbitMQ with this python binding.
One thing I noticed is that if I kill a consumer uncleanly (emulating a crashed program), the server will think that this consumer is still there for a long time. The result of this is that every other message will be ignored.
For example if you kill a consumer 1 time and reconnect, then 1/2 messages will be ignored. If you kill another consumer, then 2/3 messages will be ignored. If you kill a 3rd, then 3/4 messages will be ignored and so on.
I've tried turning on acknowledgments, but it doesn't seem to be helping. The only solution I have found is to manually stop the server and reset it.
Is there a better way?
How to recreate this scenario
Run rabbitmq.
Unarchive this library.
Download the consumer and publisher here.
Run amqp_consumer.py twice. Run amqp_publisher.py, feeding in some data and observe that it works as expected. Messages are received round robin style.
Kill one of the consumer processes with kill -9 or task manager.
Now when you publish a message, 50% of the messages will be lost.
|
[
"I don't see amqp_consumer.py or amqp_producer.py in the tarball, so reproducing the fault is tricky.\nRabbitMQ terminates connections, releasing their unacknowledged messages for redelivery to other clients, whenever it is told by the operating system that a socket has closed. Your symptoms are very strange, in that even a kill -9 ought to cause the TCP socket to be cleaned up properly.\nSome people have noticed problems with sockets surviving longer than they should when running with a firewall or NAT device between the AMQP clients and the server. Could that be an issue here, or are you running everything on localhost? Also, what operating system are you running the various components of the system on?\nETA: From your comment below, I am guessing that while you are running the server on Linux, you may be running the clients on Windows. If this is the case, then it could be that the Windows TCP driver is not closing the sockets correctly, which is different from the kill-9 behaviour on Unix. (On Unix, the kernel will properly close the TCP connections on any killed process.)\nIf that's the case, then the bad news is that RabbitMQ can only release resources when the socket is closed, so if the client operating system doesn't do that, there's nothing it can do. This is the same as almost every other TCP-based service out there.\nThe good news, though, is that AMQP supports a \"heartbeat\" option for exactly these cases, where the networking fabric is untrustworthy. You could try enabling heartbeats. When they're enabled, if the server doesn't receive any traffic within a configurable interval, it decides that the connection must be dead.\nThe bad news, however, is that I don't think py-amqplib supports heartbeats at the moment. Worth a try, though!\n",
"RabbitMQ doesn't have a timeout on acknowledgements from the client that a message has been processed: see this post (the whole thread might be of interest). Some salient points from the post:\n\nThe AMQP ack model for subscriptions\n and \"pull\" are identical. In both \n cases the message is kept on the\n server but is unavailable to other \n consumers until it either has been\n ack'ed (and gets removed), nack'ed \n (with basic.reject; though RabbitMQ\n does not implement that) or the \n channel/connection is closed (at which\n point the message becomes available\n to other consumers).\n\nand (my emphases)\n\nThere is no timeout on waiting for\n acks. Usually that is not a problem \n since the common cases of a missing\n ack - network or client failure - \n will result in the connection getting\n dropped (and thus trigger the \n behaviour described above). Still,\n a timeout could be useful to, say, \n deal with alive but unresponsive\n consumers. That has come up in \n discussion before. Is there a specific\n use case you have in mind that \n requires such functionality?\n\nThe problem might well be occurring because in a client pull model, it's harder for the server to detect a broken connection (as opposed to an alive but unresponsive consumer), particularly as the server seems happy to wait forever for an ack.\nUpdate: On Linux, you can attach signal handlers for SIGTERM and/or SIGKILL and/or SIGINT and hopefully close down the connection in an orderly way from the client. On Windows, I believe closing from Task Manager invokes the Win32 TerminateProcess API, about which MSDN says:\n\nIf a process is terminated by\n TerminateProcess, all threads of the\n process are terminated immediately\n with no chance to run additional code.\n This means that the thread does not\n execute code in termination handler\n blocks. In addition, no attached DLLs\n are notified that the process is\n detaching.\n\nThis means it might be difficult to catch termination and close down in an orderly way.\nIt might be worth pursuing on the RabbitMQ list with your own use case for an ack timeout.\n",
"Please provide a few more specifics regarding the components you've declared. Usually (and independent of the the client implementation) a queue with the properties\n\nexclusive and\nauto-delete\n\nshould get removed as soon as the connection between the declaring client and the broker breaks up. This won't help you with shared queues, though. Please detail a bit what exactly you are trying to model.\n"
] |
[
11,
5,
2
] |
[] |
[] |
[
"amqp",
"message_queue",
"python",
"rabbitmq"
] |
stackoverflow_0001345239_amqp_message_queue_python_rabbitmq.txt
|
Q:
Calling non-static method from static one in Python
I can't find out whether it's possible to call a non-static method from a static one in Python.
Thanks
EDIT:
Ok. And what about static from static? Can I do this:
class MyClass(object):
@staticmethod
def static_method_one(cmd):
...
@staticmethod
def static_method_two(cmd):
static_method_one(cmd)
A:
It's perfectly possible, but not very meaningful. Ponder the following class:
class MyClass:
# Normal method:
def normal_method(self, data):
print "Normal method called with instance %s and data %s" % (self, data)
@classmethod
def class_method(cls, data):
print "Class method called with class %s and data %s" % (cls, data)
@staticmethod
def static_method(data):
print "Static method called with data %s" % (data)
Obviously, we can call this in the expected ways:
>>> instance = MyClass()
>>> instance.normal_method("Success!")
Normal method called with instance <__main__.MyClass instance at 0xb7d26bcc> and data Success!
>>> instance.class_method("Success!")
Class method called with class __main__.MyClass and data Success!
>>> instance.static_method("Success!")
Static method called with data Success!
But also consider this:
>>> MyClass.normal_method(instance, "Success!")
Normal method called with instance <__main__.MyClass instance at 0xb7d26bcc> and data Success!
The syntax instance.normal_method() is pretty much just a "shortcut" for MyClass.normal_method(instance). That's why there is this "self" parameter in methods, to pass in self. The name self is not magical, you can call it whatever you want.
The same trick is perfectly possible from within a static method. You can call the normal method with an instance as first parameter, like so:
@staticmethod
def a_cool_static_method(instance, data):
print "Cool method called with instance %s and data %s" % (instance, data)
MyClass.normal_method(instance, data)
MyClass.class_method(data)
MyClass.static_method(data)
>>> instance.a_cool_static_method(instance, "So Cool!")
Cool method called with instance <__main__.MyClass instance at 0xb7d26bcc> and data So Cool!
Normal method called with instance <__main__.MyClass instance at 0xb7d26bcc> and data So Cool!
Class method called with class __main__.MyClass and data So Cool!
Static method called with data So Cool!
So the answer is yes, you can call non-static methods from static methods. But only if you can pass in an instance as first parameter. So you either have to generate it from inside the static method (and in that case you are probably better off with a class method) or pass it in. But if you pass in the instance, you can typically just make it a normal method.
So you can, but, it's pretty pointless.
And that then begs the question: Why do you want to?
A:
After the other answers and your follow-up question - regarding static method from static method: Yes you can:
>>> class MyClass(object):
@staticmethod
def static_method_one(x):
return MyClass.static_method_two(x)
@staticmethod
def static_method_two(x):
return 2 * x
>>> MyClass.static_method_one(5)
10
And, in case you're curious, also yes for class method from class method (easy to test this stuff in the interpreter - all this is cut and pasted from Idle in 2.5.2) [**EDITED to make correction in usage pointed out by others**]:
>>> class MyClass2(object):
@classmethod
def class_method_one(cls, x):
return cls.class_method_two(x)
@classmethod
def class_method_two(cls, x):
return 2 * x
>>> MyClass2.class_method_one(5)
10
A:
Use class methods, not static methods. Why else put it inside a class?
class MyClass(object):
@classmethod
def static_method_one(cls, cmd):
...
@classmethod
def static_method_two(cls, cmd):
cls.static_method_one(cmd)
A:
When in a static method, you don't have a self instance: what object are you calling the non-static methods on? Certainly if you have an instance lying around, you can call methods on it.
A:
It's not possible without an instance of the class. You could add a param to your method f(x, y, ..., me) and use me as the object to call the non-static methods on.
|
Calling non-static method from static one in Python
|
I can't find out whether it's possible to call a non-static method from a static one in Python.
Thanks
EDIT:
Ok. And what about static from static? Can I do this:
class MyClass(object):
@staticmethod
def static_method_one(cmd):
...
@staticmethod
def static_method_two(cmd):
static_method_one(cmd)
|
[
"It's perfectly possible, but not very meaningful. Ponder the following class:\nclass MyClass:\n # Normal method:\n def normal_method(self, data):\n print \"Normal method called with instance %s and data %s\" % (self, data)\n\n @classmethod\n def class_method(cls, data):\n print \"Class method called with class %s and data %s\" % (cls, data)\n\n @staticmethod\n def static_method(data):\n print \"Static method called with data %s\" % (data)\n\nObviously, we can call this in the expected ways:\n>>> instance = MyClass()\n>>> instance.normal_method(\"Success!\")\nNormal method called with instance <__main__.MyClass instance at 0xb7d26bcc> and data Success!\n\n>>> instance.class_method(\"Success!\")\nClass method called with class __main__.MyClass and data Success!\n\n>>> instance.static_method(\"Success!\")\nStatic method called with data Success!\n\nBut also consider this:\n>>> MyClass.normal_method(instance, \"Success!\")\nNormal method called with instance <__main__.MyClass instance at 0xb7d26bcc> and data Success!\n\nThe syntax instance.normal_method() is pretty much just a \"shortcut\" for MyClass.normal_method(instance). That's why there is this \"self\" parameter in methods, to pass in self. The name self is not magical, you can call it whatever you want.\nThe same trick is perfectly possible from withing a static method. You can call the normal method with an instance as first parameter, like so:\n @staticmethod\n def a_cool_static_method(instance, data):\n print \"Cool method called with instance %s and data %s\" % (instance, data)\n MyClass.normal_method(instance, data)\n MyClass.class_method(data)\n MyClass.static_method(data)\n\n>>> instance.a_cool_static_method(instance, \"So Cool!\")\nCool method called with instance <__main__.MyClass instance at 0xb7d26bcc> and data So Cool!\nNormal method called with instance <__main__.MyClass instance at 0xb7d26bcc> and data So Cool!\nClass method called with class __main__.MyClass and data So Cool!\nStatic method called with data So Cool!\n\nSo the answer is yes, you can cal non-static methods from static methods. But only if you can pass in an instance as first parameter. So you either have to generate it from inside the static method (and in that case you are probably better off with a class method) or pass it in. But if you pass in the instance, you can typically just make it a normal method.\nSo you can, but, it's pretty pointless.\nAnd that then begs the question: Why do you want to?\n",
"After the other answers and your follow-up question - regarding static method from static method: Yes you can:\n>>> class MyClass(object):\n @staticmethod\n def static_method_one(x):\n return MyClass.static_method_two(x)\n @staticmethod\n def static_method_two(x):\n return 2 * x\n\n\n>>> MyClass.static_method_one(5)\n10\n\nAnd, in case you're curious, also yes for class method from class method (easy to test this stuff in the interpreter - all this is cut and pasted from Idle in 2.5.2) [**EDITED to make correction in usage pointed out by others**]:\n>>> class MyClass2(object):\n @classmethod\n def class_method_one(cls, x):\n return cls.class_method_two(x)\n @classmethod\n def class_method_two(cls, x):\n return 2 * x\n\n\n>>> MyClass2.class_method_one(5)\n10\n\n",
"Use class methods, not static methods. Why else put it inside a class?\nclass MyClass(object):\n\n @classmethod\n def static_method_one(cls, cmd):\n ...\n\n @classmethod\n def static_method_two(cls, cmd):\n cls.static_method_one(cmd)\n\n",
"When in a static method, you don't have a self instance: what object are you calling the non-static methods on? Certainly if you have an instance lying around, you can call methods on it.\n",
"It's not possible withotut the instance of the class. You could add a param to your method f(x, y, ..., me) and use me as the object to call the non-static methods on.\n"
] |
[
15,
7,
4,
2,
0
] |
[] |
[] |
[
"oop",
"python"
] |
stackoverflow_0001385546_oop_python.txt
|
Q:
Edit python31 file and it opens notepad and starts python26
I am in python31,
then I go to File > Open and left-click to open a file,
and the python31 file opens in Notepad (a simple text editor).
The moment it opens in Notepad, it starts python26.
I thought it had something to do with "Open with", and I have changed that to python31.
And it still opens python26
EDIT:
The file is created by python26, but it is not executable.
A:
I am guessing here; the question is not very clear.
It sounds like the .py extension in Windows is associated with the Python 2.6 runtime. (This normally gets set up this way during installation of Python on Windows.) You can change this by updating the associated file extensions and programs in Windows.
By double clicking on the file it is not opening the file for editing but instead running it. If you want to edit the file you have to either right-click and select an appropriate edit action or open the file from your editor's 'open file' action. (Or change the .py extension to open in your favourite editor)
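If you want to check what the association actually points at, you can inspect the registry from Python itself; a sketch (the Python.File ProgID is what the standard Windows installer registers, so treat it as an assumption):
import _winreg as winreg  # the module is named 'winreg' on Python 3

# the standard Windows installer registers .py under the Python.File ProgID
key = winreg.OpenKey(winreg.HKEY_CLASSES_ROOT,
                     r"Python.File\shell\open\command")
print winreg.QueryValue(key, None)  # e.g. "C:\Python26\python.exe" "%1" %*
winreg.CloseKey(key)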
A:
I had similar problems, and right clicking and changing edit options didn't solve it. You can try repairing python31 installation with running the msi installer and selecting it as the default python
|
Edit python31 file and it opens notepad and starts python26
|
I am in python31,
then I go to File > Open and left-click to open a file,
and the python31 file opens in Notepad (a simple text editor).
The moment it opens in Notepad, it starts python26.
I thought it had something to do with "Open with", and I have changed that to python31.
And it still opens python26
EDIT:
The file is created by python26, but it is not executable.
|
[
"I am guessing here, the question it not very clear. \nIt sounds like the .py extension in Windows is associated with the Python 2.6 runtime. (This normally get setup this way during installation of Python on Windows). You can change this by updating the associated file extensions and programs in Windows.\nBy double clicking on the file it is not opening the file for editing but instead running it. If you want to edit the file you have to either right-click and select an approriate edit action or open the file from your editor's 'open file' action. (Or change the .py extension to open in your favourite editor)\n",
"I had similar problems, and right clicking and changing edit options didn't solve it. You can try repairing python31 installation with running the msi installer and selecting it as the default python\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"python_3.x",
"windows"
] |
stackoverflow_0001385280_python_python_3.x_windows.txt
|
Q:
How to get 280slides.com functionality?
I have seen 280slides.com and it is really impressive. But its developers had to create their own language.
Which platform or language would you use to have an as similar as possible functionality?
Is it possible to do something similar in python?
Could you give any working examples?
A:
Inventing our own language was a minuscule part of the problem. What was important was developing the right framework, which is now available as Cappuccino (cappuccino.org).
You ask what platform/language you could use to develop something similar? I assume you already know that the answer, as far as platform goes, is the web. 280 Slides is web based, and that is an integral part of the experience.
And when it comes to the web, you realistically have one development choice: JavaScript. Fortunately, once you accept that, there are a lot of things you can do, including targeting JavaScript with other languages (like Java with GWT).
Objective-J is a pretty thin layer on top of JavaScript, so if it's the only thing keeping you from trying Cappuccino, I strongly recommend giving it a shot. As far as the server is concerned, there's nothing remarkable going on. Almost all the magic is happening in the browser.
A:
From memory, that language (Objective-J) compiles into javascript, so it's just the old HTML + CSS + Javascript + <insert server-side language here>. Python could easily be the server-side language. If you want examples of python web frameworks, look at Django and Plone.
|
How to get 280slides.com functionality?
|
I have seen 280slides.com and it is really impressive. But its developers had to create their own language.
Which platform or language would you use to have an as similar as possible functionality?
Is it possible to do something similar in python?
Could you give any working examples?
|
[
"Inventing our own language was a miniscule part of the problem. What was important was developing the right framework, which is now available as Cappuccino (cappuccino.org). \nYou ask what platform/language you could use to develop something similar? I assume you already know that the answer to what platform is the web. 280 Slides is web based, and that is an integral part of the experience. \nAnd when it comes to the web, you realistically have one development choice: JavaScript. Fortunately, once you accept that, there are a lot of things you can do, including targeting JavaScript with other languages (like Java with GWT). \nObjective-J is a pretty thin layer on top of JavaScript, so if it's the only thing keeping you from trying Cappuccino, I strongly recommend giving it a shot. As far as the server is concerned, there's nothing remarkable going on. Almost all the magic is happening in the browser.\n",
"From memory, that language (Objective-J) compiles into javascript, so it's just the old HTML + CSS + Javascript + <insert server-side language here>. Python could easily be the server-side language. If you want examples of python web frameworks, look at Django and Plone.\n"
] |
[
8,
0
] |
[] |
[] |
[
"python",
"rich_internet_application"
] |
stackoverflow_0001385722_python_rich_internet_application.txt
|
Q:
How do you check if a widget has focus in Tkinter?
from Tkinter import *
app = Tk()
text_field = Entry(app)
text_field.pack()
app.mainloop()
I want to be able to check if text_field is currently selected or focused, so that I know whether or not to do something with its contents when the user presses enter.
A:
If you want to do something when the user presses enter only if the focus is on the entry widget, simply add a binding to the entry widget. It will only fire if that widget has focus. For example:
import tkinter as tk
root = tk.Tk()
e1 = tk.Entry(root)
e2 = tk.Entry(root)
e1.pack()
e2.pack()
def handleReturn(event):
print("return: event.widget is",event.widget)
print("focus is:", root.focus_get())
e1.bind("<Return>", handleReturn)
root.mainloop()
Notice that the handler is only called if the first entry has focus when you press return.
If you really want a global binding and need to know which widget has focus, use the focus_get() method on the root object. In the following example a binding is put on "." (the main toplevel) so that it fires no matter what has focus:
import tkinter as tk
root = tk.Tk()
e1 = tk.Entry(root)
e2 = tk.Entry(root)
e1.pack()
e2.pack()
def handleReturn(event):
print("return: event.widget is",event.widget)
print("focus is:",root.focus_get())
root.bind("<Return>", handleReturn)
root.mainloop()
Notice the difference between the two: in the first example, the handler will only be called when you press return in the first entry widget. There is no need to check which widget has focus. In the second example, the handler will be called no matter which widget has focus.
Both solutions are good depending on what you really need to have happened. If your main goal is to only do something when the user presses return in a specific widget, use the former. If you want a global binding, but in that binding do something different based on what has or doesn't have focus, do the latter example.
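If you literally want to test whether text_field has focus, a direct comparison against focus_get() also works; a minimal sketch in the same style as the examples above:
import tkinter as tk

root = tk.Tk()
text_field = tk.Entry(root)
text_field.pack()

def handleReturn(event):
    # focus_get() returns the widget that currently holds keyboard focus
    if root.focus_get() is text_field:
        print("entry has focus; contents:", text_field.get())

root.bind("<Return>", handleReturn)
root.mainloop()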
|
How do you check if a widget has focus in Tkinter?
|
from Tkinter import *
app = Tk()
text_field = Entry(app)
text_field.pack()
app.mainloop()
I want to be able to check if text_field is currently selected or focused, so that I know whether or not to do something with its contents when the user presses enter.
|
[
"If you want to do something when the user presses enter only if the focus is on the entry widget, simply add a binding to the entry widget. It will only fire if that widget has focus. For example:\nimport tkinter as tk\n\nroot = tk.Tk()\ne1 = tk.Entry(root)\ne2 = tk.Entry(root)\ne1.pack()\ne2.pack()\n\ndef handleReturn(event):\n print(\"return: event.widget is\",event.widget)\n print(\"focus is:\", root.focus_get())\n\ne1.bind(\"<Return>\", handleReturn)\n\nroot.mainloop()\n\nNotice that the handler is only called if the first entry has focus when you press return.\nIf you really want a global binding and need to know which widget has focus, use the focus_get() method on the root object. In the following example a binding is put on \".\" (the main toplevel) so that it fires no matter what has focus:\nimport tkinter as tk\n\nroot = tk.Tk()\ne1 = tk.Entry(root)\ne2 = tk.Entry(root)\ne1.pack()\ne2.pack()\n\ndef handleReturn(event):\n print(\"return: event.widget is\",event.widget)\n print(\"focus is:\",root.focus_get())\n\nroot.bind(\"<Return>\", handleReturn)\n\nroot.mainloop()\n\nNotice the difference between the two: in the first example, the handler will only be called when you press return in the first entry widget. There is no need to check which widget has focus. In the second example, the handler will be called no matter which widget has focus.\nBoth solutions are good depending on what you really need to have happened. If your main goal is to only do something when the user presses return in a specific widget, use the former. If you want a global binding, but in that binding do something different based on what has or doesn't have focus, do the latter example.\n"
] |
[
29
] |
[] |
[] |
[
"focus",
"python",
"tkinter",
"tkinter_entry"
] |
stackoverflow_0001385921_focus_python_tkinter_tkinter_entry.txt
|
Q:
is there a way to start/stop linux processes with python?
I want to be able to start a process and then be able to kill it afterwards
A:
Here's a little python script that starts a process, checks if it is running, waits a while, kills it, waits for it to terminate, then checks again. It uses the 'kill' command. Version 2.6 of python subprocess has a kill function. This was written on 2.5.
import subprocess
import time
proc = subprocess.Popen(["sleep", "60"], shell=False)
print 'poll =', proc.poll(), '("None" means process not terminated yet)'
time.sleep(3)
subprocess.call(["kill", "-9", "%d" % proc.pid])
proc.wait()
print 'poll =', proc.poll()
The timed output shows that it was terminated after about 3 seconds, and not 60 as the call to sleep suggests.
$ time python prockill.py
poll = None ("None" means process not terminated yet)
poll = -9
real 0m3.082s
user 0m0.055s
sys 0m0.029s
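On Python 2.6 and later you can skip the external kill command, since Popen objects grew terminate() and kill() methods; a minimal sketch:
import subprocess
import time

proc = subprocess.Popen(["sleep", "60"], shell=False)
time.sleep(3)
proc.terminate()  # sends SIGTERM; proc.kill() would send SIGKILL
proc.wait()
print 'returncode =', proc.returncode  # -15 here means killed by SIGTERM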
A:
Have a look at the subprocess module.
You can also use low-level primitives like fork() via the os module.
A:
http://docs.python.org/library/os.html#process-management
A:
A simple function that uses the subprocess module:
def CMD(cmd) :
p = subprocess.Popen(cmd, shell=True,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
close_fds=False)
return (p.stdin, p.stdout, p.stderr)
A:
see the docs for the primitive fork() and the subprocess, multiprocessing, and threading modules
A:
If you need to interact with the sub process at all, I recommend the pexpect module. You can send input to the process, receive (or "expect") output in return, and you can close the process (with force=True to send SIGKILL).
|
is there a way to start/stop linux processes with python?
|
I want to be able to start a process and then be able to kill it afterwards
|
[
"Here's a little python script that starts a process, checks if it is running, waits a while, kills it, waits for it to terminate, then checks again. It uses the 'kill' command. Version 2.6 of python subprocess has a kill function. This was written on 2.5.\nimport subprocess\nimport time\n\nproc = subprocess.Popen([\"sleep\", \"60\"], shell=False)\nprint 'poll =', proc.poll(), '(\"None\" means process not terminated yet)'\ntime.sleep(3)\nsubprocess.call([\"kill\", \"-9\", \"%d\" % proc.pid])\nproc.wait()\nprint 'poll =', proc.poll()\n\nThe timed output shows that it was terminated after about 3 seconds, and not 60 as the call to sleep suggests.\n$ time python prockill.py \npoll = None (\"None\" means process not terminated yet)\npoll = -9\n\nreal 0m3.082s\nuser 0m0.055s\nsys 0m0.029s\n\n",
"Have a look at the subprocess module.\nYou can also use low-level primitives like fork() via the os module.\n",
"http://docs.python.org/library/os.html#process-management\n",
"A simple function that uses subprocess module:\ndef CMD(cmd) :\n p = subprocess.Popen(cmd, shell=True,\n stdin=subprocess.PIPE,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n close_fds=False)\n return (p.stdin, p.stdout, p.stderr)\n\n",
"see docs for primitive fork() and modules subprocess, multiprocessing, multithreading \n",
"If you need to interact with the sub process at all, I recommend the pexpect module (link text). You can send input to the process, receive (or \"expect\") output in return, and you can close the process (with force=True to send SIGKILL).\n"
] |
[
14,
8,
3,
3,
0,
0
] |
[] |
[] |
[
"linux",
"python"
] |
stackoverflow_0001378974_linux_python.txt
|
Q:
Add Preprocessor to HTML (Probably in Apache)
I would like to add a preprocessor to HTML pages. Basically, I have a program that takes the name of an HTML file containing preprocessor instructions and outputs the contents of the file after preprocessing to stdout. This mechanism could change if it makes things easier. All I want to do is hook this into Apache so that all the files that my website serves get put through the preprocessor before going out to the browser. A solution that works with other HTTP servers than Apache would be preferred, but is not required.
If my understanding is correct, this is roughly what PHP does.
If it makes any difference, the preprocessor is written in Python.
A:
If you have Apache and a "preprocessor" written in python, why not go for mod_python?
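A minimal mod_python handler sketch; preprocess() is a hypothetical wrapper around your existing preprocessor program:
from mod_python import apache
from mypreprocessor import preprocess  # hypothetical module

def handler(req):
    # run the requested file through the preprocessor and send the result
    req.content_type = "text/html"
    req.write(preprocess(req.filename))
    return apache.OK

You would then point Apache at it with something like AddHandler mod_python .html and PythonHandler <yourmodule> in the relevant config section.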
|
Add Preprocessor to HTML (Probably in Apache)
|
I would like to add a preprocessor to HTML pages. Basically, I have a program that takes the name of an HTML file containing preprocessor instructions and outputs the contents of the file after preprocessing to stdout. This mechanism could change if it makes things easier. All I want to do is hook this into Apache so that all the files that my website serves get put through the preprocessor before going out to the browser. A solution that works with other HTTP servers than Apache would be preferred, but is not required.
If my understanding is correct, this is roughly what PHP does.
If it makes any difference, the preprocessor is written in Python.
|
[
"If you have Apache and a \"preprocessor\" written in python, why not go for mod_python?\n"
] |
[
4
] |
[] |
[] |
[
"apache",
"html",
"php",
"preprocessor",
"python"
] |
stackoverflow_0001385965_apache_html_php_preprocessor_python.txt
|
Q:
How to avoid html-escaping in evoque
I'm trying to make my evoque templates do a bit of syntax coloring,
but the HTML I get is already escaped, with < and > turned into &lt; and &gt;.
I read there should be something like a quoted-no-more class,
but I haven't been able to find the evoque.quoted package.
My aim is to get 'real', unescaped HTML out of the template.
from pygments import highlight
from pygments.lexers import get_lexer_by_name
from pygments.formatters import HtmlFormatter
from evoque.domain import Domain
import os
tmpl="""
$begin{code}
${codyfy(evoque(name=label), lang=label.split()[0][1:])}
$end{code}
$begin{c 0}
int main(void){printf("hello world");return 0;}
$end{c 0}
$begin{python 0}
print "hello world"
$end{python 0}
$evoque{#code, label="#c 0"}
$evoque{#code, label="#python 0"}
"""
td = Domain(os.path.abspath("."))
def codyfy(src,lang="python"):
return highlight(src,get_lexer_by_name(lang, stripall=True),HtmlFormatter())
td.set_on_globals('codyfy',codyfy)
td.set_template("testtmpl", src=tmpl, from_string=True)
t = td.get_template("testtmpl")
print t.evoque()
A:
Have you tried it with raw=True? See:
http://evoque.gizmojo.org/howto/source/
I haven't used Qpy before, but perhaps this note will help:
Defining custom quoted-no-more classes
[...] It is also highly recommended to download and install the Qpy unicode templating utility that provides the qpy.xml Quoted-No-More class for automatic input escaping. [...]
A:
Yep - ars and I got there in parallel.
The answer is here - snip it into the above:
from qpy import xml
def codyfy(src,lang="python"):
return xml(highlight(src,get_lexer_by_name(lang, stripall=True),HtmlFormatter()))
xml() apparently wraps the string as already-quoted, so subsequent escapers lay off.
A:
qpy.xml() is the quoted-no-more class for XML (and HTML) provided by the Qpy package -- in both superfast C and alternative python versions. Evoque looks for this specific class when quoting="xml" (i.e. equivalent to: quoting=qpy.xml) is used on template loading or rendering.
But any custom quoted-no-more type may be specified as value to the quoting parameter. The evoque.quoted package provides a base quoted-no-more type, and some concrete examples, to make the definition of your custom quoted-no-more types easier. However, as noted, the evoque.quoted is not yet available -- it is mentioned in the changelog but for an as yet unreleased version of Evoque (see the changelog page).
If you pass the output of a whole bunch of templates thru codyfy(), you might also wish to consider specifying it as a filter, either on each template, or as default filter on a collection. An example of using filters is:
http://evoque.gizmojo.org/howto/markdown/
Using filters would in fact be a better approach as it is more general -- you will:
a) not have to hard-wire the qpy.xml() call within codyfy(), and
b) be able to use the exact same codyfy() function as filter in all templates, even when these specify a quoting other than qpy.xml.
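As a very rough sketch of the filter approach applied to the script above -- the filters keyword follows the linked howto, but treat the exact signature as an assumption:
td.set_template("testtmpl", src=tmpl, from_string=True)
t = td.get_template("testtmpl", filters=[codyfy])  # applied to rendered output
print t.evoque()

Here codyfy() would be called with the rendered template string as its only argument, so its lang default applies.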
|
How to avoid html-escaping in evoque
|
I'm trying to make my evoque templates do a bit of syntax coloring,
but the HTML I get is already escaped, with < and > turned into &lt; and &gt;.
I read there should be something like a quoted-no-more class,
but I haven't been able to find the evoque.quoted package.
My aim is to get 'real', unescaped HTML out of the template.
from pygments import highlight
from pygments.lexers import get_lexer_by_name
from pygments.formatters import HtmlFormatter
from evoque.domain import Domain
import os
tmpl="""
$begin{code}
${codyfy(evoque(name=label), lang=label.split()[0][1:])}
$end{code}
$begin{c 0}
int main(void){printf("hello world");return 0;}
$end{c 0}
$begin{python 0}
print "hello world"
$end{python 0}
$evoque{#code, label="#c 0"}
$evoque{#code, label="#python 0"}
"""
td = Domain(os.path.abspath("."))
def codyfy(src,lang="python"):
return highlight(src,get_lexer_by_name(lang, stripall=True),HtmlFormatter())
td.set_on_globals('codyfy',codyfy)
td.set_template("testtmpl", src=tmpl, from_string=True)
t = td.get_template("testtmpl")
print t.evoque()
|
[
"Have you tried it with raw=True? See:\n\nhttp://evoque.gizmojo.org/howto/source/\n\nI haven't used Qpy before, but perhaps this note will help:\n\nDefining custom quoted-no-more classes\n[...] It is also highly recommended to download and install the Qpy unicode templating utility that provides the qpy.xml Quoted-No-More class for automatic input escaping. [...]\n\n",
"Yeps - we got in parallel, ars and I.\nThe answer is here - snip it into above:\nfrom qpy import xml\ndef codyfy(src,lang=\"python\"):\n return xml(highlight(src,get_lexer_by_name(lang, stripall=True),HtmlFormatter()))\n\nxml() is apparently sort of a payload making subsequent escapers lay off.\n",
"qpy.xml() is the quoted-no-more class for XML (and HTML) provided by the Qpy package -- in both superfast C and alternative python versions. Evoque looks for this specific class when quoting=\"xml\" (i.e. equivalent to: quoting=qpy.xml) is used on template loading or rendering. \nBut any custom quoted-no-more type may be specified as value to the quoting parameter. The evoque.quoted package provides a base quoted-no-more type, and some concrete examples, to make the definition of your custom quoted-no-more types easier. However, as noted, the evoque.quoted is not yet available -- it is mentioned in the changelog but for an as yet unreleased version of Evoque (see the changelog page).\nIf you pass the output of a whole bunch of templates thru codyfy(), you might also wish to consider specifying it as a filter, either on each template, or as default filter on a collection. An example of using filters is: \nhttp://evoque.gizmojo.org/howto/markdown/\nUsing filters would in fact be a better approach as it is more general -- you will:\na) not have to hard-wire the qpy.xml() call within codyfy(), and\nb) be able to use the exact same codyfy() function as filter in all templates, even when these specify a quoting other than qpy.xml. \n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001285134_python.txt
|
Q:
maximum number combinations
I am trying to generate a list of all possible four-digit combinations using the digits 0 through 9.
I'm getting close, but the output doesn't show every possible combination from 0000 all the way to 9999.
Any clues as to why the following code is dropping certain combinations?
def permgen(items, n):
if n==0: yield []
else:
for i in range(len(items)):
for cc in permgen(items[:i]+items[i+1:],n-1):
yield [items[i]]+cc
if __name__=="__main__":
for c in permgen(['0','1','2','3','4','5','6','7','8','9'],4): print ''.join(c)
A:
If you have python 2.6, why not use itertools.combinations?
from itertools import combinations
combinations(range(10), 4)
A:
This line:
for cc in permgen(items[:i]+items[i+1:],n-1):
You're basically saying "get a number, then add another one different from it, repeat n times, then return a list of these digits." That's going to give you numbers where no digit appears more than once. If you change that line to:
for cc in permgen(items,n-1):
then you get all combinations.
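Put together, the corrected generator would read like this (a sketch of that one-line fix applied to the question's code, nothing else changed):
def permgen(items, n):
    # the recursive call no longer removes items[i], so digits may repeat
    # and all len(items)**n sequences are produced
    if n == 0:
        yield []
    else:
        for i in range(len(items)):
            for cc in permgen(items, n - 1):
                yield [items[i]] + cc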
A:
Take a look at itertools' combinatoric generators:
>>> from itertools import combinations, permutations, product
>>> def pp(chunks):
... print(' '.join(map(''.join, chunks)))
...
>>> pp(combinations('012', 2))
01 02 12
>>> pp(permutations('012', 2))
01 02 10 12 20 21
>>> pp(product('012', repeat=2))
00 01 02 10 11 12 20 21 22
>>> from itertools import combinations_with_replacement
>>> pp(combinations_with_replacement('012', 2))
00 01 02 11 12 22
combinations_with_replacement is available in Python 3.1 (or 2.7).
It seems that itertools.product is the most suitable for your task.
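For the exact task in the question, a minimal sketch (Python 2 print, to match the original code):
from itertools import product

# prints every four-digit string from 0000 through 9999 -- 10**4 lines
for digits in product('0123456789', repeat=4):
    print ''.join(digits)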
A:
int ra;
for(ra=0; ra<10000; ra++) printf("%04u\n", ra);
|
maximum number combinations
|
I am trying to generate a list of all possible four-digit combinations using the digits 0 through 9.
I'm getting close, but the output doesn't show every possible combination from 0000 all the way to 9999.
Any clues as to why the following code is dropping certain combinations?
def permgen(items, n):
if n==0: yield []
else:
for i in range(len(items)):
for cc in permgen(items[:i]+items[i+1:],n-1):
yield [items[i]]+cc
if __name__=="__main__":
for c in permgen(['0','1','2','3','4','5','6','7','8','9'],4): print ''.join(c)
|
[
"If you have python 2.6, why not use itertools.combinations?\nfrom itertools import combinations\ncombinations(range(10), 4)\n\n",
"This line:\nfor cc in permgen(items[:i]+items[i+1:],n-1):\n\nYou're basically saying \"get a number, than add another one different from ir, repeat n times, then return a list of these digits. That's going to give you numbers where no digit appears more than once. If you change that line to:\nfor cc in permgen(items,n-1):\n\nthen you get all combinations.\n",
"Take a look at itertools' combinatoric generators:\n>>> from itertools import combinations, permutations, product\n>>> def pp(chunks):\n... print(' '.join(map(''.join, chunks)))\n...\n>>> pp(combinations('012', 2))\n01 02 12\n>>> pp(permutations('012', 2))\n01 02 10 12 20 21\n>>> pp(product('012', repeat=2))\n00 01 02 10 11 12 20 21 22\n>>> from itertools import combinations_with_replacement\n>>> pp(combinations_with_replacement('012', 2))\n00 01 02 11 12 22\n\ncombinations_with_replacement is available in Python 3.1 (or 2.7).\nIt seems that itertools.product is the most suitable for your task.\n",
"int ra;\nfor(ra=0,ra<10000;ra++) printf(\"%04u\\n\",ra);\n\n"
] |
[
12,
4,
4,
0
] |
[] |
[] |
[
"combinations",
"python"
] |
stackoverflow_0001385929_combinations_python.txt
|
Q:
SQLCODE -1829 on connect using informixdb
While trying to connect to the database I get a strange error:
DatabaseError: SQLCODE -1829 in CONNECT:
Cannot open file 'os.iem'
Cannot open file 'os.iem'
I can confirm that the file is present in the $INFORMIXDIR/msg/en_us/0333/ directory. The environment variables INFORMIXDIR, INFORMIXSERVER and ONCONFIG are set correctly and as expected by my instance. Any clues on what I might be doing wrong?
I am connecting using informixdb (version 2.5) to Informix version 11.5. The connecting user has the requisite permissions.
A:
ok figured this one out! It appears only the env values set before the import of the informixdb module affect the way the module works. So the following does not work:
import informixdb
os.environ["INFORMIXDIR"] = "/opt/informix"
...
def conn(db):
informixdb.connect(db, self.username, self.passwd)
...
conn('local')
whereas the following does:
os.environ["INFORMIXDIR"] = "/opt/informix"
import informixdb
...
def conn(db):
informixdb.connect(db, self.username, self.passwd)
...
conn('local')
|
SQLCODE -1829 on connect using informixdb
|
While trying to connect to the database I get a strange error:
DatabaseError: SQLCODE -1829 in CONNECT:
Cannot open file 'os.iem'
Cannot open file 'os.iem'
I can confirm that the file is present in the $INFORMIXDIR/msg/en_us/0333/ directory. The environment variables INFORMIXDIR, INFORMIXSERVER and ONCONFIG are set correctly and as expected by my instance. Any clues on what I might be doing wrong?
I am connecting using informixdb (version 2.5) to Informix version 11.5. The connecting user has the requisite permissions.
|
[
"ok figured this one out! It appears only the env values set before the import of the informixdb module affect the way the module works. So the following does not work:\nimport informixdb\nos.environ[\"INFORMIXDIR\"] = \"/opt/informix\"\n\n...\ndef conn(db):\n informixdb.connect(db, self.username, self.passwd)\n...\nconn('local')\n\nwhereas the following does:\nos.environ[\"INFORMIXDIR\"] = \"/opt/informix\"\nimport informixdb\n\n...\ndef conn(db):\n informixdb.connect(db, self.username, self.passwd)\n...\nconn('local')\n\n"
] |
[
1
] |
[] |
[] |
[
"informix",
"python"
] |
stackoverflow_0001385731_informix_python.txt
|
Q:
'Query' object has no attribute 'kind' when using appcfg.py download_data
I'm having problems with bulk downloads -- not all of my data is being pulled down.
I'm still debugging, but I see in my console:
Traceback (most recent call last):
File "/Users/matthew/local/opt/google_appengine/google/appengine/tools/adaptive_thread_pool.py", line 150, in WorkOnItems
status, instruction = item.PerformWork(self.__thread_pool)
File "/Users/matthew/local/opt/google_appengine/google/appengine/tools/bulkloader.py", line 675, in PerformWork
transfer_time = self._TransferItem(thread_pool)
File "/Users/matthew/local/opt/google_appengine/google/appengine/tools/bulkloader.py", line 1054, in _TransferItem
download_result = self.request_manager.GetEntities(self)
File "/Users/matthew/local/opt/google_appengine/google/appengine/tools/bulkloader.py", line 1274, in GetEntities
query = key_range_item.key_range.make_directed_datastore_query(self.kind)
File "/Users/matthew/local/opt/google_appengine/google/appengine/ext/key_range/__init__.py", line 246, in make_directed_datastore_query
query = self.filter_datastore_query(query)
File "/Users/matthew/local/opt/google_appengine/google/appengine/ext/key_range/__init__.py", line 175, in filter_datastore_query
return EmptyDatastoreQuery(query.kind)
AttributeError: 'Query' object has no attribute 'kind'
[INFO ] An error occurred. Shutting down...
.....[ERROR ] Error in Thread-7: 'Query' object has no attribute 'kind'
[INFO ] Have 83 entities, 0 previously transferred
[INFO ] 83 entities (0 bytes) transferred in 2.5 seconds
Any ideas? For this test, I'm only exporting data for a single Model, but each record does have 2 references to another Model.
A:
It looks like you've encountered a bug in the bulk downloader, unfortunately. Can you please file a bug report here? It'd help if you can supply the model definition and bulk loader exporter subclass definition.
|
'Query' object has no attribute 'kind' when using appcfg.py download_data
|
I'm having problems with bulk downloads -- not all of my data is being pulled down.
I'm still debugging, but I see in my console:
Traceback (most recent call last):
File "/Users/matthew/local/opt/google_appengine/google/appengine/tools/adaptive_thread_pool.py", line 150, in WorkOnItems
status, instruction = item.PerformWork(self.__thread_pool)
File "/Users/matthew/local/opt/google_appengine/google/appengine/tools/bulkloader.py", line 675, in PerformWork
transfer_time = self._TransferItem(thread_pool)
File "/Users/matthew/local/opt/google_appengine/google/appengine/tools/bulkloader.py", line 1054, in _TransferItem
download_result = self.request_manager.GetEntities(self)
File "/Users/matthew/local/opt/google_appengine/google/appengine/tools/bulkloader.py", line 1274, in GetEntities
query = key_range_item.key_range.make_directed_datastore_query(self.kind)
File "/Users/matthew/local/opt/google_appengine/google/appengine/ext/key_range/__init__.py", line 246, in make_directed_datastore_query
query = self.filter_datastore_query(query)
File "/Users/matthew/local/opt/google_appengine/google/appengine/ext/key_range/__init__.py", line 175, in filter_datastore_query
return EmptyDatastoreQuery(query.kind)
AttributeError: 'Query' object has no attribute 'kind'
[INFO ] An error occurred. Shutting down...
.....[ERROR ] Error in Thread-7: 'Query' object has no attribute 'kind'
[INFO ] Have 83 entities, 0 previously transferred
[INFO ] 83 entities (0 bytes) transferred in 2.5 seconds
Any ideas? For this test, I'm only exporting data for a single Model, but each record does have 2 references to another Model.
|
[
"It looks like you've encountered a bug in the bulk downloader, unfortunately. Can you please file a bug report here? It'd help if you can supply the model definition and bulk loader exporter subclass definition.\n"
] |
[
1
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0001386191_google_app_engine_python.txt
|