content (stringlengths 85–101k) | title (stringlengths 0–150) | question (stringlengths 15–48k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (stringlengths 35–137)
---|---|---|---|---|---|---|---|---
Q:
Login input
Suppose my system login ID is tom2deu, and I have one Python program. Now I am going to modify this Python program.
My question:
Can we print my login ID to a separate notepad or any other file?
That is, can we record the details (login ID) of any person who has logged into the system and modified the program?
A:
I'm not sure what problem you're trying to solve, but if you want to track changes to source files you should probably use a version control system such as Subversion. In a nutshell, it will track all the changes to your source files and also manage conflicts (when two people try to change a file at the same time).
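If you go down that route, "who changed this file" becomes a repository query rather than something your program records itself; e.g., from Python (assuming the svn command-line client is installed, and employee.py is a hypothetical versioned file):
import subprocess

# Print the last commit that touched the file: revision, author and date.
p = subprocess.Popen(['svn', 'log', '-l', '1', 'employee.py'],
                     stdout=subprocess.PIPE)
out, _ = p.communicate()
print out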
A:
If you want a general solution you could use pyinotify, which is a wrapper for the Linux kernel's inotify feature (kernel version >= 2.6.13). With it you can register for certain events in the filesystem, as in the following code:
import os
from pyinotify import WatchManager, ThreadedNotifier, ProcessEvent, EventsCodes

file_to_monitor = "/tmp/test.py"

class FSEventHook(ProcessEvent):
    def __init__(self, watch_path):
        ProcessEvent.__init__(self)
        wm = WatchManager()
        wm.add_watch(watch_path, EventsCodes.ALL_FLAGS['IN_CLOSE_WRITE'], rec=False)
        self.notifier = ThreadedNotifier(wm, self)

    def start(self):
        self.notifier.start()

    def process_IN_CLOSE_WRITE(self, event):
        # event.pathname holds the full path of the file that was written
        if os.path.isfile(event.pathname):
            print "%s changed" % event.pathname

fshook = FSEventHook(file_to_monitor)
fshook.start()
The following events are supported: IN_MOVED_FROM, IN_CREATE, IN_ONESHOT, IN_IGNORED, IN_ONLYDIR, IN_Q_OVERFLOW, IN_MOVED_TO, IN_DELETE, IN_DONT_FOLLOW, IN_CLOSE_WRITE, IN_MOVE_SELF, IN_ACCESS, IN_MODIFY, IN_MASK_ADD, IN_CLOSE_NOWRITE, IN_ISDIR, IN_UNMOUNT, IN_DELETE_SELF, ALL_EVENTS, IN_OPEN, IN_ATTRIB. For each of them you have to implement a corresponding process_XXX() method, which will be called back when the event is triggered.
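For instance, reporting deletions is just a matter of registering the matching flag and naming the handler accordingly (a sketch following the same old pyinotify API as the answer; /tmp is an arbitrary example path):
from pyinotify import WatchManager, ThreadedNotifier, ProcessEvent, EventsCodes

class DeleteHook(ProcessEvent):
    def process_IN_DELETE(self, event):
        # called whenever an entry under the watched directory is removed
        print "%s deleted" % event.pathname

wm = WatchManager()
wm.add_watch("/tmp", EventsCodes.ALL_FLAGS['IN_DELETE'], rec=False)
notifier = ThreadedNotifier(wm, DeleteHook())
notifier.start()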
A:
Try these:
import os
print os.environ['USERNAME']   # 'USERNAME' on Windows; use 'USER' on Unix

or
print os.getlogin()

Then save the result in a variable and use file handling to store it in a text file.
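As a minimal sketch of that last step (the file name audit.log is just an example):
import getpass

login_id = getpass.getuser()  # portable lookup of the current login name

# append the login ID to a log file
log = open("audit.log", "a")
log.write("modified by: %s\n" % login_id)
log.close()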
A:
What you are asking is whether you can track who made changes to a file. That's not a Python question, but a question for the operating system. To be able to track who changed a file, you need to have an auditing system installed. If you use Linux, it has an audit subsystem that you can configure to track this information, I think.
|
Login input
|
Suppose my system login ID is tom2deu, and I have one Python program. Now I am going to modify this Python program.
My question:
Can we print my login ID to a separate notepad or any other file?
That is, can we record the details (login ID) of any person who has logged into the system and modified the program?
|
[
"I'm not sure what problem you're trying to solve, but if you want to track changes to source files you should probably use a version control system such as Subversion. In a nutshell, it will track all the changes to your source files and also manage conflicts (when two people try to change a file at the same time).\n",
"If you want a general solution you should take pyinotify, which is a wrapper for the Linux kernel's inotify feature (kernel version >= 2.6.13). With it you can register for certain events in the filesystem, like e.g. in the following code:\nfrom pyinotify import WatchManager, ThreadedNotifier, ProcessEvent, EventsCodes\n\nfile_to_monitor = \"/tmp/test.py\"\n\nclass FSEventHook(ProcessEvent):\n def __init__(self, watch_path):\n ProcessEvent.__init__(self)\n\n wm = WatchManager()\n wm.add_watch(watch_path, EventsCodes.ALL_FLAGS['IN_CLOSE_WRITE'], rec=False)\n self.notifier = ThreadedNotifier(wm, self)\n\n def start(self):\n self.notifier.start()\n\n def process_IN_CLOSE_WRITE(self, event):\n if os.path.isfile(event.pathname):\n print \"%s changed\"%pathname\n\nfshook = FSEventHook(file_to_monitor)\nfshook.start()\n\nThe following events are supported: IN_MOVED_FROM, IN_CREATE, IN_ONESHOT, IN_IGNORED, IN_ONLYDIR, IN_Q_OVERFLOW, IN_MOVED_TO, IN_DELETE, IN_DONT_FOLLOW, IN_CLOSE_WRITE, IN_MOVE_SELF, IN_ACCESS, IN_MODIFY, IN_MASK_ADD, IN_CLOSE_NOWRITE, IN_ISDIR, IN_UNMOUNT, IN_DELETE_SELF, ALL_EVENTS, IN_OPEN, IN_ATTRIB. For each of them you have to implement its own process_XXX() method, which will be called back if the event is triggered.\n",
"try these..\nimport os\nprint os.environ['USERNAME']\nor \nos.getlogin()\nthen save in a variable and use file handling to store it as a text file..\n",
"What you are asking is if you can track who made changes to a file. And that's not a Python question, but a question of the operating system. To be able to track who changed a file, you need to have an auditing system installed. If you use Linux, it has an audit subsystem that you can configure to track this information, I think.\n"
] |
[
2,
2,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001030966_python.txt
|
Q:
what is python equivalent to PHP $_SERVER?
I couldn't find a Python equivalent to PHP's $_SERVER.
Is there one? Or, what methods produce equivalent results?
Thanks in advance.
A:
Using mod_wsgi, which I would recommend over mod_python (long story, but trust me)... Your application is passed an environ dictionary, like so:
def application(environ, start_response):
    ...

And the environ dict contains the typical elements from $_SERVER in PHP:
...
environ['REQUEST_URI']
...
And so on.
http://www.modwsgi.org/
Good Luck
REVISION
The real correct answer is use something like Flask
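As a minimal sketch of the first suggestion (plain WSGI, no framework; runnable under any WSGI server such as mod_wsgi):
def application(environ, start_response):
    # environ plays the role of PHP's $_SERVER: one dict per request
    body = "\n".join("%s = %s" % (k, v) for k, v in sorted(environ.items()))
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]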
A:
You don't state it explicitly, but I assume you are using mod_python? If so, (and if you don't want to use mod_wsgi instead as suggested earlier) take a look at the documentation for the request object. It contains most of the attributes you'd find in $_SERVER.
An example, to get the full URI of the request, you'd do this:
def yourHandler(req):
    querystring = req.parsed_uri[apache.URI_QUERY]
The querystring attribute will now contain the request's querystring, that is, the part after the '?'. (So, for http://www.example.com/index?this=test, querystring would be this=test)
|
what is python equivalent to PHP $_SERVER?
|
I couldn't find a Python equivalent to PHP's $_SERVER.
Is there one? Or, what methods produce equivalent results?
Thanks in advance.
|
[
"Using mod_wsgi, which I would recommend over mod_python (long story but trust me) ... Your application is passed an environment variable such as:\ndef application(environ, start_response):\n ...\n\nAnd the environment contains typical elements from $_SERVER in PHP\n...\nenviron['REQUEST_URI'];\n...\n\nAnd so on.\nhttp://www.modwsgi.org/\nGood Luck\nREVISION\nThe real correct answer is use something like Flask\n",
"You don't state it explicitly, but I assume you are using mod_python? If so, (and if you don't want to use mod_wsgi instead as suggested earlier) take a look at the documentation for the request object. It contains most of the attributes you'd find in $_SERVER.\nAn example, to get the full URI of the request, you'd do this:\ndef yourHandler(req):\n querystring=req.parsed_uri[apache.URI_QUERY]\n\nThe querystring attribute will now contain the request's querystring, that is, the part after the '?'. (So, for http://www.example.com/index?this=test, querystring would be this=test)\n"
] |
[
11,
1
] |
[] |
[] |
[
"php",
"python"
] |
stackoverflow_0001031192_php_python.txt
|
Q:
encryption/decryption of one time password in python
How do I encrypt a one-time password using a user's public key, and then recover it with the user's private key? I need to do this in Python.
A:
You can use Python's encryption library PyCrypto (www.pycrypto.org). Here's an overview of public key encryption using PyCrypto: http://www.dlitz.net/software/pycrypto/doc/#crypto-publickey-public-key-algorithms
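A minimal sketch using PyCrypto's raw RSA primitives (textbook RSA without padding -- fine as an illustration, not for production use; the 1024-bit size and the OTP value are arbitrary):
import os
from Crypto.PublicKey import RSA

key = RSA.generate(1024, os.urandom)         # the user's key pair
public_key = key.publickey()

otp = "123456"                               # the one-time password
ciphertext = public_key.encrypt(otp, 32)[0]  # encrypt() returns a tuple

recovered = key.decrypt(ciphertext)
assert recovered == otp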
A:
Use an encryption library, for example pyopenssl, which looks more up-to-date than pycrypto.
pyopenssl is a rather thin wrapper around (a subset of) the OpenSSL
library. With thin wrapper I mean that a lot of the object methods do
nothing more than calling a corresponding function in the OpenSSL
library.
|
encryption/decryption of one time password in python
|
How do I encrypt a one-time password using a user's public key, and then recover it with the user's private key? I need to do this in Python.
|
[
"You can use Python's encryption library called PyCrypto (www.pycrypto.org). Here's some overview of Public Key encryption using PyCrypto: http://www.dlitz.net/software/pycrypto/doc/#crypto-publickey-public-key-algorithms\n",
"Use an encryption library, for example pyopenssl, which looks more up-to-date then pycrypto.\n\npyopenssl is a rather thin wrapper around (a subset of) the OpenSSL\n library. With thin wrapper I mean that a lot of the object methods do\n nothing more than calling a corresponding function in the OpenSSL\n library.\n\n"
] |
[
4,
0
] |
[] |
[] |
[
"encryption",
"python"
] |
stackoverflow_0001031588_encryption_python.txt
|
Q:
Profiling self and arguments in python?
How do I profile a call that involves self and arguments in python?
def performProfile(self):
    import cProfile
    self.profileCommand(1000000)

def profileCommand(self, a):
    for i in a:
        pass

In the above example, how would I profile just the call to profileCommand? I figured out I need to use runctx for the arguments, but how do I deal with self? (The code actually involves a UI, so it's hard to pull the call out to profile it separately.)
A:
You need to pass the globals/locals dicts, and pass as the first argument the statement you would normally type, e.g.:
cProfile.runctx("self.profileCommand(100)", globals(),locals())
Use something like this:
class A(object):
    def performProfile(self):
        import cProfile
        cProfile.runctx("self.profileCommand(100)", globals(), locals())

    def profileCommand(self, a):
        for i in xrange(a):
            pass
        print "end."

A().performProfile()
And don't profile the whole UI; profile the specific function or computation.
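If you want to inspect the results later rather than dump them to stdout, runctx also accepts a filename (profile.out here is just an example name), which you can then read back with pstats:
import cProfile
import pstats

# inside performProfile, as in the answer above:
cProfile.runctx("self.profileCommand(100)", globals(), locals(), "profile.out")

# later, anywhere:
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)  # ten most expensive calls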
|
Profiling self and arguments in python?
|
How do I profile a call that involves self and arguments in python?
def performProfile(self):
    import cProfile
    self.profileCommand(1000000)

def profileCommand(self, a):
    for i in a:
        pass

In the above example, how would I profile just the call to profileCommand? I figured out I need to use runctx for the arguments, but how do I deal with self? (The code actually involves a UI, so it's hard to pull the call out to profile it separately.)
|
[
"you need to pass locals/globals dict and pass first argument what you will usually type\ne.g.\ncProfile.runctx(\"self.profileCommand(100)\", globals(),locals())\n\nuse something like this\nclass A(object):\n def performProfile(self):\n import cProfile\n cProfile.runctx(\"self.profileCommand(100)\", globals(),locals())\n\n def profileCommand(self, a):\n for i in xrange(a):\n pass\n print \"end.\"\n\nA().performProfile()\n\nand don't call whole UI in profile , profile the specific function or computation\n"
] |
[
13
] |
[] |
[] |
[
"profiling",
"python"
] |
stackoverflow_0001031657_profiling_python.txt
|
Q:
"Adding" Dictionaries in Python?
Possible Duplicate:
python dict.add_by_value(dict_2) ?
My input is two dictionaries that have string keys and integer values. I want to add the two dictionaries so that the result has all the keys of the input dictionaries, and the values are the sum of the input dictionaries' values.
For clarity, if a key appears in only one of the inputs, that key/value will appear in the result, whereas if the key appears in both dictionaries, the sum of values will appear in the result.
For example, say my input is:
a = dict()
a['cat'] = 1
a['fish'] = 10
a['aardvark'] = 1000
b = dict()
b['cat'] = 2
b['dog'] = 200
b['aardvark'] = 2000
I would like the result to be:
{'cat': 3, 'fish': 10, 'dog': 200, 'aardvark': 3000}
Knowing Python there must be a one-liner to get this done (it doesn't really have to be one line...). Any thoughts?
A:
How about this:
dict( [ (n, a.get(n, 0)+b.get(n, 0)) for n in set(a)|set(b) ] )
Or without creating an intermediate list (generator is enough):
dict( (n, a.get(n, 0)+b.get(n, 0)) for n in set(a)|set(b) )
Post Scriptum:
As a commenter correctly pointed out, there is an easier way to implement this with the new (from Py2.7) collections.Counter class. As far as I remember, this version was not available when I wrote the answer:
from collections import Counter
dict(Counter(a)+Counter(b))
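One caveat worth knowing (not mentioned in the original answer): Counter addition keeps only positive totals, so keys whose sum is zero or negative silently disappear:
from collections import Counter

a = {'cat': 1, 'debt': -2}
b = {'cat': 2, 'debt': 2}

print dict(Counter(a) + Counter(b))  # {'cat': 3} -- 'debt' (total 0) is dropped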
A:
Result in a:
for elem in b:
    a[elem] = a.get(elem, 0) + b[elem]

Result in c:
c = dict(a)
for elem in b:
    c[elem] = a.get(elem, 0) + b[elem]
A:
Not in one line, but ...
import itertools
import collections
a = dict()
a['cat'] = 1
a['fish'] = 10
a['aardvark'] = 1000
b = dict()
b['cat'] = 2
b['dog'] = 200
b['aardvark'] = 2000
c = collections.defaultdict(int)
for k, v in itertools.chain(a.iteritems(), b.iteritems()):
    c[k] += v
You can easily extend it to a larger number of dictionaries.
A:
One-liner (as sort of requested): get the key lists, add them, discard duplicates, iterate over the result with a list comprehension, and return (key, value) pairs with the sum if the key is in both dicts, or just the individual value if not. Wrap in dict.
>>> dict([(x,a[x]+b[x]) if (x in a and x in b) else (x,a[x]) if (x in a) else (x,b[x]) for x in set(a.keys()+b.keys())])
{'aardvark': 3000, 'fish': 10, 'dog': 200, 'cat': 3}
|
"Adding" Dictionaries in Python?
|
Possible Duplicate:
python dict.add_by_value(dict_2) ?
My input is two dictionaries that have string keys and integer values. I want to add the two dictionaries so that the result has all the keys of the input dictionaries, and the values are the sum of the input dictionaries' values.
For clarity, if a key appears in only one of the inputs, that key/value will appear in the result, whereas if the key appears in both dictionaries, the sum of values will appear in the result.
For example, say my input is:
a = dict()
a['cat'] = 1
a['fish'] = 10
a['aardvark'] = 1000
b = dict()
b['cat'] = 2
b['dog'] = 200
b['aardvark'] = 2000
I would like the result to be:
{'cat': 3, 'fish': 10, 'dog': 200, 'aardvark': 3000}
Knowing Python there must be a one-liner to get this done (it doesn't really have to be one line...). Any thoughts?
|
[
"How about that:\ndict( [ (n, a.get(n, 0)+b.get(n, 0)) for n in set(a)|set(b) ] )\n\nOr without creating an intermediate list (generator is enough):\ndict( (n, a.get(n, 0)+b.get(n, 0)) for n in set(a)|set(b) )\n\n\nPost Scriptum:\nAs a commentator addressed correctly, there is a way to implement that easier with the new (from Py2.7) collections.Counter class. As much I remember, this version was not available when I wrote the answer:\nfrom collections import Counter\ndict(Counter(a)+Counter(b))\n\n",
"result in a:\nfor elem in b:\n a[elem] = a.get(elem, 0) + b[elem]\n\nresult in c:\nc = dict(a)\nfor elem in b:\n c[elem] = a.get(elem, 0) + b[elem]\n\n",
"Not in one line, but ...\nimport itertools\nimport collections\na = dict()\na['cat'] = 1\na['fish'] = 10\na['aardvark'] = 1000\nb = dict()\nb['cat'] = 2\nb['dog'] = 200\nb['aardvark'] = 2000\nc = collections.defaultdict(int)\nfor k, v in itertools.chain(a.iteritems(), b.iteritems()):\n c[k] += v\n\nYou can easily extend it to a larger number of dictionaries.\n",
"One liner (as sortof requested): get key lists, add them, discard duplicates, iterate over result with list comprehension, return (key,value) pairs for the sum if the key is in both dicts, or just the individual values if not. Wrap in dict.\n>>> dict([(x,a[x]+b[x]) if (x in a and x in b) else (x,a[x]) if (x in a) else (x,b[x]) for x in set(a.keys()+b.keys())])\n{'aardvark': 3000, 'fish': 10, 'dog': 200, 'cat': 3}\n\n"
] |
[
51,
15,
15,
4
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0001031199_dictionary_python.txt
|
Q:
Python: finding keys with unique values in a dictionary?
I receive a dictionary as input, and want to return a list of keys for which the dictionary values are unique in the scope of that dictionary.
I will clarify with an example. Say my input is dictionary a, constructed as follows:
a = dict()
a['cat'] = 1
a['fish'] = 1
a['dog'] = 2 # <-- unique
a['bat'] = 3
a['aardvark'] = 3
a['snake'] = 4 # <-- unique
a['wallaby'] = 5
a['badger'] = 5
The result I expect is ['dog', 'snake'].
There are obvious brute-force ways to achieve this; however, I wondered if there's a neat Pythonic way to get the job done.
A:
I think an efficient way, if the dict is large, would be:
countMap = {}
for v in a.itervalues():
    countMap[v] = countMap.get(v, 0) + 1
uni = [k for k, v in a.iteritems() if countMap[v] == 1]
A:
Note that this is actually brute force:
l = a.values()
b = [x for x in a if l.count(a[x]) == 1]
A:
Here is a solution that only requires traversing the dict once:
def unique_values(d):
    seen = {}       # maps value -> key
    result = set()  # keys with unique values
    for k, v in d.iteritems():
        if v in seen:
            result.discard(seen[v])
        else:
            seen[v] = k
            result.add(k)
    return list(result)
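For example, applied to the dict from the question (wrapped in sorted() here, since the set makes the output order undefined):
a = {'cat': 1, 'fish': 1, 'dog': 2, 'bat': 3,
     'aardvark': 3, 'snake': 4, 'wallaby': 5, 'badger': 5}

print sorted(unique_values(a))  # ['dog', 'snake']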
A:
>>> b = []
>>> import collections
>>> bag = collections.defaultdict(lambda: 0)
>>> for v in a.itervalues():
... bag[v] += 1
...
>>> b = [k for (k, v) in a.iteritems() if bag[v] == 1]
>>> b.sort() # optional
>>> print b
['dog', 'snake']
>>>
A:
A little more verbose, but does need only one pass over a:
revDict = {}
for k, v in a.iteritems():
    if v in revDict:
        revDict[v] = None
    else:
        revDict[v] = k

[x for x in revDict.itervalues() if x != None]

(I hope it works, since I can't test it here.)
A:
What about subclassing?
class UniqueValuesDict(dict):

    def __init__(self, *args):
        dict.__init__(self, *args)
        self._inverse = {}

    def __setitem__(self, key, value):
        if value in self.values():
            if value in self._inverse:
                del self._inverse[value]
        else:
            self._inverse[value] = key
        dict.__setitem__(self, key, value)

    def unique_values(self):
        return self._inverse.values()

a = UniqueValuesDict()

a['cat'] = 1
a['fish'] = 1
a[None] = 1
a['duck'] = 1
a['dog'] = 2 # <-- unique
a['bat'] = 3
a['aardvark'] = 3
a['snake'] = 4 # <-- unique
a['wallaby'] = 5
a['badger'] = 5

assert a.unique_values() == ['dog', 'snake']
A:
Here's another variation.
>>> import collections
>>> inverse= collections.defaultdict(list)
>>> for k,v in a.items():
... inverse[v].append(k)
...
>>> [ v[0] for v in inverse.values() if len(v) == 1 ]
['dog', 'snake']
I'm partial to this because the inverted dictionary is such a common design pattern.
|
Python: finding keys with unique values in a dictionary?
|
I receive a dictionary as input, and want to return a list of keys for which the dictionary values are unique in the scope of that dictionary.
I will clarify with an example. Say my input is dictionary a, constructed as follows:
a = dict()
a['cat'] = 1
a['fish'] = 1
a['dog'] = 2 # <-- unique
a['bat'] = 3
a['aardvark'] = 3
a['snake'] = 4 # <-- unique
a['wallaby'] = 5
a['badger'] = 5
The result I expect is ['dog', 'snake'].
There are obvious brute-force ways to achieve this; however, I wondered if there's a neat Pythonic way to get the job done.
|
[
"I think efficient way if dict is too large would be\ncountMap = {}\nfor v in a.itervalues():\n countMap[v] = countMap.get(v,0) + 1\nuni = [ k for k, v in a.iteritems() if countMap[v] == 1]\n\n",
"Note that this actually is a bruteforce:\nl = a.values()\nb = [x for x in a if l.count(a[x]) == 1]\n\n",
"Here is a solution that only requires traversing the dict once:\ndef unique_values(d):\n seen = {} # dict (value, key)\n result = set() # keys with unique values\n for k,v in d.iteritems():\n if v in seen:\n result.discard(seen[v])\n else:\n seen[v] = k\n result.add(k)\n return list(result)\n\n",
">>> b = []\n>>> import collections\n>>> bag = collections.defaultdict(lambda: 0)\n>>> for v in a.itervalues():\n... bag[v] += 1\n...\n>>> b = [k for (k, v) in a.iteritems() if bag[v] == 1]\n>>> b.sort() # optional\n>>> print b\n['dog', 'snake']\n>>>\n\n",
"A little more verbose, but does need only one pass over a:\nrevDict = {}\nfor k, v in a.iteritems():\n if v in revDict:\n revDict[v] = None\n else:\n revDict[v] = k\n\n[ x for x in revDict.itervalues() if x != None ]\n\n( I hope it works, since I can't test it here )\n",
"What about subclassing? \nclass UniqueValuesDict(dict):\n\n def __init__(self, *args):\n dict.__init__(self, *args)\n self._inverse = {}\n\n def __setitem__(self, key, value):\n if value in self.values():\n if value in self._inverse:\n del self._inverse[value]\n else:\n self._inverse[value] = key\n dict.__setitem__(self, key, value)\n\n def unique_values(self):\n return self._inverse.values()\n\na = UniqueValuesDict()\n\na['cat'] = 1\na['fish'] = 1\na[None] = 1\na['duck'] = 1\na['dog'] = 2 # <-- unique\na['bat'] = 3\na['aardvark'] = 3\na['snake'] = 4 # <-- unique\na['wallaby'] = 5\na['badger'] = 5\n\nassert a.unique_values() == ['dog', 'snake']\n\n",
"Here's another variation.\n>>> import collections\n>>> inverse= collections.defaultdict(list)\n>>> for k,v in a.items():\n... inverse[v].append(k)\n... \n>>> [ v[0] for v in inverse.values() if len(v) == 1 ]\n['dog', 'snake']\n\nI'm partial to this because the inverted dictionary is such a common design pattern.\n"
] |
[
14,
5,
5,
4,
2,
2,
0
] |
[
"You could do something like this (just count the number of occurrences for each value):\ndef unique(a):\n from collections import defaultdict\n count = defaultdict(lambda: 0)\n for k, v in a.iteritems():\n count[v] += 1\n for v, c in count.iteritems():\n if c <= 1:\n yield v\n\n",
"Use nested list comprehensions!\nprint [v[0] for v in \n dict([(v, [k for k in a.keys() if a[k] == v])\n for v in set(a.values())]).values()\n if len(v) == 1]\n\n"
] |
[
-1,
-2
] |
[
"dictionary",
"python"
] |
stackoverflow_0001032281_dictionary_python.txt
|
Q:
Check if only one variable in a list of variables is set
I'm looking for a simple method to check if only one variable in a list of variables has a True value.
I've looked at this logical XOR post and am trying to find a way to adapt it to multiple variables, where only one may be true.
Example
>>>TrueXor(1,0,0)
True
>>>TrueXor(0,0,1)
True
>>>TrueXor(1,1,0)
False
>>>TrueXor(0,0,0,0,0)
False
A:
There isn't one built in, but it's not too hard to roll your own:
def TrueXor(*args):
    return sum(args) == 1
Since "[b]ooleans are a subtype of plain integers" (source) you can sum the list of integers quite easily and you can also pass true booleans into this function as well.
So these two calls are equivalent:
TrueXor(1, 0, 0)
TrueXor(True, False, False)
If you want explicit boolean conversion: sum( bool(x) for x in args ) == 1.
A:
I think the sum-based solution is fine for the given example, but keep in mind that boolean predicates in python always short-circuit their evaluation. So you might want to consider something more consistent with all and any.
def any_one(iterable):
    it = iter(iterable)
    return any(it) and not any(it)
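The trick here is that both any() calls consume the same iterator: the first stops at the first truthy element, and the second scans only the remainder:
def any_one(iterable):
    it = iter(iterable)
    # first any(): advances 'it' to just past the first truthy value
    # second any(): True only if *another* truthy value follows
    return any(it) and not any(it)

print any_one([0, 1, 0])  # True  -- exactly one truthy value
print any_one([1, 1, 0])  # False -- the second any() finds another
print any_one([0, 0, 0])  # False -- the first any() is already False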
A:
>>> def f(*n):
... n = [bool(i) for i in n]
... return n.count(True) == 1
...
>>> f(0, 0, 0)
False
>>> f(1, 0, 0)
True
>>> f(1, 0, 1)
False
>>> f(1, 1, 1)
False
>>> f(0, 1, 0)
True
>>>
A:
The question you linked to already provides the solution for two variables. All you have to do is extend it to work on n variables:
import operator

def only_one_set(*vars):
    bools = [bool(v) for v in vars]
    # note: reduce with xor is a parity check, so an odd number of
    # true values (e.g. three) also returns True
    return reduce(operator.xor, bools, False)
>>> a, b, c, d, e = False, '', [], 10, -99
>>> only_one_set(a, b, c, d)
True
>>> only_one_set(a, b, c, d, e)
False
A:
Here's my straightforward approach. I've renamed it only_one since xor with more than one input is usually a parity checker, not an "only one" checker.
def only_one(*args):
    result = False
    for a in args:
        if a:
            if result:
                return False
            else:
                result = True
    return result
Testing:
>>> only_one(1,0,0)
True
>>> only_one(0,0,1)
True
>>> only_one(1,1,0)
False
>>> only_one(0,0,0,0,0)
False
>>> only_one(1,1,0,1)
False
|
Check if only one variable in a list of variables is set
|
I'm looking for a simple method to check if only one variable in a list of variables has a True value.
I've looked at this logical XOR post and am trying to find a way to adapt it to multiple variables, where only one may be true.
Example
>>>TrueXor(1,0,0)
True
>>>TrueXor(0,0,1)
True
>>>TrueXor(1,1,0)
False
>>>TrueXor(0,0,0,0,0)
False
|
[
"There isn't one built in but it's not to hard to roll you own:\ndef TrueXor(*args):\n return sum(args) == 1\n\nSince \"[b]ooleans are a subtype of plain integers\" (source) you can sum the list of integers quite easily and you can also pass true booleans into this function as well.\nSo these two calls are homogeneous:\nTrueXor(1, 0, 0)\nTrueXor(True, False, False)\n\nIf you want explicit boolean conversion: sum( bool(x) for x in args ) == 1.\n",
"I think the sum-based solution is fine for the given example, but keep in mind that boolean predicates in python always short-circuit their evaluation. So you might want to consider something more consistent with all and any.\ndef any_one(iterable):\n it = iter(iterable)\n return any(it) and not any(it)\n\n",
">>> def f(*n):\n... n = [bool(i) for i in n]\n... return n.count(True) == 1\n...\n>>> f(0, 0, 0)\nFalse\n>>> f(1, 0, 0)\nTrue\n>>> f(1, 0, 1)\nFalse\n>>> f(1, 1, 1)\nFalse\n>>> f(0, 1, 0)\nTrue\n>>>\n\n",
"The question you linked to already provides the solution for two variables. All you have to do is extend it to work on n variables:\nimport operator\n\ndef only_one_set(*vars):\n bools = [bool(v) for v in vars]\n return reduce(operator.xor, bools, False)\n\n>>> a, b, c, d, e = False, '', [], 10, -99\n>>> only_one_set(a, b, c, d)\nTrue\n>>> only_one_set(a, b, c, d, e)\nFalse\n\n",
"Here's my straightforward approach. I've renamed it only_one since xor with more than one input is usually a parity checker, not an \"only one\" checker.\ndef only_one(*args):\n result = False\n for a in args:\n if a:\n if result:\n return False\n else:\n result = True\n return result\n\nTesting:\n>>> only_one(1,0,0)\nTrue\n>>> only_one(0,0,1)\nTrue\n>>> only_one(1,1,0)\nFalse\n>>> only_one(0,0,0,0,0)\nFalse\n>>> only_one(1,1,0,1)\nFalse\n\n"
] |
[
26,
10,
5,
1,
1
] |
[] |
[] |
[
"python",
"xor"
] |
stackoverflow_0001032411_python_xor.txt
|
Q:
Decorators that are properties of decorated objects?
I want to create a decorator that allows me to refer back to the decorated object and grab another decorator from it, the same way you can use setter/deleter on properties:
@property
def x(self):
    return self._x

@x.setter
def x(self, y):
    self._x = y
Specifically, I'd like it to act basically the same as property, but emulate a sequence instead of a single value. Here's my first shot at it, but it doesn't seem to work:
def listprop(indices):
    def dec(func):
        class c(object):
            def __init__(self, l):
                self.l = l
            def __getitem__(self, i):
                if not i in self.l:
                    raise Exception("Invalid item: " + i)
                return func(i)
            @staticmethod
            def setter(func):
                def set(self, i, val):
                    if not i in self.l:
                        raise Exception("Invalid item: " + i)
                    func(i, val)
                c.__setitem__ = set
        return c(indices)
    return dec
# ...
class P:
    @listprop(range(3))
    def prop(self, i):
        return get_prop(i)

    @prop.setter
    def prop(self, i, val):
        set_prop(i, val)
I'm pretty sure that c.__setitem__ = set is wrong, but I can't figure out how to get a reference to the instance at that point. Ideas?
Alex Martelli's solution works on 2.6, but something about it is failing on 2.4 and 2.5 (I'd prefer to have it work on these older versions as well, though it's not strictly necessary):
2.4:
>>> p = P()
>>> p.prop
>>> p.prop[0]
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: unsubscriptable object
2.5:
>>> p = P()
>>> p.prop
>>> p.prop[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is unsubscriptable
2.6:
>>> p = P()
>>> p.prop
<__main__.c object at 0x017F5730>
>>> p.prop[0]
0
A:
I fixed many little details and the following version seems to work as you require:
def listprop(indices):
    def dec(func):
        class c(object):
            def __init__(self, l, obj=None):
                self.l = l
                self.obj = obj
            def __get__(self, obj, cls=None):
                return c(self.l, obj)
            def __getitem__(self, i):
                if not i in self.l:
                    raise Exception("Invalid item: " + i)
                return func(self.obj, i)
            def setter(self, sfunc):
                def doset(self, i, val):
                    if not i in self.l:
                        raise Exception("Invalid item: " + i)
                    sfunc(self.obj, i, val)
                c.__setitem__ = doset
                return self
        result = c(indices)
        return result
    return dec
# ...
class P:
    @staticmethod
    def get_prop(i): return i*100

    @staticmethod
    def set_prop(i, val): print 'set %s to %s' % (i, val)

    @listprop(range(3))
    def prop(self, i):
        return self.get_prop(i)

    @prop.setter
    def prop(self, i, val):
        self.set_prop(i, val)
As you see, assigning to c.__setitem__ was not the problem -- there were others, such as c lacking a __get__ (so it wasn't a descriptor, see this guide) and setter returning None (so p.prop ended up as None).
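For reference, the descriptor protocol that the answer leans on boils down to this (a minimal sketch, separate from the listprop code itself):
class Descr(object):
    def __get__(self, obj, cls=None):
        # invoked on attribute access through a new-style class
        return "accessed via %r" % obj

class Holder(object):
    attr = Descr()

print Holder().attr  # the __get__ hook runs instead of a plain lookup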
|
Decorators that are properties of decorated objects?
|
I want to create a decorator that allows me to refer back to the decorated object and grab another decorator from it, the same way you can use setter/deleter on properties:
@property
def x(self):
    return self._x

@x.setter
def x(self, y):
    self._x = y
Specifically, I'd like it to act basically the same as property, but emulate a sequence instead of a single value. Here's my first shot at it, but it doesn't seem to work:
def listprop(indices):
    def dec(func):
        class c(object):
            def __init__(self, l):
                self.l = l
            def __getitem__(self, i):
                if not i in self.l:
                    raise Exception("Invalid item: " + i)
                return func(i)
            @staticmethod
            def setter(func):
                def set(self, i, val):
                    if not i in self.l:
                        raise Exception("Invalid item: " + i)
                    func(i, val)
                c.__setitem__ = set
        return c(indices)
    return dec
# ...
class P:
    @listprop(range(3))
    def prop(self, i):
        return get_prop(i)

    @prop.setter
    def prop(self, i, val):
        set_prop(i, val)
I'm pretty sure that c.__setitem__ = set is wrong, but I can't figure out how to get a reference to the instance at that point. Ideas?
Alex Martelli's solution works on 2.6, but something about it is failing on 2.4 and 2.5 (I'd prefer to have it work on these older versions as well, though it's not strictly necessary):
2.4:
>>> p = P()
>>> p.prop
>>> p.prop[0]
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: unsubscriptable object
2.5:
>>> p = P()
>>> p.prop
>>> p.prop[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is unsubscriptable
2.6:
>>> p = P()
>>> p.prop
<__main__.c object at 0x017F5730>
>>> p.prop[0]
0
|
[
"I fixed many little details and the following version seems to work as you require:\ndef listprop(indices):\n def dec(func):\n class c(object):\n def __init__(self, l, obj=None):\n self.l = l\n self.obj = obj\n def __get__(self, obj, cls=None):\n return c(self.l, obj)\n def __getitem__(self, i):\n if not i in self.l:\n raise Exception(\"Invalid item: \" + i)\n return func(self.obj, i)\n def setter(self, sfunc):\n def doset(self, i, val):\n if not i in self.l:\n raise Exception(\"Invalid item: \" + i)\n sfunc(self.obj, i, val)\n c.__setitem__ = doset\n return self\n result = c(indices)\n return result\n return dec\n\n# ...\nclass P:\n @staticmethod\n def get_prop(i): return i*100\n\n @staticmethod\n def set_prop(i, val): print 'set %s to %s' % (i, val)\n\n @listprop(range(3))\n def prop(self, i):\n return self.get_prop(i)\n\n @prop.setter\n def prop(self, i, val):\n self.set_prop(i, val)\n\nAs you see, assigning to c.__setitem__ was not the problem -- there were others, such as c lacking a __get__ (so it wasn't a descriptor, see this guiide) and setter returning None (so p.prop ended up as None).\n"
] |
[
3
] |
[] |
[] |
[
"decorator",
"python"
] |
stackoverflow_0001033107_decorator_python.txt
|
Q:
Simple User management example for Google App Engine?
I am a newbie to Google App Engine. While going through the tutorial, I found that several things we do in PHP/MySQL are not available in GAE. For example, the datastore has no auto-increment feature. I am also confused about session management in GAE. Overall I am confused and cannot visualize the whole thing.
Please advise me on a simple user management system with user registration, user login, user logout, and sessions (create, manage, destroy) using the datastore. Also please advise me where I can find simple but effective examples.
Thanks in advance.
A:
I tend to use my own user and session management.
For my web handlers I will attach a decorator called session and one called authorize. The session decorator will attach a session to every request, and the authorize decorator will make sure that the user is authorised.
(A word of caution, the authorize decorator is specific to how I develop my applications - the username being the first parameter in most requests).
So for example a web handler may look like:
class UserProfile(webapp.RequestHandler):
    @session
    @authorize
    def get(self, user):
        # Do some funky stuff.
        # The session is attached to the self object.
        someObjectAttachedToSession = self.SessionObj.SomeStuff
        self.response.out.write("hello %s" % user)
In the above code, the session decorator attaches some session state that I need, based on the cookies present on the request. The authorize decorator will make sure that the user can only access the page if the session is the correct one.
The decorator code is below:
import functools
from model import Session
import logging

def authorize(redirectTo="/"):
    def factory(method):
        'Ensures that when an auth cookie is presented to the request it is valid'
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):

            # Get the session parameters
            auth_id = self.request.cookies.get('auth_id', '')
            session_id = self.request.cookies.get('session_id', '')

            # Check the db for the session
            session = Session.GetSession(session_id, auth_id)

            if session is None:
                self.redirect(redirectTo)
                return
            else:
                if session.settings is None:
                    self.redirect(redirectTo)
                    return

                username = session.settings.key().name()

                if len(args) > 0:
                    if username != args[0]:
                        # The user is not allowed to view this page.
                        self.redirect(redirectTo)
                        return

            result = method(self, *args, **kwargs)

            return result
        return wrapper
    return factory

def session(method):
    'Ensures that the session object (if it exists) is attached to the request.'
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):

        # Get the session parameters
        auth_id = self.request.cookies.get('auth_id', '')
        session_id = self.request.cookies.get('session_id', '')

        # Check the db for the session
        session = Session.GetSession(session_id, auth_id)

        if session is None:
            session = Session()
            session.session_id = Session.MakeId()
            session.auth_token = Session.MakeId()
            session.put()

        # Attach the session to the method
        self.SessionObj = session

        # Call the handler.
        result = method(self, *args, **kwargs)

        self.response.headers.add_header('Set-Cookie', 'auth_id=%s; path=/; HttpOnly' % str(session.auth_token))
        self.response.headers.add_header('Set-Cookie', 'session_id=%s; path=/; HttpOnly' % str(session.session_id))

        return result
    return wrapper

def redirect(method, redirect="/user/"):
    'When a known user is logged in, redirect them to their home page'
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        try:
            if self.SessionObj is not None:
                if self.SessionObj.settings is not None:
                    # Check that the session is correct
                    username = self.SessionObj.settings.key().name()

                    self.redirect(redirect + username)
                    return
        except:
            pass
        return method(self, *args, **kwargs)
    return wrapper
A:
Django is your best bet -- with the version I pointed you to, auth and sessions should both "just work" as per the Django docs. This article gives simple instructions and an example of how to proceed from there.
For Django sessions, see here; for Django auth, here.
A:
You don't write user management and registration and all that, because you use Google's own authentication services. This is all included in the App Engine documentation.
|
Simple User management example for Google App Engine?
|
I am a newbie to Google App Engine. While going through the tutorial, I found that several things we do in PHP/MySQL are not available in GAE. For example, the datastore has no auto-increment feature. I am also confused about session management in GAE. Overall I am confused and cannot visualize the whole thing.
Please advise me on a simple user management system with user registration, user login, user logout, and sessions (create, manage, destroy) using the datastore. Also please advise me where I can find simple but effective examples.
Thanks in advance.
|
[
"I tend to use my own user and session manangement\nFor my web handlers I will attach a decorator called session and one called authorize. The session decorator will attach a session to every request, and the authorize decorator will make sure that the user is authorised.\n(A word of caution, the authorize decorator is specific to how I develop my applications - the username being the first parameter in most requests).\nSo for example a web handler may look like:\nclass UserProfile(webapp.RequestHandler):\n @session\n @authorize\n def get(self, user):\n # Do some funky stuff\n # The session is attached to the self object.\n someObjectAttachedToSession = self.SessionObj.SomeStuff\n self.response.out.write(\"hello %s\" % user)\n\nIn the above code, the session decorator attaches some session stuff that I need based on the cookies that are present on the request. The authorize header will make sure that the user can only access the page if the session is the correct one.\nThe decorators code are below:\nimport functools\nfrom model import Session\nimport logging\n\ndef authorize(redirectTo = \"/\"):\n def factory(method):\n 'Ensures that when an auth cookie is presented to the request that is is valid'\n @functools.wraps(method)\n def wrapper(self, *args, **kwargs):\n\n #Get the session parameters\n auth_id = self.request.cookies.get('auth_id', '')\n session_id = self.request.cookies.get('session_id', '')\n\n #Check the db for the session\n session = Session.GetSession(session_id, auth_id) \n\n if session is None:\n self.redirect(redirectTo)\n return\n else:\n if session.settings is None:\n self.redirect(redirectTo)\n return\n\n username = session.settings.key().name()\n\n if len(args) > 0: \n if username != args[0]:\n # The user is allowed to view this page.\n self.redirect(redirectTo)\n return\n\n result = method(self, *args, **kwargs)\n\n return result\n return wrapper\n return factory\n\ndef session(method):\n 'Ensures that the sessions object (if it exists) is attached to the request.'\n @functools.wraps(method)\n def wrapper(self, *args, **kwargs):\n\n #Get the session parameters\n auth_id = self.request.cookies.get('auth_id', '')\n session_id = self.request.cookies.get('session_id', '')\n\n #Check the db for the session\n session = Session.GetSession(session_id, auth_id) \n\n if session is None:\n session = Session()\n session.session_id = Session.MakeId()\n session.auth_token = Session.MakeId()\n session.put()\n\n # Attach the session to the method\n self.SessionObj = session \n\n #Call the handler. \n result = method(self, *args, **kwargs)\n\n self.response.headers.add_header('Set-Cookie', 'auth_id=%s; path=/; HttpOnly' % str(session.auth_token))\n self.response.headers.add_header('Set-Cookie', 'session_id=%s; path=/; HttpOnly' % str(session.session_id))\n\n return result\n return wrapper\n\ndef redirect(method, redirect = \"/user/\"):\n 'When a known user is logged in redirect them to their home page'\n @functools.wraps(method)\n def wrapper(self, *args, **kwargs):\n try: \n if self.SessionObj is not None:\n if self.SessionObj.settings is not None:\n # Check that the session is correct\n username = self.SessionObj.settings.key().name()\n\n self.redirect(redirect + username)\n return\n except:\n pass\n return method(self, *args, **kwargs)\n return wrapper\n\n",
"Django is your best bet -- with the version I pointed you to, auth and sessions should both \"just work\" as per the Django docs. this article gives simple instructions and example of how to proceed from there.\nFor Django sessions, see here; for Django auth, here.\n",
"You don't write user management and registration and all that, because you use Google's own authentication services. This is all included in the App Engine documentation.\n"
] |
[
22,
6,
1
] |
[] |
[] |
[
"google_app_engine",
"php",
"python"
] |
stackoverflow_0001030293_google_app_engine_php_python.txt
|
Q:
Pylons - use Python 2.5 or 2.6?
Which version of Python is recommended for Pylons, and why?
A:
Pylons itself says it needs at least 2.3, and recommends 2.4+. Since 2.6 is production ready, I'd use that.
A:
You can use Python 2.3 to 2.6, though 2.3 support will be dropped in the next version. You can't use Python 3 yet.
There's no real reason to favor Python 2.5 or 2.6 at this point. Use what works best for you.
A:
I'd say use 2.5:
There is one reason to favor 2.5 over 2.6: if you need to be compatible with the Python shipped with a Linux distribution or on Macs (I don't know what Python version Macs provide, but you get the idea).
Of course, if you need some feature of 2.6, go ahead, but if that's not the case, why require 2.6? Remember that your app will be hosted somewhere where restrictions apply to deployment.
If you will distribute your app as open source, even more so.
|
Pylons - use Python 2.5 or 2.6?
|
Which version of Python is recommended for Pylons, and why?
|
[
"Pylons itself says it needs at least 2.3, and recommends 2.4+. Since 2.6 is production ready, I'd use that.\n",
"You can use Python 2.3 to 2.6, though 2.3 support will be dropped in the next version. You can't use Python 3 yet.\nThere's no real reason to favor Python 2.5 or 2.6 at this point. Use what works best for you.\n",
"I'd say use 2.5 : \nthere is one reason to favor 2.5 over 2.6 : if you need to be compatible with the python given on a linux installation or on Macs (I dont' know what py version mac provide, but you get the idea).\nOf course, if you need some feature of 2.6, please do it, but if it's not the case why require 2.6 ? Remember that your app will be hosted somewhere where restrictions apply for deployment.\nIf you will distribute your app on opensource, even more so.\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"pylons",
"python"
] |
stackoverflow_0001033367_pylons_python.txt
|
Q:
Specifying TkInter Callbacks In Dictionary For Display Launcher Function
I am having trouble building a Python function that launches TkInter objects, with commands bound to menu buttons, using button specifications held in a dictionary.
SITUATION
I am building a GUI in Python using TkInter. I have written a Display class (based on the GuiMaker class in Lutz, "Programming Python") that should provide a window for data entry for a variety of entities, so the commands and display of any instance will vary. I would like to configure these entity-specific commands and displays in a dictionary file. The file is evaluated when the script is run, and its dictionary passed to a launcher function when a Display instance is called for. But the instances' commands can't find the instance methods I'm trying to bind to them.
SPECIFICATION IN FUNCTIONS WORKS
This isn't a problem when the Display instance is launched with configurations specified in a dedicated function. For instance this works fine:
def launchEmployee():
    display = ''
    menuBar = [('File', 0, [('Save', 0, (lambda: display.onSave()))])]
    title = 'Employee Data Entry'
    display_args = {'title': title,
                    'menuBar': menuBar}
    display = DisplayScreen(**display_args)
The DisplayScreen subclasses from the GuiMaker, which has methods for processing the menuBar object to create a menu. It has an onSave() method.
Done this way, the Display instance finds and runs its own onSave() method when the 'Save' button is clicked.
SPECIFICATION FROM DICTIONARY FILE DOESN'T WORK
But this doesn't work when I try to launch the Display instance from a launcher function, pulling its specification from a dictionary held in a separate file.
config_file:
{'menuBar':[('File', 0, [('Save', 0, (lambda: display.onSave()))])],
'title': 'Employee Data Entry'}
script file:
config = eval(open('config_file', 'r').read())
def launchDisplay(config):
    display = ''
    display = DisplayScreen(**config)
Run this way, clicking 'Save' generates an error, saying there is no global object 'display'.
THEORY: DICTIONARY CASE LOOKS FOR OBJECTS IN SCOPE AT EVAL() CALL
I speculate that in the function case, 'display' is a string object, whose lack of the method onSave() isn't a problem for the assignment to menuBar because the lookup of the method is deferred inside the lambda function. When the Display instance is assigned to the name 'display', this replaces the prior string binding, but Python still knows about 'display' and goes to it when asked for its onSave() method.
If so, the configuration case is failing because the 'display' object doesn't exist at all when the config dictionary is created by evaluation. This doesn't cause an error at the eval() call because, again, the lambda function hides the object from inspection until called. But when called, Python goes looking for the 'display' object in scope at the moment of the eval() call, finds nothing, and reports an error.
BUT: PUTTING EVAL() CALL IN SCOPE DOESN'T HELP
But I have tried moving the evaluation of the dictionary file into the function and after the creation of the 'display' string object, and this doesn't work either.
SO:
What is going on here?
How can I specify methods to be bound to commands in a dictionary to be accessed when instantiating the display object? It really seems better to specify these screens in a configuration file than in a host of duplicative functions.
A:
When your lambda executes is when scope applies, but the issue is a bit subtler.
In the first case that lambda is a nested function of launchEmployee so the Python compiler (when it compiles the enclosing function) knows to scan its body for references to local variables of the enclosing function and forms the closure appropriately
In the second case, the eval hides the nested function from the Python compiler at the time it's compiling the enclosing function, so it doesn't even know it's an enclosing function, nor that it should form a closure, or how.
I suggest you don't play around with names but rather insert the new display object into the lambda after instantiating it. That does require finding the lambda (or more than one lambda) but you may do it by systematically (e.g. recursively) walking all items in config's values, looking for ones which are instances of type(lambda:0) and adopting some convention such as, 'the widget being created is referred in those lambdas by the name "widget" and is the last argument (with a default value)'.
So you change your config file to:
{'menuBar': [('File', 0, [('Save', 0, (lambda widget=None: widget.onSave()))])],
 'title': 'Employee Data Entry'}
and, after display = DisplayScreen(**config), post-process config as follows:
def place_widget(widget, mess):
    if isinstance(mess, (list, tuple)):
        for item in mess:
            place_widget(widget, item)
    elif isinstance(mess, type(lambda: 0)):
        if mess.func_code.co_varnames[-1:] == ('widget',):
            mess.func_defaults = mess.func_defaults[:-1] + (widget,)
Admittedly somewhat-tricky code, but I don't see a straightforward way to do it within your specified context of an eval'd string made into a dict containing some lambdas.
I would normally recommend standard library module inspect for such introspection, but since this post-processing function must inevitably mess around with the func_defaults (inspect only examines, does not alter objects' internals like that), it seems more consistent to have all of its code churn at the same, pretty-down-deep level.
Edit: a simpler approach is possible if you don't insist on having widgets be only local variables, but can instead make them e.g. attributes of a global object.
So at module level you have, say:
class Bunch(object): pass
widgets = Bunch()
and in your function you assign the newly made widget to widgets.foobar instead of assigning to bare name foobar. Then your evaled lambda can be something like:
lambda: widgets.foobar.onSave()
and everything will be fine, because, this way, no closure is needed (it's only needed -- and none is forthcoming due to the eval -- in your original approach to preserve the local variables you are using for the time the lambda will be needing them).
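Putting that edit together (a self-contained sketch; FakeWidget stands in for the real display object):
class Bunch(object): pass
widgets = Bunch()

class FakeWidget(object):
    def onSave(self):
        print "saved!"

# what eval() of the config would produce:
callback = eval("lambda: widgets.foobar.onSave()")

widgets.foobar = FakeWidget()  # assigned after the eval -- still works
callback()                     # prints: saved!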
|
Specifying TkInter Callbacks In Dictionary For Display Launcher Function
|
I am having trouble building a Python function that launches TkInter objects, with commands bound to menu buttons, using button specifications held in a dictionary.
SITUATION
I am building a GUI in Python using TkInter. I have written a Display class (based on the GuiMaker class in Lutz, "Programming Python") that should provide a window for data entry for a variety of entities, so the commands and display of any instance will vary. I would like to configure these entity-specific commands and displays in a dictionary file. The file is evaluated when the script is run, and its dictionary passed to a launcher function when a Display instance is called for. But the instances' commands can't find the instance methods I'm trying to bind to them.
SPECIFICATION IN FUNCTIONS WORKS
This isn't a problem when the Display instance is launched with configurations specified in a dedicated function. For instance this works fine:
def launchEmployee():
    display = ''
    menuBar = [('File', 0, [('Save', 0, (lambda: display.onSave()))])]
    title = 'Employee Data Entry'
    display_args = {'title': title,
                    'menuBar': menuBar}
    display = DisplayScreen(**display_args)
The DisplayScreen subclasses from the GuiMaker, which has methods for processing the menuBar object to create a menu. It has an onSave() method.
Done this way, the Display instance finds and runs its own onSave() method when the 'Save' button is clicked.
SPECIFICATION FROM DICTIONARY FILE DOESN'T WORK
But this doesn't work when I try to launch the Display instance from a launcher function, pulling its specification from a dictionary held in a separate file.
config_file:
{'menuBar':[('File', 0, [('Save', 0, (lambda: display.onSave()))])],
'title': 'Employee Data Entry'}
script file:
config = eval(open('config_file', 'r').read())
def launchDisplay(config):
    display = ''
    display = DisplayScreen(**config)
Run this way, clicking 'Save' generates an error, saying there is no global object 'display'.
THEORY: DICTIONARY CASE LOOKS FOR OBJECTS IN SCOPE AT EVAL() CALL
I speculate that in the function case, 'display' is a string object, whose lack of the method onSave() isn't a problem for the assignment to menuBar because the lookup of the method is deferred inside the lambda function. When the Display instance is assigned to the name 'display', this replaces the prior string binding, but Python still knows about 'display' and goes to it when asked for its onSave() method.
If so, the configuration case is failing because the 'display' object doesn't exist at all when the config dictionary is created by evaluation. This doesn't cause an error at the eval() call because, again, the lambda function hides the object from inspection until called. But when called, Python goes looking for the 'display' object in scope at the moment of the eval() call, finds nothing, and reports an error.
BUT: PUTTING EVAL() CALL IN SCOPE DOESN'T HELP
But I have tried moving the evaluation of the dictionary file into the function and after the creation of the 'display' string object, and this doesn't work either.
SO:
What is going on here?
How can I specify methods to be bound to commands in a dictionary to be accessed when instantiating the display object? It really seems better to specify these screens in a configuration file than in a host of duplicative functions.
|
[
"When your lambda executes is when scope applies, but the issue is a bit subtler.\nIn the first case that lambda is a nested function of launchEmployee so the Python compiler (when it compiles the enclosing function) knows to scan its body for references to local variables of the enclosing function and forms the closure appropriately\nIn the second case, the eval hides the nested function from the Python compiler at the time it's compiling the enclosing function, so it doesn't even know it's an enclosing function, nor that it should form a closure, or how.\nI suggest you don't play around with names but rather insert the new display object into the lambda after instantiating it. That does require finding the lambda (or more than one lambda) but you may do it by systematically (e.g. recursively) walking all items in config's values, looking for ones which are instances of type(lambda:0) and adopting some convention such as, 'the widget being created is referred in those lambdas by the name \"widget\" and is the last argument (with a default value)'.\nSo you change your config file to:\n'menuBar':[('File', 0, [('Save', 0, (lambda widget=None: widget.onSave()))])],\n 'title': 'Employee Data Entry'}\n\nand, after display = DisplayScreen(**config), post-process config as follows:\ndef place_widget(widget, mess):\n if isinstance(mess, (list, tuple)):\n for item in mess:\n place_widget(widget, item)\n elif isinstance(mess, type(lambda:0)):\n if mess.func_code.co_varnames[-1:] == ('widget',):\n mess.func_defaults = mess.func_defaults[:-1] + (widget,)\n\nAdmittedly somewhat-tricky code, but I don't see a straightforward way to do it within your specified context of an eval'd string made into a dict containing some lambdas.\nI would normally recommend standard library module inspect for such introspection, but since this post-processing function must inevitably mess around with the func_defaults (inspect only examines, does not alter objects' internals like that), it seems more consistent to have all of its code churn at the same, pretty-down-deep level.\nEdit: a simpler approach is possible if you don't insist on having widgets be only local variables, but can instead make them e.g. attributes of a global object.\nSo at module level you have, say:\nclass Bunch(object): pass\nwidgets = Bunch()\n\nand in your function you assign the newly made widget to widgets.foobar instead of assigning to bare name foobar. Then your evaled lambda can be something like:\nlambda: widgets.foobar.onSave()\n\nand everything will be fine, because, this way, no closure is needed (it's only needed -- and none is forthcoming due to the eval -- in your original approach to preserve the local variables you are using for the time the lambda will be needing them).\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001033130_python.txt
|
Q:
Template driven feed parsing
Requirements:
I have a Python project which parses data feeds from multiple sources in varying formats (Atom, valid XML, invalid XML, CSV, almost-garbage, etc...) and inserts the resulting data into a database. The catch is the information required to parse each of the feeds must also be stored in the database.
Current solution:
My previous solution was to store small python scripts which are evaled on the raw data, and return a data object for the parsed data. I'd really like to get away from this method as it obviously opens up a nasty security hole.
Ideal solution:
What I'm looking for is what I would describe as a template-driven feed parser for Python, so that I can write a template file for each of the feed formats, and this template file would be used to make sense of the various data formats.
I've had limited success finding something like this in the past, and was hoping someone may have a good suggestion.
A:
Instead of evaling scripts, maybe you should consider making a package of them?
Parsing CSV is one thing — the format is simple and regular; parsing XML requires a completely different approach. Considering you don't want to write every single parser from scratch, why not just write a bunch of small modules, each having an identical API, and use them? I believe using Python itself (not some templating DSL) is ideal for this sort of thing.
For example, this is an approach I've seen in one small torrent-fetching script I'm using:
Main program:
...
def import_plugin(name):
    mod = __import__(name)
    components = name.split('.')
    for comp in components[1:]:
        mod = getattr(mod, comp)
    return mod

...
feed_parser = import_plugin('parsers.%s' % feed['format'])
data = feed_parser(...)
...
parsers/csv.py:
#!/usr/bin/python
from __future__ import absolute_import

import urllib2
import csv

def parse_feed(...):
    ...
If you don't particularly like dynamically loaded modules, you may consider writing, for example, a single module with several parser classes (probably derived from some "abstract parser" base class).
class BaseParser(object):
...
class CSVParser(BaseParser):
...
register_feed_parser(CSVParser, ['text/plain', 'text/csv'])
...
parsers = get_registered_feed_parsers(feed['mime_type'])
data = None
for parser in parsers:
try:
data = parser(feed['data'])
if data is not None: break
except ParsingError:
pass
...
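The register_feed_parser and get_registered_feed_parsers helpers are referenced but never defined above; a minimal sketch of such a registry, under the assumption that parsers are classes called as parser(feed['data']) by the loop above, might be:
_PARSERS = {}  # hypothetical registry: mime type -> list of parser classes

def register_feed_parser(parser_cls, mime_types):
    for mime_type in mime_types:
        _PARSERS.setdefault(mime_type, []).append(parser_cls)

def get_registered_feed_parsers(mime_type):
    # Returns the classes; the calling loop instantiates them with the data
    return _PARSERS.get(mime_type, [])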
|
Template driven feed parsing
|
Requirements:
I have a Python project which parses data feeds from multiple sources in varying formats (Atom, valid XML, invalid XML, CSV, almost-garbage, etc...) and inserts the resulting data into a database. The catch is the information required to parse each of the feeds must also be stored in the database.
Current solution:
My previous solution was to store small python scripts which are evaled on the raw data, and return a data object for the parsed data. I'd really like to get away from this method as it obviously opens up a nasty security hole.
Ideal solution:
What I'm looking for is what I would describe as a template-driven feed parser for Python, so that I can write a template file for each of the feed formats, and this template file would be used to make sense of the various data formats.
I've had limited success finding something like this in the past, and was hoping someone may have a good suggestion.
|
[
"Instead of evaling scripts, maybe you should consider making a package of them?\nParsing CSV is one thing — the format is simple and regular, parsing XML requires completely another approach. Considering you don't want to write every single parser from scratch, why not just write a bunch of small modules, each having identical API and use them? I believe, using Python itself (not some templating DSL) is ideal for this sort of thing.\nFor example, this is an approach I've seen in one small torrent-fetching script I'm using:\nMain program:\n...\ndef import_plugin(name):\n mod = __import__(name)\n components = name.split('.')\n for comp in components[1:]:\n mod = getattr(mod, comp)\n return mod\n\n...\nfeed_parser = import_plugin('parsers.%s' % feed['format'])\ndata = feed_parser(...)\n...\n\nparsers/csv.py:\n#!/usr/bin/python\nfrom __future__ import absolute_import\n\nimport urllib2\nimport csv\n\ndef parse_feed(...):\n ...\n\nIf you don't particularly like dynamically loaded modules, you may consider writing, for example, a single module with several parses classes (probably derived from some \"abstract parser\" base class).\nclass BaseParser(object):\n ...\n\nclass CSVParser(BaseParser):\n ...\nregister_feed_parser(CSVParser, ['text/plain', 'text/csv'])\n...\n\nparsers = get_registered_feed_parsers(feed['mime_type'])\ndata = None\nfor parser in parsers:\n try:\n data = parser(feed['data'])\n if data is not None: break\n except ParsingError:\n pass\n...\n\n"
] |
[
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001032976_python.txt
|
Q:
Monkeypatching a method call in Python
How do I defer attribute access in Python?
Let's assume we have:
def foo():
...
class Bar:
...
bar = Bar()
Is it possible to implement Bar so that any time bar is accessed, a value returned by the callback foo() would be provided?
The name bar already exists in the context. That's why its access semantics should be preserved (it cannot be a callable; turning bar into a property of a class and using SomeClass.bar instead of bar also won't work). I need to keep everything as-is, but change the program so that bar refers to data generated on the fly by foo().
UPD: Thanks all for your answers, from which it seems impossible to do this type of thing in Python. I'm gonna find a workaround.
A:
I guess you want to link some attribute "data" to foo:
class Bar:
data = property(lambda self: foo())
bar = Bar()
bar.data # calls foo()
A:
You're basically asking for a way to hijack a variable (how would you reassign it?) in the module namespace, which is not possible in Python.
You'll have to use attribute accessors of a class if you want the described behavior:
class MyClass(object):
def __getattr__(self, attr):
if attr == 'bar':
print 'getting bar... call the foo()!'
else:
return object.__getattribute__(self, attr)
def __setattr__(self, attr, val):
if attr == 'bar':
print 'bar was set to', val
else:
object.__setattr__(self, attr, val)
A:
"any time bar is accessed"... What kinds of accesses are your callers going to be making? (E.g. are they doing "1+bar", are they doing "bar[5:]", are they doing "bar.func()", etc). Will they call the Bar() constructor each time?
Right now, your question is so fuzzy and general that I think it's impossible. But if you're a bit more specific then we might be able to help.
A:
Is this what you're looking for?
>>> def foo(): print('foo() was called');
...
>>> class Bar:
... pass;
...
>>> bar = Bar();
>>> bar.data = foo;
>>> bar.data()
foo() was called
>>>
A:
Thanks for the clarification!
This just can't work! A variable is a variable in most languages, not a function call. You can do a lot in Python, but you just can't do that.
The reason is that every language has some intrinsic rules. One rule in Python is that variables are variables: when you read a variable (without modifying it or anything else) you can rely on it being the same on the next line of code.
Monkeypatching it into a function call would just change this rule.
What you want could only be done in a still more dynamic language, something like a macro-processing system, or a language that has no variables but only labels that can be attached to anything. That would also make writing a compiler for it much more difficult, since the compiler would have to take all the code in the program into account.
A:
You can override Bar's __new__ method to execute arbitrary code and return a new instance. See: http://docs.python.org/reference/datamodel.html#object.__new__
Here's a contrived example:
>>> class Bar(object):
... def __new__(cls):
... print "Calling Bar.__new__"
... return super(Bar, cls).__new__(cls)
...
>>> bar = Bar()
Calling Bar.__new__
>>> bar
<__main__.Bar object at 0x0222B3D0>
EDIT: I misunderstood what you wanted, your clarification cleared that up. I don't think what you want is possible.
A:
If you want the results of foo(), the easiest way is to do this:
foo()
Anything else is just unnecessary complication. I suspect that you have oversimplified your example until it doesn't make sense.
Edit: OK, so you are trying to change somebody else's code. No, it's not possible to transform a variable into a method in a generic way. You could replace it with an object that will return different values if it's used in a way that consistently causes some sort of Python method to be called.
barrio.py:
bar = "Some sort of string"
fool.py:
import random
class Foo:
def __str__(self):
return str(random.random())
import barrio
barrio.bar = Foo()
Now run python:
>>> import fool
>>> import barrio
>>> print barrio.bar
0.783394625457
>>> print barrio.bar
0.662363816543
>>> print barrio.bar
0.342930701226
>>> print barrio.bar
0.781452467433
This works because print will call str on the object, since it's not a string. But in general, no, it's not possible.
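For the record, the answers above reflect the Python of the era; later versions (3.7+, PEP 562) do let a module intercept attribute access on itself, so code that reads bar as somemodule.bar can be redirected to foo(). A sketch, assuming the name lives in a hypothetical module barmod:
# barmod.py -- requires Python 3.7+ (PEP 562 module-level __getattr__)
def foo():
    return "freshly generated data"

def __getattr__(name):
    # Called only when normal module attribute lookup fails,
    # so don't also define a real module-level 'bar'.
    if name == 'bar':
        return foo()
    raise AttributeError(name)

Note this only intercepts qualified access (barmod.bar from another module), not a bare name bar inside barmod itself, so it doesn't contradict the answers above for the asker's exact situation.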
|
Monkeypatching a method call in Python
|
How do I defer attribute access in Python?
Let's assume we have:
def foo():
...
class Bar:
...
bar = Bar()
Is it possible to implement Bar so that any time bar is accessed, a value returned by the callback foo() would be provided?
The name bar already exists in the context. That's why its access semantics should be preserved (it cannot be a callable; turning bar into a property of a class and using SomeClass.bar instead of bar also won't work). I need to keep everything as-is, but change the program so that bar refers to data generated on the fly by foo().
UPD: Thanks all for your answers, from which it seems impossible to do this type of thing in Python. I'm gonna find a workaround.
|
[
"I guess you want to link some attribute \"data\" to foo:\nclass Bar:\n data = property(lambda self: foo())\n\n\nbar = Bar()\nbar.data # calls foo()\n\n",
"You're basically asking for a way to hijack a variable (how would you reassign it?) in the module namespace, which is not possible in Python.\nYou'll have to use attribute accessors of a class if you want the described behavior:\nclass MyClass(object):\n def __getattr__(self, attr):\n if attr == 'bar':\n print 'getting bar... call the foo()!'\n else:\n return object.__getattribute__(self, attr)\n\n def __setattr__(self, attr, val):\n if attr == 'bar':\n print 'bar was set to', val\n else:\n object.__setattr__(self, attr, val)\n\n",
"\"any time bar is accessed\"... What kinds of accesses are your callers going to be making? (E.g. are they doing \"1+bar\", are they doing \"bar[5:]\", are they doing \"bar.func()\", etc). Will they call the Bar() constructor each time?\nRight now, your question is so fuzzy and general that I think it's impossible. But if you're a bit more specific then we might be able to help.\n",
"Is this what you're looking for?\n>>> def foo(): print('foo() was called');\n...\n>>> class Bar:\n... pass;\n...\n>>> bar = Bar();\n>>> bar.data = foo;\n>>> bar.data()\nfoo() was called\n>>>\n\n",
"Thanks for the clarification!\nThis just can't work! A variable is a variable in most languages, not a function-call. You can do much in Python, but you just can't do that.\nThe reason is, that you have always some intrinsic language rules. One rule in Python is, that variables are variables. When you read a variable (not modifying it or anything else) you can rely, that it will be the same in the next code-line.\nMonkeypatching it to be a function call would just change this rule.\nWhat you want, could only be done by a still more dynamic language. Something like a macro-processing system or a language that do not have variables but something like labels that can be attached to anything. But this would also make compiler-creation for it much more difficult -- hence fully dynamic. The compiler would have to take all coding in the program into account.\n",
"You can override Bar's __new__ method to execute arbitrary code and return a new instance. See: http://docs.python.org/reference/datamodel.html#object.__new__\nHere's a contrived example:\n>>> class Bar(object):\n... def __new__(cls):\n... print \"Calling Bar.__new__\"\n... return super(Bar, cls).__new__(cls)\n...\n>>> bar = Bar()\nCalling Bar.__new__\n>>> bar\n<__main__.Bar object at 0x0222B3D0>\n\nEDIT: I misunderstood what you wanted, your clarification cleared that up. I don't think what you want is possible.\n",
"If you want the results of foo(), the easiest way is to do this:\nfoo()\n\nAnything else is just unnecessary complication. I suspect that you have oversimplified your example until it doens't make sense.\nEdit: OK, so you are trying to change somebody elses code. No, it's not possible to transform a variable to a method in a generic way. You could replace it with an object that will return different variables of it's used in a way that consistently will cause some sort of python method to be called. \nbarrio.py:\n bar = \"Some sort of string\"\nfool.py:\nimport random\n\nclass Foo:\n def __str__(self):\n return str(random.random())\n\nimport barrio\nbarrio.bar = Foo()\n\nNow run python:\n>>> import fool\n>>> import barrio\n\n>>> print barrio.bar\n0.783394625457\n>>> print barrio.bar\n0.662363816543\n>>> print barrio.bar\n0.342930701226\n>>> print barrio.bar\n0.781452467433\n\nThis works because print will call str on the object, since it's not a string. But in general, no, it's not possible.\n"
] |
[
6,
2,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"callback",
"properties",
"python",
"reference"
] |
stackoverflow_0001033519_callback_properties_python_reference.txt
|
Q:
python class attribute inheritance
I am trying to save myself a bit of typing by writing the following code, but it seems I can't do this:
class lgrAdminObject(admin.ModelAdmin):
fields = ["title","owner"]
list_display = ["title","origin","approved", "sendToFrames"]
class Photos(lgrAdminObject):
fields.extend(["albums"])
why doesn't that work? Also since they're not functions, I can't do the super trick
fields = super(Photos, self).fields
fields.extend(["albums"])
A:
Inheritance applies after the class's body executes. In the class body, you can use lgrAdminObject.fields -- you sure you want to alter the superclass's attribute rather than making a copy of it first, though? Seems peculiar... I'd start with a copy:
class Photos(lgrAdminObject):
fields = list(lgrAdminObject.fields)
before continuing with alterations.
A:
Have you tried this?
fields = lgrAdminObject.fields + ["albums"]
You need to create a new class attribute, not extend the one from the parent class.
A:
If you insist on using class attributes, you can reference the base class directly.
class Photos(lgrAdminObject):
lgrAdminObject.fields.extend(["albums"])
A trivial check follows:
>>> class B0:
... fields = ["title","owner"]
...
>>> class C1(B0):
... B0.fields.extend(["albums"])
...
>>> C1.fields
['title', 'owner', 'albums']
>>> B0.fields
['title', 'owner', 'albums']
>>>
Looks like the base attribute is modified as well, probably not what you were looking for.
Look into defining some executable method (__init__(), maybe?).
Or (better?) follow @Alex Martelli's suggestion and copy the base attribute:
>>> class B0:
... fields = ["title","owner"]
...
>>> class C1(B0):
... fields = B0.fields[:]
... fields.extend(["albums"])
...
>>> C1.fields
['title', 'owner', 'albums']
>>> B0.fields
['title', 'owner']
>>>
A:
Also, note that when you have a list as a class attribute, it belongs to the class, not the instances. So if you modify it, it will change for all instances. It's probably better to use a tuple instead, unless you intend to modify it with this effect (and then you should document that clearly, because that is a common cause of confusion).
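A short demonstration of that sharing pitfall (a sketch, not from the original answer):
class A(object):
    fields = ["title"]          # one list object, shared by all instances

a1, a2 = A(), A()
a1.fields.append("owner")       # mutates the class attribute in place

print a2.fields                 # ['title', 'owner'] -- a2 sees the change
print A.fields                  # ['title', 'owner'] -- so does the class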
|
python class attribute inheritance
|
I am trying to save myself a bit of typing by writing the following code, but it seems I can't do this:
class lgrAdminObject(admin.ModelAdmin):
fields = ["title","owner"]
list_display = ["title","origin","approved", "sendToFrames"]
class Photos(lgrAdminObject):
fields.extend(["albums"])
why doesn't that work? Also since they're not functions, I can't do the super trick
fields = super(Photos, self).fields
fields.extend(["albums"])
|
[
"Inheritance applies after the class's body executes. In the class body, you can use lgrAdminObject.fields -- you sure you want to alter the superclass's attribute rather than making a copy of it first, though? Seems peculiar... I'd start with a copy:\nclass Photos(lgrAdminObject):\n fields = list(lgrAdminObject.fields)\n\nbefore continuing with alterations.\n",
"Have you tried this?\nfields = lgrAdminObject.fields + [\"albums\"]\n\nYou need to create a new class attribute, not extend the one from the parent class.\n",
"If you insist on using class attributes, you can reference the base class directly.\nclass Photos(lgrAdminObject):\n lgrAdminObject.fields.extend([\"albums\"])\n\nA trivial check follows:\n>>> class B0:\n... fields = [\"title\",\"owner\"]\n... \n>>> class C1(B0):\n... B0.fields.extend([\"albums\"])\n... \n>>> C1.fields\n['title', 'owner', 'albums']\n>>> B0.fields\n['title', 'owner', 'albums']\n>>> \n\nLooks like the base attribute is modified as well, probably not what you were looking for.\nLook into defining some executable method (__init__(), maybe?).\nOr (better?) follow @Alex Martelli's suggestion and copy the base attribute:\n>>> class B0:\n... fields = [\"title\",\"owner\"]\n... \n>>> class C1(B0):\n... fields = B0.fields[:]\n... fields.extend([\"albums\"])\n... \n>>> C1.fields\n['title', 'owner', 'albums']\n>>> B0.fields\n['title', 'owner']\n>>> \n\n",
"Also, note that when you have a list as a class attribute, it belongs to the class, not the instances. So if you modify it, it will change for all instances. It's probably better to use a tuple instead, unless you intend to modify it with this effect (and then you should document that clearly, because that is a common cause of confusion).\n"
] |
[
7,
4,
2,
1
] |
[] |
[] |
[
"class",
"django",
"inheritance",
"python"
] |
stackoverflow_0001033443_class_django_inheritance_python.txt
|
Q:
Python List Question
I have an issue I could use some help with; I have a Python list that looks like this:
fail = [
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\b\\include', 'Test.java'],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\c', 'apa1.txt'],
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt'],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\d', 'Sourcecheck.py'],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\a\\include', 'svin.txt'],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'apa2.txt'],
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'apa.txt'],
]
sha1 value, directory, filename
What i want is to separate this content in two different lists based on the sha1 value and directory. For example.
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'apa.txt']
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt']
I want to add them to a list, duplicate = [], because they're in the same directory with the same sha1 value (and only that directory). The rest of the entries I want to add to another list, say diff = [], because the sha1 value is the same but the directories differ.
I'm kinda lost with the logic here, so any help I could get would be appreciated!
EDIT: Fixed a typo; the last value (filename) was in some cases a 1-element list, which was 100% incorrect. Thanks to SilentGhost for making me aware of this issue.
A:
duplicate = []
# Sort the list so we can compare adjacent values
fail.sort()
#if you didn't want to modify the list in place you can use:
#sortedFail = sorted(fail)
# and then use sortedFail in the rest of the code instead of fail
for i, x in enumerate(fail):
if i+1 == len(fail):
#end of the list
break
if x[:2] == fail[i+1][:2]:
if x not in duplicate:
            duplicate.append(x)
if fail[i+1] not in duplicate:
            duplicate.append(fail[i+1])
# diff is just anything not in duplicate as far as I can tell from the explanation
diff = [d for d in fail if d not in duplicate]
With your example input
duplicate: [
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', ['apa.txt']],
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt']
]
diff: [
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', ['apa2.txt']],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\c', 'apa1.txt'],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\a\\include', ['svin.txt']],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\b\\include', 'Test.java'],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\d', 'Sourcecheck.py']
]
So perhaps I missed something, but I think this is what you were asking for.
A:
You could simply loop through all the values, then use an inner loop to compare directories; if the directory is the same, compare the sha1 values, then assign to the lists. This would give you a decent n^2 algorithm to sort it out.
maybe like this untested code:
>>>for i in range(len(fail)-1):
... dir = fail[i][1]
... sha1 = fail[i][0]
... for j in range(i+1,len(fail)):
... if dir == fail[j][1]: #is this how you compare strings?
... if sha1 == fail[j][0]:
... #remove from fail and add to duplicate and add other to diff
again the code is untested.
A:
In the following code sample, I use a key based on the SHA1 and directory name to detect unique and duplicate entries and spare dictionaries for housekeeping.
# Test dataset
fail = [
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\b\\include', 'Test.java'],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\c', 'apa1.txt'],
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt'],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\d', 'Sourcecheck.py'],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\a\\include', ['svin.txt']],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', ['apa2.txt']],
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', ['apa.txt']],
]
def sort_duplicates(filelist):
"""Returns a tuplie whose first element is a list of unique files,
and second element is a list of duplicate files.
"""
diff = []
diff_d = {}
duplicate = []
duplicate_d = {}
for entry in filelist:
# Make an immutable key based on the SHA-1 and directory strings
key = (entry[0], entry[1])
# If this entry is a known duplicate, add it to the duplicate list
if key in duplicate_d:
duplicate.append(entry)
# If this entry is a new duplicate, add it to the duplicate list
elif key in diff_d:
duplicate.append(entry)
duplicate_d[key] = entry
# And relocate the matching entry to the duplicate list
matching_entry = diff_d[key]
duplicate.append(matching_entry)
duplicate_d[key] = matching_entry
del diff_d[key]
diff.remove(matching_entry)
# Otherwise add this entry to the different list
else:
diff.append(entry)
diff_d[key] = entry
return (diff, duplicate)
def test():
global fail
diff, dups = sort_duplicates(fail)
print "Diff:", diff
print "Dups:", dups
test()
A:
Here's another way to go at it using dictionaries to group by sha and directory. This also gets rid of the random lists in the file names.
new_fail = {} # {sha: {dir: [filenames]}}
for item in fail:
# split data into it's parts
sha, directory, filename = item
# make sure the correct elements exist in the data structure
if sha not in new_fail:
new_fail[sha] = {}
if directory not in new_fail[sha]:
new_fail[sha][directory] = []
# this is where the lists are removed from the file names
if type(filename) == type([]):
filename = filename[0]
new_fail[sha][directory].append(filename)
diff = []
dup = []
# loop through the data, analyzing it
for sha, val in new_fail.iteritems():
for directory, filenames in val.iteritems():
# check to see if the sha/dir combo has more than one file name
if len(filenames) > 1:
for filename in filenames:
dup.append([sha, directory, filename])
else:
            diff.append([sha, directory, filenames[0]])
To print it:
print 'diff:'
for i in diff:
print i
print '\ndup:'
for i in dup:
print i
Sample data looks like this:
diff:
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\d', 'Sourcecheck.py']
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\b\\include', 'Test.java']
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\a\\include', 'svin.txt']
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'apa2.txt']
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\c', 'apa1.txt']
dup:
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt']
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'apa.txt']
A:
I believe the accepted answer will be slightly more efficient (Python's internal sort should be faster than my dictionary walk), but since I already came up with this, I may as well post it. :-)
This technique uses a multilevel dictionary to avoid both sorting and explicit comparisons.
hashes = {}
diff = []
dupe = []
# build the dictionary
for sha, path, files in fail:
try:
hashes[sha][path].append(files)
except KeyError:
try:
hashes[sha][path] = [files]
except:
hashes[sha] = dict((path, [files]))
for sha, paths in hashes.iteritems():
if len(paths) > 1:
for path, files in paths.iteritems():
for file in files:
diff.append([sha, path, file])
for path, files in paths.iteritems():
if len(files) > 1:
for file in files:
dupe.append([sha, path, file])
The result will be:
diff = [
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\d', 'Sourcecheck.py'],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\b\\include', 'Test.java'],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\a\\include', ['svin.txt']],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', ['apa2.txt']],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\c', 'apa1.txt']
]
dupe = [
[['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt'],
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', ['apa.txt']]
]
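For comparison, here is a compact sketch of the same grouping idea using collections.defaultdict, following the accepted answer's definition that diff is simply everything not duplicated:
from collections import defaultdict

groups = defaultdict(list)              # (sha1, directory) -> entries
for entry in fail:
    groups[(entry[0], entry[1])].append(entry)

# a group with more than one entry means same sha1 in the same directory
dupe = [e for g in groups.values() if len(g) > 1 for e in g]
diff = [e for g in groups.values() if len(g) == 1 for e in g]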
|
Python List Question
|
I have an issue I could use some help with; I have a Python list that looks like this:
fail = [
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\b\\include', 'Test.java'],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\c', 'apa1.txt'],
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt'],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\d', 'Sourcecheck.py'],
['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\a\\include', 'svin.txt'],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'apa2.txt'],
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'apa.txt'],
]
sha1 value, directory, filename
What i want is to separate this content in two different lists based on the sha1 value and directory. For example.
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'apa.txt']
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt']
I want to add them to a list, duplicate = [], because they're in the same directory with the same sha1 value (and only that directory). The rest of the entries I want to add to another list, say diff = [], because the sha1 value is the same but the directories differ.
I'm kinda lost with the logic here, so any help I could get would be appreciated!
EDIT: Fixed a typo; the last value (filename) was in some cases a 1-element list, which was 100% incorrect. Thanks to SilentGhost for making me aware of this issue.
|
[
"duplicate = []\n# Sort the list so we can compare adjacent values\nfail.sort()\n#if you didn't want to modify the list in place you can use:\n#sortedFail = sorted(fail)\n# and then use sortedFail in the rest of the code instead of fail\nfor i, x in enumerate(fail):\n if i+1 == len(fail):\n #end of the list\n break\n if x[:2] == fail[i+1][:2]:\n if x not in duplicate:\n duplicate.add(x)\n if fail[i+1] not in duplicate:\n duplicate.add(fail[i+1])\n# diff is just anything not in duplicate as far as I can tell from the explanation\ndiff = [d for d in fail if d not in duplicate]\n\nWith your example input \nduplicate: [\n ['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', ['apa.txt']], \n ['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', 'knark.txt']\n ]\n\ndiff: [\n ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\a', ['apa2.txt']], \n ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\c', 'apa1.txt'], \n ['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\a\\\\include', ['svin.txt']],\n ['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\b\\\\include', 'Test.java'],\n ['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\d', 'Sourcecheck.py']\n ]\n\nSo perhaps I missed something, but I think this is what you were asking for.\n",
"you could simply loop through all the values then use an inner loop to compare directories, then if the directory is the same compare values, then assign lists. this would give you a decent n^2 algorithm to sort it out.\nmaybe like this untested code:\n>>>for i in range(len(fail)-1):\n... dir = fail[i][1]\n... sha1 = fail[i][0]\n... for j in range(i+1,len(fail)):\n... if dir == fail[j][1]: #is this how you compare strings?\n... if sha1 == fail[j][0]:\n... #remove from fail and add to duplicate and add other to diff\n\nagain the code is untested.\n",
"In the following code sample, I use a key based on the SHA1 and directory name to detect unique and duplicate entries and spare dictionaries for housekeeping. \n# Test dataset\nfail = [\n['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\b\\\\include', 'Test.java'],\n['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\c', 'apa1.txt'],\n['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', 'knark.txt'],\n['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\d', 'Sourcecheck.py'],\n['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\a\\\\include', ['svin.txt']],\n['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\a', ['apa2.txt']],\n['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', ['apa.txt']],\n]\n\n\ndef sort_duplicates(filelist):\n \"\"\"Returns a tuplie whose first element is a list of unique files,\n and second element is a list of duplicate files.\n \"\"\"\n diff = []\n diff_d = {}\n\n duplicate = []\n duplicate_d = {}\n\n for entry in filelist:\n\n # Make an immutable key based on the SHA-1 and directory strings\n key = (entry[0], entry[1])\n\n # If this entry is a known duplicate, add it to the duplicate list\n if key in duplicate_d:\n duplicate.append(entry)\n\n # If this entry is a new duplicate, add it to the duplicate list\n elif key in diff_d:\n duplicate.append(entry)\n duplicate_d[key] = entry\n\n # And relocate the matching entry to the duplicate list\n matching_entry = diff_d[key]\n duplicate.append(matching_entry)\n duplicate_d[key] = matching_entry\n del diff_d[key]\n diff.remove(matching_entry)\n\n # Otherwise add this entry to the different list\n else:\n diff.append(entry)\n diff_d[key] = entry\n\n return (diff, duplicate)\n\ndef test():\n global fail\n diff, dups = sort_duplicates(fail)\n print \"Diff:\", diff\n print \"Dups:\", dups\n\ntest()\n\n",
"Here's another way to go at it using dictionaries to group by sha and directory. This also gets rid of the random lists in the file names.\nnew_fail = {} # {sha: {dir: [filenames]}}\nfor item in fail:\n # split data into it's parts\n sha, directory, filename = item\n\n # make sure the correct elements exist in the data structure\n if sha not in new_fail:\n new_fail[sha] = {}\n if directory not in new_fail[sha]:\n new_fail[sha][directory] = []\n\n # this is where the lists are removed from the file names\n if type(filename) == type([]):\n filename = filename[0]\n\n new_fail[sha][directory].append(filename)\n\ndiff = []\ndup = []\n\n# loop through the data, analyzing it\nfor sha, val in new_fail.iteritems():\n for directory, filenames in val.iteritems():\n\n # check to see if the sha/dir combo has more than one file name\n if len(filenames) > 1:\n for filename in filenames:\n dup.append([sha, directory, filename])\n else:\n diff.append([sha, dir, filenames[0]])\n\nTo print it:\nprint 'diff:'\nfor i in diff:\n print i\nprint '\\ndup:'\nfor i in dup:\n print i\n\nSample data looks like this:\n\ndiff:\n['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\d', 'Sourcecheck.py']\n['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\b\\\\include', 'Test.java']\n['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\a\\\\include', 'svin.txt']\n['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\a', 'apa2.txt']\n['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\c', 'apa1.txt']\n\ndup:\n['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', 'knark.txt']\n['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', 'apa.txt']\n",
"I believe the accepted answer will be slightly more efficient (Python's internal sort should be faster than my dictionary walk), but since I already came up with this, I may as well post it. :-)\nThis technique uses a multilevel dictionary to avoid both sorting and explicit comparisons.\nhashes = {}\ndiff = []\ndupe = []\n\n# build the dictionary\nfor sha, path, files in fail:\n try:\n hashes[sha][path].append(files)\n except KeyError:\n try:\n hashes[sha][path] = [files]\n except:\n hashes[sha] = dict((path, [files]))\n\nfor sha, paths in hashes.iteritems():\n if len(paths) > 1:\n for path, files in paths.iteritems():\n for file in files:\n diff.append([sha, path, file])\n for path, files in paths.iteritems():\n if len(files) > 1:\n for file in files:\n dupe.append([sha, path, file])\n\nThe result will be:\ndiff = [\n ['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\d', 'Sourcecheck.py'],\n ['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\b\\\\include', 'Test.java'],\n ['da39a3ee5e6b4b0d3255bfef95601890afd80709', 'ron\\\\a\\\\include', ['svin.txt']],\n ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\a', ['apa2.txt']],\n ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\c', 'apa1.txt']\n]\ndupe = [\n [['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', 'knark.txt'],\n ['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', ['apa.txt']]\n]\n\n"
] |
[
3,
1,
1,
0,
0
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0001034145_list_python.txt
|
Q:
How do I use the bash time function from python?
I would like to use python to make system calls to programs and time them. From the Linux command line if you type:
$ time prog args
You get something along the lines of:
real 0m0.110s
user 0m0.060s
sys 0m0.024s
if you do a 'man time', it states that you can type:
$ time -f "%E" prog args
in order to display only the elapsed time. This does not work in the shell. I believe it doesn't work because bash must have its own time function. If you type:
$/usr/bin/time -f "%E" prog args
you get the expected output as stated in the man pages.
I want the bash time because it seems to be more accurate. My problem is, when you try to use the 'time prog args' command from python, it doesn't behave the same way as when you type it in.
Code below to see what I'm doing (Please excuse the sloppiness):
#!/usr/bin/env python
"""Quick & Dirty script to time program execution for multiple inputs.
Saves the results as a .csv for importing into excel."""
import subprocess
timing = 'time -f "%E" '
program = 'java smartfib '
filename = 'smarttimes.csv'
#arglist = [0, 10, 20, 30, 31, 32, 33, 34, 35]
#arglist = [0, 2, 3, 5]
arglist = range(50)
timelist = list()
#run the program with the specified input values
for arg in arglist:
cmd = timing + program + str(arg)
pipe = subprocess.Popen(cmd, shell = True, bufsize = 256,
stderr = subprocess.PIPE, stdout = subprocess.PIPE)
[answer,times] = pipe.communicate() #read from pipe
#print times
if(int(answer) < 0):
print "overflow occured at fib(%d)!\n" %arg
break
#save (n, [time it takes to complete] as a tuple in a list)
timelist.append([arg,times])
csv = list() #list for formatted comma separated values for excel
for item in range(len(timelist)):
csv.append(
str(timelist[item][0]) + "," + timelist[item][1][2:].strip('\n')
)
results = file(filename,'w')
results.write('n,time(in s)\n')
for item in range(len(csv)):
results.write(csv[item]+'\n')
results.close()
def getTimeInSeconds(self, timestring):
pass
A:
You are correct that bash has its own version of time.
$ type time
time is a shell keyword
Perhaps you could explicitly invoke bash with the -c option to get its timing.
Depending on which distribution you're using, the default shell may be dash, a simpler shell that doesn't have time as a keyword. Both Debian and Ubuntu use dash but I don't know what Red Hat, Fedora and others use.
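An untested sketch of that: bash's builtin time has no -f option, but it formats its report through the TIMEFORMAT shell variable (%R is elapsed seconds) and writes it to stderr:
import subprocess

# Run the program under bash so we get the shell keyword `time`
cmd = 'TIMEFORMAT=%R; time java smartfib 30'
pipe = subprocess.Popen(['bash', '-c', cmd],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
answer, times = pipe.communicate()
print times.strip()   # e.g. "0.110"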
A:
I don't think time will be more accurate than Python's methods, unless you use os.times() in versions below 2.5.3 and look at the user time, because earlier versions of Python have a bug there.
And although the time command behaves differently than bash's time builtin, they both return the relevant information, so I'm unclear on what the problem is, really.
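For instance, the child's user/system time is available from Python itself via resource.getrusage, with no external time command at all (a sketch, Unix only):
import subprocess, resource

before = resource.getrusage(resource.RUSAGE_CHILDREN)
subprocess.call(['java', 'smartfib', '30'])
after = resource.getrusage(resource.RUSAGE_CHILDREN)

# Deltas cover all children reaped between the two calls
print "user %.3fs sys %.3fs" % (after.ru_utime - before.ru_utime,
                                after.ru_stime - before.ru_stime)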
|
How do I use the bash time function from python?
|
I would like to use python to make system calls to programs and time them. From the Linux command line if you type:
$ time prog args
You get something along the lines of:
real 0m0.110s
user 0m0.060s
sys 0m0.024s
if you do a 'man time', it states that you can type:
$ time -f "%E" prog args
in order to display only the elapsed time. This does not work in the shell. I believe it doesn't work because bash must have its own time function. If you type:
$/usr/bin/time -f "%E" prog args
you get the expected output as stated in the man pages.
I want the bash time because it seems to be more accurate. My problem is, when you try to use the 'time prog args' command from python, it doesn't behave the same way as when you type it in.
Code below to see what I'm doing (Please excuse the sloppiness):
#!/usr/bin/env python
"""Quick & Dirty script to time program execution for multiple inputs.
Saves the results as a .csv for importing into excel."""
import subprocess
timing = 'time -f "%E" '
program = 'java smartfib '
filename = 'smarttimes.csv'
#arglist = [0, 10, 20, 30, 31, 32, 33, 34, 35]
#arglist = [0, 2, 3, 5]
arglist = range(50)
timelist = list()
#run the program with the specified input values
for arg in arglist:
cmd = timing + program + str(arg)
pipe = subprocess.Popen(cmd, shell = True, bufsize = 256,
stderr = subprocess.PIPE, stdout = subprocess.PIPE)
[answer,times] = pipe.communicate() #read from pipe
#print times
if(int(answer) < 0):
print "overflow occured at fib(%d)!\n" %arg
break
#save (n, [time it takes to complete] as a tuple in a list)
timelist.append([arg,times])
csv = list() #list for formatted comma separated values for excel
for item in range(len(timelist)):
csv.append(
str(timelist[item][0]) + "," + timelist[item][1][2:].strip('\n')
)
results = file(filename,'w')
results.write('n,time(in s)\n')
for item in range(len(csv)):
results.write(csv[item]+'\n')
results.close()
def getTimeInSeconds(self, timestring):
pass
|
[
"You are correct that bash has it's own version of time.\n$ type time\ntime is a shell keyword\n\nPerhaps you could explicitly invoke bash with the -c option to get it's timing.\nDepending on which distribution you're using, the default shell may be dash, a simpler shell that doesn't have time as a keyword. Both Debian and Ubuntu use dash but I don't know what Red Hat, Fedora and others use.\n",
"I don't think time will be more accurate than Pythons methods, unless you use os.times() in versions below 2.5.3, and look at the user time, because earlier versions of Python has a big there.\nAnd although the time command behaves differently then bash's time builtin, they both return the relevant information, so I'm unclear on what the problem is, really.\n"
] |
[
2,
0
] |
[] |
[] |
[
"bash",
"linux",
"python",
"time",
"unix"
] |
stackoverflow_0001034566_bash_linux_python_time_unix.txt
|
Q:
Flash Characters on Screen in Linux
I have XFCE 4.6 on kernel 2.6. Is there a quick and easy way to flash a message on the screen for a few seconds?
My Thinkpad T60 has 3 volume buttons (up, down, mute). When I press the buttons, I would like to flash the volume on the screen for a second. Can it be done with Python?
A:
notification-daemon-xfce allows libnotify clients to show brief messages in XFCE. libnotify has Python bindings available.
As an untested example,
import pynotify
import sys
pynotify.init(sys.argv[0])
notification = pynotify.Notification("Title", "body", "dialog-info")
notification.set_urgency(pynotify.URGENCY_NORMAL)
notification.set_timeout(pynotify.EXPIRES_DEFAULT)
notification.show()
A:
The quickest solution is to use notify-send (provided typically in package libnotify-bin) from the command line
notify-send Hello!
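And from Python you can simply shell out to it; a sketch for the volume-key use case (the message text here is made up):
import subprocess

def flash(message, ms=1000):
    # -t sets the notification expiry timeout in milliseconds
    subprocess.call(['notify-send', '-t', str(ms), message])

flash("Volume: 70%")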
|
Flash Characters on Screen in Linux
|
I have XFCE 4.6 on kernel 2.6. Is there a quick and easy way to flash a message on the screen for a few seconds?
My Thinkpad T60 has 3 volume buttons (up, down, mute). When I press the buttons, I would like to flash the volume on the screen for a second. Can it be done with Python?
|
[
"notification-daemon-xfce allows libnotify clients to show brief messages in XFCE. libnotify has Python bindings available.\nAs an untested example,\nimport pynotify\nimport sys\npynotify.init(sys.argv[0])\nnotification = pynotify.Notification(\"Title\", \"body\", \"dialog-info\")\nnotification.set_urgency(pynotify.URGENCY_NORMAL)\nnotification.set_timeout(pynotify.EXPIRES_DEFAULT)\nnotification.show()\n\n",
"The quickest solution is to use notify-send (provided typically in package libnotify-bin) from the command line\nnotify-send Hello!\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"linux",
"python",
"xfce"
] |
stackoverflow_0001030240_linux_python_xfce.txt
|
Q:
python multiprocessing manager
My problem is:
I have 3 procs that would like to share config loaded from the same class and a couple of queues. I would like to spawn another proc as a multiprocessing.manager to share those informations.
How can I do that? Could someone provide some sample code that avoids the use of global vars and makes use of the multiprocessing manager class?
The Python docs weren't so helpful :-(
A:
I found this particular section in the Python multiprocessing docs helpful. The following program:
from multiprocessing import Process, Queue, current_process
import time
def f(q):
name = current_process().name
config = q.get()
print "%s got config: %s" % (name, config)
print "%s beginning processing at %s" % (name, time.asctime())
time.sleep(5)
print "%s completing processing at %s" % (name, time.asctime())
if __name__ == '__main__':
q = Queue()
processes = []
cfg = { 'my' : 'config', 'data' : 'here' }
for i in range(3):
p = Process(target=f, args=(q,))
processes.append(p)
p.start()
q.put(cfg)
for p in processes:
p.join()
demonstrates the main script (which is the "multiprocessing manager" in your question) creating 3 processes and sending them each a configuration (shown here as a dictionary).
Each process reads the configuration, does its processing (here, just sleep for 5 secs) then terminates. The output from running this script is:
Process-1 got config: {'my': 'config', 'data': 'here'}
Process-1 beginning processing at Tue Jun 23 23:34:23 2009
Process-2 got config: {'my': 'config', 'data': 'here'}
Process-2 beginning processing at Tue Jun 23 23:34:23 2009
Process-3 got config: {'my': 'config', 'data': 'here'}
Process-3 beginning processing at Tue Jun 23 23:34:23 2009
Process-1 completing processing at Tue Jun 23 23:34:28 2009
Process-2 completing processing at Tue Jun 23 23:34:28 2009
Process-3 completing processing at Tue Jun 23 23:34:28 2009
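Since the question asks about multiprocessing.Manager specifically: a Manager spawns a separate server process that can hold the shared config and queues for you. A minimal sketch (the names are illustrative, not from the question):
from multiprocessing import Manager, Process

def worker(cfg, q):
    print "worker got", cfg['my'], "and item", q.get()

if __name__ == '__main__':
    mgr = Manager()                   # starts the manager server process
    cfg = mgr.dict({'my': 'config', 'data': 'here'})
    q = mgr.Queue()
    q.put('job-1')

    p = Process(target=worker, args=(cfg, q))
    p.start()
    p.join()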
|
python multiprocessing manager
|
My problem is:
I have 3 procs that would like to share config loaded from the same class and a couple of queues. I would like to spawn another proc as a multiprocessing.manager to share those informations.
How can I do that? Could someone provide some sample code that avoids the use of global vars and makes use of the multiprocessing manager class?
The Python docs weren't so helpful :-(
|
[
"I found this particular section in the Python multiprocessing docs helpful. The following program:\nfrom multiprocessing import Process, Queue, current_process\nimport time\n\ndef f(q):\n name = current_process().name\n config = q.get()\n print \"%s got config: %s\" % (name, config)\n print \"%s beginning processing at %s\" % (name, time.asctime())\n time.sleep(5)\n print \"%s completing processing at %s\" % (name, time.asctime())\n\nif __name__ == '__main__':\n q = Queue()\n processes = []\n cfg = { 'my' : 'config', 'data' : 'here' }\n for i in range(3):\n p = Process(target=f, args=(q,))\n processes.append(p)\n p.start()\n q.put(cfg)\n\n for p in processes:\n p.join()\n\ndemonstrates the main script (which is the \"multiprocessing manager\" in your question) creating 3 processes and sending them each a configuration (shown here as a dictionary).\nEach process reads the configuration, does its processing (here, just sleep for 5 secs) then terminates. The output from running this script is:\nProcess-1 got config: {'my': 'config', 'data': 'here'}\nProcess-1 beginning processing at Tue Jun 23 23:34:23 2009\nProcess-2 got config: {'my': 'config', 'data': 'here'}\nProcess-2 beginning processing at Tue Jun 23 23:34:23 2009\nProcess-3 got config: {'my': 'config', 'data': 'here'}\nProcess-3 beginning processing at Tue Jun 23 23:34:23 2009\nProcess-1 completing processing at Tue Jun 23 23:34:28 2009\nProcess-2 completing processing at Tue Jun 23 23:34:28 2009\nProcess-3 completing processing at Tue Jun 23 23:34:28 2009\n\n"
] |
[
3
] |
[] |
[] |
[
"multiprocessing",
"python"
] |
stackoverflow_0001034848_multiprocessing_python.txt
|
Q:
problem running scons
I am trying to get started with scons. I have Python 3.0.1 and downloaded Scons 1.2.0; when I try to run scons I get the following error. Am I doing something wrong here?
C:\tmp\scons>c:\appl\python\3.0.1\Scripts\scons
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\appl\python\3.0.1\Lib\site-packages\scons-1.2.0\SCons\__init__.py", l
ine 43, in <module>
import SCons.compat
File "c:\appl\python\3.0.1\Lib\site-packages\scons-1.2.0\SCons\compat\__init__
.py", line 208
raise Error, "Cannot move a directory '%s' into itself '%s'." % (src, dst)
^
SyntaxError: invalid syntax
A:
That's Python 2 syntax. I assume scons doesn't run on Python 3. You need to run it using Python 2.
|
problem running scons
|
I am trying to get started with scons. I have Python 3.0.1 and downloaded Scons 1.2.0; when I try to run scons I get the following error. Am I doing something wrong here?
C:\tmp\scons>c:\appl\python\3.0.1\Scripts\scons
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\appl\python\3.0.1\Lib\site-packages\scons-1.2.0\SCons\__init__.py", l
ine 43, in <module>
import SCons.compat
File "c:\appl\python\3.0.1\Lib\site-packages\scons-1.2.0\SCons\compat\__init__
.py", line 208
raise Error, "Cannot move a directory '%s' into itself '%s'." % (src, dst)
^
SyntaxError: invalid syntax
|
[
"That's Python 2 syntax. I assume scons doesn't run on Python 3. You need to run it using Python 2. \n"
] |
[
16
] |
[] |
[] |
[
"python",
"scons"
] |
stackoverflow_0001035581_python_scons.txt
|
Q:
Problem with exiting a daemonized process
I am writing a daemon program that spawns several other child processes. After I run the stop script, the main process keeps running when it's intended to quit; this really confused me.
import daemon, signal
from multiprocessing import Process, cpu_count, JoinableQueue
from http import httpserv
from worker import work
class Manager:
"""
This manager starts the http server processes and worker
processes, creates the input/output queues that keep the processes
work together nicely.
"""
def __init__(self):
self.NUMBER_OF_PROCESSES = cpu_count()
def start(self):
self.i_queue = JoinableQueue()
self.o_queue = JoinableQueue()
# Create worker processes
self.workers = [Process(target=work,
args=(self.i_queue, self.o_queue))
for i in range(self.NUMBER_OF_PROCESSES)]
for w in self.workers:
w.daemon = True
w.start()
# Create the http server process
self.http = Process(target=httpserv, args=(self.i_queue, self.o_queue))
self.http.daemon = True
self.http.start()
# Keep the current process from returning
self.running = True
while self.running:
time.sleep(1)
def stop(self):
print "quiting ..."
# Stop accepting new requests from users
os.kill(self.http.pid, signal.SIGINT)
# Waiting for all requests in output queue to be delivered
self.o_queue.join()
# Put sentinel None to input queue to signal worker processes
# to terminate
self.i_queue.put(None)
for w in self.workers:
w.join()
self.i_queue.join()
# Let main process return
self.running = False
import daemon
manager = Manager()
context = daemon.DaemonContext()
context.signal_map = {
signal.SIGHUP: lambda signum, frame: manager.stop(),
}
context.open()
manager.start()
The stop script is just a one-liner, os.kill(pid, signal.SIGHUP), and after that the child processes (the worker processes and the http server process) end nicely, but the main process just stays there; I don't know what keeps it from returning.
A:
I tried a different approach, and this seems to work (note I took out the daemon portions of the code as I didn't have that module installed).
import signal
class Manager:
"""
This manager starts the http server processes and worker
processes, creates the input/output queues that keep the processes
work together nicely.
"""
def __init__(self):
self.NUMBER_OF_PROCESSES = cpu_count()
def start(self):
# all your code minus the loop
print "waiting to die"
signal.pause()
def stop(self):
print "quitting ..."
# all your code minus self.running
manager = Manager()
signal.signal(signal.SIGHUP, lambda signum, frame: manager.stop())
manager.start()
One warning, is that signal.pause() will unpause for any signal, so you may want to change your code accordingly.
EDIT:
The following works just fine for me:
import daemon
import signal
import time
class Manager:
"""
This manager starts the http server processes and worker
processes, creates the input/output queues that keep the processes
work together nicely.
"""
def __init__(self):
self.NUMBER_OF_PROCESSES = 5
def start(self):
# all your code minus the loop
print "waiting to die"
self.running = 1
while self.running:
time.sleep(1)
print "quit"
def stop(self):
print "quitting ..."
# all your code minus self.running
self.running = 0
manager = Manager()
context = daemon.DaemonContext()
context.signal_map = {signal.SIGHUP : lambda signum, frame: manager.stop()}
context.open()
manager.start()
What version of python are you using?
A:
You create the http server process but don't join() it. What happens if, rather than doing an os.kill() to stop the http server process, you send it a stop-processing sentinel (None, like you send to the workers) and then do a self.http.join()?
Update: You also need to send the None sentinel to the input queue once for each worker. You could try:
for w in self.workers:
self.i_queue.put(None)
for w in self.workers:
w.join()
N.B. The reason you need two loops is that if you put the None into the queue in the same loop that does the join(), that None may be picked up by a worker other than w, so joining on w will cause the caller to block.
You don't show the code for workers or http server, so I assume these are well-behaved in terms of calling task_done etc. and that each worker will quit as soon as it sees a None, without get()-ing any more things from the input queue.
Also, note that there is at least one open, hard-to-reproduce issue with JoinableQueue.task_done(), which may be biting you.
|
Problem with exiting a daemonized process
|
I am writing a daemon program that spawns several other child processes. After I run the stop script, the main process keeps running when it's intended to quit; this really confused me.
import daemon, signal
from multiprocessing import Process, cpu_count, JoinableQueue
from http import httpserv
from worker import work
class Manager:
"""
This manager starts the http server processes and worker
processes, creates the input/output queues that keep the processes
work together nicely.
"""
def __init__(self):
self.NUMBER_OF_PROCESSES = cpu_count()
def start(self):
self.i_queue = JoinableQueue()
self.o_queue = JoinableQueue()
# Create worker processes
self.workers = [Process(target=work,
args=(self.i_queue, self.o_queue))
for i in range(self.NUMBER_OF_PROCESSES)]
for w in self.workers:
w.daemon = True
w.start()
# Create the http server process
self.http = Process(target=httpserv, args=(self.i_queue, self.o_queue))
self.http.daemon = True
self.http.start()
# Keep the current process from returning
self.running = True
while self.running:
time.sleep(1)
def stop(self):
print "quiting ..."
# Stop accepting new requests from users
os.kill(self.http.pid, signal.SIGINT)
# Waiting for all requests in output queue to be delivered
self.o_queue.join()
# Put sentinel None to input queue to signal worker processes
# to terminate
self.i_queue.put(None)
for w in self.workers:
w.join()
self.i_queue.join()
# Let main process return
self.running = False
import daemon
manager = Manager()
context = daemon.DaemonContext()
context.signal_map = {
signal.SIGHUP: lambda signum, frame: manager.stop(),
}
context.open()
manager.start()
The stop script is just a one-liner, os.kill(pid, signal.SIGHUP), and after that the child processes (the worker processes and the http server process) end nicely, but the main process just stays there; I don't know what keeps it from returning.
|
[
"I tried a different approach, and this seems to work (note I took out the daemon portions of the code as I didn't have that module installed).\nimport signal\n\nclass Manager:\n \"\"\"\n This manager starts the http server processes and worker\n processes, creates the input/output queues that keep the processes\n work together nicely.\n \"\"\"\n def __init__(self):\n self.NUMBER_OF_PROCESSES = cpu_count()\n\n def start(self):\n\n # all your code minus the loop\n\n print \"waiting to die\"\n\n signal.pause()\n\n def stop(self):\n print \"quitting ...\"\n\n # all your code minus self.running\n\n\nmanager = Manager()\n\nsignal.signal(signal.SIGHUP, lambda signum, frame: manager.stop())\n\nmanager.start()\n\nOne warning, is that signal.pause() will unpause for any signal, so you may want to change your code accordingly.\nEDIT:\nThe following works just fine for me:\nimport daemon\nimport signal\nimport time\n\nclass Manager:\n \"\"\"\n This manager starts the http server processes and worker\n processes, creates the input/output queues that keep the processes\n work together nicely.\n \"\"\"\n def __init__(self):\n self.NUMBER_OF_PROCESSES = 5\n\n def start(self):\n\n # all your code minus the loop\n\n print \"waiting to die\"\n self.running = 1\n while self.running:\n time.sleep(1)\n\n print \"quit\"\n\n\n\n def stop(self):\n print \"quitting ...\"\n\n # all your code minus self.running\n\n self.running = 0\n\n\nmanager = Manager()\n\ncontext = daemon.DaemonContext()\ncontext.signal_map = {signal.SIGHUP : lambda signum, frame: manager.stop()}\n\ncontext.open()\nmanager.start()\n\nWhat version of python are you using?\n",
"You create the http server process but don't join() it. What happens if, rather than doing an os.kill() to stop the http server process, you send it a stop-processing sentinel (None, like you send to the workers) and then do a self.http.join()?\nUpdate: You also need to send the None sentinel to the input queue once for each worker. You could try:\n for w in self.workers:\n self.i_queue.put(None)\n for w in self.workers:\n w.join()\n\nN.B. The reason you need two loops is that if you put the None into the queue in the same loop that does the join(), that None may be picked up by a worker other than w, so joining on w will cause the caller to block.\nYou don't show the code for workers or http server, so I assume these are well-behaved in terms of calling task_done etc. and that each worker will quit as soon as it sees a None, without get()-ing any more things from the input queue.\nAlso, note that there is at least one open, hard-to-reproduce issue with JoinableQueue.task_done(), which may be biting you.\n"
] |
[
1,
1
] |
[] |
[] |
[
"daemon",
"multiprocessing",
"python"
] |
stackoverflow_0001021613_daemon_multiprocessing_python.txt
|
Q:
Any Python Script to Save Websites Like Firefox?
I am tired of clicking "File" and then "Save Page As" in Firefox when I want to save some websites.
Is there any script to do this in Python? I would like to save the pictures and css files so that when I read it offline, it looks normal.
A:
You could use wget
wget -m -k -E [url]
-E, --html-extension save HTML documents with `.html' extension.
-m, --mirror shortcut for -N -r -l inf --no-remove-listing.
-k, --convert-links make links in downloaded HTML point to local files.
A:
probably a tool like wget is more appropriate for this type of thing.
A:
This is a non-Python answer and I'm not sure what your machine is running, but have you considered using a site ripper such as wget?
import os
cmd = 'wget <parameters>'
os.system(cmd)
A:
Like Cobbal stated, this is largely what wget is designed to do. I believe there are some flags/arguments you can set to make it download the entire page, CSS and all. I suggest just aliasing it to something more convenient to type, or tossing it into a quick script.
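A sketch of such a quick script; wget's -p (--page-requisites) flag is the one that pulls in the images and CSS the question asks about:
import subprocess

def save_page(url, dest='.'):
    # -p: page requisites (images, CSS); -k: rewrite links for offline
    # viewing; -E: add .html extensions; -P: destination directory
    subprocess.call(['wget', '-p', '-k', '-E', '-P', dest, url])

save_page('http://example.com/')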
A:
Have you looked at HTTrack?
|
Any Python Script to Save Websites Like Firefox?
|
I am tired of clicking "File" and then "Save Page As" in Firefox when I want to save some websites.
Is there any script to do this in Python? I would like to save the pictures and css files so that when I read it offline, it looks normal.
|
[
"You could use wget\nwget -m -k -E [url]\n-E, --html-extension save HTML documents with `.html' extension.\n-m, --mirror shortcut for -N -r -l inf --no-remove-listing.\n-k, --convert-links make links in downloaded HTML point to local files.\n\n",
"probably a tool like wget is more appropriate for this type of thing.\n",
"This is a non-Python answer and I'm not sure what your machine is running, but have you consider using a site ripper such as wget? \nimport os\ncmd = 'wget <parameters>'\nos.system(cmd)\n\n",
"Like Cobbal stated, this is largely what wget is designed to do. I believe there's some flags/arguments that you can set to make it download the entire page, CSS + all. I suggest just alias-ing into something more convenient to type, or tossing it into a quick script.\n",
"Have you looked at HTTrack?\n"
] |
[
10,
1,
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001035825_python.txt
|
Q:
Find cpu-hogging plugin in multithreaded python
I have a system written in python that processes large amounts of data using plug-ins written by several developers with varying levels of experience.
Basically, the application starts several worker threads, then feeds them data. Each thread determines the plugin to use for an item and asks it to process the item. A plug-in is just a python module with a specific function defined. The processing usually involves regular expressions, and should not take more than a second or so.
Occasionally, one of the plugins will take minutes to complete, pegging the CPU at 100% for the whole time. This is usually caused by a sub-optimal regular expression paired with a data item that exposes that inefficiency.
This is where things get tricky. If I have a suspicion of who the culprit is, I can examine its code and find the problem. However, sometimes I'm not so lucky.
I can't go single threaded. It would probably take weeks to reproduce the problem if I do.
Putting a timer on the plugin doesn't help, because when it freezes it takes the GIL with it, and all the other plugins also take minutes to complete.
(In case you were wondering, the SRE engine doesn't release the GIL).
As far as I can tell profiling is pretty useless when multithreading.
Short of rewriting the whole architecture into multiprocessing, any way I can find out who is eating all my CPU?
ADDED: In answer to some of the comments:
Profiling multithreaded code in python is not useful because the profiler measures the total function time and not the active cpu time. Try cProfile.run('time.sleep(3)') to see what I mean. (credit to rog [last comment]).
The reason that going single threaded is tricky is because only 1 item in 20,000 is causing the problem, and I don't know which one it is. Running multithreaded allows me to go through 20,000 items in about an hour, while single threaded can take much longer (there's a lot of network latency involved). There are some more complications that I'd rather not get into right now.
That said, it's not a bad idea to try to serialize the specific code that calls the plugins, so that timing of one will not affect the timing of the others. I'll try that and report back.
A:
You apparently don't need multithreading, only concurrency, because your threads don't share any state:
Try multiprocessing instead of multithreading
Single thread / N subprocesses.
There you can time each request, since no GIL is held.
Another possibility is to get rid of multiple execution threads and use event-based network programming (i.e. use twisted)
A:
As you said, because of the GIL it is impossible within the same process.
I recommend starting a second monitor process, which listens for heartbeats from a thread in your original app. Once that heartbeat has been missing for a specified amount of time, the monitor can kill your app and restart it.
A:
I would suggest, since you have control over the framework, disabling all but one plugin and seeing what happens.
Basically, if you have plugins P1, P2...Pn,
run N processes and disable P1 in the first, P2 in the second, and so on.
It would be much faster compared to your multithreaded run, as there is no GIL blocking, and you will find out sooner which plugin is the culprit.
A:
I'd still look at nosklo's suggestion. You could profile on a single thread to find the item, and get the dump at your very long run and possibly see the culprit. Yeah, I know it's 20,000 items and will take a long time, but sometimes you just have to suck it up and find the darn thing to convince yourself the problem is caught and taken care of. Run the script, and go work on something else constructive. Come back and analyze results. That's what separates the men from the boys sometimes;-)
Or/And, add logging information that tracks the time to execute each item as it is processed from each plugin. Look at the log data at the end of your program being run, and see which one took an awful long time to run compared to the others.
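For illustration, a minimal sketch of that per-item timing idea (plugin.process and the one-second threshold are assumptions here, not the poster's actual API):
import time
import logging

def timed_process(plugin, item):
    # wrap a plugin call and log anything suspiciously slow
    start = time.time()
    try:
        return plugin.process(item)  # hypothetical plugin entry point
    finally:
        elapsed = time.time() - start
        if elapsed > 1.0:
            logging.warning('%s took %.2fs on item %r',
                            plugin.__name__, elapsed, item)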
|
Find cpu-hogging plugin in multithreaded python
|
I have a system written in python that processes large amounts of data using plug-ins written by several developers with varying levels of experience.
Basically, the application starts several worker threads, then feeds them data. Each thread determines the plugin to use for an item and asks it to process the item. A plug-in is just a python module with a specific function defined. The processing usually involves regular expressions, and should not take more than a second or so.
Occasionally, one of the plugins will take minutes to complete, pegging the CPU at 100% for the whole time. This is usually caused by a sub-optimal regular expression paired with a data item that exposes that inefficiency.
This is where things get tricky. If I have a suspicion of who the culprit is, I can examine its code and find the problem. However, sometimes I'm not so lucky.
I can't go single threaded. It would probably take weeks to reproduce the problem if I do.
Putting a timer on the plugin doesn't help, because when it freezes it takes the GIL with it, and all the other plugins also take minutes to complete.
(In case you were wondering, the SRE engine doesn't release the GIL).
As far as I can tell profiling is pretty useless when multithreading.
Short of rewriting the whole architecture into multiprocessing, any way I can find out who is eating all my CPU?
ADDED: In answer to some of the comments:
Profiling multithreaded code in python is not useful because the profiler measures the total function time and not the active cpu time. Try cProfile.run('time.sleep(3)') to see what I mean. (credit to rog [last comment]).
The reason that going single threaded is tricky is because only 1 item in 20,000 is causing the problem, and I don't know which one it is. Running multithreaded allows me to go through 20,000 items in about an hour, while single threaded can take much longer (there's a lot of network latency involved). There are some more complications that I'd rather not get into right now.
That said, it's not a bad idea to try to serialize the specific code that calls the plugins, so that timing of one will not affect the timing of the others. I'll try that and report back.
|
[
"You apparently don't need multithreading, only concurrency because your threads don't share any state : \nTry multiprocessing instead of multithreading\nSingle thread / N subprocesses. \nThere you can time each request, since no GIL is hold.\nOther possibility is to get rid of multiple execution threads and use event-based network programming (ie use twisted)\n",
"As you said, because of the GIL it is impossible within the same process. \nI recommend to start a second monitor process, which listens for life beats from another thread in your original app. Once that time beat is missing for a specified amount of time, the monitor can kill your app and restart it.\n",
"If would suggest as you have control over framework disable all but one plugin and see.\nBasically if you have P1, P2...Pn plugins\nrun N process and disable P1 in first, P2 in second and so on\nit would be much faster as compared to your multithreaded run, as no GIL blocking and you will come to know sooner which plugin is the culprit.\n",
"I'd still look at nosklo's suggestion. You could profile on a single thread to find the item, and get the dump at your very long run an possibly see the culprit. Yeah, I know it's 20,000 items and will take a long time, but sometimes you just got to suck it up and find the darn thing to convince yourself the problem is caught and taken care of. Run the script, and go work on something else constructive. Come back and analyze results. That's what separates the men from the boys sometimes;-)\nOr/And, add logging information that tracks the time to execute each item as it is processed from each plugin. Look at the log data at the end of your program being run, and see which one took an awful long time to run compared to the others. \n"
] |
[
3,
0,
0,
0
] |
[] |
[] |
[
"multithreading",
"profiling",
"python",
"regex"
] |
stackoverflow_0001031425_multithreading_profiling_python_regex.txt
|
Q:
Methods for modular customization of locale messages?
There are many levels for the customization of programs.
First of course is making it speak your language by creating i18n messages where tools like gettext and xgettext do a great job.
Another comes when you need to modify the meaning of some messages to suit the purpose of your project.
The question is: is it possible to keep customized messages in a separate file in addition to the boilerplate translation and have the standard tools understand that customized messages take precedence?
This would help to keep those messages from being committed to the public repository and from being overwritten by the boilerplate text when upgrading.
edit: since not too many people care about localization I think it's appropriate to collect answers for any platform, however I'm at the moment interested to implement this python/django.
A:
In Java, these localized strings are handled by ResourceBundles. ResourceBundles have a concept of variants. For example, you could have a base English resource, called messages_en.properties. Then you could customize for a specific variant of English with message_en_US.properties or message_en_UK.properties.
US and UK are ISO country codes, but you could also setup your own custom variants that just contain those strings that you want to customize. For example:
#messages_en.properties
button.click=Click
label.go=Go
#messages_en_ZZ.properties
button.click=Click Me
By setting the locale to en_ZZ, your application would first look in messages_en_ZZ.properties to see if the customized string existed, and then fall back to messages_en.properties for your boilerplate translations.
[More info on ResourceBundle loading precedence][1]
[1]: http://java.sun.com/javase/6/docs/api/java/util/ResourceBundle.html#getBundle(java.lang.String, java.util.Locale, java.lang.ClassLoader)
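A comparable layering can be sketched in Python with gettext's fallback chain (the domain names, locale directory and message id below are made up for illustration):
import gettext

# classic catalog plus a customization catalog; fallback=True avoids
# IOError when a catalog is missing
base = gettext.translation('myapp', localedir='locale',
                           languages=['en'], fallback=True)
custom = gettext.translation('myapp_custom', localedir='locale',
                             languages=['en'], fallback=True)
custom.add_fallback(base)  # customized strings win, base fills the gaps
_ = custom.gettext
print _('button.click')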
A:
I think Qt's powerful i18n facilities (see here) might meet your needs -- of course, they're available in Python, too, thanks to the usual, blessed PyQt!-)
|
Methods for modular customization of locale messages?
|
There are many levels for the customization of programs.
First of course is making it speak your language by creating i18n messages where tools like gettext and xgettext do a great job.
Another comes when you need to modify the meaning of some messages to suit the purpose of your project.
The question is: is it possible to keep customized messages in a separate file in addition to the boilerplate translation and have the standard tools understand that customized messages take precedence?
This would help to keep those messages from being committed to the public repository and from being overwritten by the boilerplate text when upgrading.
edit: since not too many people care about localization I think it's appropriate to collect answers for any platform, however I'm at the moment interested to implement this python/django.
|
[
"In Java, these localized strings are handled by ResourceBundles. ResourceBundles have a concept of variants. For example, you could have a base English resource, called messages_en.properties. Then you could customize for a specific variant of English with message_en_US.properties or message_en_UK.properties.\nUS and UK are ISO country codes, but you could also setup your own custom variants that just contain those strings that you want to customize. For example:\n#messages_en.properties\nbutton.click=Click\nlabel.go=Go\n\n#messages_en_ZZ.properties\nbutton.click=Click Me\n\nBy setting the locale to en_ZZ, your application would first look in messages_en_ZZ.properties to see if the customized string existed, and then fall back to messages_en.properties for your boilerplate translations. \n[More info on ResourceBundle loading precedence][1]\n[1]: http://java.sun.com/javase/6/docs/api/java/util/ResourceBundle.html#getBundle(java.lang.String, java.util.Locale, java.lang.ClassLoader)\n",
"I think Qt's powerful i18n facilities (see here) might meet your needs -- of course, they're available in Python, too, thanks to the usual, blessed PyQt!-)\n"
] |
[
1,
1
] |
[] |
[] |
[
"customization",
"internationalization",
"language_agnostic",
"localization",
"python"
] |
stackoverflow_0001033580_customization_internationalization_language_agnostic_localization_python.txt
|
Q:
python importing relative modules
I have the Python modules a.py and b.py in the same directory.
How can I reliably import b.py from a.py, given that a.py may have been imported from another directory or executed directly? This module will be distributed so I can't hardcode a single path.
I've been playing around with __file__, sys.path and os.chdir, but it feels messy. And __file__ is not always available.
A:
Actually, __file__ is available for an imported module, but only if it was imported from a .py/.pyc file. It won't be available if the module is built in. For example:
>>> import sys, os
>>> hasattr(os, '__file__')
True
>>> hasattr(sys, '__file__')
False
A:
Using the inspect module will make the builtin modules more obvious:
>>> import inspect
>>> import os
>>> import sys
>>> inspect.getfile(os)
'/usr/local/lib/python2.6/os.pyc'
>>> inspect.getfile(sys)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.6/inspect.py", line 407, in getfile
raise TypeError('arg is a built-in module')
TypeError: arg is a built-in module
A:
Put the directory that contains both in your python path... or vice versa.
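For example, a common (if slightly hacky) way to do that from inside a.py itself, assuming __file__ is available:
import os
import sys

# make a.py's own directory importable no matter where it was run from
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import b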
|
python importing relative modules
|
I have the Python modules a.py and b.py in the same directory.
How can I reliably import b.py from a.py, given that a.py may have been imported from another directory or executed directly? This module will be distributed so I can't hardcode a single path.
I've been playing around with __file__, sys.path and os.chdir, but it feels messy. And __file__ is not always available.
|
[
"Actually, __file__ is available for an imported module, but only if it was imported from a .py/.pyc file. It won't be available if the module is built in. For example:\n>>> import sys, os\n>>> hasattr(os, '__file__')\nTrue\n>>> hasattr(sys, '__file__')\nFalse\n\n",
"Using the inspect module will make the builtin modules more obvious:\n>>> import os\n>>> import sys\n>>> inspect.getfile(os)\n'/usr/local/lib/python2.6/os.pyc'\n>>> inspect.getfile(sys)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/usr/local/lib/python2.6/inspect.py\", line 407, in getfile\n raise TypeError('arg is a built-in module')\nTypeError: arg is a built-in module\n\n",
"Put the directory that contains both in your python path... or vice versa.\n"
] |
[
6,
1,
-1
] |
[] |
[] |
[
"python",
"python_import",
"python_module",
"relative_path"
] |
stackoverflow_0001016105_python_python_import_python_module_relative_path.txt
|
Q:
Simple Image Metrics with PIL
I want to process uploaded photos with PIL and determine some "soft" image metrics like:
is the image contrastful or dull?
colorful or monochrome?
bright or dark?
is the image warm or cold (regarding light temperature)?
is there a dominant hue?
the metrics should be measured in a rating-style, e.g. colorful++++ for a very colorful photo, colorful+ for a rather monochrome image.
I already noticed PIL's ImageStat Module, that calculates some interesting values for my metrics, e.g. RMS of histogram etc. However, this module is rather poorly documented, so I'm looking for more concrete algorithms to determine these metrics.
A:
I don't think there are methods that give you a metric exactly for what you want, but the methods that it has, like RMS, takes you a long way there. To do things with color, you can split the image into one layer per color, and get the RMS on each layer, which tells you some of the things you want to know. You can also convert the image in different ways so that you only retain color information, etc.
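For illustration, a minimal sketch of those per-band statistics with ImageStat ('photo.jpg' is a placeholder path, and the interpretations in the comment are rough heuristics):
import Image      # classic PIL-style imports
import ImageStat

im = Image.open('photo.jpg').convert('RGB')
stat = ImageStat.Stat(im)
for band, mean, stddev in zip(im.getbands(), stat.mean, stat.stddev):
    # a high stddev suggests contrast; band means far apart suggest color
    print band, 'mean=%.1f stddev=%.1f' % (mean, stddev)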
|
Simple Image Metrics with PIL
|
I want to process uploaded photos with PIL and determine some "soft" image metrics like:
is the image contrastful or dull?
colorful or monochrome?
bright or dark?
is the image warm or cold (regarding light temperature)?
is there a dominant hue?
the metrics should be measured in a rating-style, e.g. colorful++++ for a very colorful photo, colorful+ for a rather monochrome image.
I already noticed PIL's ImageStat Module, that calculates some interesting values for my metrics, e.g. RMS of histogram etc. However, this module is rather poorly documented, so I'm looking for more concrete algorithms to determine these metrics.
|
[
"I don't think there are methods that give you a metric exactly for what you want, but the methods that it has, like RMS, takes you a long way there. To do things with color, you can split the image into one layer per color, and get the RMS on each layer, which tells you some of the things you want to know. You can also convert the image in different ways so that you only retain color information, etc.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_imaging_library"
] |
stackoverflow_0001037090_python_python_imaging_library.txt
|
Q:
Are there stack based variables in Python?
If I do this:
def foo():
a = SomeObject()
Is 'a' destroyed immediately after leaving foo? Or does it wait for some GC to happen?
A:
Yes and no. The object will get destroyed after you leave foo (as long as nothing else has a reference to it), but whether it is immediate or not is an implementation detail, and will vary.
In CPython (the standard python implementation), refcounting is used, so the item will immediately be destroyed. There are some exceptions to this, such as when the object contains cyclical references, or when references are held to the enclosing frame (eg. an exception is raised that retains a reference to the frame's variables.)
In implementations like Jython or IronPython however, the object won't be finalised until the garbage collector kicks in.
As such, you shouldn't rely on timely finalisation of objects, but should only assume that it will be destroyed at some point after the last reference goes. When you do need some cleanup to be done based on the lexical scope, either explicitly call a cleanup method, or look at the new with statement in python 2.6 (available in 2.5 with "from __future__ import with_statement").
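For illustration, a minimal sketch of such scope-tied cleanup (SomeObject and use() are placeholders, and SomeObject is assumed to have a close() method):
from __future__ import with_statement  # only needed on Python 2.5
from contextlib import closing

def foo():
    with closing(SomeObject()) as a:
        use(a)
    # a.close() has run by here, regardless of when the GC finalizes a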
|
Are there stack based variables in Python?
|
If I do this:
def foo():
a = SomeObject()
Is 'a' destroyed immediately after leaving foo? Or does it wait for some GC to happen?
|
[
"Yes and no. The object will get destroyed after you leave foo (as long as nothing else has a reference to it), but whether it is immediate or not is an implementation detail, and will vary.\nIn CPython (the standard python implementation), refcounting is used, so the item will immediately be destroyed. There are some exceptions to this, such as when the object contains cyclical references, or when references are held to the enclosing frame (eg. an exception is raised that retains a reference to the frame's variables.)\nIn implmentations like Jython or IronPython however, the object won't be finalised until the garbage collector kicks in.\nAs such, you shouldn't rely on timely finalisation of objects, but should only assume that it will be destroyed at some point after the last reference goes. When you do need some cleanup to be done based on the lexical scope, either explicitely call a cleanup method, or look at the new with statement in python 2.6 (available in 2.5 with \"from __future__ import with_statement\").\n"
] |
[
18
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001037533_python.txt
|
Q:
apache prefork/mod_wsgi spawned process count seemingly past configuration
in a production environment running nginx reversing back to apache mpm-prefork/mod_wsgi, I'm seeing 90 apache child processes, when I would expect that 40 would be the maximum, as configured below. the configuration/setup is nothing exciting:
nginx is reverse proxying to apache via proxy_pass, and serving static media
apache only serves dynamic requests
relevant nginx config:
worker_processes 15;
events {
worker_connections 1024;
}
keepalive_timeout 10;
relevant apache config:
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15
<IfModule mpm_prefork_module>
StartServers 20
MinSpareServers 7
MaxSpareServers 10
MaxClients 200
MaxRequestsPerChild 0
</IfModule>
mod_wsgi config, where webapp is the name of the process:
WSGIDaemonProcess webapp user=www group=users threads=1 processes=40
am i missing something?
A:
The mod_wsgi daemon processes will appear to be Apache server child processes even though they aren't the same. This is because the mod_wsgi daemon processes are a fork of the Apache parent process and not a fork/exec. In other words, their executable name doesn't change.
To be able to distinguish mod_wsgi daemon processes from normal Apache server child processes, supply the 'display-name' option to WSGIDaemonProcess. This option allows you to rename the process as viewed in the output of the 'ps' program and some variants of programs like 'top'. See the documentation of the WSGIDaemonProcess directive on the mod_wsgi site.
http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess
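For example, a sketch of that rename applied to the directive from the question (%{GROUP} expands to the daemon process group name, so the daemon processes stand out in 'ps' output):
WSGIDaemonProcess webapp user=www group=users threads=1 processes=40 display-name=%{GROUP}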
A:
It's possible to have more apache processes than WSGI instances.
Change apache's MaxClients to 40 if you want to limit the apache processes.
A:
You are using mod_wsgi in daemon mode, so mod_wsgi processes and Apache handler processes are independent.
With your configuration, right after Apache starts you have:
40 (processes=) mod_wsgi processes started at the same time.
20 (StartServers) Apache handler processes, which can be automatically reduced to 10 (MaxSpareServers) if there is no incoming activity.
Then under load, the Apache handler processes can grow up to 200 (MaxClients). But the mod_wsgi process count will stay the same - 40.
My advice is to use the worker MPM when Apache serves only dynamic content. It can help reduce memory consumption and improve scalability.
|
apache prefork/mod_wsgi spawned process count seemingly past configuration
|
in a production environment running nginx reversing back to apache mpm-prefork/mod_wsgi, I'm seeing 90 apache child processes, when I would expect that 40 would be the maximum, as configured below. the configuration/setup is nothing exciting:
nginx is reverse proxying to apache via proxy_pass, and serving static media
apache only serves dynamic requests
relevant nginx config:
worker_processes 15;
events {
worker_connections 1024;
}
keepalive_timeout 10;
relevant apache config:
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15
<IfModule mpm_prefork_module>
StartServers 20
MinSpareServers 7
MaxSpareServers 10
MaxClients 200
MaxRequestsPerChild 0
</IfModule>
mod_wsgi config, where webapp is the name of the process:
WSGIDaemonProcess webapp user=www group=users threads=1 processes=40
am i missing something?
|
[
"The mod_wsgi daemon processes will appear to be Apache server child processes even though they aren't the same. This is because the mod_wsgi daemon processes are a fork of Apache parent process and not a fork/exec. In other words, they executable name doesn't change.\nTo be able to distinguish mod_wsgi daemon processes from normal Apache server child processes, supply the 'display-name' option to WSGIDaemonProcess. This option allows you to rename the process as viewable in output from 'ps' program and some variants of programs like 'top'. See documentation of WSGIDaemonProcess directive on mod_wsgi site.\nhttp://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess\n",
"It's possible to have more apache processes than WSGI instances.\nChange apache's MaxClients to 40 if you want to limit the apache processes.\n",
"You are using mod_wsgi in daemon mode, so mod_wsgi processes and Apache handler process are independent.\nBy your configuration right after the apache starts you have:\n\n40(processes=) mod_wsgi processes are started the same time.\n20(StartServers) Apache handler processes that can be automatically reduced to 10(MaxSpareServers) if there is not incoming activity.\n\nThen on load, Apache handler processes can grow up to 200(MaxClients). But mod_wsgi processes count will be the same - 40.\nMy advice is to use worker mpm than Apache processes only dynamic content. It can help to reduce memory consumption and better scalability.\n"
] |
[
10,
0,
0
] |
[] |
[] |
[
"apache",
"mod_wsgi",
"python"
] |
stackoverflow_0000913632_apache_mod_wsgi_python.txt
|
Q:
mod_wsgi yield output buffer instead of return
Right now I've got a mod_wsgi script that's structured like this..
def application(environ, start_response):
status = '200 OK'
output = 'Hello World!'
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
I was wondering if anyone knows of a way to change this to operate on a yield basis instead of return, that way I can send the page as it's being generated and not only once it's complete, so the page loading can go faster for the user.
However, whenever I swap the output for a list and yield it in the application(), it throws an error:
TypeError: sequence of string values expected, value of type list found
A:
def application(environ, start_response):
status = '200 OK'
output = 'Hello World!'
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
yield output
"However, whenever I swap the output for a list and yield it in the application(), it throws an error:"
Well, don't yield the list. Yield each element instead:
for part in mylist:
yield part
or if the list is the entire content, just:
return mylist
Because the list is already an iterator and can yield by itself.
A:
Note that 'yield' should be avoided unless absolutely necessary. In particular 'yield' will be inefficient if yielding lots of small strings. This is because the WSGI specification requires that after each string yielded that the response must be flushed. For Apache/mod_wsgi, flushing means each string being forced out through the Apache output bucket brigade and filter system and onto the socket. Ignoring the overhead of the Apache output filter system, writing lots of small strings onto a socket is simply just bad to begin with.
This problem also exists where an array of strings is returned from an application as a flush also has to be performed between each string in the array. This is because the string is dealt with as an iterable and not a list. Thus for a preformed list of strings, it is much better to join the individual strings into one large string and return a list containing just that one string. Doing this also allows a WSGI implementation to automatically generate a Content-Length for the response if one wasn't explicitly provided.
Just make sure that when joining all the strings in a list into one, that the result is returned in a list. If this isn't done and instead the string is returned, that string is treated as an iterable, where each element in the string is a single character string. This results in a flush being done after every character, which is going to be even worse than if the strings hadn't been joined.
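For illustration, a minimal sketch of that joining advice (the page fragments are made up):
def application(environ, start_response):
    parts = ['<html><body>', 'Hello World!', '</body></html>']
    output = ''.join(parts)  # one big string, so only one flush
    start_response('200 OK', [('Content-type', 'text/html'),
                              ('Content-Length', str(len(output)))])
    return [output]  # a one-element list, not the bare string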
A:
Don't send the content length and send the output as you derive it.
You don't need to know the size of the output if you simply don't
send the Content-Length header. That way you can send part of the response
before you have computed the rest of it.
def application(environ, start_response):
status = '200 OK'
output = 'Hello World!'
response_headers = [('Content-type', 'text/html')]
start_response(status, response_headers)
yield head()
yield part1()
yield part2()
yield part3()
yield "<!-- bye now! -->"
Otherwise you will get no benefit from sending in chunks,
since computing the output is probably the slow part and
the internet protocol will send the output in chunks anyway.
Sadly, this doesn't work in the case where, for example, the calculation
of part2() decides you really need to change a header (like a cookie)
or need to build other page-global data structures
-- if this ever happens, you need to compute the entire output before
sending the headers, and might as well use a return [output]
For example
http://aaron.oirt.rutgers.edu/myapp/docs/W1200_1200.config_template
Needs to build a page global data structure for the links to subsections
that show at the top of the page -- so the last subsection must be rendered
before the first chunk of output is delivered to the client.
|
mod_wsgi yield output buffer instead of return
|
Right now I've got a mod_wsgi script that's structured like this..
def application(environ, start_response):
status = '200 OK'
output = 'Hello World!'
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
I was wondering if anyone knows of a way to change this to operate on a yield basis instead of return, that way I can send the page as it's being generated and not only once it's complete, so the page loading can go faster for the user.
However, whenever I swap the output for a list and yield it in the application(), it throws an error:
TypeError: sequence of string values expected, value of type list found
|
[
"def application(environ, start_response):\n status = '200 OK'\n output = 'Hello World!'\n\n response_headers = [('Content-type', 'text/plain'),\n ('Content-Length', str(len(output)))]\n start_response(status, response_headers)\n\n yield output\n\n\n\"However, whenever I swap the output for a list and yield it in the application(), it throws an error:\"\n\nWell, don't yield the list. Yield each element instead:\nfor part in mylist:\n yield part\n\nor if the list is the entire content, just:\nreturn mylist\n\nBecause the list is already an iterator and can yield by itself.\n",
"Note that 'yield' should be avoided unless absolutely necessary. In particular 'yield' will be inefficient if yielding lots of small strings. This is because the WSGI specification requires that after each string yielded that the response must be flushed. For Apache/mod_wsgi, flushing means each string being forced out through the Apache output bucket brigade and filter system and onto the socket. Ignoring the overhead of the Apache output filter system, writing lots of small strings onto a socket is simply just bad to begin with.\nThis problem also exists where an array of strings is returned from an application as a flush also has to be performed between each string in the array. This is because the string is dealt with as an iterable and not a list. Thus for a preformed list of strings, it is much better to join the individual strings into one large string and return a list containing just that one string. Doing this also allows a WSGI implementation to automatically generate a Content-Length for the response if one wasn't explicitly provided.\nJust make sure that when joining all the strings in a list into one, that the result is returned in a list. If this isn't done and instead the string is returned, that string is treated as an iterable, where each element in the string is a single character string. This results in a flush being done after every character, which is going to be even worse than if the strings hadn't been joined.\n",
"Don't send the content length and send the output as you derive it.\nYou don't need to know the size of the output if you simply don't\nsend the Content-Length header. That way can send part of the response\nbefore you have computed the rest of it.\ndef application(environ, start_response):\n status = '200 OK'\n output = 'Hello World!'\n\n response_headers = [('Content-type', 'text/html')]\n start_response(status, response_headers)\n\n yield head()\n yield part1()\n yield part2()\n yield part3()\n yield \"<!-- bye now! -->\"\n\nOtherwise you will get no benefit from sending in chunks, \nsince computing the output is probably the slow part and\nthe internet protocol will send the output in chunks anyway.\nSadly, this doesn't work in the case where, for example, the calculation\nof part2() decides you really need to change a header (like a cookie)\nor need to build other page-global data structures\n-- if this ever happens, you need to compute the entire output before\nsending the headers, and might as well use a return [output]\nFor example \nhttp://aaron.oirt.rutgers.edu/myapp/docs/W1200_1200.config_template\nNeeds to build a page global data structure for the links to subsections\nthat show at the top of the page -- so the last subsection must be rendered\nbefore the first chunk of output is delivered to the client.\n"
] |
[
7,
7,
0
] |
[] |
[] |
[
"mod_wsgi",
"python",
"yield"
] |
stackoverflow_0000804898_mod_wsgi_python_yield.txt
|
Q:
Serving static files with mod_wsgi and Django
I have a django application using mod_python, fairly typical configuration except that media files are being served by a (I know, not recommended) 'media' directory in the document root. I would like to test and maybe deploy with mod_wsgi but I cannot figure out how to create something simple to serve static files. mod_python allows the use of Apache directives like:
<Location '/'>
SetHandler MyApplication.xyz.....
</Location>
<Location '/media'>
SetHandler None
</Location>
The django docs seem to point to the second block above as the correct way to make a similar exception for mod_wsgi, but in my tests everything below root is still being sent to the wsgi app. Is there a good way to set a static media directory with mod_wsgi, or is what I am trying to do intentionally unsupported for compelling technical reasons? Answers that point to entirely different approaches are welcome.
A:
I run a dozen or so Django sites on the same server and here's how I configure the media URLs.
Each VirtualHost has the following configuration:
Alias /media /path/to/media/
<Directory /path/to/media>
Include /etc/apache2/vhosts.d/media.include
</Directory>
This way I can make any changes to the media handling in one file.
Then, my media.include file looks like this:
Order allow,deny
Allow from all
SetHandler None
FileETag none
Options FollowSymLinks
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/gif "access plus 30 days"
ExpiresByType image/jpg "access plus 30 days"
ExpiresByType image/png "access plus 30 days"
ExpiresByType image/jpeg "access plus 30 days"
ExpiresByType text/css "access plus 30 days"
ExpiresByType application/x-javascript "modification plus 2 years"
</IfModule>
<IfModule mod_headers.c>
Header append Vary Accept-Encoding
</IfModule>
AddOutputFilterByType DEFLATE text/html text/css text/plain
This has worked very well for me, and gets an A grade from YSlow (also see Jeff Atwood on YSlow).
Also note, for the root dir I use the following configuration:
WSGIScriptAlias / /path/to/app.wsgi
<Directory /path/to>
Options +ExecCGI
Allow from all
</Directory>
... which should be after the Alias /media in your configuration file (because Apache looks at the aliases in order)
A:
The mod_wsgi documentation explains how to setup static files which appear at a URL underneath that which the WSGI application is mounted at. See:
http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines#Hosting_Of_Static_Files
Note that 'Options +ExecCGI' is not needed when using the WSGIScriptAlias directive to mount the WSGI application. The 'ExecCGI' option is only required when using AddHandler to mount applications as resources.
|
Serving static files with mod_wsgi and Django
|
I have a django application using mod_python, fairly typical configuration except that media files are being served by a (I know, not recommended) 'media' directory in the document root. I would like to test and maybe deploy with mod_wsgi but I cannot figure out how to create something simple to serve static files. mod_python allows the use of Apache directives like:
<Location '/'>
SetHandler MyApplication.xyz.....
</Location>
<Location '/media'>
SetHandler None
</Location>
The django docs seem to point to the second block above as the correct way to make a similar exception for mod_wsgi, but in my tests everything below root is still being sent to the wsgi app. Is there a good way to set a static media directory with mod_wsgi, or is what I am trying to do intentionally unsupported for compelling technical reasons? Answers that point to entirely different approaches are welcome.
|
[
"I run a a dozen or so Django sites on the same server and here's how I configure the media URL's.\nEach VirtualHost has the following configuration:\nAlias /media /path/to/media/\n<Directory /path/to/media>\n Include /etc/apache2/vhosts.d/media.include\n</Directory>\n\nThis way I can make any changes to the media handling in one file.\nThen, my media.include file looks like this:\nOrder allow,deny\nAllow from all\nSetHandler None\nFileETag none\nOptions FollowSymLinks\n\n<IfModule mod_expires.c>\n ExpiresActive On\n ExpiresByType image/gif \"access plus 30 days\"\n ExpiresByType image/jpg \"access plus 30 days\"\n ExpiresByType image/png \"access plus 30 days\"\n ExpiresByType image/jpeg \"access plus 30 days\"\n ExpiresByType text/css \"access plus 30 days\"\n ExpiresByType application/x-javascript \"modification plus 2 years\"\n</IfModule>\n\n<IfModule mod_headers.c>\n Header append Vary Accept-Encoding\n</IfModule>\n\nAddOutputFilterByType DEFLATE text/html text/css text/plain\n\nThis has worked very well for me, and gets an A grade from YSlow (also see Jeff Atwood on YSlow).\nAlso note, for the root dir I use the following configuration:\nWSGIScriptAlias / /path/to/app.wsgi\n<Directory /path/to>\n Options +ExecCGI\n Allow from all\n</Directory>\n\n... which should be after the Alias /media in your configuration file (because Apache looks at the aliases in order)\n",
"The mod_wsgi documentation explains how to setup static files which appear at a URL underneath that which the WSGI application is mounted at. See:\nhttp://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines#Hosting_Of_Static_Files\nNote that 'Options +ExecCGI' is not need when using WSGIScriptAlias directive to mount the WSGI application. The 'ExecCGI' option is only required when using AddHandler to mount applications as resources.\n"
] |
[
18,
13
] |
[] |
[] |
[
"django",
"mod_python",
"mod_wsgi",
"python"
] |
stackoverflow_0000732190_django_mod_python_mod_wsgi_python.txt
|
Q:
Converting from mod_python to mod_wsgi
My website is written in Python and currently runs under mod_python with Apache. Lately I've had to put in a few ugly hacks that make me think it might be worth converting the site to mod_wsgi. But I've gotten used to using some of mod_python's utility classes, especially FieldStorage and Session (and sometimes Cookie), and from a scan of PEP 333, I don't see any equivalents to these. (Not surprising, because I understand that those kinds of utilities don't belong in the WSGI spec)
Question is, are there "standard" (i.e. commonly accepted) replacements for these mod_python utility classes that I can use in WSGI, or could I/should I write my own?
(FYI: currently using Python 2.5)
A:
Look at Werkzeug. You may have to do some rewriting. You will probably be pleased with the results of imposing the WSGI world-view on your application.
A:
You can use FieldStorage from the 'cgi' module and the 'Cookie' module. There is no equivalent to Session in the Python standard library. For WSGI applications you can use Beaker for sessions.
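For illustration, a minimal sketch of driving cgi.FieldStorage from the WSGI environ (the form field name here is made up):
import cgi

def application(environ, start_response):
    # FieldStorage parses both query string and POST body from environ
    form = cgi.FieldStorage(fp=environ['wsgi.input'],
                            environ=environ,
                            keep_blank_values=True)
    name = form.getfirst('name', 'world')
    start_response('200 OK', [('Content-type', 'text/plain')])
    return ['Hello %s' % name]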
A:
Please look at
whiff -- it provides built-in functionality for manipulating field data
and sessions, among other things, for wsgi based applications.
|
Converting from mod_python to mod_wsgi
|
My website is written in Python and currently runs under mod_python with Apache. Lately I've had to put in a few ugly hacks that make me think it might be worth converting the site to mod_wsgi. But I've gotten used to using some of mod_python's utility classes, especially FieldStorage and Session (and sometimes Cookie), and from a scan of PEP 333, I don't see any equivalents to these. (Not surprising, because I understand that those kinds of utilities don't belong in the WSGI spec)
Question is, are there "standard" (i.e. commonly accepted) replacements for these mod_python utility classes that I can use in WSGI, or could I/should I write my own?
(FYI: currently using Python 2.5)
|
[
"Look at Werkzeug. You may have to do some rewriting. You will probably be pleased with the results of imposing the WSGI world-view on your application.\n",
"You can use FieldStorage in 'cgi' module and the 'Cookie' module. There is no equivalent to Session in Python standard libraries. For WSGI applications you can use Beaker for sessions.\n",
"Please look at \nwhiff -- it provides built in functionality for manipulating field data\nand sessions among others to wsgi based applications.\n"
] |
[
9,
2,
1
] |
[] |
[] |
[
"mod_python",
"mod_wsgi",
"python"
] |
stackoverflow_0000644767_mod_python_mod_wsgi_python.txt
|
Q:
In production, Apache + mod_wsgi or Nginx + mod_wsgi?
What to use for a medium to large python WSGI application, Apache + mod_wsgi or Nginx + mod_wsgi?
Which combination will need more memory and CPU time?
Which one is faster?
Which is known for being more stable than the other?
I am also thinking to use CherryPy's WSGI server but I hear it's not very suitable for a very high-load application, what do you know about this?
Note: I didn't use any Python Web Framework, I just wrote the whole thing from scratch.
Note': Other suggestions are also welcome.
A:
For nginx/mod_wsgi, ensure you read:
http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html
Because of how nginx is an event driven system underneath, it has behavioural characteristics which are detrimental to blocking applications such as is the case with WSGI based applications. Worst-case scenario is that with a multiprocess nginx configuration, you can see user requests be blocked even though some nginx worker processes may be idle. Apache/mod_wsgi doesn't have this issue as Apache processes will only accept requests when they have the resources to actually handle the request. Apache/mod_wsgi will thus give more predictable and reliable behaviour.
A:
The author of nginx mod_wsgi explains some differences to Apache mod_wsgi in this mailing list message.
A:
The main difference is that nginx is built to handle large numbers of connections in a much smaller memory space. This makes it very well suited for apps that are doing comet-like connections that can have many idle open connections. This also gives it quite a small memory footprint.
From a raw performance perspective, nginx is faster, but not so much faster that I would include that as a determining factor.
Apache has the advantage in the area of modules available, and the fact that it is pretty much standard. Any web host you go with will have it installed, and most techs are going to be very familiar with it.
Also, if you use mod_wsgi, it is your wsgi server so you don't even need cherrypy.
Other than that, the best advice I can give is try setting up your app under both and do some benchmarking, since no matter what any one tells you, your mileage may vary.
A:
One thing that CherryPy's webserver has going for it is that it's a pure python webserver (AFAIK), which may or may not make deployment easier for you. Plus, I could see the benefits of using it if you're just using a server for WSGI and static content.
(shameless plug warning: I wrote the WSGI code that I'm about to mention)
Kamaelia will have WSGI support coming in the next release. The cool thing is that you'll likely be able to either use the pre-made one or build your own using the existing HTTP and WSGI code.
(end shameless plug)
With that said, given the current options, I would personally probably go with CherryPy because it seems to be the simplest to configure and I can understand python code moreso than I can understand C code.
You may do best to try each of them out and see what the pros and cons of each one are for your specific application though.
|
In production, Apache + mod_wsgi or Nginx + mod_wsgi?
|
What to use for a medium to large python WSGI application, Apache + mod_wsgi or Nginx + mod_wsgi?
Which combination will need more memory and CPU time?
Which one is faster?
Which is known for being more stable than the other?
I am also thinking to use CherryPy's WSGI server but I hear it's not very suitable for a very high-load application, what do you know about this?
Note: I didn't use any Python Web Framework, I just wrote the whole thing from scratch.
Note': Other suggestions are also welcome.
|
[
"For nginx/mod_wsgi, ensure you read:\nhttp://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html\nBecause of how nginx is an event driven system underneath, it has behavioural characteristics which are detrimental to blocking applications such as is the case with WSGI based applications. Worse case scenario is that with multiprocess nginx configuration, you can see user requests be blocked even though some nginx worker processes may be idle. Apache/mod_wsgi doesn't have this issue as Apache processes will only accept requests when it has the resources to actually handle the request. Apache/mod_wsgi will thus give more predictable and reliable behaviour.\n",
"The author of nginx mod_wsgi explains some differences to Apache mod_wsgi in this mailing list message.\n",
"The main difference is that nginx is built to handle large numbers of connections in a much smaller memory space. This makes it very well suited for apps that are doing comet like connections that can have many idle open connections. This also gives it quite a smaller memory foot print.\nFrom a raw performance perspective, nginx is faster, but not so much faster that I would include that as a determining factor.\nApache has the advantage in the area of modules available, and the fact that it is pretty much standard. Any web host you go with will have it installed, and most techs are going to be very familiar with it.\nAlso, if you use mod_wsgi, it is your wsgi server so you don't even need cherrypy.\nOther than that, the best advice I can give is try setting up your app under both and do some benchmarking, since no matter what any one tells you, your mileage may vary.\n",
"One thing that CherryPy's webserver has going for it is that it's a pure python webserver (AFAIK), which may or may not make deployment easier for you. Plus, I could see the benefits of using it if you're just using a server for WSGI and static content.\n(shameless plug warning: I wrote the WSGI code that I'm about to mention)\nKamaelia will have WSGI support coming in the next release. The cool thing is that you'll likely be able to either use the pre-made one or build your own using the existing HTTP and WSGI code.\n(end shameless plug)\nWith that said, given the current options, I would personally probably go with CherryPy because it seems to be the simplest to configure and I can understand python code moreso than I can understand C code.\nYou may do best to try each of them out and see what the pros and cons of each one are for your specific application though.\n"
] |
[
78,
16,
14,
7
] |
[] |
[] |
[
"apache",
"mod_wsgi",
"nginx",
"python"
] |
stackoverflow_0000195534_apache_mod_wsgi_nginx_python.txt
|
Q:
Python POST data using mod_wsgi
This must be a very simple question, but I don't seem to be able to figure out.
I'm using apache + mod_wsgi to host my python application, and I'd like to get the post content submitted in one of the forms -however, neither the environment values, nor sys.stdin contains any of this data. Mind giving me a quick hand?
Edit:
Tried already:
environ["CONTENT_TYPE"] = 'application/x-www-form-urlencoded' (no data)
environ["wsgi.input"] seems a plausible way, however, both environ["wsgi.input"].read(), and environ["wsgi.input"].read(-1) returns an empty string (yes, content has been posted, and environ["request_method"] = "post"
A:
PEP 333 says you must read environ['wsgi.input'].
I just saved the following code and made apache's mod_wsgi run it. It works.
You must be doing something wrong.
from pprint import pformat
def application(environ, start_response):
# show the environment:
output = ['<pre>']
output.append(pformat(environ))
output.append('</pre>')
#create a simple form:
output.append('<form method="post">')
output.append('<input type="text" name="test">')
output.append('<input type="submit">')
output.append('</form>')
if environ['REQUEST_METHOD'] == 'POST':
# show form data as received by POST:
output.append('<h1>FORM DATA</h1>')
output.append(pformat(environ['wsgi.input'].read()))
# send results
output_len = sum(len(line) for line in output)
start_response('200 OK', [('Content-type', 'text/html'),
('Content-Length', str(output_len))])
return output
A:
Be aware that technically speaking calling read() or read(-1) on wsgi.input is a violation of the WSGI specification even though Apache/mod_wsgi allows it. This is because the WSGI specification requires that a valid length argument be supplied. The WSGI specification also says you shouldn't read more data than is specified by the CONTENT_LENGTH.
So, the code above may work in Apache/mod_wsgi but it isn't portable WSGI code and will fail on some other WSGI implementations. To be correct, determine request content length and supply that value to read().
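For illustration, a minimal sketch of that portable pattern:
def application(environ, start_response):
    # determine the body size per CONTENT_LENGTH, then read exactly that
    try:
        length = int(environ.get('CONTENT_LENGTH') or 0)
    except ValueError:
        length = 0
    if length:
        body = environ['wsgi.input'].read(length)
    else:
        body = ''
    start_response('200 OK', [('Content-type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]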
|
Python POST data using mod_wsgi
|
This must be a very simple question, but I don't seem to be able to figure out.
I'm using apache + mod_wsgi to host my python application, and I'd like to get the post content submitted in one of the forms -however, neither the environment values, nor sys.stdin contains any of this data. Mind giving me a quick hand?
Edit:
Tried already:
environ["CONTENT_TYPE"] = 'application/x-www-form-urlencoded' (no data)
environ["wsgi.input"] seems a plausible way, however, both environ["wsgi.input"].read(), and environ["wsgi.input"].read(-1) returns an empty string (yes, content has been posted, and environ["request_method"] = "post"
|
[
"PEP 333 says you must read environ['wsgi.input'].\nI just saved the following code and made apache's mod_wsgi run it. It works.\nYou must be doing something wrong.\nfrom pprint import pformat\n\ndef application(environ, start_response):\n # show the environment:\n output = ['<pre>']\n output.append(pformat(environ))\n output.append('</pre>')\n\n #create a simple form:\n output.append('<form method=\"post\">')\n output.append('<input type=\"text\" name=\"test\">')\n output.append('<input type=\"submit\">')\n output.append('</form>')\n\n if environ['REQUEST_METHOD'] == 'POST':\n # show form data as received by POST:\n output.append('<h1>FORM DATA</h1>')\n output.append(pformat(environ['wsgi.input'].read()))\n\n # send results\n output_len = sum(len(line) for line in output)\n start_response('200 OK', [('Content-type', 'text/html'),\n ('Content-Length', str(output_len))])\n return output\n\n",
"Be aware that technically speaking calling read() or read(-1) on wsgi.input is a violation of the WSGI specification even though Apache/mod_wsgi allows it. This is because the WSGI specification requires that a valid length argument be supplied. The WSGI specification also says you shouldn't read more data than is specified by the CONTENT_LENGTH.\nSo, the code above may work in Apache/mod_wsgi but it isn't portable WSGI code and will fail on some other WSGI implementations. To be correct, determine request content length and supply that value to read().\n"
] |
[
22,
14
] |
[] |
[] |
[
"mod_wsgi",
"python"
] |
stackoverflow_0000394465_mod_wsgi_python.txt
|
Q:
Running a Django site under mod_wsgi
I am trying to run my Django sites with mod_wsgi instead of mod_python (RHEL 5). I tried this with all my sites, but get the same problem. I configured it the standard way everyone recommends, but requests to the site simply time out.
Apache conf:
<VirtualHost 74.54.144.34>
DocumentRoot /wwwclients/thymeandagain
ServerName thymeandagain4corners.com
ServerAlias www.thymeandagain4corners.com
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog /var/log/httpd/thymeandagain_access_log combined
ErrorLog /var/log/httpd/thymeandagain_error_log
LogLevel error
WSGIScriptAlias / /wwwclients/thymeandagain/wsgi_handler.py
WSGIDaemonProcess thymeandagain user=admin group=admin processes=1 threads=16
WSGIProcessGroup thymeandagain
</VirtualHost>
wsgi_handler.py:
import sys
import os
sys.path.append("/wwwclients")
os.environ['DJANGO_SETTINGS_MODULE'] = 'thymeandagain.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
The daemon mod_wsgi is supposed to spawn off is not there, so requests just time out and I get a bunch of "Unable to connect to WSGI daemon process" errors in the logs. Is there something about the WSGIDaemonProcess directive that is preventing creation of the daemon? Thanks in advance for any help...
EDIT: I get this in the error log:
[[email protected]] mcm_server_readable():2582: timeout: Operation now in progress: select(2) call timed out for read(2)able fds
[[email protected]] mcm_get_line():1592
[[email protected]] mcm_server_readable():2582: timeout: Operation now in progress: select(2) call timed out for read(2)able fds
[[email protected]] mcm_get_line():1592
[Thu Nov 20 21:18:17 2008] [notice] caught SIGTERM, shutting down
[Thu Nov 20 21:18:18 2008] [notice] Digest: generating secret for digest authentication ...
[Thu Nov 20 21:18:18 2008] [notice] Digest: done
[Thu Nov 20 21:18:18 2008] [notice] mod_python: Creating 4 session mutexes based on 8 max processes and 64 max threads.
[Thu Nov 20 21:18:18 2008] [notice] Apache/2.2.3 (Red Hat) mod_python/3.2.8 Python/2.4.3 mod_wsgi/2.1-BRANCH configured -- resuming normal operations
A:
The real problem is permissions on the Apache log directory. It is necessary to tell Apache/mod_wsgi to use an alternate location for the UNIX sockets used to communicate with the daemon processes. See:
http://code.google.com/p/modwsgi/wiki/ConfigurationIssues#Location_Of_UNIX_Sockets
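For example, a sketch of such an alternate socket location (the path is a placeholder; see the link above for details):
WSGISocketPrefix /var/run/wsgi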
A:
The problem is that mod_python doesn't go well together with mod_wsgi. I ran into a similar issue a few weeks ago, and everything started working for me shortly after I commented out the mod_python inclusion.
Try searching the modwsgi.org wiki for "mod_python"; I believe someone was talking about this somewhere in the comments.
A:
Here is a very detailed description of how to integrate Django with mod_wsgi.
|
Running a Django site under mod_wsgi
|
I am trying to run my Django sites with mod_wsgi instead of mod_python (RHEL 5). I tried this with all my sites, but get the same problem. I configured it the standard way everyone recommends, but requests to the site simply time out.
Apache conf:
<VirtualHost 74.54.144.34>
DocumentRoot /wwwclients/thymeandagain
ServerName thymeandagain4corners.com
ServerAlias www.thymeandagain4corners.com
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog /var/log/httpd/thymeandagain_access_log combined
ErrorLog /var/log/httpd/thymeandagain_error_log
LogLevel error
WSGIScriptAlias / /wwwclients/thymeandagain/wsgi_handler.py
WSGIDaemonProcess thymeandagain user=admin group=admin processes=1 threads=16
WSGIProcessGroup thymeandagain
</VirtualHost>
wsgi_handler.py:
import sys
import os
sys.path.append("/wwwclients")
os.environ['DJANGO_SETTINGS_MODULE'] = 'thymeandagain.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
The daemon mod_wsgi is supposed to spawn off is not there, so requests just time out and I get a bunch of "Unable to connect to WSGI daemon process" errors in the logs. Is there something about the WSGIDaemonProcess directive that is preventing creation of the daemon? Thanks in advance for any help...
EDIT: I get this in the error log:
[[email protected]] mcm_server_readable():2582: timeout: Operation now in progress: select(2) call timed out for read(2)able fds
[[email protected]] mcm_get_line():1592
[[email protected]] mcm_server_readable():2582: timeout: Operation now in progress: select(2) call timed out for read(2)able fds
[[email protected]] mcm_get_line():1592
[Thu Nov 20 21:18:17 2008] [notice] caught SIGTERM, shutting down
[Thu Nov 20 21:18:18 2008] [notice] Digest: generating secret for digest authentication ...
[Thu Nov 20 21:18:18 2008] [notice] Digest: done
[Thu Nov 20 21:18:18 2008] [notice] mod_python: Creating 4 session mutexes based on 8 max processes and 64 max threads.
[Thu Nov 20 21:18:18 2008] [notice] Apache/2.2.3 (Red Hat) mod_python/3.2.8 Python/2.4.3 mod_wsgi/2.1-BRANCH configured -- resuming normal operations
|
[
"The real problem is permissions on Apache log directory. It is necessary to tell Apache/mod_wsgi to use an alternate location for the UNIX sockets used to communicate with the daemon processes. See:\nhttp://code.google.com/p/modwsgi/wiki/ConfigurationIssues#Location_Of_UNIX_Sockets\n",
"The problem is that mod_python doesn't go well together with mod_wsgi. I got into similar issue few weeks ago and everything started working for me shortly after I commented out mod_python inclusion.\nTry to search modwsgi.org wiki for \"mod_python\", I believe there was someone talking about this somewhere in comments\n",
"Here is very detailed description on how to integrate django with mod_wsgi.\n"
] |
[
10,
4,
1
] |
[] |
[] |
[
"apache",
"django",
"mod_wsgi",
"python"
] |
stackoverflow_0000302679_apache_django_mod_wsgi_python.txt
|
Q:
Passing apache2 digest authentication information to a wsgi script run by mod_wsgi
I've got the directive
<VirtualHost *>
<Location />
AuthType Digest
AuthName "global"
AuthDigestDomain /
AuthUserFile /root/apache_users
<Limit GET>
Require valid-user
</Limit>
</Location>
WSGIScriptAlias / /some/script.wsgi
WSGIDaemonProcess mywsgi user=someuser group=somegroup processes=2 threads=25
WSGIProcessGroup mywsgi
ServerName some.example.org
</VirtualHost>
I'd like to know in the /some/script.wsgi
def application(environ, start_response):
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
    ])
    return ['Hello']
What user is logged in.
How do I do that?
A:
add WSGIPassAuthorization On:
<VirtualHost *>
<Location />
AuthType Digest
AuthName "global"
AuthDigestDomain /
AuthUserFile /root/apache_users
<Limit GET>
Require valid-user
</Limit>
</Location>
WSGIPassAuthorization On
WSGIScriptAlias / /some/script.wsgi
WSGIDaemonProcess mywsgi user=someuser group=somegroup processes=2 threads=25
WSGIProcessGroup mywsgi
ServerName some.example.org
</VirtualHost>
Then just read environ['REMOTE_USER']:
def application(environ, start_response):
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
    ])
    return ['Hello %s' % environ['REMOTE_USER']]
More information at mod_wsgi documentation.
A:
Additional information about Apache/mod_wsgi and access, authentication and authorization mechanisms can be found in:
http://code.google.com/p/modwsgi/wiki/AccessControlMechanisms
The information isn't passed by default because doing so could leak password information to applications which maybe shouldn't get it.
|
Passing apache2 digest authentication information to a wsgi script run by mod_wsgi
|
I've got the directive
<VirtualHost *>
<Location />
AuthType Digest
AuthName "global"
AuthDigestDomain /
AuthUserFile /root/apache_users
<Limit GET>
Require valid-user
</Limit>
</Location>
WSGIScriptAlias / /some/script.wsgi
WSGIDaemonProcess mywsgi user=someuser group=somegroup processes=2 threads=25
WSGIProcessGroup mywsgi
ServerName some.example.org
</VirtualHost>
I'd like to know in the /some/script.wsgi
def application(environ, start_response):
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
    ])
    return ['Hello']
What user is logged in.
How do I do that?
|
[
"add WSGIPassAuthorization On:\n<VirtualHost *>\n <Location />\n AuthType Digest\n AuthName \"global\"\n AuthDigestDomain /\n AuthUserFile /root/apache_users\n <Limit GET>\n Require valid-user\n </Limit>\n </Location>\n WSGIPassAuthorization On\n WSGIScriptAlias / /some/script.wsgi\n WSGIDaemonProcess mywsgi user=someuser group=somegroup processes=2 threads=25\n WSGIProcessGroup mywsgi\n ServerName some.example.org\n</VirtualHost>\n\nThen just read environ['REMOTE_USER']:\ndef application(environ, start_response):\n start_response('200 OK', [\n ('Content-Type', 'text/plain'),\n ])\n return ['Hello %s' % environ['REMOTE_USER']]\n\nMore information at mod_wsgi documentation.\n",
"Additional information about Apache/mod_wsgi and access, authentication and authorization mechanisms can be found in:\nhttp://code.google.com/p/modwsgi/wiki/AccessControlMechanisms\nThe information isn't passed by default because doing so could leak password information to applications which maybe shouldn't get it.\n"
] |
[
16,
2
] |
[] |
[] |
[
"apache",
"authentication",
"mod_wsgi",
"python",
"wsgi"
] |
stackoverflow_0000123499_apache_authentication_mod_wsgi_python_wsgi.txt
|
Q:
Is mod_wsgi/Python optimizing things out?
I have been trying to track down weird problems with my mod_wsgi/Python web application. I have the application handler which creates an object and calls a method:
def my_method(self, file):
    self.sapi.write("In my method for %d time" % self.mmcount)
    self.mmcount += 1
    # ... open file (absolute path to file), extract list of files inside
    # ... exit if file contains no path/file strings
    for f in extracted_files:
        self.num_files_found += 1
        self.my_method(f)
At the start and end of this, I write
obj.num_files_found
To the browser.
So this is a recursive function that goes down a tree of file-references inside files. Any references in a file are printed and then those references are opened and examined and so on until all files are leaf-nodes containing no files. Why I am doing this isn't really important ... it is more of a pedantic example.
You would expect the output to be deterministic
Such as
Files found: 0
In my method for the 0 time
In my method for the 1 time
In my method for the 2 time
In my method for the 3 time
...
In my method for the n time
Files found: 128
And for the first few requests it is as expected. Then I get the following for as long as I refresh
Files found: 0
In my method for the 0 time
Files found: 128
Even though I know, from previous refreshes with no code/file alterations, that it takes n times to enumerate 128 files.
So the question then: Does mod_wsgi/Python include internal optimizations that would stop complete execution? Does it guess the output is deterministic and cache?
As a note, in the refreshes when it is as expected, REMOTE_PORT increments by one each time ... when it uses a short output, the increment of REMOTE_PORT jumps wildly. Might be unrelated however.
I am new to Python, be gentle
Solved
Who knows what it was, but ripping out Apache, mod_python, mod_wsgi and nearly everything HTTP related and re-installing fixed the problem. Something was pretty broken but seems ok now :)
A:
The fact that Apache/mod_wsgi may run in multi-process/multi-threaded configurations can trip up code written on the assumption that it runs in a single process, possibly a single-threaded one. For a discussion of the different configuration possibilities and what they mean for shared data, see:
http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading
A:
"Does mod_wsgi/Python include internal optimizations that would stop complete execution? Does it guess the output is deterministic and cache?"
No.
The problem is (generally) that you have a global variable somewhere in your program that is not getting reset the way you hoped it would.
Sometimes this can be unintentional, since Python checks local namespace and global namespace for variables.
You can -- inadvertently -- have a function that depends on some global variable. I'd bet on this.
What you're likely seeing is a number of mod_wsgi daemon processes, each with a global variable problem. The first request for each daemon works. Then your global variable is in a state that prevents work from happening. [File is left open, top-level directory variable got overwritten, who knows?]
After the first few, all the daemons are stuck in the "other" mode where they report the answer without doing the real work.
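As a concrete illustration of that failure mode (a hypothetical, minimal reproduction, not the poster's actual code):
visited = set()   # module-level: survives between requests in one daemon process

def application(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    path = environ.get('PATH_INFO', '/')
    if path in visited:        # later requests see state left by earlier ones
        return ['skipping the real work\n']
    visited.add(path)
    return ['did the real work\n']
The first request handled by each daemon process does the work; subsequent ones short-circuit, which matches the symptom reported in the question.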
A:
It seems the Python/mod_wsgi installation must be broken. I have never seen such weird bugs.
Traces next to returns:
self.sapi.write("Returning at line 22 for call %d"%self.times_called)
return someval
Appear to happen numerous times:
Returning at line 22 for call 3
Returning at line 22 for call 3
Returning at line 22 for call 3
There is just no consistent logic in the control-flow of anything :( I am also pretty sure that I can write simple incrementing code to count the number of times a method is called. Absolute, frustrating, nonsense. I even put epoch time next to every call to sapi.write() to make sure that wasn't mindlessly repeating code. They are unique :S
Time to rip-out Apache, Python, mod_wsgi and the rest and start again.
Solved
Who knows what it was, but ripping out Apache, mod_python, mod_wsgi and nearly everything HTTP related and re-installing fixed the problem. Something was pretty broken but seems ok now :)
|
Is mod_wsgi/Python optimizing things out?
|
I have been trying to track down weird problems with my mod_wsgi/Python web application. I have the application handler which creates an object and calls a method:
def my_method(self, file):
    self.sapi.write("In my method for %d time" % self.mmcount)
    self.mmcount += 1
    # ... open file (absolute path to file), extract list of files inside
    # ... exit if file contains no path/file strings
    for f in extracted_files:
        self.num_files_found += 1
        self.my_method(f)
At the start and end of this, I write
obj.num_files_found
To the browser.
So this is a recursive function that goes down a tree of file-references inside files. Any references in a file are printed and then those references are opened and examined and so on until all files are leaf-nodes containing no files. Why I am doing this isn't really important ... it is more of a pedantic example.
You would expect the output to be deterministic
Such as
Files found: 0
In my method for the 0 time
In my method for the 1 time
In my method for the 2 time
In my method for the 3 time
...
In my method for the n time
Files found: 128
And for the first few requests it is as expected. Then I get the following for as long as I refresh
Files found: 0
In my method for the 0 time
Files found: 128
Even though I know, from previous refreshes with no code/file alterations, that it takes n times to enumerate 128 files.
So the question then: Does mod_wsgi/Python include internal optimizations that would stop complete execution? Does it guess the output is deterministic and cache?
As a note, in the refreshes when it is as expected, REMOTE_PORT increments by one each time ... when it uses a short output, the increment of REMOTE_PORT jumps wildly. Might be unrelated however.
I am new to Python, be gentle
Solved
Who knows what it was, but ripping out Apache, mod_python, mod_wsgi and nearly everything HTTP related and re-installing fixed the problem. Something was pretty broken but seems ok now :)
|
[
"That Apache/mod_wsgi may run in both multi process/multi threaded configurations can trip up code which is written with the assumption that it is run in a single process, with that process possibly being single threaded. For a discussion of different configuration possibilities and what that all means for shared data, see:\nhttp://code.google.com/p/modwsgi/wiki/ProcessesAndThreading\n",
"\"Does mod_wsgi/Python include internal optimizations that would stop complete execution? Does it guess the output is deterministic and cache?\"\nNo.\nThe problem is (generally) that you have a global variable somewhere in your program that is not getting reset the way you hoped it would.\nSometimes this can be unintentional, since Python checks local namespace and global namespace for variables.\nYou can -- inadvertently -- have a function that depends on some global variable. I'd bet on this. \nWhat you're likely seeing is a number of mod_wsgi daemon processes, each with a global variable problem. The first request for each daemon works. Then your global variable is in a state that prevents work from happening. [File is left open, top-level directory variable got overwritten, who knows?]\nAfter the first few, all the daemons are stuck in the \"other\" mode where they report the answer without doing the real work.\n",
"It seems the Python/mod_wsgi installation must be broken. I have never seen such weird bugs.\nTraces next to returns:\nself.sapi.write(\"Returning at line 22 for call %d\"%self.times_called)\nreturn someval\n\nAppear to happen numerous time:\n\nReturning at line 22 for call 3\nReturning at line 22 for call 3\nReturning at line 22 for call 3\n\nThere is just no consistent logic in the control-flow of anything :( I am also pretty sure that I can write simple incrementing code to count the number of times a method is called. Absolute, frustrating, nonsense. I even put epoch time next to every call to sapi.write() to make sure that wasn't mindlessly repeating code. They are unique :S\nTime to rip-out Apache, Python, mod_wsgi and the rest and start again.\nSolved\nWho knows what it was, but ripping out Apache, mod_python, mod_wsgi and nearly everything HTTP related and re-installing fixed the problem. Something was pretty broken but seems ok now :)\n"
] |
[
3,
1,
1
] |
[] |
[] |
[
"caching",
"debugging",
"mod_wsgi",
"python"
] |
stackoverflow_0000957685_caching_debugging_mod_wsgi_python.txt
|
Q:
Auto-incrementing attribute with custom logic in SQLAlchemy
I have a simple "Invoices" class with a "Number" attribute that has to
be assigned by the application when the user saves an invoice. There
are some constraints:
1) the application is a (thin) client-server one, so whatever
assigns the number must look out for collisions
2) Invoices has a "version" attribute too, so I can't use a simple
DBMS-level autoincrementing field
I'm trying to build this using a custom Type that would kick in every
time an invoice gets saved. Whenever process_bind_param is called with
a None value, it will call a singleton of some sort to determine the
number and avoid collisions. Is this a decent solution?
Anyway, I'm having a problem.. Here's my custom Type:
class AutoIncrement(types.TypeDecorator):
    impl = types.Unicode

    def copy(self):
        return AutoIncrement()

    def process_bind_param(self, value, dialect):
        if not value:
            # Must find next autoincrement value
            value = "1"  # Test value :)
        return value
My problem right now is that when I save an Invoice and AutoIncrement sets "1" as the value for its number, the Invoice instance doesn't get updated with the new number. Is this expected? Am I missing something?
Many thanks for your time!
(SQLA 0.5.3 on Python 2.6, using postgreSQL 8.3)
Edit: Michael Bayer told me that this behaviour is expected, since TypeDecorators don't deal with default values.
A:
Is there any particular reason you don't just use a default= parameter in your column definition? (This can be an arbitrary Python callable).
def generate_invoice_number():
    # special logic to generate a unique invoice number

class Invoice(DeclarativeBase):
    __tablename__ = 'invoice'
    number = Column(Integer, unique=True, default=generate_invoice_number)
    ...
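If the callable itself must be collision-safe, one hedged sketch (the scheme and names are purely illustrative, not part of the original answer) is to combine a timestamp with a random suffix and let the unique=True constraint catch the rare clash:
import random
import time

def generate_invoice_number():
    # hypothetical strategy: epoch seconds plus a 3-digit random suffix
    return int('%d%03d' % (time.time(), random.randint(0, 999)))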
|
Auto-incrementing attribute with custom logic in SQLAlchemy
|
I have a simple "Invoices" class with a "Number" attribute that has to
be assigned by the application when the user saves an invoice. There
are some constraints:
1) the application is a (thin) client-server one, so whatever
assigns the number must look out for collisions
2) Invoices has a "version" attribute too, so I can't use a simple
DBMS-level autoincrementing field
I'm trying to build this using a custom Type that would kick in every
time an invoice gets saved. Whenever process_bind_param is called with
a None value, it will call a singleton of some sort to determine the
number and avoid collisions. Is this a decent solution?
Anyway, I'm having a problem.. Here's my custom Type:
class AutoIncrement(types.TypeDecorator):
    impl = types.Unicode

    def copy(self):
        return AutoIncrement()

    def process_bind_param(self, value, dialect):
        if not value:
            # Must find next autoincrement value
            value = "1"  # Test value :)
        return value
My problem right now is that when I save an Invoice and AutoIncrement sets "1" as the value for its number, the Invoice instance doesn't get updated with the new number. Is this expected? Am I missing something?
Many thanks for your time!
(SQLA 0.5.3 on Python 2.6, using postgreSQL 8.3)
Edit: Michael Bayer told me that this behaviour is expected, since TypeDecorators don't deal with default values.
|
[
"Is there any particular reason you don't just use a default= parameter in your column definition? (This can be an arbitrary Python callable).\ndef generate_invoice_number():\n # special logic to generate a unique invoice number\n\nclass Invoice(DeclarativeBase):\n __tablename__ = 'invoice'\n number = Column(Integer, unique=True, default=generate_invoice_number)\n ...\n\n"
] |
[
6
] |
[] |
[] |
[
"auto_increment",
"python",
"sqlalchemy"
] |
stackoverflow_0001038126_auto_increment_python_sqlalchemy.txt
|
Q:
In Python, how do I easily generate an image file from some source data?
I have some data that I would like to visualize. Each byte of the source data roughly corresponds to a pixel value of the image.
What is the easiest way to generate an image file (bitmap?) using Python?
A:
You can create images with a list of pixel values using Pillow:
from PIL import Image
img = Image.new('RGB', (width, height))
img.putdata(my_list)
img.save('image.png')
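A slightly more complete sketch that maps raw bytes to grayscale pixels (the file name and row width are assumptions for illustration):
from PIL import Image

data = open('input.bin', 'rb').read()    # hypothetical source file
width = 256                              # assumed row width
height = len(data) // width
img = Image.frombytes('L', (width, height), data[:width * height])
img.save('visualization.png')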
A:
Have a look at PIL and pyGame. Both of them allow you to draw on a canvas and then save it to a file.
|
In Python, how do I easily generate an image file from some source data?
|
I have some data that I would like to visualize. Each byte of the source data roughly corresponds to a pixel value of the image.
What is the easiest way to generate an image file (bitmap?) using Python?
|
[
"You can create images with a list of pixel values using Pillow:\nfrom PIL import Image\n\nimg = Image.new('RGB', (width, height))\nimg.putdata(my_list)\nimg.save('image.png')\n\n",
"Have a look at PIL and pyGame. Both of them allow you to draw on a canvas and then save it to a file.\n"
] |
[
39,
6
] |
[] |
[] |
[
"data_visualization",
"image",
"python"
] |
stackoverflow_0001038550_data_visualization_image_python.txt
|
Q:
Obtaining references to function objects on the execution stack from the frame object?
Given the output of inspect.stack(), is it possible to get the function objects from anywhere from the stack frame and call these? If so, how?
(I already know how to get the names of the functions.)
Here is what I'm getting at: Let's say I'm a function and I'm trying to determine if my caller is a generator or a regular function? I need to call inspect.isgeneratorfunction() on the function object. And how do you figure out who called you? inspect.stack(), right? So if I can somehow put those together, I'll have the answer to my question. Perhaps there is an easier way to do this?
A:
Here is a code snippet that does it. There is no error checking. The idea is to find, in the locals of the grandparent frame, the function object that was called. The function object returned should be the parent. If you also want to search the builtins, then simply look into stacks[2][0].f_builtins.
def f():
    stacks = inspect.stack()
    grand_parent_locals = stacks[2][0].f_locals
    caller_name = stacks[1][3]
    candidate = grand_parent_locals[caller_name]
In the case of a class, one can write the following (inspired by Marcin solution)
class test(object):
    def f(self):
        stack = inspect.stack()
        parent_func_name = stack[1][3]
        parent_func = getattr(self, parent_func_name).im_func
A:
I took the following approach, very similar to Eolmar's answer.
stack = inspect.stack()
parent_locals = stack[1][0].f_locals['self']
parent_func_name = stack[1][3]
parent_func_attr = getattr(parent_locals,parent_func_name)
parent_func = parent_func_attr.im_func
is_parent_gen = inspect.isgeneratorfunction(parent_func)
|
Obtaining references to function objects on the execution stack from the frame object?
|
Given the output of inspect.stack(), is it possible to get the function objects from anywhere from the stack frame and call these? If so, how?
(I already know how to get the names of the functions.)
Here is what I'm getting at: Let's say I'm a function and I'm trying to determine if my caller is a generator or a regular function? I need to call inspect.isgeneratorfunction() on the function object. And how do you figure out who called you? inspect.stack(), right? So if I can somehow put those together, I'll have the answer to my question. Perhaps there is an easier way to do this?
|
[
"Here is a code snippet that do it. There is no error checking. The idea is to find in the locals of the grand parent the function object that was called. The function object returned should be the parent. If you want to also search the builtins, then simply look into stacks[2][0].f_builtins. \ndef f():\n stacks = inspect.stack()\n grand_parent_locals = stacks[2][0].f_locals\n caller_name = stacks[1][3]\n candidate = grand_parent_locals[caller_name]\n\nIn the case of a class, one can write the following (inspired by Marcin solution)\nclass test(object):\n def f(self):\n stack = inspect.stack()\n parent_func_name = stack[1][3]\n parent_func = getattr(self, parent_func_name).im_func\n\n",
"I took the following approach, very similar to Eolmar's answer.\nstack = inspect.stack()\nparent_locals = stack[1][0].f_locals['self']\nparent_func_name = stack[1][3]\nparent_func_attr = getattr(parent_locals,parent_func_name)\nparent_func = parent_func_attr.im_func\nis_parent_gen = inspect.isgeneratorfunction(func)\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"inspect",
"python",
"stack_frame"
] |
stackoverflow_0001034688_inspect_python_stack_frame.txt
|
Q:
When reading a socket in python, is there any difference between os.read and socket.recv?
Suppose I have a socket. What is the difference between these two lines of code?
line 1:
os.read(some_socket.fileno(), 1024)
line 2:
some_socket.recv(1024)
...other than the fact that the first one doesn't work on Windows. In other words, can I substitute the second line for the first one? I've got a codebase that hasn't really been tested with Windows, and this is causing trouble.
A:
line 1 uses the underlying file descriptor to read the socket, so it is platform-dependent. Use line 2, since it is a portable, multi-platform way of accomplishing the same thing.
Obligatory: If you're doing anything serious, it's better to avoid having to deal with low-level sockets. They are hard to get right, it may seem things are working but there are many details. Those details are already solved in many networking frameworks and there's no reason to reinvent the wheel. I suggest twisted, it is pretty good.
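For example, a portable sketch of reading an exact amount with recv (recv may return fewer bytes than asked for, so you loop; the helper name is illustrative):
def recv_exactly(sock, n):
    # portable replacement for os.read(sock.fileno(), n) habits
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:            # peer closed the connection
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return ''.join(chunks)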
|
When reading a socket in python, is there any difference between os.read and socket.recv?
|
Suppose I have a socket. What is the difference between these two lines of code?
line 1:
os.read(some_socket.fileno(), 1024)
line 2:
some_socket.recv(1024)
...other than the fact that the first one doesn't work on Windows. In other words, can I substitute the second line for the first one? I've got a codebase that hasn't really been tested with Windows, and this is causing trouble.
|
[
"line 1 uses the underlining file descriptor to read the socket, so it is platform-dependant. Use line 2, since it is a portable, multi-platform way of accomplishing the same thing.\nObligatory: If you're doing anything serious, it's better to avoid having to deal with low-level sockets. They are hard to get right, it may seem things are working but there are many details. Those details are already solved in many networking frameworks and there's no reason to reinvent the wheel. I suggest twisted, it is pretty good.\n"
] |
[
6
] |
[] |
[] |
[
"python",
"sockets",
"tcp",
"windows"
] |
stackoverflow_0001039462_python_sockets_tcp_windows.txt
|
Q:
Run a task at specific intervals in python
Possible Duplicate:
Suggestions for a Cron like scheduler in Python?
What would be the most pythonic way to schedule a function to run periodically as a background task? There are some ideas here, but they all seem rather ugly to me. And incomplete.
The java Timer class has a very complete solution. Anyone know of a similar python class?
A:
There is a handy event scheduler that might do what you need. Here's a link to the documentation:
http://docs.python.org/library/sched.html
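A minimal sketch of using sched for a repeating task (the re-arming trick is the usual idiom; the interval and the tick function are placeholders):
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def periodic(interval, action):
    action()
    # re-arm the event so it fires again, giving cron-like repetition
    scheduler.enter(interval, 1, periodic, (interval, action))

def tick():
    print('tick')   # replace with the real work

scheduler.enter(5, 1, periodic, (5, tick))
scheduler.run()   # blocks; run it in a background thread if needed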
A:
try the multiprocessing module.
from multiprocessing import Process
import time
def doWork():
    while True:
        print "working...."
        time.sleep(10)

if __name__ == "__main__":
    p = Process(target=doWork)
    p.start()
    while True:
        time.sleep(60)
A:
Not a direct response to the question.
On Linux/Unix operating systems there are a few ways to do this; usually I just write my program/script normally and then add it to cron or something similar (like launchd on OS X).
Response to the question starts here.
Use standard python sched module - standard library documentation describes some nifty solutions.
A:
Many programmers try to avoid multi-threaded code, since it is highly bug-prone in imperative programming.
If you want a scheduled task in a single-threaded environment, then you probably need some kind of "Reactor". You may want to use a ready-made one like Twisted's.
Then it would be a basic function provided by your reactor, for example (with pygame):
pygame.time.set_timer - repeatedly create an event on the event queue
A:
Python has a Timer class in the threading module, but that is a one-shot timer, so you would be better off doing something like the links you have seen.
http://code.activestate.com/recipes/65222/
Why do you think that is ugly? Once you have written such a class, usage will be as simple as in Java.
If you are using it inside some GUI, e.g. wxPython, then it has wx.Timer which you can use directly.
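A minimal sketch of turning the one-shot threading.Timer into a repeating one (similar in spirit to the recipe linked above; the class name is illustrative):
import threading

class RepeatingTimer(object):
    def __init__(self, interval, func):
        self.interval = interval
        self.func = func
        self.timer = None

    def _run(self):
        self.func()
        self.start()    # re-arm after each firing

    def start(self):
        self.timer = threading.Timer(self.interval, self._run)
        self.timer.start()

    def stop(self):
        if self.timer is not None:
            self.timer.cancel()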
|
Run a task at specific intervals in python
|
Possible Duplicate:
Suggestions for a Cron like scheduler in Python?
What would be the most pythonic way to schedule a function to run periodically as a background task? There are some ideas here, but they all seem rather ugly to me. And incomplete.
The java Timer class has a very complete solution. Anyone know of a similar python class?
|
[
"There is a handy event scheduler that might do what you need. Here's a link to the documentation:\nhttp://docs.python.org/library/sched.html\n",
"try the multiprocessing module.\nfrom multiprocessing import Process\nimport time\n\ndef doWork():\n while True:\n print \"working....\"\n time.sleep(10)\n\n\n\nif __name__ == \"__main__\":\n p = Process(target=doWork)\n p.start()\n\n while True:\n time.sleep(60)\n\n",
"Not direct response to the question.\nOn Linux/Unix operating system there are few ways to do so and usually I just write my program / script normally and then add it to cron or something similar (like launchd on OS X)\nResponse to the question starts here.\nUse standard python sched module - standard library documentation describes some nifty solutions.\n",
"Many programmers try to avoid multi-threaded code, since it is highly bug-prone in imperative programming.\nIf you want to a scheduled task in a single-threaded environment, then you probably need some kind of \"Reactor\". You may want to use a ready-made one like Twisted's.\nThen it would be a basic function provided by your reactor, for example (with pygame):\n\npygame.time.set_timer - repeatedly create an event on the event queue\n\n",
"Python has a Timer class in threading module but that is one-shot timer, so you would be better doing something as you have seen links. \nhttp://code.activestate.com/recipes/65222/\nWhy do you think that is ugly, once you have written such a class usage will be as simple as in java.\nif you are using it inside some GUI e.g. wxPython than it has wx.Timer which you can directly use\n"
] |
[
12,
9,
7,
6,
2
] |
[] |
[] |
[
"python",
"timer"
] |
stackoverflow_0001038907_python_timer.txt
|
Q:
Crunching json with python
Echoing my other question, I now need to find a way to crunch JSON down to one line, e.g.:
{"node0":{
"node1":{
"attr0":"foo",
"attr1":"foo bar",
"attr2":"value with long spaces"
}
}}
would like to crunch down to a single line:
{"node0":{"node1":{"attr0":"foo","attr1":"foo bar","attr2":"value with long spaces"}}}
by removing insignificant whitespace while preserving the whitespace within the values. Is there a library to do this in Python?
EDIT
Thank you both drdaeman and Eli Courtwright for super quick response!
A:
http://docs.python.org/library/json.html
>>> import json
>>> json.dumps(json.loads("""
... {"node0":{
... "node1":{
... "attr0":"foo",
... "attr1":"foo bar",
... "attr2":"value with long spaces"
... }
... }}
... """))
'{"node0": {"node1": {"attr2": "value with long spaces", "attr0": "foo", "attr1": "foo bar"}}}'
A:
In Python 2.6:
import json
print json.loads( json_string )
Basically, when you use the json module to parse json, then you get a Python dict. If you simply print a dict and/or convert it to a string, it'll all be on one line. Of course, in some cases the Python dict will be slightly different than the json-encoded string (such as with booleans and nulls), so if this matters then you can say
import json
print json.dumps( json.loads(json_string) )
If you don't have Python 2.6 then you can use the simplejson module. In this case you'd simply say
import simplejson
print simplejson.loads( json_string )
|
Crunching json with python
|
Echoing my other question, I now need to find a way to crunch JSON down to one line, e.g.:
{"node0":{
"node1":{
"attr0":"foo",
"attr1":"foo bar",
"attr2":"value with long spaces"
}
}}
would like to crunch down to a single line:
{"node0":{"node1":{"attr0":"foo","attr1":"foo bar","attr2":"value with long spaces"}}}
by removing insignificant whitespace while preserving the whitespace within the values. Is there a library to do this in Python?
EDIT
Thank you both drdaeman and Eli Courtwright for super quick response!
|
[
"http://docs.python.org/library/json.html\n>>> import json\n>>> json.dumps(json.loads(\"\"\"\n... {\"node0\":{\n... \"node1\":{\n... \"attr0\":\"foo\",\n... \"attr1\":\"foo bar\",\n... \"attr2\":\"value with long spaces\"\n... }\n... }}\n... \"\"\"))\n'{\"node0\": {\"node1\": {\"attr2\": \"value with long spaces\", \"attr0\": \"foo\", \"attr1\": \"foo bar\"}}}'\n\n",
"In Python 2.6:\nimport json\nprint json.loads( json_string )\n\nBasically, when you use the json module to parse json, then you get a Python dict. If you simply print a dict and/or convert it to a string, it'll all be on one line. Of course, in some cases the Python dict will be slightly different than the json-encoded string (such as with booleans and nulls), so if this matters then you can say\nimport json\nprint json.dumps( json.loads(json_string) )\n\nIf you don't have Python 2.6 then you can use the simplejson module. In this case you'd simply say \nimport simplejson\nprint simplejson.loads( json_string )\n\n"
] |
[
21,
2
] |
[] |
[] |
[
"json",
"parsing",
"python"
] |
stackoverflow_0001039877_json_parsing_python.txt
|
Q:
Best way to get the name of a button that called an event?
In the following code (inspired by this snippet), I use a single event handler buttonClick to change the title of the window. Currently, I need to evaluate if the Id of the event corresponds to the Id of the button. If I decide to add 50 buttons instead of 2, this method could become cumbersome. Is there a better way to do this?
import wx
class MyFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, wx.ID_ANY, 'wxBitmapButton',
                          pos=(300, 150), size=(300, 350))
        self.panel1 = wx.Panel(self, -1)

        self.button1 = wx.Button(self.panel1, id=-1,
                                 pos=(10, 20), size=(20, 20))
        self.button1.Bind(wx.EVT_BUTTON, self.buttonClick)

        self.button2 = wx.Button(self.panel1, id=-1,
                                 pos=(40, 20), size=(20, 20))
        self.button2.Bind(wx.EVT_BUTTON, self.buttonClick)

        self.Show(True)

    def buttonClick(self, event):
        if event.Id == self.button1.Id:
            self.SetTitle("Button 1 clicked")
        elif event.Id == self.button2.Id:
            self.SetTitle("Button 2 clicked")

application = wx.PySimpleApp()
window = MyFrame()
application.MainLoop()
A:
You could give the button a name, and then look at the name in the event handler.
When you make the button
b = wx.Button(self, 10, "Default Button", (20, 20))
b.myname = "default button"
self.Bind(wx.EVT_BUTTON, self.OnClick, b)
When the button is clicked:
def OnClick(self, event):
    name = event.GetEventObject().myname
A:
Take advantage of what you can do in a language like Python. You can pass extra arguments to your event callback function, like so.
import functools
def __init__(self):
    # ...
    for i in range(10):
        name = 'Button %d' % i
        button = wx.Button(parent, -1, name)
        func = functools.partial(self.on_button, name=name)
        button.Bind(wx.EVT_BUTTON, func)
    # ...

def on_button(self, event, name):
    print '%s clicked' % name
Of course, the arguments can be anything you want.
A:
I recommend that you use different event handlers to handle events from each button. If there is a lot of commonality, you can combine that into a function which returns a function with the specific behavior you want, for instance:
def goingTo(self, where):
    def goingToHandler(event):
        self.SetTitle("I'm going to " + where)
    return goingToHandler

def __init__(self):
    buttonA.Bind(wx.EVT_BUTTON, self.goingTo("work"))
    # clicking will say "I'm going to work"
    buttonB.Bind(wx.EVT_BUTTON, self.goingTo("home"))
    # clicking will say "I'm going to home"
A:
Keep a dict with keys that are the .Id of the buttons and values that are the button names or whatever, so instead of a long if/elif chain you do a single dict lookup in buttonClick.
Code snippets: in __init__, add creation and update of the dict:
self.panel1 = wx.Panel(self, -1)
self.thebuttons = dict()
self.button1 = wx.Button(self.panel1, id=-1,
pos=(10, 20), size = (20,20))
self.thebuttons[self.button1.Id] = 'Button 1'
self.button1.Bind(wx.EVT_BUTTON, self.buttonClick)
and so on for 50 buttons (or whatever) [they might be better created in a loop, btw;-)].
So buttonClick becomes:
def buttonClick(self, event):
    button_name = self.thebuttons.get(event.Id, '?No button?')
    self.setTitle(button_name + ' clicked')
A:
You could create a dictionary of buttons, and do the lookup based on the id ... something like this:
class MyFrame(wx.Frame):
    def _add_button(self, *args, **kwargs):
        btn = wx.Button(*args, **kwargs)
        btn.Bind(wx.EVT_BUTTON, self.buttonClick)
        self.buttons[btn.Id] = btn

    def __init__(self):
        self.buttons = dict()
        self._add_button(self.panel1, id=-1,
                         pos=(10, 20), size=(20, 20))
        self._add_button(self.panel1, id=-1,
                         pos=(40, 20), size=(20, 20))
        self.Show(True)

    def buttonClick(self, event):
        self.SetTitle(self.buttons[event.Id].GetLabel())
A:
I ran into a similar problem: I was generating buttons based on user-supplied data, and I needed the buttons to affect another class, so I needed to pass along information about the buttonclick. What I did was explicitly assign button IDs to each button I generated, then stored information about them in a dictionary to lookup later.
I would have thought there would be a prettier way to do this, constructing a custom event passing along more information, but all I've seen is the dictionary-lookup method. Also, I keep around a list of the buttons so I can erase all of them when needed.
Here's a slightly scrubbed code sample of something similar:
self.buttonDefs = {}
self.buttons = []
id_increment = 800
if (row, col) in self.items:
    for ev in self.items[(row, col)]:
        id_increment += 1
        #### Populate a dict with the event information
        self.buttonDefs[id_increment] = (row, col, ev['user'])
        ####
        tempBtn = wx.Button(self.sidebar, id_increment, "Choose",
                            (0, 50 + len(self.buttons) * 40), (50, 20))
        self.sidebar.Bind(wx.EVT_BUTTON, self.OnShiftClick, tempBtn)
        self.buttons.append(tempBtn)

def OnShiftClick(self, evt):
    ### Lookup the information from the dict
    row, col, user = self.buttonDefs[evt.GetId()]
    self.WriteToCell(row, col, user)
    self.DrawShiftPicker(row, col)
A:
I needed to do the same thing to keep track of button presses. I used a lambda function to bind to the event. That way I could pass the entire button object into the event handler function to manipulate accordingly.
class PlatGridderTop(wx.Frame):
    numbuttons = 0
    buttonlist = []

    def create_another_button(self, event):  # wxGlade: PlateGridderTop.<event_handler>
        buttoncreator_id = wx.ID_ANY
        butonname = "button" + str(buttoncreator_id)
        PlateGridderTop.numbuttons = PlateGridderTop.numbuttons + 1
        thisbutton_number = PlateGridderTop.numbuttons

        self.buttonname = wx.Button(self, buttoncreator_id, "ChildButton %s" % thisbutton_number)
        self.Bind(wx.EVT_BUTTON, lambda event, buttonpressed=self.buttonname: self.print_button_press(event, buttonpressed), self.buttonname)
        self.buttonlist.append(self.buttonname)
        self.__do_layout()
        print "Clicked plate button %s" % butonname
        event.Skip()

    def print_button_press(self, event, clickerbutton):
        """Just a dummy method that responds to a button press"""
        print "Clicked a created button named %s with wxpython ID %s" % (clickerbutton.GetLabel(), event.GetId())
Disclaimer: This is my first post to Stack Overflow.
|
Best way to get the name of a button that called an event?
|
In the following code (inspired by this snippet), I use a single event handler buttonClick to change the title of the window. Currently, I need to evaluate if the Id of the event corresponds to the Id of the button. If I decide to add 50 buttons instead of 2, this method could become cumbersome. Is there a better way to do this?
import wx
class MyFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, wx.ID_ANY, 'wxBitmapButton',
                          pos=(300, 150), size=(300, 350))
        self.panel1 = wx.Panel(self, -1)

        self.button1 = wx.Button(self.panel1, id=-1,
                                 pos=(10, 20), size=(20, 20))
        self.button1.Bind(wx.EVT_BUTTON, self.buttonClick)

        self.button2 = wx.Button(self.panel1, id=-1,
                                 pos=(40, 20), size=(20, 20))
        self.button2.Bind(wx.EVT_BUTTON, self.buttonClick)

        self.Show(True)

    def buttonClick(self, event):
        if event.Id == self.button1.Id:
            self.SetTitle("Button 1 clicked")
        elif event.Id == self.button2.Id:
            self.SetTitle("Button 2 clicked")

application = wx.PySimpleApp()
window = MyFrame()
application.MainLoop()
|
[
"You could give the button a name, and then look at the name in the event handler.\nWhen you make the button\nb = wx.Button(self, 10, \"Default Button\", (20, 20))\nb.myname = \"default button\"\nself.Bind(wx.EVT_BUTTON, self.OnClick, b)\n\nWhen the button is clicked:\ndef OnClick(self, event):\n name = event.GetEventObject().myname\n\n",
"Take advantage of what you can do in a language like Python. You can pass extra arguments to your event callback function, like so.\nimport functools\n\ndef __init__(self):\n # ...\n for i in range(10):\n name = 'Button %d' % i\n button = wx.Button(parent, -1, name)\n func = functools.partial(self.on_button, name=name)\n button.Bind(wx.EVT_BUTTON, func)\n # ...\n\ndef on_button(self, event, name):\n print '%s clicked' % name\n\nOf course, the arguments can be anything you want.\n",
"I recommend that you use different event handlers to handle events from each button. If there is a lot of commonality, you can combine that into a function which returns a function with the specific behavior you want, for instance:\ndef goingTo(self, where):\n def goingToHandler(event):\n self.SetTitle(\"I'm going to \" + where)\n return goingToHandler\n\ndef __init__(self):\n buttonA.Bind(wx.EVT_BUTTON, self.goingTo(\"work\"))\n # clicking will say \"I'm going to work\"\n buttonB.Bind(wx.EVT_BUTTON, self.goingTo(\"home\"))\n # clicking will say \"I'm going to home\"\n\n",
"Keep a dict with keys that are the .Id of the buttons and values that are the button names or whatever, so instead of a long if/elif chain you do a single dict lookup in buttonClick.\nCode snippets: in __init__, add creation and update of the dict:\nself.panel1 = wx.Panel(self, -1)\nself.thebuttons = dict()\n\nself.button1 = wx.Button(self.panel1, id=-1,\n pos=(10, 20), size = (20,20))\nself.thebuttons[self.button1.Id] = 'Button 1'\nself.button1.Bind(wx.EVT_BUTTON, self.buttonClick)\n\nand so on for 50 buttons (or whatever) [they might be better created in a loop, btw;-)].\nSo buttonClick becomes:\n def buttonClick(self,event):\n button_name = self.thebuttons.get(event.Id, '?No button?')\n self.setTitle(button_name + ' clicked')\n\n",
"You could create a dictionary of buttons, and do the look based on the id ... something like this:\nclass MyFrame(wx.Frame):\n def _add_button (self, *args):\n btn = wx.Button (*args)\n btn.Bind (wx.EVT_BUTTON, self.buttonClick)\n self.buttons[btn.id] = btn\n def __init__ (self):\n self.button = dict ()\n self._add_button (self.panel1, id=-1,\n pos=(10, 20), size = (20,20))\n\n self._add_button = (self.panel1, id=-1,\n pos=(40, 20), size = (20,20))\n\n self.Show (True)\n\n def buttonClick(self,event):\n self.SetTitle (self.buttons[event.Id].label)\n\n",
"I ran into a similar problem: I was generating buttons based on user-supplied data, and I needed the buttons to affect another class, so I needed to pass along information about the buttonclick. What I did was explicitly assign button IDs to each button I generated, then stored information about them in a dictionary to lookup later.\nI would have thought there would be a prettier way to do this, constructing a custom event passing along more information, but all I've seen is the dictionary-lookup method. Also, I keep around a list of the buttons so I can erase all of them when needed.\nHere's a slightly scrubbed code sample of something similar:\nself.buttonDefs = {}\nself.buttons = []\nid_increment = 800\nif (row, col) in self.items:\n for ev in self.items[(row, col)]:\n id_increment += 1\n #### Populate a dict with the event information\n self.buttonDefs[id_increment ] = (row, col, ev['user'])\n ####\n tempBtn = wx.Button(self.sidebar, id_increment , \"Choose\",\n (0,50+len(self.buttons)*40), (50,20) )\n self.sidebar.Bind(wx.EVT_BUTTON, self.OnShiftClick, tempBtn)\n self.buttons.append(tempBtn)\n\ndef OnShiftClick(self, evt):\n ### Lookup the information from the dict\n row, col, user = self.buttonDefs[evt.GetId()]\n self.WriteToCell(row, col, user)\n self.DrawShiftPicker(row, col)\n\n",
"I needed to do the same thing to keep track of button-presses . I used a lambda function to bind to the event . That way I could pass in the entire button object to the event handler function to manipulate accordingly.\n class PlatGridderTop(wx.Frame):\n numbuttons = 0\n buttonlist = []\n\n\n def create_another_button(self, event): # wxGlade: PlateGridderTop.<event_handler>\n buttoncreator_id = wx.ID_ANY\n butonname = \"button\" + str(buttoncreator_id)\n PlateGridderTop.numbuttons = PlateGridderTop.numbuttons + 1\n thisbutton_number = PlateGridderTop.numbuttons\n\n self.buttonname = wx.Button(self,buttoncreator_id ,\"ChildButton %s\" % thisbutton_number )\n self.Bind(wx.EVT_BUTTON,lambda event, buttonpressed=self.buttonname: self.print_button_press(event,buttonpressed),self.buttonname)\n self.buttonlist.append(self.buttonname)\n self.__do_layout()\n print \"Clicked plate button %s\" % butonname\n event.Skip()\n def print_button_press(self,event,clickerbutton):\n \"\"\"Just a dummy method that responds to a button press\"\"\"\n print \"Clicked a created button named %s with wxpython ID %s\" % (clickerbutton.GetLabel(),event.GetId())\n\nDisclaimer : This is my first post to stackoverflow \n"
] |
[
12,
8,
7,
3,
2,
0,
0
] |
[] |
[] |
[
"event_handling",
"events",
"python",
"user_interface",
"wxpython"
] |
stackoverflow_0000976395_event_handling_events_python_user_interface_wxpython.txt
|
Q:
reading a configuration information only once in Python
I'm using ConfigParser to read the configuration information stored in a file. I'm able to read the content and use it across other modules in the project. I'm not sure whether the configuration file is read every time I call config.get(parameters). How can I make sure that the configuration information is read only once, and the rest of the time it's read from the cache?
A:
I would try assigning the configuration to a variable.
configVariable = config.get(parameters)
Then you can pass the configuration variable to other modules as necessary.
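For example, a minimal sketch of the read-once pattern at module level (the module-level names and the file name are assumptions for illustration):
import ConfigParser   # 'configparser' on Python 3

_config = None

def get_config():
    global _config
    if _config is None:                    # parsed only on the first call
        _config = ConfigParser.ConfigParser()
        _config.read('config.ini')        # hypothetical path
    return _config
Every later get_config() call returns the already-parsed object.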
A:
The default implementation of the ConfigParser class reads its data only once.
|
reading a configuration information only once in Python
|
I'm using ConfigParser to read the configuration information stored in a file. I'm able to read the content and use it across other modules in the project. I'm not sure whether the configuration file is read every time I call config.get(parameters). How can I make sure that the configuration information is read only once, and the rest of the time it's read from the cache?
|
[
"I would try assigning the configuration to a variable.\nconfigVariable = config.get(parameters)\n\nThen you can pass the configuration variable to other modules as necessary.\n",
"The default implementation of the ConfigParser class reads its data only once. \n"
] |
[
2,
1
] |
[] |
[] |
[
"oop",
"python"
] |
stackoverflow_0001040135_oop_python.txt
|
Q:
Python "Event" equivalent in Java?
What's the closest thing in Java (perhaps an idiom) to threading.Event in Python?
A:
The Object.wait() Object.notify()/Object.notifyAll().
Or Condition.await() and Condition.signal()/Condition.signalAll() for Java 5+.
Edit: Because the Python specification is similar to how we usually wait, a Java implementation would look like this:
class Event {
    Lock lock = new ReentrantLock();
    Condition cond = lock.newCondition();
    boolean flag;

    public void doWait() throws InterruptedException {
        lock.lock();
        try {
            while (!flag) {
                cond.await();
            }
        } finally {
            lock.unlock();
        }
    }

    public void doWait(float seconds) throws InterruptedException {
        lock.lock();
        try {
            while (!flag) {
                // await() returns false once the timeout elapses; without this
                // check the loop would never honour the timeout
                if (!cond.await((long) (seconds * 1000), TimeUnit.MILLISECONDS)) {
                    break;
                }
            }
        } finally {
            lock.unlock();
        }
    }

    public boolean isSet() {
        lock.lock();
        try {
            return flag;
        } finally {
            lock.unlock();
        }
    }

    public void set() {
        lock.lock();
        try {
            flag = true;
            cond.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public void clear() {
        lock.lock();
        try {
            flag = false;
            cond.signalAll();
        } finally {
            lock.unlock();
        }
    }
}
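For comparison, the Python semantics being mirrored (standard threading.Event usage):
import threading

ev = threading.Event()

def worker():
    ev.wait()             # blocks like doWait() above
    print('released')

threading.Thread(target=worker).start()
ev.set()                  # wakes all waiters, like set()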
A:
A related thread. There is a comment on the accepted answer which suggests a Semaphore or a Latch. Not the same semantics as the above implementation, but handy.
|
Python "Event" equivalent in Java?
|
What's the closest thing in Java (perhaps an idiom) to threading.Event in Python?
|
[
"The Object.wait() Object.notify()/Object.notifyAll().\nOr Condition.await() and Condition.signal()/Condition.signalAll() for Java 5+.\nEdit: Because the python specification is similar how we usually wait a Java implementation would look like this:\nclass Event {\n Lock lock = new ReentrantLock();\n Condition cond = lock.newCondition();\n boolean flag;\n public void doWait() throws InterruptedException {\n lock.lock();\n try {\n while (!flag) {\n cond.await();\n }\n } finally {\n lock.unlock();\n }\n }\n public void doWait(float seconds) throws InterruptedException {\n lock.lock();\n try {\n while (!flag) {\n cond.await((int)(seconds * 1000), TimeUnit.MILLISECONDS);\n }\n } finally {\n lock.unlock();\n }\n }\n public boolean isSet() {\n lock.lock();\n try {\n return flag;\n } finally {\n lock.unlock();\n }\n }\n public void set() {\n lock.lock();\n try {\n flag = true;\n cond.signalAll();\n } finally {\n lock.unlock();\n }\n }\n public void clear() {\n lock.lock();\n try {\n flag = false;\n cond.signalAll();\n } finally {\n lock.unlock();\n }\n }\n}\n\n",
"A related thread. There is a comment on the accepted answer which suggests a Semaphore or a Latch. Not the same semantics as the above implementation, but handy.\n"
] |
[
9,
0
] |
[] |
[] |
[
"java",
"multithreading",
"python"
] |
stackoverflow_0001040818_java_multithreading_python.txt
|
Q:
Configure pyflakes to work with Zope's "script (python)" objects on the filesystem
When I run pyflakes on a Zope Filesystem Directory View file (as are found a lot in plone) it always returns lots of warnings that my parameters and special values like 'context' are not defined, which would be true if it were a real python script, but for a Filesystem Directory View script, they are defined by magic comments at the top, for example:
## Python Script "Name"
##bind container=container
##bind context=context
##bind namespace=
##bind script=script
##bind subpath=traverse_subpath
##parameters=foo, bar, baz
##
from AccessControl import getSecurityManager
user = getSecurityManager().getUser()
from Products.PythonScripts.standard import html_quote
request = container.REQUEST
RESPONSE = request.RESPONSE
return foo + bar + baz
Is this kind of python used anywhere except Zope?
Is it, or can it be supported by pyflakes, pylint or similar tools?
A:
A possible approach I just tried is to pre-process the Zope fspython script so that it is valid. I've used a few calls to sed (below):
#!/bin/bash
sed "s/\(^[^#]\)/ \1/" $1 | \
sed "s/^##bind [a-z]*=\([a-z][a-z]*\)$/import \1/" | \
sed "s/^##parameters=\(.*\)/def foo(\1):/" | pyflakes
It would be good to replace this with a python script that wraps around pyflakes and doesn't alter normal python scripts.
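A hedged sketch of such a wrapper (it mirrors the sed transformations above; everything here is illustrative, not a published tool):
import re
import subprocess
import sys

def preprocess(path):
    out = []
    for line in open(path):
        m = re.match(r'##parameters=(.*)', line)
        if m:
            # turn the parameter list into a function signature
            out.append('def _script(%s):\n' % m.group(1))
            continue
        m = re.match(r'##bind \w+=(\w+)\s*$', line)
        if m:
            # make each bound name "defined" as far as pyflakes cares
            out.append('import %s\n' % m.group(1))
            continue
        if line.startswith('#'):
            continue                 # drop the remaining magic comments
        out.append('    ' + line)    # indent the body under the def
    return ''.join(out)

if __name__ == '__main__':
    proc = subprocess.Popen(['pyflakes'], stdin=subprocess.PIPE)
    proc.communicate(preprocess(sys.argv[1]))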
A:
No, that kind of python is not used anywhere except Zope, and in fact almost exclusively in Plone nowadays. And the Plone community is moving away from it because it has many drawbacks, this being one of them.
Pyflakes isn't very configurable, at least not in a documented way. Pylint can be configured to skip some error messages, but the ones you need to skip would be the ones that are most useful, so that is probably not helpful either.
So the short answer is: No you can't syntax check them. On the other hand you don't need to restart the server to run them, so the syntax check won't save you that much time, which it will with other Python code in the Zope world.
|
Configure pyflakes to work with Zope's "script (python)" objects on the filesystem
|
When I run pyflakes on a Zope Filesystem Directory View file (as are found a lot in plone) it always returns lots of warnings that my parameters and special values like 'context' are not defined, which would be true if it were a real python script, but for a Filesystem Directory View script, they are defined by magic comments at the top, for example:
## Python Script "Name"
##bind container=container
##bind context=context
##bind namespace=
##bind script=script
##bind subpath=traverse_subpath
##parameters=foo, bar, baz
##
from AccessControl import getSecurityManager
user = getSecurityManager().getUser()
from Products.PythonScripts.standard import html_quote
request = container.REQUEST
RESPONSE = request.RESPONSE
return foo + bar + baz
Is this kind of python used anywhere except Zope?
Is it, or can it be supported by pyflakes, pylint or similar tools?
|
[
"A possible approach I just tried is to pre-process the zope fspython script so that it is vaild. I've used a few calls to sed (below):\n#!/bin/bash\nsed \"s/\\(^[^#]\\)/ \\1/\" $1 | \\\nsed \"s/^##bind [a-z]*=\\([a-z][a-z]*\\)$/import \\1/\" | \\\nsed \"s/^##parameters=\\(.*\\)/def foo(\\1):/\" | pyflakes\n\nIt would be good to replace this with a python script that wraps around pyflakes and doesn't alter normal python scripts.\n",
"No, that kind of python is not used anywhere except Zope, and in fact almost exclusively in Plone nowadays. And the Plone community is moving away from it because it has many drawbacks, this being one of them.\nPyflakes isn't very configurable, at least not in a documented way. Pylint can be configured to skip some error messages, but the ones you need to skip would be the ones that are most useful, so that is probably not helpful either.\nSo the short answer is: No you can't syntax check them. On the other hand you don't need to restart the server to run them, so the syntax check won't save you that much time, which it will with other Python code in the Zope world.\n"
] |
[
2,
1
] |
[] |
[] |
[
"python",
"zope"
] |
stackoverflow_0001038863_python_zope.txt
|
Q:
Passing data to mod_wsgi
In mod_wsgi I send the headers by running the function start_response(), but all the page content is passed by yield/return. Is there a way to pass the page content in a similar fashion to start_response()? Using the return/yield mechanism is very restrictive when it comes to working with chunked data.
E.g.
def Application():
    b = buffer()
    [... page code ...]
    while True:
        out = b.flush()
        if out:
            yield out

class buffer:
    def __init__(self):
        self.b = ['']
        self.l = 0

    def add(self, s):
        s = str(s)
        self.l += len(s)
        self.b.append(s)

    def flush(self):
        if self.l > 1000:
            out = ''.join(self.b)
            self.__init__()
            return out
I want to have the buffer output the content as the page loads, but only output it once enough has piled up (in this example, 1000 bytes).
A:
No; but I don't think it is restrictive. Maybe you want to paste some example code where you describe your restriction, and we can help.
To work with chunked data you just yield the chunks:
def application(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    yield 'Chunk 1\n'
    yield 'Chunk 2\n'
    yield 'Chunk 3\n'
    for chunk in chunk_data_generator():
        yield chunk

def chunk_data_generator():
    yield 'Chunk 4\n'
    yield 'Chunk 5\n'
EDIT: Based on the comments you gave, here is an example of piling data up to a certain length before sending it forward:
BUFFER_SIZE = 10  # 10 bytes for testing. Use something bigger

def application(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    buffer = []
    size = 0
    for chunk in chunk_generator():
        buffer.append(chunk)
        size += len(chunk)
        if size > BUFFER_SIZE:
            for buf in buffer:
                yield buf
            buffer = []
            size = 0
    # flush whatever is left once the generator is exhausted
    for buf in buffer:
        yield buf

def chunk_generator():
    yield 'Chunk 1\n'
    yield 'Chunk 2\n'
    yield 'Chunk 3\n'
    yield 'Chunk 4\n'
    yield 'Chunk 5\n'
A:
It is possible for your application to "push" data to the WSGI server:
Some existing application framework APIs support unbuffered output in a different manner than WSGI. Specifically, they provide a "write" function or method of some kind to write an unbuffered block of data, or else they provide a buffered "write" function and a "flush" mechanism to flush the buffer.
Unfortunately, such APIs cannot be implemented in terms of WSGI's "iterable" application return value, unless threads or other special mechanisms are used.
Therefore, to allow these frameworks to continue using an imperative API, WSGI includes a special write() callable, returned by the start_response callable.
New WSGI applications and frameworks should not use the write() callable if it is possible to avoid doing so.
http://www.python.org/dev/peps/pep-0333/#the-write-callable
But it isn't recommended.
Generally speaking, applications will achieve the best throughput by buffering their (modestly-sized) output and sending it all at once. This is a common approach in existing frameworks such as Zope: the output is buffered in a StringIO or similar object, then transmitted all at once, along with the response headers.
The corresponding approach in WSGI is for the application to simply return a single-element iterable (such as a list) containing the response body as a single string. This is the recommended approach for the vast majority of application functions, that render HTML pages whose text easily fits in memory.
http://www.python.org/dev/peps/pep-0333/#buffering-and-streaming
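For completeness, a minimal sketch of the discouraged write() style described in the first quote:
def application(environ, start_response):
    write = start_response('200 OK', [('Content-type', 'text/plain')])
    write('pushed immediately via the legacy write() callable\n')
    write('another unbuffered chunk\n')
    return []   # the iterable may be empty when write() did the work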
A:
If you don't want to change your WSGI application itself to partially buffer response data before sending it, then implement a WSGI middleware that wraps your WSGI application and which performs that task.
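A minimal sketch of such a middleware, assuming all you need is to coalesce body chunks (the class name and threshold are illustrative):
class BufferingMiddleware(object):
    def __init__(self, app, size=1000):
        self.app = app
        self.size = size

    def __call__(self, environ, start_response):
        buf, buffered = [], 0
        for chunk in self.app(environ, start_response):
            buf.append(chunk)
            buffered += len(chunk)
            if buffered >= self.size:
                yield ''.join(buf)    # pass on one coalesced chunk
                buf, buffered = [], 0
        if buf:
            yield ''.join(buf)        # flush the remainder
You would then wrap the application once, e.g. application = BufferingMiddleware(application, 1000).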
|
Passing data to mod_wsgi
|
In mod_wsgi I send the headers by running the function start_response(), but all the page content is passed by yield/return. Is there a way to pass the page content in a similar fashion to start_response()? Using the return/yield mechanism is very restrictive when it comes to working with chunked data.
E.g.
def Application():
    b = buffer()
    [... page code ...]
    while True:
        out = b.flush()
        if out:
            yield out

class buffer:
    def __init__(self):
        self.b = ['']
        self.l = 0

    def add(self, s):
        s = str(s)
        self.l += len(s)
        self.b.append(s)

    def flush(self):
        if self.l > 1000:
            out = ''.join(self.b)
            self.__init__()
            return out
I want to have the buffer output the content as the page loads, but only output it once enough has piled up (in this example, 1000 bytes).
|
[
"No; But I don't think it is restrictive. Maybe you want to paste an example code where you describe your restriction and we can help.\nTo work with chunk data you just yield the chunks:\ndef application(environ, start_response):\n start_response('200 OK', [('Content-type', 'text/plain')]\n yield 'Chunk 1\\n' \n yield 'Chunk 2\\n' \n yield 'Chunk 3\\n'\n for chunk in chunk_data_generator():\n yield chunk\n\ndef chunk_data_generator()\n yield 'Chunk 4\\n'\n yield 'Chunk 5\\n'\n\n\nEDIT: Based in the comments you gave, an example of piling data up to a certain length before sending forward:\nBUFFER_SIZE = 10 # 10 bytes for testing. Use something bigger\ndef application(environ, start_response):\n start_response('200 OK', [('Content-type', 'text/plain')]\n buffer = []\n size = 0\n for chunk in chunk_generator():\n buffer.append(chunk)\n size += len(chunk)\n if size > BUFFER_SIZE:\n for buf in buffer:\n yield buf\n buffer = []\n size = 0\n\ndef chunk_data_generator()\n yield 'Chunk 1\\n' \n yield 'Chunk 2\\n' \n yield 'Chunk 3\\n'\n yield 'Chunk 4\\n'\n yield 'Chunk 5\\n'\n\n",
"It is possible for your application to \"push\" data to the WSGI server:\n\nSome existing application framework APIs support unbuffered output in a different manner than WSGI. Specifically, they provide a \"write\" function or method of some kind to write an unbuffered block of data, or else they provide a buffered \"write\" function and a \"flush\" mechanism to flush the buffer.\nUnfortunately, such APIs cannot be implemented in terms of WSGI's \"iterable\" application return value, unless threads or other special mechanisms are used.\nTherefore, to allow these frameworks to continue using an imperative API, WSGI includes a special write() callable, returned by the start_response callable.\nNew WSGI applications and frameworks should not use the write() callable if it is possible to avoid doing so.\nhttp://www.python.org/dev/peps/pep-0333/#the-write-callable\n\nBut it isn't recommended.\n\nGenerally speaking, applications will achieve the best throughput by buffering their (modestly-sized) output and sending it all at once. This is a common approach in existing frameworks such as Zope: the output is buffered in a StringIO or similar object, then transmitted all at once, along with the response headers.\nThe corresponding approach in WSGI is for the application to simply return a single-element iterable (such as a list) containing the response body as a single string. This is the recommended approach for the vast majority of application functions, that render HTML pages whose text easily fits in memory.\nhttp://www.python.org/dev/peps/pep-0333/#buffering-and-streaming\n\n",
"If you don't want to change your WSGI application itself to partially buffer response data before sending it, then implement a WSGI middleware that wraps your WSGI application and which performs that task.\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"mod_wsgi",
"python",
"wsgi"
] |
stackoverflow_0000940816_mod_wsgi_python_wsgi.txt
|
Q:
Context-sensitive string splitting, preserving delimiters
I have a string of the form "foo-bar-1.23-4", and I need to split at the first hyphen followed by a numeral, such that the result is ['foo-bar', '1.23-4']. I've tried the following:
>>> re.split('-\d', 'foo-bar-1.23-4', 1)
['foo-bar', '.23-4']
and
>>> re.split('-(\d)', 'foo-bar-1.23-4', 1)
['foo-bar', '1', '.23-4']
with suboptimal results. Is there a one-liner that will get me what I want, without having to munge the delimiter with the last element?
A:
You were very close, try this:
re.split('-(?=\d)', 'foo-bar-1.23-4', 1)
I am using positive lookahead to accomplish this - basically I am matching a dash that is immediately followed by a numeric character.
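For reference, a quick interactive check of that pattern:
>>> import re
>>> re.split('-(?=\d)', 'foo-bar-1.23-4', 1)
['foo-bar', '1.23-4']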
A:
re.split('-(?=\d)', 'foo-bar-1.23-4', 1)
Using lookahead, which is exactly what Andrew did but beat me by a minute... :-)
A:
Would a positive lookahead work?
re.split('-?=\d', 'foo-bar-1.23-4', 1)
Not sure if you need the ( and the ) surrounding the lookahead, but give it a shot.
|
Context-sensitive string splitting, preserving delimiters
|
I have a string of the form "foo-bar-1.23-4", and I need to split at the first hyphen followed by a numeral, such that the result is ['foo-bar', '1.23-4']. I've tried the following:
>>> re.split('-\d', 'foo-bar-1.23-4', 1)
['foo-bar', '.23-4']
and
>>> re.split('-(\d)', 'foo-bar-1.23-4', 1)
['foo-bar', '1', '.23-4']
with suboptimal results. Is there a one-liner that will get me what I want, without having to munge the delimiter with the last element?
|
[
"You were very close, try this:\nre.split('-(?=\\d)', 'foo-bar-1.23-4', 1)\n\nI am using positive lookahead to accomplish this - basically I am matching a dash that is immediately followed by a numeric character.\n",
"re.split('-(?=\\d)', 'foo-bar-1.23-4', 1)\n\nUsing lookahead, which is exactly what Andrew did but beat me by a minute... :-)\n",
"Would a positive lookahead work?\nre.split('-?=\\d', 'foo-bar-1.23-4', 1)\n\nNot sure if you need the ( and the ) surrounding the lookahead, but give it a shot.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"python",
"split",
"string"
] |
stackoverflow_0001041600_python_split_string.txt
|
Q:
Unable to see Python's approximations in mathematical calculations
Problem: to see when computer makes approximation in mathematical calculations when I use Python
Example of the problem:
My old teacher once said the following statement
You cannot never calculate 200! with your computer.
I am not completely sure whether it is true or not nowadays.
It seems that it is, since I get a lot of zeros for it from a Python script.
How can you see when your Python code makes approximations?
A:
Python uses arbitrary-precision arithmetic to calculate with integers, so it can exactly calculate 200!. For real numbers (so-called floating-point), Python does not use an exact representation. It uses a binary representation called IEEE 754, which is essentially scientific notation, except in base 2 instead of base 10.
Thus, for any real number that cannot be exactly represented in base 2 with 53 bits of precision, Python cannot produce an exact result. For example, 0.1 (in base 10) is an infinite binary fraction, 0.0001100110011..., so it cannot be exactly represented. Hence, if you enter on a Python prompt:
>>> 0.1
0.10000000000000001
The result you get back is different, since it has been converted from decimal to binary (with 53 bits of precision) and back to decimal. As a consequence, you get things like this:
>>> 0.1 + 0.2 == 0.3
False
For a good (but long) read, see What Every Computer Scientist Should Know About Floating-Point Arithmetic.
A:
Python has unbounded integer sizes in the form of a long type. That is to say, if it is a whole number, the limit on the size of the number is restricted by the memory available to Python.
When you compute a large number such as 200! and you see an L on the end of it, that means Python has automatically cast the int to a long, because an int was not large enough to hold that number.
See section 6.4 of this page for more information.
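As a quick illustration of that automatic promotion (a sketch; math.factorial is available from Python 2.6 onwards):
>>> import math
>>> type(math.factorial(200))
<type 'long'>
>>> len(str(math.factorial(200)))   # 200! has 375 decimal digits
375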
A:
See Handling very large numbers in Python.
Python has a BigNum class for holding 200! and will use it automatically.
Your teacher's statement, though not exactly true here, is true in general. Computers have limitations, and it is good to know what they are. Remember that every time you add another integer of data storage, you can store a number that is 2^32 (4 billion+) times larger. It is hard to comprehend how many more numbers that is - but maths gets slower as you add more integers to store the exact value of a very large number.
As an example (what you can store with 1000 bits)
>>> 2 << 1000
2143017214372534641896850098120003621122809623411067214887500776740702102249872244986396
7576313917162551893458351062936503742905713846280871969155149397149607869135549648461970
8421492101247422837559083643060929499671638825347975351183310878921541258291423929553730
84335320859663305248773674411336138752L
I tried to illustrate how big a number you can store with 10000 bits, or even 8,000,000 bits (a megabyte) but that number is many pages long.
A:
200! is a very large number indeed.
If the range of an IEEE 64-bit double is 1.7E +/- 308 (15 digits), you can see that the largest factorial you can get is around 170!.
Python can handle arbitrary sized numbers, as can Java with its BigInteger.
A:
Without some sort of clarification to that statement, it's obviously false. Just from personal experience, early lessons in programming (in the late 1980s) included solving very similar, if not exactly the same, problems. In general, to know some device which does calculations isn't making approximations, you have to prove (in the math sense of a proof) that it isn't.
Python's integer types (named int and long in 2.x, both folded into just the int type in 3.x) are very good, and do not overflow like, for example, the int type in C. If you do the obvious print 200 * 199 * 198 * ... it may be slow, but it will be exact. Similarly, addition, subtraction, and modulus are exact. Division is a mixed bag, as there are two operators, / and //, and they underwent a change in 2.x; in general you can only treat it as inexact.
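For instance, on Python 2.x without from __future__ import division:
>>> 7 / 2     # floor division when both operands are ints
3
>>> 7 / 2.0   # true division once a float is involved
3.5
>>> 7 // 2    # explicit floor division
3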
If you want more control yet don't want to limit yourself to integers, look at the decimal module.
A:
Python handles large numbers automatically (unlike a language like C where you can overflow its datatypes and the values reset to zero, for example) - over a certain point (sys.maxint or 2147483647) it converts the integer to a "long" (denoted by the L after the number), which can be any length:
>>> def fact(x):
... return reduce(lambda x, y: x * y, range(1, x+1))
...
>>> fact(10)
3628800
>>> fact(200)
788657867364790503552363213932185062295135977687173263294742533244359449963403342920304284011984623904177212138919638830257642790242637105061926624952829931113462857270763317237396988943922445621451664240254033291864131227428294853277524242407573903240321257405579568660226031904170324062351700858796178922222789623703897374720000000000000000000000000000000000000000000000000L
Long numbers are "easy", floating point is more complicated, and almost any computer representation of a floating point number is an approximation, for example:
>>> float(1)/3
0.33333333333333331
Obviously you can't store an infinite number of 3's in memory, so it cheats and rounds it a bit.
You may want to look at the decimal module:
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 do not have an exact representation in binary floating point. End users typically would not expect 1.1 to display as 1.1000000000000001 as it does with binary floating point.
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem
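A short demonstration of the module (a sketch; the precision value is chosen arbitrarily):
>>> from decimal import Decimal, getcontext
>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
True
>>> getcontext().prec = 50
>>> print Decimal(1) / Decimal(3)
0.33333333333333333333333333333333333333333333333333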
|
Unable to see Python's approximations in mathematical calculations
|
Problem: to see when computer makes approximation in mathematical calculations when I use Python
Example of the problem:
My old teacher once said the following statement
You cannot never calculate 200! with your computer.
I am not completely sure whether it is true or not nowadays.
It seems that it is, since I get a lot of zeros for it from a Python script.
How can you see when your Python code makes approximations?
|
[
"Python use arbitrary-precision arithmetic to calculate with integers, so it can exactly calculate 200!. For real numbers (so-called floating-point), Python does not use an exact representation. It uses a binary representation called IEEE 754, which is essentially scientific notation, except in base 2 instead of base 10.\nThus, any real number that cannot be exactly represented in base 2 with 53 bits of precision, Python cannot produce an exact result. For example, 0.1 (in base 10) is an infinite decimal in base 2, 0.0001100110011..., so it cannot be exactly represented. Hence, if you enter on a Python prompt:\n>>> 0.1\n0.10000000000000001\n\nThe result you get back is different, since has been converted from decimal to binary (with 53 bits of precision), back to decimal. As a consequence, you get things like this:\n>>> 0.1 + 0.2 == 0.3\nFalse\n\nFor a good (but long) read, see What Every Programmer Should Know About Floating-Point Arithmetic.\n",
"Python has unbounded integer sizes in the form of a long type. That is to say, if it is a whole number, the limit on the size of the number is restricted by the memory available to Python.\nWhen you compute a large number such as 200! and you see an L on the end of it, that means Python has automatically cast the int to a long, because an int was not large enough to hold that number.\nSee section 6.4 of this page for more information.\n",
"See Handling very large numbers in Python. \nPython has a BigNum class for holding 200! and will use it automatically. \nYour teacher's statement, though not exactly true here is true in general. Computers have limitations, and it is good to know what they are. Remember that every time you add another integer of data storage, you can store a number that is 2^32 (4 billion +) times larger. It is hard to comprehend how many more numbers that is - but maths gets slower as you add more integers to store the exact value of a very large number.\nAs an example (what you can store with 1000 bits)\n>>> 2 << 1000\n2143017214372534641896850098120003621122809623411067214887500776740702102249872244986396\n7576313917162551893458351062936503742905713846280871969155149397149607869135549648461970\n8421492101247422837559083643060929499671638825347975351183310878921541258291423929553730\n84335320859663305248773674411336138752L\n\nI tried to illustrate how big a number you can store with 10000 bits, or even 8,000,000 bits (a megabyte) but that number is many pages long.\n",
"200! is a very large number indeed. \nIf the range of an IEEE 64-bit double is 1.7E +/- 308 (15 digits), you can see that the largest factorial you can get is around 170!. \nPython can handle arbitrary sized numbers, as can Java with its BigInteger.\n",
"Without some sort of clarification to that statement, it's obviously false. Just from personal experience, early lessons in programming (in the late 1980s) included solving very similar, if not exactly the same, problems. In general, to know some device which does calculations isn't making approximations, you have to prove (in the math sense of a proof) that it isn't.\nPython's integer types (named int and long in 2.x, both folded into just the int type in 3.x) are very good, and do not overflow like, for example, the int type in C. If you do the obvious of print 200 * 199 * 198 * ... it may be slow, but it will be exact. Similiarly, addition, subtraction, and modulus are exact. Division is a mixed bag, as there's two operators, / and //, and they underwent a change in 2.x—in general you can only treat it as inexact.\nIf you want more control yet don't want to limit yourself to integers, look at the decimal module.\n",
"Python handles large numbers automatically (unlike a language like C where you can overflow its datatypes and the values reset to zero, for example) - over a certain point (sys.maxint or 2147483647) it converts the integer to a \"long\" (denoted by the L after the number), which can be any length:\n>>> def fact(x):\n... return reduce(lambda x, y: x * y, range(1, x+1))\n... \n>>> fact(10)\n3628800\n>>> fact(200)\n788657867364790503552363213932185062295135977687173263294742533244359449963403342920304284011984623904177212138919638830257642790242637105061926624952829931113462857270763317237396988943922445621451664240254033291864131227428294853277524242407573903240321257405579568660226031904170324062351700858796178922222789623703897374720000000000000000000000000000000000000000000000000L\n\nLong numbers are \"easy\", floating point is more complicated, and almost any computer representation of a floating point number is an approximation, for example:\n>>> float(1)/3\n0.33333333333333331\n\nObviously you can't store an infinite number of 3's in memory, so it cheats and rounds it a bit..\nYou may want to look at the decimal module:\n\n\nDecimal numbers can be represented exactly. In contrast, numbers like 1.1 do not have an exact representation in binary floating point. End users typically would not expect 1.1 to display as 1.1000000000000001 as it does with binary floating point.\nUnlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem\n\n\n"
] |
[
7,
2,
1,
1,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001041543_python.txt
|
Q:
SQLAlchemy: Object Mappings lost after commit?
I got a simple problem in SQLAlchemy. I have one model in a table, lets call it Model1 here.
I want to add a row in this table, and get the autoincremented key, so I can create another model with it, and use this key. This is not a flawed database design (1:1 relation etc). I simply need this key in another table, because the other table is being transferred to a remote host, and I need the matching keys so the servers will understand each other. There will be no further local reference between these 2 tables, and it's also not possible to create relations because of that.
Consider the following code:
object1 = model.Model1(param)
DBSession.add(object1)
# if I do this, the line below fails with an UnboundExecutionError.
# and if I dont do this, object1.id won't be set yet
#transaction.commit()
object2 = model.AnotherModel(object1.id) #id holds the primary, autoincremented key
I wish I didn't even have to commit "manually".
Basically what I would like to achieve is, "Model1" is constantly growing, with increasing Model.id primary key. AnotherModel is always only a little fraction of Model1, which hasn't been processed yet. Of course I could add a flag in "Model1", a boolean field in the table to mark already processed elements, but I was hoping this would not be necessary.
How can I get my above code working?
Greets,
Tom
A:
A couple of things:
Could you please explain what the variable transaction is bound to?
Exactly what statement raises the UnboundExecutionError?
Please provide the full exception message, including stack trace.
The 'normal' thing to do in this case, would be to call DBSession.flush(). Have you tried that?
Example:
object1 = Model1(param)
DBSession.add(object1)
DBSession.flush()
assert object1.id != None # flushing the session populates the id
object2 = AnotherModel(object1.id)
For a great explanation to the SA session and what flush() does, see Using the Session.
Basically, flush() causes Pending instances to become Persistent - meaning that new objects are INSERTed into the database tables. flush() also UPDATEs the tables with values for instances that the session tracks that has changes.
commit() always issues flush() first.
Within a transaction, you can flush multiple times. Each flush() causes UPDATEs and/or INSERTs in the database. The entire transaction can be commited or rolled back.
A:
if you want to get new primary key identifiers to generate without anything being committed, just call session.flush(). That will emit everything pending to the database within the current transaction.
A:
I've only used this with ForeignKeys, so in the second case you would rather do model.AnotherModel(model1=object1), and then it just worked (tm). So I suspect this may be a problem with your models, so maybe you could post them too?
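For illustration, a minimal declarative sketch of that relationship-based approach (all model, table and column names here are hypothetical):
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Model1(Base):
    __tablename__ = 'model1'
    id = Column(Integer, primary_key=True)

class AnotherModel(Base):
    __tablename__ = 'another_model'
    id = Column(Integer, primary_key=True)
    model1_id = Column(Integer, ForeignKey('model1.id'))
    model1 = relationship(Model1)

# object2 = AnotherModel(model1=object1)
# DBSession.add(object2)  # the foreign key column is filled in at flush time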
|
SQLAlchemy: Object Mappings lost after commit?
|
I got a simple problem in SQLAlchemy. I have one model in a table, lets call it Model1 here.
I want to add a row in this table, and get the autoincremented key, so I can create another model with it, and use this key. This is not a flawed database design (1:1 relation etc). I simply need this key in another table, because the other table is being transferred to a remote host, and I need the matching keys so the servers will understand each other. There will be no further local reference between these 2 tables, and it's also not possible to create relations because of that.
Consider the following code:
object1 = model.Model1(param)
DBSession.add(object1)
# if I do this, the line below fails with an UnboundExecutionError.
# and if I dont do this, object1.id won't be set yet
#transaction.commit()
object2 = model.AnotherModel(object1.id) #id holds the primary, autoincremented key
I wish I didn't even have to commit "manually".
Basically what I would like to achieve is, "Model1" is constantly growing, with increasing Model.id primary key. AnotherModel is always only a little fraction of Model1, which hasn't been processed yet. Of course I could add a flag in "Model1", a boolean field in the table to mark already processed elements, but I was hoping this would not be necessary.
How can I get my above code working?
Greets,
Tom
|
[
"A couple of things:\n\nCould you please explain what the variable transaction is bound to?\nExactly what statement raises the UnboundExecutionError?\nPlease provide the full exception message, including stack trace.\nThe 'normal' thing to do in this case, would be to call DBSession.flush(). Have you tried that?\n\nExample:\nobject1 = Model1(param)\nDBSession.add(object1)\nDBSession.flush()\nassert object1.id != None # flushing the session populates the id\n\nobject2 = AnotherModel(object1.id)\n\nFor a great explanation to the SA session and what flush() does, see Using the Session.\nBasically, flush() causes Pending instances to become Persistent - meaning that new objects are INSERTed into the database tables. flush() also UPDATEs the tables with values for instances that the session tracks that has changes.\ncommit() always issues flush() first.\nWithin a transaction, you can flush multiple times. Each flush() causes UPDATEs and/or INSERTs in the database. The entire transaction can be commited or rolled back.\n",
"if you want to get new primary key identifiers to generate without anything being committed, just call session.flush(). That will emit everything pending to the database within the current transaction. \n",
"I've only used this with ForeignKeys, so you in the second case rather would do model.AnotherModel(model1=object1), and then it just worked (tm). So I suspect this may be a problem with your models, so maybe you could post them too?\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"database",
"python",
"sqlalchemy"
] |
stackoverflow_0001033199_database_python_sqlalchemy.txt
|
Q:
Where can I learn more about PyPy's translation function?
I've been having a hard time trying to understand PyPy's translation. It looks like something absolutely revolutionary from simply reading the description, however I'm hard-pressed to find good documentation on actually translating a real world piece of code to something such as LLVM. Does such a thing exist? The official PyPy documentation on it just skims over the functionality, rather than providing anything I can try out myself.
A:
This document seems to go into quite a bit of detail (and I think a complete description is out of scope for a stackoverflow answer):
http://codespeak.net/pypy/dist/pypy/doc/translation.html
The general idea of translating from one language to another isn't particularly revolutionary, but it has only recently been gaining popularity / applicability in "real-world" applications. GWT does this with Java (generating Javascript) and there is a library for translating Haskell into various other languages as well (called YHC)
A:
If you want some hands-on examples, PyPy's Getting Started document has a section titled "Trying out the translator".
A:
The PyPy translator is, in general, not intended for wider public use. We use it for translating our own Python interpreter (including the JIT and GCs, both written in RPython, this restricted subset of Python). The idea is that with a good JIT and GC, you'll be able to get speedups even without knowing or using PyPy's translation toolchain (and, more importantly, without restricting yourself to RPython).
Cheers,
fijal
A:
Are you looking for Python specific translation, or just the general "how do you compile some code to bytecode"? If the latter is your case, check the LLVM tutorial. I especially find chapter two, which teaches you to write a compiler for your own language, interesting.
A:
It looks like something absolutely revolutionary from simply reading the description,
As far as I know, PyPy is novel in the sense that it is the first system expressly designed for implementing languages. Other tools exist to help with much of the very front end, such as parser generators, or for the very back end, such as code generation, but not much existed for connecting the two.
|
Where can I learn more about PyPy's translation function?
|
I've been having a hard time trying to understand PyPy's translation. It looks like something absolutely revolutionary from simply reading the description, however I'm hard-pressed to find good documentation on actually translating a real world piece of code to something such as LLVM. Does such a thing exist? The official PyPy documentation on it just skims over the functionality, rather than providing anything I can try out myself.
|
[
"This document seems to go into quite a bit of detail (and I think a complete description is out of scope for a stackoverflow answer):\n\nhttp://codespeak.net/pypy/dist/pypy/doc/translation.html\n\nThe general idea of translating from one language to another isn't particularly revolutionary, but it has only recently been gaining popularity / applicability in \"real-world\" applications. GWT does this with Java (generating Javascript) and there is a library for translating Haskell into various other languages as well (called YHC)\n",
"If you want some hand-on examples, PyPy's Getting Started document has a section titled \"Trying out the translator\".\n",
"PyPy translator is in general, not intended for more public use. We use it for translating\nour own python interpreter (including JIT and GCs, both written in RPython, this restricted\nsubset of Python). The idea is that with good JIT and GC, you'll be able to speedups even\nwithout knowing or using PyPy's translation toolchain (and more importantly, without\nrestricting yourself to RPython).\nCheers,\nfijal\n",
"Are you looking for Python specific translation, or just the general \"how do you compile some code to bytecode\"? If the latter is your case, check the LLVM tutorial. I especially find chapter two, which teaches you to write a compiler for your own language, interesting.\n",
"\nIt looks like something absolutely revolutionary from simply reading the description,\n\nAs far as I know, PyPy is novel in the sense that it is the first system expressly designed for implementing languages. Other tools exist to help with much of the very front end, such as parser generators, or for the very back end, such as code generation, but not much existed for connecting the two.\n"
] |
[
6,
3,
3,
2,
1
] |
[] |
[] |
[
"pypy",
"python",
"translation"
] |
stackoverflow_0000027567_pypy_python_translation.txt
|
Q:
jcc.initVM() doesn't return when mod_wsgi is configured as daemon mode
I am using mod-wsgi with django, and in django I use pylucene to do full text search.
While mod-wsgi is configured in embedded mode, there is no problem at all.
But when mod-wsgi is configured in daemon mode, Apache just gets stuck,
and the browser just keeps loading but nothing appears.
I then identified the problem to be the jcc.initVM() call.
Here is my wsgi script:
import os, sys, jcc
sys.stderr.write('jcc.initVM\n')
jcc.initVM()
sys.stderr.write('finished jcc.initVM\n')
....
After I restart my apache, and make a request from my browser, I find that /var/log/apache2/error.log
only has:
jcc.initVM
Meaning that it gets stuck at the line jcc.initVM(). (If mod_wsgi is configured in embedded mode, there is no problem.)
And here is my /etc/apache2/sites-available/default:
WSGIDaemonProcess site user=ross group=ross threads=1
WSGIProcessGroup site
WSGIScriptAlias / /home/ross/apache/django.wsgi
<Directory /home/ross/apache/>
Order deny,allow
Allow from all
</Directory>
And finally, I find out that in the source code of jcc (jcc.cpp), it hangs at the function:
JNI_CreateJavaVM(&vm, (void **) &vm_env, &vm_args)
How to solve the problem?
Program versions:
libapache2-mod-wsgi 2.3-1
jcc 2.1
python 2.5
Apache 2.2.9-8ubuntu3
Ubuntu 8.10
A:
Please refer to http://code.google.com/p/modwsgi/issues/detail?id=131 for the discussion details.
In short, mod_wsgi blocks signals for the daemon process, which may prevent initVM from working. Furthermore, according to Andi from jcc, initVM can only be called from the main thread, which may cause further problems as well.
Therefore I decided to move the search code with initVM() to a totally separate process, which solved the problem.
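One way to realize that separation, as a rough sketch (the helper script path and name here are made up):
import subprocess

# Run the PyLucene search in its own process, so jcc.initVM() is called
# in that process's main thread, outside the mod_wsgi daemon.
proc = subprocess.Popen(['python', '/home/ross/search/run_search.py', 'some query'],
                        stdout=subprocess.PIPE)
results = proc.communicate()[0]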
A:
The fix for this problem was included in mod_wsgi 2.4.
|
jcc.initVM() doesn't return when mod_wsgi is configured as daemon mode
|
I am using mod-wsgi with django, and in django I use pylucene to do full text search.
While mod-wsgi is configured in embedded mode, there is no problem at all.
But when mod-wsgi is configured in daemon mode, Apache just gets stuck,
and the browser just keeps loading but nothing appears.
I then identified the problem to be the jcc.initVM() call.
Here is my wsgi script:
import os, sys, jcc
sys.stderr.write('jcc.initVM\n')
jcc.initVM()
sys.stderr.write('finished jcc.initVM\n')
....
After I restart my apache, and make a request from my browser, I find that /var/log/apache2/error.log
only has:
jcc.initVM
Meaning that it gets stuck at the line jcc.initVM(). (If mod_wsgi is configured in embedded mode, there is no problem.)
And here is my /etc/apache2/sites-available/default:
WSGIDaemonProcess site user=ross group=ross threads=1
WSGIProcessGroup site
WSGIScriptAlias / /home/ross/apache/django.wsgi
<Directory /home/ross/apache/>
Order deny,allow
Allow from all
</Directory>
And finally, I find out that in the source code of jcc (jcc.cpp), it hangs at the function:
JNI_CreateJavaVM(&vm, (void **) &vm_env, &vm_args)
How to solve the problem?
Program versions:
libapache2-mod-wsgi 2.3-1
jcc 2.1
python 2.5
Apache 2.2.9-8ubuntu3
Ubuntu 8.10
|
[
"Please refer to http://code.google.com/p/modwsgi/issues/detail?id=131 for the discussion details. \nIn short, the \nmod_wsgi will block signals for the daemon program, which may make initVM doesn't work. Furthermore according to \nAndi from jcc, initVM can only be called from the main thread, and it may cause further problem as well.\nTherefore I decided to move the search code with initVM() to a totally separate process and solved the problem.\n",
"The fix for this problem was included in mod_wsgi 2.4.\n"
] |
[
1,
1
] |
[] |
[] |
[
"apache",
"jcc",
"mod_wsgi",
"pylucene",
"python"
] |
stackoverflow_0000548493_apache_jcc_mod_wsgi_pylucene_python.txt
|
Q:
retrieving current URL from FireFox with python
I want to know the current URL of the active tab in a running Firefox instance, from a Python module. Does Firefox have any API for this, and does Python know how to work with it?
A:
The most convenient way may be to install a Firefox extension that opens up a TCP service; then you can exchange info with Firefox.
mozrepl can set up a telnet service, and you can call JS-like commands to get info.
With telnetscript (http://code.activestate.com/recipes/152043/), you can write:
import telnetscript
script = """rve
w content.location.href;
ru repl>
w repl.quit()
cl
"""
conn = telnetscript.telnetscript( '127.0.0.1', {}, 4242 )
ret = conn.RunScript( script.split( '\n' )).split( '\n' )
print ret[-2][6:]
A:
If on windows you can use win32com
import win32clipboard
import win32com.client
shell = win32com.client.Dispatch("WScript.Shell")
shell.AppActivate('Some Application Title')
Then use shell.SendKeys to do a ctrl+l and a ctrl+c
Then read the string in the clipboard.
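A rough sketch of those steps (the window title and sleep timings are guesses and may need adjusting):
import time

shell.AppActivate('Mozilla Firefox')  # the title bar text may differ per page
time.sleep(0.2)
shell.SendKeys('^l')   # Ctrl+L focuses the address bar
shell.SendKeys('^c')   # Ctrl+C copies the URL
time.sleep(0.2)
win32clipboard.OpenClipboard()
url = win32clipboard.GetClipboardData()
win32clipboard.CloseClipboard()
print url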
It's hokey, though it will work; alternately you can use something like AutoIt and compile the code to an exe that you can work with.
Hope this helps.
|
retrieving current URL from FireFox with python
|
I want to know the current URL of the active tab in a running Firefox instance, from a Python module. Does Firefox have any API for this, and does Python know how to work with it?
|
[
"The most convenient way maybe insatll a firefox extension to open up a tcp service, then you can exchange info with firefox.\nmozrepl can set up a telnet service, you can call js-like command to get info.\nWith telnetscript (http: //code.activestate.com/recipes/152043/), you can write:\n\nimport telnetscript\n\nscript = \"\"\"rve\nw content.location.href;\nru repl>\nw repl.quit()\ncl\n\"\"\"\n\n\nconn = telnetscript.telnetscript( '127.0.0.1', {}, 4242 )\nret = conn.RunScript( script.split( '\\n' )).split( '\\n' )\nprint ret[-2][6:]\n\n",
"If on windows you can use win32com\nimport win32clipboard\nimport win32com.client\nshell = win32com.client.Dispatch(\"WScript.Shell\")\nshell.AppActivate('Some Application Title')\n\nThen use shell.SendKeys to do a ctrl+l and a ctrl+c\nThen read the string in the clipboard.\nIt's horkey though it will work, alternatly you can use something like AutoIt an compile the code to an exe that you can work with.\nHope this helps.\n"
] |
[
3,
1
] |
[] |
[] |
[
"firefox",
"python",
"python_extensions"
] |
stackoverflow_0000493978_firefox_python_python_extensions.txt
|
Q:
DJANGO - How do you access the current model instance from inside a form
class EditAdminForm(forms.ModelForm):
username = forms.CharField(widget=forms.TextInput())
password = forms.CharField(widget=forms.PasswordInput())
password_confirm = forms.CharField(widget=forms.PasswordInput(), initial=???)
You can see what I'm trying to do here. How would I go about pre-populating the password_confirm field (which is not part of the model)? I'm so confused.
A:
You can't access the instance in the form declaration, because there isn't one until you instantiate it.
However, if all you want to do is set dynamic initial data, do this with the initial parameter on instantiation:
form = EditAdminForm(initial={'password':'abcdef'})
A:
You can define an __init__ method in EditAdminForm,
something like:
class EditAdminForm(forms.ModelForm):
    username = forms.CharField(widget=forms.TextInput())
    password = forms.CharField(widget=forms.PasswordInput())

    def __init__(self, initial_from, data=None, initial=None):
        super(EditAdminForm, self).__init__(data=data, initial=initial)
        self.fields['password_confirm'] = forms.CharField(
            widget=forms.PasswordInput(), initial=initial_from)
|
DJANGO - How do you access the current model instance from inside a form
|
class EditAdminForm(forms.ModelForm):
username = forms.CharField(widget=forms.TextInput())
password = forms.CharField(widget=forms.PasswordInput())
password_confirm = forms.CharField(widget=forms.PasswordInput(), initial=???)
You can see what I'm trying to do here. How would I go about pre-populating the password_confirm field (which is not part of the model)? I'm so confused.
|
[
"You can't access the instance in the form declaration, because there isn't one until you instantiate it.\nHowever, if all you want to do is set dynamic initial data, do this with the initial parameter on instantation:\nform = EditAdminForm(initial={'password':'abcdef'})\n\n",
"You can define __init__ method in EditAdminForm.\nsomething like:\nclass EditAdminForm(forms.ModelForm):\n username = forms.CharField(widget=forms.TextInput())\n password = forms.CharField(widget=forms.PasswordInput())\n def __init__(self, initial_from, data=None, initial=None)\n sefl.fields['password_confirm'] = forms.CharField(widget=forms.PasswordInput(), initial=initial_from)\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"django",
"instance",
"model",
"python"
] |
stackoverflow_0001040887_django_instance_model_python.txt
|
Q:
Python 2.6 - Upload zip file - Poster 0.4
I came here via this question:
Send file using POST from a Python script
And by and large it's what I need, plus a bit more.
Besides the zip file, some additional information is needed, and the POST_DATA looks something like this:
POSTDATA =-----------------------------293432744627532
Content-Disposition: form-data; name="categoryID"
1
-----------------------------293432744627532
Content-Disposition: form-data; name="cID"
-3
-----------------------------293432744627532
Content-Disposition: form-data; name="FileType"
zip
-----------------------------293432744627532
Content-Disposition: form-data; name="name"
Kylie Minogue
-----------------------------293432744627532
Content-Disposition: form-data; name="file1"; filename="At the Beach x8-8283.zip"
Content-Type: application/x-zip-compressed
PK........................
Is this somehow possible with the poster 0.4 module (and before you ask, yes, I'm fairly new to Python...)
Kind regards,
Brian K. Andersen
A:
Poster has basic and advanced multipart support.
You may try something like this (modified from poster documentation):
# test_client.py
from poster.encode import multipart_encode
from poster.streaminghttp import register_openers
import urllib2
# Register the streaming http handlers with urllib2
register_openers()
# headers contains the necessary Content-Type and Content-Length
# datagen is a generator object that yields the encoded parameters
datagen, headers = multipart_encode({
'categoryID' : 1,
'cID' : -3,
'FileType' : 'zip',
'name' : 'Kylie Minogue',
    'file1' : open('At the Beach x8-8283.zip', 'rb')
})
# Create the Request object
request = urllib2.Request("http://localhost:5000/upload_data", datagen, headers)
# Actually do the request, and get the response
print urllib2.urlopen(request).read()
|
Python 2.6 - Upload zip file - Poster 0.4
|
I came here via this question:
Send file using POST from a Python script
And by and large it's what I need, plus a bit more.
Besides the zip file, some additional information is needed, and the POST_DATA looks something like this:
POSTDATA =-----------------------------293432744627532
Content-Disposition: form-data; name="categoryID"
1
-----------------------------293432744627532
Content-Disposition: form-data; name="cID"
-3
-----------------------------293432744627532
Content-Disposition: form-data; name="FileType"
zip
-----------------------------293432744627532
Content-Disposition: form-data; name="name"
Kylie Minogue
-----------------------------293432744627532
Content-Disposition: form-data; name="file1"; filename="At the Beach x8-8283.zip"
Content-Type: application/x-zip-compressed
PK........................
Is this somehow possible with the poster 0.4 module (and before you ask, yes, I'm fairly new to Python...)
Kind regards,
Brian K. Andersen
|
[
"Poster has basic and advanced multipart support.\nYou may try something like this (modified from poster documentation):\n# test_client.py\nfrom poster.encode import multipart_encode\nfrom poster.streaminghttp import register_openers\nimport urllib2\n\n# Register the streaming http handlers with urllib2\nregister_openers()\n\n# headers contains the necessary Content-Type and Content-Length\n# datagen is a generator object that yields the encoded parameters\ndatagen, headers = multipart_encode({\n 'categoryID' : 1,\n 'cID' : -3,\n 'FileType' : 'zip',\n 'name' : 'Kylie Minogue',\n 'file1' : open('At the Beach x8-8283.zip')\n})\n\n# Create the Request object\nrequest = urllib2.Request(\"http://localhost:5000/upload_data\", datagen, headers)\n\n# Actually do the request, and get the response\nprint urllib2.urlopen(request).read()\n\n"
] |
[
4
] |
[] |
[] |
[
"file_upload",
"python",
"upload",
"urllib2",
"zip"
] |
stackoverflow_0001042451_file_upload_python_upload_urllib2_zip.txt
|
Q:
Best Practise for transferring a MySQL table to another server?
I have a system sitting on a "Master Server", that is periodically transferring quite a few chunks of information from a MySQL DB to another server in the web.
Both servers have a MySQL Server and an Apache running. I would like an easy-to-use solution for this.
Currently I'm looking into:
XMLRPC
RestFul Services
a simple POST to a processing script
socket transfers
The app on my master is a TurboGears app, so I would prefer "pythonic", aka less ugly, solutions. Copying a dumped table to another server via FTP / SCP or something like that might be quick, but in my eyes it is also very (quick and) dirty, and I'd love to have a nicer solution.
Can anyone describe shortly how you would do this the "best-practise" way?
This doesn't necessarily have to involve databases. Dumping the table on Server1 and transferring the raw data in a structured way so Server2 can process it without parsing too much is just as good. One requirement though: as soon as the data arrives on Server2, I want it to be processed, so there has to be a notification of some sort when the transfer is done. Of course I could just write my own server sitting on a socket on the second machine, accepting the file with my own code and processing it and so forth, but this is just a very, very small piece of a very big system, so I don't want to spend half a day implementing this.
Thanks,
Tom
A:
Server 1: Convert the rows to JSON and call the RESTful API of the second server with the JSON data.
Server 2: Listen on a URI, e.g. POST /data; get the JSON data, convert it back to dictionaries or ORM objects, and insert into the DB.
sqlalchemy/sqlobject and simplejson is what you need.
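A bare-bones sketch of the sending side under those assumptions (the endpoint URL and row layout are invented):
import urllib2
import simplejson as json  # the stdlib json module works too on Python 2.6+

# rows would normally come from a SQLAlchemy/SQLObject query
rows = [{'id': 1, 'name': 'alice'}, {'id': 2, 'name': 'bob'}]
request = urllib2.Request('http://server2.example.com/data',
                          json.dumps(rows),
                          {'Content-Type': 'application/json'})
print urllib2.urlopen(request).read()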
A:
If the table is small and you can send the whole table - just deleting the old data and then inserting the new data on the remote server - then there can be an easy generic solution: you can create a long string with the table data and send it via a webservice. Here is how it can be implemented. Note that this is far from a perfect solution, just an example of how I transfer small simple tables between websites:
function DumpTableIntoString($tableName, $includeFieldsHeader = true)
{
global $adoConn;
$recordSet = $adoConn->Execute("SELECT * FROM $tableName");
if(!$recordSet) return false;
$data = "";
if($includeFieldsHeader)
{
// fetching fields
$numFields = $recordSet->FieldCount();
for($i = 0; $i < $numFields; $i++)
$data .= $recordSet->FetchField($i)->name . ",";
$data = substr($data, 0, -1) . "\n";
}
while(!$recordSet->EOF)
{
$row = $recordSet->GetRowAssoc();
foreach ($row as &$value)
{
$value = str_replace("\r\n", "", $value);
$value = str_replace('"', '\\"', $value);
if($value == null) $value = "\\N";
$value = "\"" . $value . "\"";
}
$data .= join(',', $row);
$recordSet->MoveNext();
if(!$recordSet->EOF)
$data .= "\n";
}
return $data;
}
// NOTE: CURRENTLY THE FUNCTION DOESN'T SUPPORT HANDLING THE FIELDS HEADER, SO FOR NOW IT JUST SKIPS IT
// IF NECESSARY
function FillTableFromDumpString($tableName, $dumpString, $truncateTable = true, $fieldsHeaderIncluded = true)
{
global $adoConn;
if($truncateTable)
if($adoConn->Execute("TRUNCATE TABLE $tableName") === false)
return false;
$rows = explode("\n", $dumpString);
$startRowIndex = $fieldsHeaderIncluded ? 1 : 0;
$query = "INSERT INTO $tableName VALUES ";
$numRows = count($rows);
for($i = $startRowIndex; $i < $numRows; $i++)
{
$row = explode(",", $rows[$i]);
foreach($row as &$value)
{
if($value == "\"\\N\"")
$value = "NULL";
}
$query .= "(". implode(",", $row) .")";
if($i != $numRows - 1)
$query .= ",";
}
if($adoConn->Execute($query) === false)
{
return false;
}
return true;
}
If you have large tables, then I think you need to send only new data. Ask your remote server for the latest timestamp, and then read all newer data from your main server and send the data either in a generic way (as I've shown above) or in a non-generic way (in this case you have to write separate functions for each table).
A:
Assuming your situation allows this security-wise, you forgot one transport mechanism: simply opening a mysql connection from one server to another.
Me, I would start by thinking about one script that runs regularly on the write server and opens a read-only DB connection to the read server (a bit of added security) and a full connection to its own database server.
How you then proceed depends on the data (is it just inserts to deal with? do you have to mirror deletes? how many inserts vs updates? etc) but basically you could write a script that pulled data from the read server and processed it immediately into the write server.
Also, would MySQL server replication work, or would it be too overblown as a solution?
A:
If you have access to MySQL's data port, and don't mind the constant network traffic, you can use replication.
A:
If you're using MyISAM or Archive tables, then I would highly recommend mysqlhotcopy
|
Best Practise for transferring a MySQL table to another server?
|
I have a system sitting on a "Master Server", that is periodically transferring quite a few chunks of information from a MySQL DB to another server in the web.
Both servers have a MySQL Server and an Apache running. I would like an easy-to-use solution for this.
Currently I'm looking into:
XMLRPC
RestFul Services
a simple POST to a processing script
socket transfers
The app on my master is a TurboGears app, so I would prefer "pythonic", aka less ugly, solutions. Copying a dumped table to another server via FTP / SCP or something like that might be quick, but in my eyes it is also very (quick and) dirty, and I'd love to have a nicer solution.
Can anyone describe shortly how you would do this the "best-practise" way?
This doesn't necessarily have to involve databases. Dumping the table on Server1 and transferring the raw data in a structured way so Server2 can process it without parsing too much is just as good. One requirement though: as soon as the data arrives on Server2, I want it to be processed, so there has to be a notification of some sort when the transfer is done. Of course I could just write my own server sitting on a socket on the second machine, accepting the file with my own code and processing it and so forth, but this is just a very, very small piece of a very big system, so I don't want to spend half a day implementing this.
Thanks,
Tom
|
[
"Server 1: Convert rows to JSON, call the RESTful api of second with JSON data\nServer 2: listens on a URI e.g. POST /data , get json data convert back to dictionary or ORM objects, insert into db\nsqlalchemy/sqlobject and simplejson is what you need.\n",
"If the table is small and you can send the whole table and just delete the old data and then insert the new data on remote server - then there can be an easy generic solution: you can create a long string with table data and send it via webservice. Here is how it can be implemented. Note, that this is far not perfect solution, just an example how I transfer small simple tables between websites:\nfunction DumpTableIntoString($tableName, $includeFieldsHeader = true)\n{\n global $adoConn;\n\n $recordSet = $adoConn->Execute(\"SELECT * FROM $tableName\");\n if(!$recordSet) return false;\n\n $data = \"\";\n\n if($includeFieldsHeader)\n {\n // fetching fields\n $numFields = $recordSet->FieldCount();\n for($i = 0; $i < $numFields; $i++)\n $data .= $recordSet->FetchField($i)->name . \",\";\n $data = substr($data, 0, -1) . \"\\n\";\n }\n\n while(!$recordSet->EOF)\n {\n $row = $recordSet->GetRowAssoc();\n foreach ($row as &$value)\n {\n $value = str_replace(\"\\r\\n\", \"\", $value);\n $value = str_replace('\"', '\\\\\"', $value);\n if($value == null) $value = \"\\\\N\";\n $value = \"\\\"\" . $value . \"\\\"\";\n }\n $data .= join(',', $row);\n\n $recordSet->MoveNext();\n\n if(!$recordSet->EOF)\n $data .= \"\\n\";\n }\n\n return $data;\n}\n\n// NOTE: CURRENTLY FUNCTION DOESN'T SUPPORT HANDLING FIELDS HEADER, SO NOW IT JUST SKIPS IT\n// IF NECESSARRY\nfunction FillTableFromDumpString($tableName, $dumpString, $truncateTable = true, $fieldsHeaderIncluded = true)\n{\n global $adoConn;\n\n if($truncateTable)\n if($adoConn->Execute(\"TRUNCATE TABLE $tableName\") === false)\n return false;\n\n\n $rows = explode(\"\\n\", $dumpString);\n $startRowIndex = $fieldsHeaderIncluded ? 1 : 0;\n\n $query = \"INSERT INTO $tableName VALUES \";\n $numRows = count($rows);\n\n for($i = $startRowIndex; $i < $numRows; $i++)\n {\n $row = explode(\",\", $rows[$i]);\n\n foreach($row as &$value)\n {\n if($value == \"\\\"\\\\N\\\"\")\n $value = \"NULL\";\n }\n\n $query .= \"(\". implode(\",\", $row) .\")\";\n if($i != $numRows - 1)\n $query .= \",\";\n }\n\n if($adoConn->Execute($query) === false)\n {\n return false;\n }\n\n return true;\n}\n\nIf you have large tables, then I think that you need to send only new data. Ask your remote server for the latest timestamp, and then read all newer data from your main server and send the data either in generic way (as I've shown above) or in non-generic way (in this case you have to write separate functions for each table).\n",
"Assuming your situation allows this security-wise, you forgot one transport mechanism: simply opening a mysql connection from one server to another.\nMe, I would start by thinking about one script that ran regularly on the write server and opens a read only db connection to the read server (A bit of added security) and a full connection to it's own data base server. \nHow you then proceed depends on the data (is it just inserts to deal with? do you have to mirror deletes? how many inserts vs updates? etc) but basically you could write a script that pulled data from the read server and processed it immediately into the write server.\nAlso, would mysql server replication work or would it be to over-blown as a solution?\n",
"If you have access to MySQL's data port, and don't mind the constant network traffic, you can use replication.\n",
"If you're using MyISAM or Archive tables, then I would highly recommend mysqlhotcopy\n"
] |
[
2,
1,
0,
0,
0
] |
[] |
[] |
[
"database_design",
"python",
"web_services"
] |
stackoverflow_0001043528_database_design_python_web_services.txt
|
Q:
Convert param into python?
I am trying to learn web programming in Python. I am converting my old PHP-Flash project into Python. Now, I am confused about how to set the param values and create the object using Python.
FYI, I used a single PHP file, index.php, to communicate with flash.swf. My other PHP files, like login.php, logout.php, mail.php, xml.php etc., used to be called from this.
Below is the flash object call from index.php-
<object classid="clsid:XXXXXXXXX-YYYY-ZZZZ-AAAA-BBBBBBBBBB" codebase="http://fpdownload.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=7,0,0,0" width="100%" height="100%" id="main" align="middle">
<param name="allowScriptAccess" value="all" />
<param name="flashvars" value= />
<param name="movie" value="flash.swf?<?=substr($_SERVER["REQUEST_URI"],strrpos($_SERVER["SCRIPT_NAME"],"/")+1);?>" />
<param name="loop" value="false" />
<param name="quality" value="high" />
<param name="bgcolor" value="#eeeeee" />
<embed src="flash.swf?<?=substr($_SERVER["REQUEST_URI"],strrpos($_SERVER["SCRIPT_NAME"],"/")+1);?>" loop="false" quality="high" bgcolor="#eeeeee" width="100%" height="100%" name="main" align="middle" allowScriptAccess="all" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" />
</object>
Can any geek help me out with examples of how I can convert this into Python? Or any reference on how it can be done?
Cheers :)
A:
Python is a general-purpose language, not exactly made for the web. There exist some embeddable PHP-like solutions, but in most Python web frameworks, you write Python and HTML (template) code separately.
For example in Django web framework you first write a view (view — you know — from that famous Model-View-Controller pattern) function:
def my_view(request, movie):
return render_to_response('my_view.html',
{'movie': settings.MEDIA_URL + 'flash.swf?' + movie})
And register it with URL dispatcher (in Django, there is a special file, called urls.py):
...
url(r'/flash/(?P<movie>.+)$', 'myapp.views.my_view'),
...
Then a my_view.html template:
...
<object classid="clsid:XXXXXXXXX-YYYY-ZZZZ-AAAA-BBBBBBBBBB" codebase="http://fpdownload.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=7,0,0,0" width="100%" height="100%" id="main" align="middle">
<param name="allowScriptAccess" value="all" />
<param name="flashvars" value= />
<param name="movie" value="{{ movie }}" />
<param name="loop" value="false" />
<param name="quality" value="high" />
<param name="bgcolor" value="#eeeeee" />
<embed src="{{ movie }}" loop="false" quality="high" bgcolor="#eeeeee" width="100%" height="100%" name="main" align="middle" allowScriptAccess="all" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" />
</object>
...
While this may seem like a lot of work for such a tiny task, when you have to write something bigger than a simple value-substituting script, the framework pays back. For example, you may actually write a simple blog application in less than 100 lines of code. The framework will automatically take care of URL parsing (somewhat like Apache's mod_rewrite for PHP), complex templating, database access, form generation, processing and validation, user authentication, debugging and so on.
There are a lot of different frameworks, each having its own good and bad sides. I recommend spending some time reading introductions and choosing one you like. Personally, I like Django, and had success with web.py. I've also heard good things about Pylons and TurboGears.
If you need something really simple (like in your example), where you don't need almost anything, you may just write small WSGI application, which then can be used, for example, with Apache's mod_python or mod_wsgi. It will be something like this:
def return_movie_html(environ, start_response):
request_uri = environ.get('REQUEST_URI')
movie_uri = request_uri[request_uri.rfind('/')+1:]
start_response('200 OK', [('Content-Type', 'text/html')])
return ['''
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
...
<object ...>
<param name="allowScriptAccess" value="all" />
<param name="flashvars" value= />
<param name="movie" value="%(movie)s" />
<param name="loop" value="false" />
<param name="quality" value="high" />
<param name="bgcolor" value="#eeeeee" />
<embed src="%(movie)s" loop="false" ... />
</object>
...
</html>
''' % {'movie': movie_uri}]
To sum it up: without additional support libraries, Python web programming is somewhat painful, and requires doing everything from URI parsing to output formatting from scratch. However, there are a lot of good libraries and frameworks to make the job not only painless, but sometimes even pleasant :) Learn more about them, and I believe you won't regret it.
A:
PHP itself can be used as a templating language for generating HTML, but in Python you will need to use one of the several templating engines available. If you want, you can do simple templating using the string.Template class, but that is not what you would want to do.
Your first step should be to decide on the web framework you are going to use, and use the templating it provides, e.g. django
But if you want just a plain CGI Python script, you will need to write your HTML to stdout.
So create a simple HTML template and replace template values with your parameters, e.g.
from string import Template
htmlTemplate = Template("""
<html>
<title>$title</title>
</html>
""")
myvalues = {'title':'wow it works!'}
print "Content-type: text/html"
print
print htmlTemplate.substitute(myvalues)
To work with CGI you can use the cgi module, e.g.
import cgi
form = cgi.FieldStorage()
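Putting the two together, a minimal sketch (the movie query parameter is just an example):
#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()
movie = form.getfirst('movie', 'flash.swf')  # hypothetical query parameter

print "Content-type: text/html"
print
print '<param name="movie" value="%s" />' % cgi.escape(movie, quote=True)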
A:
You can also try using simple web framework like web.py which has a simple templating system along with a simple ORM for database related functionalities. A basic tutorial is available here which is enough to help you get a simple web page, such as yours, up and running.
A:
First of all, just make a web2py view that contains {{=BEAUTIFY(response.env)}} and you will be able to see all environment/system variables defined in web2py.
Look at slide 24 (www.web2py.com) to see the default mapping of URLs into web2py variables.
To solve your problem, the easiest way would be to change the paths in the flash code, but I will assume you do not want to do that. I will assume instead that your URLs all look like
http://127.0.0.1:8000/[..script..].php[..anything..]
and your web2py app is called "app".
Here is what you do:
Create routes.py in the main web2py folder that contains
routes_in = (('/(?P<script>\w+)\.php(?P<anything>.*)',
              '/app/default/\g<script>\g<anything>'),
             ('/flash.swf', '/app/static/flash.swf'))
routes_out = (('/app/default/(?P<script>\w+)\.php(?P<anything>.*)',
               '/\g<script>.php\g<anything>'),)
this maps
http://127.0.0.1:8000/index.php into http://127.0.0.1:8000/app/default/index
http://127.0.0.1:8000/index.php/junk into http://127.0.0.1:8000/app/default/index/junk
http://127.0.0.1:8000/flash.swf into http://127.0.0.1:8000/app/static/flash.swf
create a controller default.py that contains
def index(): return dict()
Put the file "flash.swf" in the "app/static" folder.
Create a view default/index.html that contains
<object classid="clsid:XXXXXXXXX-YYYY-ZZZZ-AAAA-BBBBBBBBBB" codebase="http://fpdownload.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=7,0,0,0" width="100%" height="100%" id="main" align="middle">
<param name="allowScriptAccess" value="all" />
<param name="flashvars" value= />
<param name="movie" value="flash.swf?{{=request.env.web2py_original_uri[len(request.function)+5:]}}" />
<param name="loop" value="false" />
<param name="quality" value="high" />
<param name="bgcolor" value="#eeeeee" />
<embed src="flash.swf?{{=request.env.web2py_original_uri[len(request.function)+5:]}}" loop="false" quality="high" bgcolor="#eeeeee" width="100%" height="100%" name="main" align="middle" allowScriptAccess="all" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" />
</object>
I am not sure whether it is "+5" or "+4" above. Give it a try.
I suggest moving this discussion to the web2py mailing list, since there is a much simpler way by changing the paths.
|
Convert param into python?
|
I am trying to learn web programming in Python. I am converting my old PHP-Flash project into Python. Now, I am confused about how to set the param values and create the object using Python.
FYI, I used a single PHP file, index.php, to communicate with flash.swf. My other PHP files, like login.php, logout.php, mail.php, xml.php etc., used to be called from this.
Below is the flash object call from index.php-
<object classid="clsid:XXXXXXXXX-YYYY-ZZZZ-AAAA-BBBBBBBBBB" codebase="http://fpdownload.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=7,0,0,0" width="100%" height="100%" id="main" align="middle">
<param name="allowScriptAccess" value="all" />
<param name="flashvars" value= />
<param name="movie" value="flash.swf?<?=substr($_SERVER["REQUEST_URI"],strrpos($_SERVER["SCRIPT_NAME"],"/")+1);?>" />
<param name="loop" value="false" />
<param name="quality" value="high" />
<param name="bgcolor" value="#eeeeee" />
<embed src="flash.swf?<?=substr($_SERVER["REQUEST_URI"],strrpos($_SERVER["SCRIPT_NAME"],"/")+1);?>" loop="false" quality="high" bgcolor="#eeeeee" width="100%" height="100%" name="main" align="middle" allowScriptAccess="all" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" />
</object>
Can any geek help me out with examples of how I can convert this into Python? Or any reference on how it can be done?
Cheers :)
|
[
"Python is a general purpose language, not exactly made for web. There exists some embeddable PHP-like solutions, but in most Python web frameworks, you write Python and HTML (template) code separately.\nFor example in Django web framework you first write a view (view — you know — from that famous Model-View-Controller pattern) function:\ndef my_view(request, movie):\n return render_to_template('my_view.html',\n {'movie': settings.MEDIA_URL + 'flash.swf?' + movie})\n\nAnd register it with URL dispatcher (in Django, there is a special file, called urls.py):\n...\nurl(r'/flash/(?P<movie>.+)$', 'myapp.views.my_view'),\n...\n\nThen a my_view.html template:\n...\n<object classid=\"clsid:XXXXXXXXX-YYYY-ZZZZ-AAAA-BBBBBBBBBB\" codebase=\"http://fpdownload.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=7,0,0,0\" width=\"100%\" height=\"100%\" id=\"main\" align=\"middle\">\n<param name=\"allowScriptAccess\" value=\"all\" />\n<param name=\"flashvars\" value= />\n<param name=\"movie\" value=\"{{ movie }}\" />\n<param name=\"loop\" value=\"false\" />\n<param name=\"quality\" value=\"high\" />\n<param name=\"bgcolor\" value=\"#eeeeee\" />\n<embed src=\"{{ movie }}\" loop=\"false\" quality=\"high\" bgcolor=\"#eeeeee\" width=\"100%\" height=\"100%\" name=\"main\" align=\"middle\" allowScriptAccess=\"all\" type=\"application/x-shockwave-flash\" pluginspage=\"http://www.macromedia.com/go/getflashplayer\" />\n</object>\n...\n\nWhile this may seem like a lot of work for such a tiny task, when you have to write something bigger than simple value-substituting script, the framework pays back. For example, you may actually write a simple blog application in less than 100 lines of code. The framework will automatically take care of URL parsing (somehow like Apache's mod_rewrite for PHP), complex templating, database access, form generation, processing and validation, user authentication, debugging and so on.\nThere are a lot of different frameworks, each having its own good and bad sides. I recommend spending some time reading introductions and choosing one you like. Personally, I like Django, and had success with web.py. I've also heard good things about Pylons and TurboGears.\nIf you need something really simple (like in your example), where you don't need almost anything, you may just write small WSGI application, which then can be used, for example, with Apache's mod_python or mod_wsgi. It will be something like this:\ndef return_movie_html(environ, start_response):\n request_uri = environ.get('REQUEST_URI')\n movie_uri = request_uri[request_uri.rfind('/')+1:]\n start_response('200 OK', [('Content-Type', 'text/html')])\n return ['''\n <!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\"\n \"http://www.w3.org/TR/html4/strict.dtd\">\n <html>\n ...\n <object ...>\n <param name=\"allowScriptAccess\" value=\"all\" />\n <param name=\"flashvars\" value= />\n <param name=\"movie\" value=\"%(movie)s\" />\n <param name=\"loop\" value=\"false\" />\n <param name=\"quality\" value=\"high\" />\n <param name=\"bgcolor\" value=\"#eeeeee\" />\n <embed src=\"%(movie)s\" loop=\"false\" ... />\n </object>\n ...\n </html>\n ''' % {'movie': movie_uri}]\n\nTo sum it up: without additional support libraries, Python web programming is somehow painful, and requires doing everything from the URI parsing to output formatting from scratch. However, there are a lot of good libraries and frameworks, to make job not only painless, but sometimes even pleasant :) Learn about them more, and I believe you won't regret it.\n",
"php itself can be used as templating language for generatin html, but in python you will need to use one of the several templating engines available. If you want you can do simple templating using string.Template class but that is not what you would want to do.\nYour first step should be to decide on web framework you are going to use, and use templating it provides, e.g django\nBut if you want just a plane cgi python script, you will need to write your html to stdout.\nso create a simple html and replace template values with your parameters e.g.\nfrom string import Template \n\nhtmlTemplate = Template(\"\"\" \n<html>\n<title>$title</title>\n</html> \n\"\"\")\n\nmyvalues = {'title':'wow it works!'}\nprint \"Content-type: text/html\"\nprint\nprint htmlTemplate.substitute(myvalues)\n\nto work with cgi you can use cgi module e.g.\nimport cgi\nform = cgi.FieldStorage()\n\n",
"You can also try using simple web framework like web.py which has a simple templating system along with a simple ORM for database related functionalities. A basic tutorial is available here which is enough to help you get a simple web page, such as yours, up and running.\n",
"First of all just make a web2py view that contains {{=BEAUTIFY(response.env)}} you will be able to see all environment systems variables defined in web2py. \nLook into slide 24 (www.web2py.com) to see the default mapping of urls into web2py variables.\nTo solve your problem, the easier way would be to change the paths in the flash code, but I will assume you do not want to do that. I will assume instead your urls all look like\nhttp://127.0.0.1:8000/[..script..].php[..anything..]\n\nand your web2py app is called \"app\".\nHere is what you do:\nCreate routes.py in the main web2py folder that contains\nroutes_in=(('/(?P<script>\\w+)\\.php(?P<anything>.*)',\n '/app/default/\\g<script>\\g<anything>'),\n (('/flash.swf','/app/static/slash.swf'))\nroutes_out(('/app/default/(?P<script>\\w+)\\.php(?P<anything>.*)',\n '/\\g<script>\\.php\\g<anything>'),)\n\nthis maps\nhttp://127.0.0.1:8000/index.php into http://127.0.0.1:8000/app/default/index\nhttp://127.0.0.1:8000/index.php/junk into http://127.0.0.1:8000/app/default/index/junk\nhttp://127.0.0.1:8000/flash.swf into http://127.0.0.1:8000/app/static/flash.swf\n\ncreate a controller default.py that contains\ndef index(): return dict()\n\nPut the file \"flash.swf\" in the \"app/static\" folder.\nCreate a view default/index.html that contains\n<object classid=\"clsid:XXXXXXXXX-YYYY-ZZZZ-AAAA-BBBBBBBBBB\" codebase=\"http://fpdownload.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=7,0,0,0\" width=\"100%\" height=\"100%\" id=\"main\" align=\"middle\">\n<param name=\"allowScriptAccess\" value=\"all\" />\n<param name=\"flashvars\" value= />\n<param name=\"movie\" value=\"flash.swf?{{=request.env.web2py_original_uri[len(request.function)+5:]}}\" />\n\n<param name=\"loop\" value=\"false\" />\n<param name=\"quality\" value=\"high\" />\n<param name=\"bgcolor\" value=\"#eeeeee\" />\n<embed src=\"flash.swf?{{=request.env.web2py_original_uri[len(request.function)+5:]}}\" />\n\" loop=\"false\" quality=\"high\" bgcolor=\"#eeeeee\" width=\"100%\" height=\"100%\" name=\"main\" align=\"middle\" allowScriptAccess=\"all\" type=\"application/x-shockwave-flash\" pluginspage=\"http://www.macromedia.com/go/getflashplayer\" />\n</object>\n\nI am not sure on whether it is \"+5\" or \"+4\"above. Give it a try.\nI suggest moving this discussion on the web2py mailing list since there is a much simpler way by changing the paths.\n"
] |
[
3,
0,
0,
0
] |
[] |
[] |
[
"parameters",
"php",
"python"
] |
stackoverflow_0001042391_parameters_php_python.txt
|
Q:
Splitting a string @ once using different seps
datetime = '0000-00-00 00:00:00'.split('-')
Right now it just splits it at the hyphen, but is it possible to split this string at both -'s and :'s ?
A:
I'm guessing you also want to split on the space in the middle:
import re
values = re.split(r'[- :]', "1122-33-44 55:66:77")
print values
# Prints ['1122', '33', '44', '55', '66', '77']
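If the pieces are needed as numbers rather than strings, they can be converted in the same pass (a small sketch building on the line above):
import re
parts = re.split(r'[- :]', '1122-33-44 55:66:77')
numbers = [int(p) for p in parts]
print numbers
# Prints [1122, 33, 44, 55, 66, 77]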
A:
One idea would be something like this (untested):
years, months, days = the_string.split('-')
days, time = days.split(' ')
time = time.split(':')
Or this, which fits your data better.
date, time = the_string.split(' ')
years, months, days = date.split('-')
hours, minute, seconds = time.split(":")
A:
Maybe what you really want is to parse the date
>>> from time import strptime
>>> strptime( '2000-01-01 00:00:00', '%Y-%m-%d %H:%M:%S')
time.struct_time(tm_year=2000, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=5, tm_yday=1, tm_isdst=-1)
>>>
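If a real date object is more convenient than a struct_time, datetime.strptime (Python 2.5+) accepts the same format string; note the all-zero string from the question would fail here, since month 0 is not a valid date:
>>> from datetime import datetime
>>> dt = datetime.strptime('2000-01-01 00:00:00', '%Y-%m-%d %H:%M:%S')
>>> dt.year, dt.month, dt.day
(2000, 1, 1)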
A:
Using regexps, and using a pattern to match the desired format,
>>> pat = r"(\d{4})-(\d\d)-(\d\d) (\d\d):(\d\d):(\d\d)"
>>> m = re.match(pat,'0000-00-00 00:00:00')
>>> m.groups()
('0000', '00', '00', '00', '00', '00')
>>>
A more verbose option, named groups, will pair field names with values (for @skurpur).
>>> pat = "(?P<year>\d{4})-(?P<month>\d\d)-(?P<day>\d\d) (?P<hour>\d\d):(?P<min>\d\d):(?P<sec>\d\d)"
>>> m = re.match(pat,'0000-00-00 00:00:00')
>>> m.groupdict()
{'hour': '00', 'min': '00', 'month': '00', 'sec': '00', 'year': '0000', 'day': '00'}
>>>
|
Splitting a string @ once using different seps
|
datetime = '0000-00-00 00:00:00'.split('-')
Right now it just splits it at the hyphen, but is it possible to split this string at both -'s and :'s ?
|
[
"I'm guessing you also want to split on the space in the middle:\nimport re\nvalues = re.split(r'[- :]', \"1122-33-44 55:66:77\")\nprint values\n# Prints ['1122', '33', '44', '55', '66', '77']\n\n",
"One idea would be something like this (untested):\nyears, months, days = the_string.split('-')\ndays, time = days.split(' ')\ntime = time.split(':')\n\nOr this, which fits your data better.\ndate, time = the_string.split(' ')\nyears, months, days = date.split('-')\nhours, minute, seconds = time.split(\":\")\n\n",
"Maybe what you really want is to parse the date \n>>> from time import strptime\n>>> strptime( '2000-01-01 00:00:00', '%Y-%m-%d %H:%M:%S') \ntime.struct_time(tm_year=2000, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=5, tm_yday=1, tm_isdst=-1)\n>>>\n\n",
"Using regexps, and using a pattern to match the desired format,\n>>> pat = r\"(\\d{4})-(\\d\\d)-(\\d\\d) (\\d\\d):(\\d\\d):(\\d\\d)\"\n>>> m = re.match(pat,'0000-00-00 00:00:00')\n>>> m.groups()\n('0000', '00', '00', '00', '00', '00')\n>>> \n\nA more verbose option, named groups, will pair field names with values (for @skurpur).\n>>> pat = \"(?P<year>\\d{4})-(?P<month>\\d\\d)-(?P<day>\\d\\d) (?P<hour>\\d\\d):(?P<min>\\d\\d):(?P<sec>\\d\\d)\"\n>>> m = re.match(pat,'0000-00-00 00:00:00')\n>>> m.groupdict()\n{'hour': '00', 'min': '00', 'month': '00', 'sec': '00', 'year': '0000', 'day': '00'}\n>>> \n\n"
] |
[
13,
5,
2,
0
] |
[] |
[] |
[
"python",
"split",
"string"
] |
stackoverflow_0001042751_python_split_string.txt
|
Q:
Unable to solve a Python error message
The code is from K. Pollari-Malmi's lecture notes for the course "Introduction to Programming":
def main():
print "Ohjelma laskee asuntolainan kuukausierat."
rivi = raw_input("Anna lainasumma: ")
lainasumma = float(rivi)
rivi = raw_input("Anna laina-aika vuosina: ")
laina_aika = int(rivi)
if laina_aika < 1:
print "liian lyhyt laina-aika"
else:
kk_lkm = 12 * laina_aika
rivi = raw_input("Anna korkoprosentti: ")
korko = float(rivi)
lyhennys = lainasumma / kk_lkm
paaoma = lainasumma
i = 0
print " lyhennys korko yhteensa"
while i < kk_lkm:
i = i + 1
korkoera = korko / 1200.0 * paaoma
paaoma = paaoma - lyhennys
kuukausiera = korkoera + lyhennys
print "%2d. %8.2f %8.2f %8.2f" % \ # mistake probably here
(i, lyhennys, korkoera, kuukausiera)
main()
I get the syntax error
SyntaxError: unexpected character after line continuation character
How can you solve the error message?
A:
Several answers already gave you the crux of your problem, but I want to make a plug for my favorite way to get logical line continuation in Python, when feasible:
print "%2d. %8.2f %8.2f %8.2f" % ( # no mistake here
i, lyhennys, korkoera, kuukausiera)
i.e., instead of using extra parentheses as some answers advise, you can take advantage of any parenthesis you already naturally happen to have -- that will tell Python that you need logical line continuation, too;-)
A:
You can't have anything, even whitespace, after the line continuation character.
Either delete the whitespace, or wrap the entire line in a pair of parentheses. Python implicitly joins lines between parentheses, curly braces, and square brackets:
print ( "%2d. %8.2f %8.2f %8.2f" %
(i, lyhennys, korkoera, kuukausiera) )
A:
Try modifying these lines:
print "%2d. %8.2f %8.2f %8.2f" % \ # mistake probably here
(i, lyhennys, korkoera, kuukausiera)
to this line:
print "%2d. %8.2f %8.2f %8.2f" % (i, lyhennys, korkoera, kuukausiera)
Also, note that a line ending with a backslash cannot carry a comment. So your #mistake probably here comment is likely causing the problem.
A:
Try rewriting this:
print "%2d. %8.2f %8.2f %8.2f" % \ # mistake probably here
(i, lyhennys, korkoera, kuukausiera)
To this:
print ("%2d. %8.2f %8.2f %8.2f" %
(i, lyhennys, korkoera, kuukausiera))
\ should work too, but IMO it's less readable.
A:
Replace:
print "%2d. %8.2f %8.2f %8.2f" % \
(i, lyhennys, korkoera, kuukausiera)
By:
print "%2d. %8.2f %8.2f %8.2f" % (
i, lyhennys, korkoera, kuukausiera)
General remark: always use English for identifiers
A:
In general, I find I don't use line continuation in Python. You can make it cleaner with parentheses and so on.
A:
"\" means "this line continue to the next line" and it can't have any character after it. There probably is a space right after.
Two solution :
Ensure there is not spaces after;
Rewrite the statement on one line :
print "%2d. %8.2f %8.2f %8.2f" % (i, lyhennys, korkoera, kuukausiera)
You can even use parentheses to make it fit on several lines:
print "%2d. %8.2f %8.2f %8.2f" % (
i, lyhennys, korkoera, kuukausiera)
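A minimal sketch showing both behaviours side by side:
total = (1 +  # a comment is fine inside parentheses
         2)
# total = 1 + \  # SyntaxError: nothing may follow the backslash
#         2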
|
Unable to solve a Python error message
|
The code is from K. Pollari-Malmi's lecture notes for the course "Introduction to Programming":
def main():
print "Ohjelma laskee asuntolainan kuukausierat."
rivi = raw_input("Anna lainasumma: ")
lainasumma = float(rivi)
rivi = raw_input("Anna laina-aika vuosina: ")
laina_aika = int(rivi)
if laina_aika < 1:
print "liian lyhyt laina-aika"
else:
kk_lkm = 12 * laina_aika
rivi = raw_input("Anna korkoprosentti: ")
korko = float(rivi)
lyhennys = lainasumma / kk_lkm
paaoma = lainasumma
i = 0
print " lyhennys korko yhteensa"
while i < kk_lkm:
i = i + 1
korkoera = korko / 1200.0 * paaoma
paaoma = paaoma - lyhennys
kuukausiera = korkoera + lyhennys
print "%2d. %8.2f %8.2f %8.2f" % \ # mistake probably here
(i, lyhennys, korkoera, kuukausiera)
main()
I get the syntax error
SyntaxError: unexpected character after line continuation character
How can you solve the error message?
|
[
"Several answers already gave you the crux of your problem, but I want to make a plug for my favorite way to get logical line continuation in Python, when feasible:\nprint \"%2d. %8.2f %8.2f %8.2f\" % ( # no mistake here\n i, lyhennys, korkoera, kuukausiera)\n\ni.e., instead of using extra parentheses as some answers advise, you can take advantage of any parenthesis you already naturally happen to have -- that will tell Python that you need logical line continuation, too;-)\n",
"You can't have anything, even whitespace, after the line continuation character.\nEither delete the whitespace, or wrap the entire line in a pair of parentheses. Python implicitly joins lines between parentheses, curly braces, and square brackets:\nprint ( \"%2d. %8.2f %8.2f %8.2f\" % \n (i, lyhennys, korkoera, kuukausiera) )\n\n",
"Try modifying these lines:\n print \"%2d. %8.2f %8.2f %8.2f\" % \\ # mistake probably here\n (i, lyhennys, korkoera, kuukausiera)\n\nto this line:\n print \"%2d. %8.2f %8.2f %8.2f\" % (i, lyhennys, korkoera, kuukausiera)\n\nAlso, note that a line ending with a backslash cannot carry a comment. So your #mistake probably here comment is likely causing the problem. \n",
"Try rewriting this:\nprint \"%2d. %8.2f %8.2f %8.2f\" % \\ # mistake probably here\n (i, lyhennys, korkoera, kuukausiera)\n\nTo this:\nprint (\"%2d. %8.2f %8.2f %8.2f\" %\n (i, lyhennys, korkoera, kuukausiera))\n\n\\ should work too, but IMO it's less readable.\n",
"Replace:\nprint \"%2d. %8.2f %8.2f %8.2f\" % \\\n (i, lyhennys, korkoera, kuukausiera)\n\nBy:\nprint \"%2d. %8.2f %8.2f %8.2f\" % (\n i, lyhennys, korkoera, kuukausiera)\n\nGeneral remark: always use English for identifiers\n",
"In general, I find I don't use line continuation in Python. You can make it cleaner with parentheses and so on.\n",
"\"\\\" means \"this line continue to the next line\" and it can't have any character after it. There probably is a space right after.\nTwo solution :\nEnsure there is not spaces after;\nRewrite the statement on one line :\nprint \"%2d. %8.2f %8.2f %8.2f\" % (i, lyhennys, korkoera, kuukausiera)\n\nYou can even use parenthesis to make it fit on several lines :\nprint \"%2d. %8.2f %8.2f %8.2f\" % (\n i, lyhennys, korkoera, kuukausiera)\n\n"
] |
[
10,
6,
2,
2,
2,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001044705_python.txt
|
Q:
Setting the flags field of the IP header
I have a simple Python script that uses the socket module to send a UDP packet. The script works fine on my Windows box, but on my Ubuntu Linux PC the packet it sends is slightly different. On Windows the flags field in the IP header is zero, but using the same code on Linux created a packet with the flags field set to 4. I'd like to modify my script so it has consistent behavior on Windows and Linux.
Is there a method for controlling the flags field in the socket module? Or, is this a setting I have to change in Linux?
A:
Here's the route I ended up taking. I followed the link posted by SashaN in the comments of D.Shwley's answer and learned a little bit about why the "don't fragment" bit is set in Linux's UDP packets. Turns out it has something to do with PMTU discovery. Long story short, you can clear the don't-fragment bit on your UDP packets in Python by using the setsockopt method of the socket object.
import socket
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DONT = 0 # Never send DF frames.
IP_PMTUDISC_WANT = 1 # Use per route hints.
IP_PMTUDISC_DO = 2 # Always DF.
IP_PMTUDISC_PROBE = 3 # Ignore dst pmtu.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("10.0.0.1", 8000))
s.send("Hello World!") # DF bit is set in this packet
s.setsockopt(socket.SOL_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DONT)
s.send("Hello World!") # DF bit is cleared in this packet
A:
I'm guessing that the flags field is actually set to 2 = b010 instead of 4 - flags equal to 4 is an invalid IP packet. Remember that flags is a 3 bit value in the IP Header. I would expect to see UDP datagrams with a flags value of 2 which means "Don't fragment".
As for your question, I don't believe there is a way to set the IP flags directly without going all of the way to using raw sockets. I wouldn't worry about it since most applications don't really have a good reason to muck with IP or even UDP/TCP headers directly.
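For completeness, the raw-socket route starts roughly like this (a sketch only; it needs root privileges, and once IP_HDRINCL is set you must build the whole IP header, flags field included, yourself, e.g. with struct.pack):
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP)
# Tell the kernel not to prepend its own IP header to outgoing packets.
s.setsockopt(socket.SOL_IP, socket.IP_HDRINCL, 1)
# Every buffer passed to s.sendto() must now begin with a hand-built
# 20-byte IP header, followed by the UDP header and the payload.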
A:
The construct library might do the job?
|
Setting the flags field of the IP header
|
I have a simple Python script that uses the socket module to send a UDP packet. The script works fine on my Windows box, but on my Ubuntu Linux PC the packet it sends is slightly different. On Windows the flags field in the IP header is zero, but using the same code on Linux created a packet with the flags field set to 4. I'd like to modify my script so it has consistent behavior on Windows and Linux.
Is there a method for controlling the flags field in the socket module? Or, is this a setting I have to change in Linux?
|
[
"Here's the route I ended up taking. I followed the link posted by SashaN in the comments of D.Shwley's answer and learned a little bit about why the \"don't fragment\" bit is set in Linux's UDP packets. Turns out it has something to do with PMTU discovery. Long story short, you can clear the don't fragment bit from your UDP packets in Python by using the setsockopts function in the socket object. \nimport socket\nIP_MTU_DISCOVER = 10\nIP_PMTUDISC_DONT = 0 # Never send DF frames.\nIP_PMTUDISC_WANT = 1 # Use per route hints.\nIP_PMTUDISC_DO = 2 # Always DF.\nIP_PMTUDISC_PROBE = 3 # Ignore dst pmtu.\ns = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\ns.connect((\"10.0.0.1\", 8000))\ns.send(\"Hello World!\") # DF bit is set in this packet\ns.setsockopt(socket.SOL_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DONT)\ns.send(\"Hello World!\") # DF bit is cleared in this packet\n\n",
"I'm guessing that the flags field is actually set to 2 = b010 instead of 4 - flags equal to 4 is an invalid IP packet. Remember that flags is a 3 bit value in the IP Header. I would expect to see UDP datagrams with a flags value of 2 which means \"Don't fragment\".\nAs for your question, I don't believe there is a way to set the IP flags directly without going all of the way to using raw sockets. I wouldn't worry about it since most applications don't really have a good reason to muck with IP or even UDP/TCP headers directly.\n",
"construct might do the job? \n"
] |
[
6,
2,
1
] |
[] |
[] |
[
"python",
"sockets"
] |
stackoverflow_0001035799_python_sockets.txt
|
Q:
Unable to understand a line of Python code exactly
Alex's answer has the following line when translated to English
print "%2d. %8.2f %8.2f %8.2f" % (
i, payment, interest, monthPayment)
I am unsure about the line
"%2d. %8.2f %8.2f %8.2f" % #Why do we need the last % here?
It seems to mean the following
apply %2d. to i
apply %8.2f to payment
apply %8.2f to interest
apply %8.2f to monthPayment
The %-words seem to mean the following
1. %2d.: a decimal presentation of two decimals
2-4. %8.2f: a floating point presentation of two decimals
I am not sure why we use the 8 in %8.2f.
How do you understand the challenging line?
A:
The 8 in 8.2 is the width
"Minimum number of characters to be printed. If the value to be printed is shorter than this number, the result is padded with blank spaces. The value is not truncated even if the result is larger"
The 2 is the number of decimal places
The final % just links the format string (in quotes) with the list of arguments (in brackets).
It's a bit confusing that they chose a % to do this - there is probably some deep python reason.
edit: Apparently '%' is used simply because '%' is used inside the format - which is IMHO stupid and guaranteed to cause confusion. It's like requiring an extra dot at the end of a floating point number to show that it's floating point!
A:
The last % is an operator that takes the string before it and the tuple after and applies the formatting as you note. See the Python tutorial for more details.
A:
The % is an operator which makes a format string. A simple example would be:
"%s is %s" % ( "Alice", "Happy" )
Which would evaluate to the string "Alice is Happy". The format string that is provided defines how the values you pass are put into the string; the syntax is available here. In short the d is "treat as a decimal number" and the 8.2 is "pad to 8 characters and round to 2 decimal places". In essence it looks like that format in particular is being used so that the answers line up when viewed with a monospace font. :)
In my code example the s means "treat as a string".
A:
The % after a string tells Python to fill in the placeholders in the string on the left side of the
'%' operator with the items in the tuple on the right side of the '%' operator.
The '%' operator knows where to put the values by looking for characters in the string starting with %.
Your confusion is that you think the % operator and the % character in the string are the same.
Try to look at it this way: outside a string % is an operator, inside a string it is possibly a template for substitution.
A:
As usual, a quote of the doc is required - string-formatting:
String and Unicode objects have one unique built-in operation: the % operator (modulo). This is also known as the string formatting or interpolation operator. Given format % values (where format is a string or Unicode object), % conversion specifications in format are replaced with zero or more elements of values. The effect is similar to using sprintf() in the C language.
And the description of the conversion specifier to explain %8.2f
A conversion specifier contains two or more characters and has the following components, which must occur in this order:
The '%' character, which marks the start of the specifier.
Mapping key (optional), consisting of a parenthesised sequence of characters (for example, (somename)).
Conversion flags (optional), which affect the result of some conversion types.
Minimum field width (optional). If specified as an '*' (asterisk), the actual width is read from the next element of the tuple in values, and the object to convert comes after the minimum field width and optional precision.
Precision (optional), given as a '.' (dot) followed by the precision. If specified as '*' (an asterisk), the actual precision is read from the next element of the tuple in values, and the value to convert comes after the precision.
Length modifier (optional).
Conversion type.
When the right argument is a dictionary (or other mapping type), the format string includes mapping keys (2). Breaking the example to 2 steps, we have a dictionary and a format that includes keys from the dictionary (the # is a key):
>>> mydict = {'language':'python', '#':2}
>>> '%(language)s has %(#)03d quote types.' % mydict
'python has 002 quote types.'
>>>
A:
The %8.2f means: allow 8 character spaces to hold the number given by the corresponding variable holding a float, and give it a decimal precision of 2.
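A quick interactive check makes the width/precision split concrete:
>>> "%8.2f" % 3.14159
'    3.14'
>>> "%2d." % 7
' 7.'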
|
Unable to understand a line of Python code exactly
|
Alex's answer has the following line when translated to English
print "%2d. %8.2f %8.2f %8.2f" % (
i, payment, interest, monthPayment)
I am unsure about the line
"%2d. %8.2f %8.2f %8.2f" % #Why do we need the last % here?
It seems to mean the following
apply %2d. to i
apply %8.2f to payment
apply %8.2f to interest
apply %8.2f to monthPayment
The %-words seem to mean the following
1. %2d.: a decimal presentation of two decimals
2-4. %8.2f: a floating point presentation of two decimals
I am not sure why we use the 8 in %8.2f.
How do you understand the challenging line?
|
[
"The 8 in 8.2 is the width\n\"Minimum number of characters to be printed. If the value to be printed is shorter than this number, the result is padded with blank spaces. The value is not truncated even if the result is larger\"\nThe 2 is the number of decimal places\nThe final % just links the format string (in quotes) with the list of arguments (in brackets).\nIt's a bit confusing that they chose a % to do this - there is probably some deep python reason.\nedit: Apparently '%' is used simply because '%' is used inside the format - which is IMHO stupid and guaranteed to cause confusion. It's like requiring an extra dot at the end of a floating point number to show that it's floating point!\n",
"The last % is an operator that takes the string before it and the tuple after and applies the formatting as you note. See the Python tutorial for more details.\n",
"The % is an operator which makes a format string. A simple example would be:\n\"%s is %s\" % ( \"Alice\", \"Happy\" )\n\nWhich would evaluate to the string \"Alice is Happy\". The format string that is provided defines how the values you pass are put into the string; the syntax is available here. In short the d is \"treat as a decimal number\" and the 8.2 is \"pad to 8 characters and round to 2 decimal places\". In essence it looks like that format in particular is being used so that the answers line up when viewed with a monospace font. :)\nIn my code example the s means \"treat as a string\".\n",
"The % after a string tells Python to attempt to fill in the variables on the left side of the \n'%' operator with the items in the list on the right side of the '%' operator.\nThe '%' operator knows to find the variable in the string by looking for character in the string starting with %. \nYour confusion is that you think the % operator and the % character in the string are the same.\nTry to look at it this way, outside a string % is an operator, inside a string it is possibly a template for substitution. \n",
"As usual, a quote of the doc is required - string-formatting:\n\nString and Unicode objects have one unique built-in operation: the % operator (modulo). This is also known as the string formatting or interpolation operator. Given format % values (where format is a string or Unicode object), % conversion specifications in format are replaced with zero or more elements of values. The effect is similar to the using sprintf in the C language. \n\nAnd the description of the conversion specifier to explain %8.2f\n\nA conversion specifier contains two or more characters and has the following components, which must occur in this order:\n\n\nThe '%' character, which marks the start of the specifier.\nMapping key (optional), consisting of a parenthesised sequence of characters (for example, (somename)).\nConversion flags (optional), which affect the result of some conversion types.\nMinimum field width (optional). If specified as an '*' (asterisk), the actual width is read from the next element of the tuple in values, and the object to convert comes after the minimum field width and optional precision.\nPrecision (optional), given as a '.' (dot) followed by the precision. If specified as '*' (an asterisk), the actual width is read from the next element of the tuple in values, and the value to convert comes after the precision.\nLength modifier (optional).\nConversion type.\n\nWhen the right argument is a dictionary (or other mapping type), the format string includes mapping keys (2). Breaking the example to 2 steps, we have a dictionary and a format that includes keys from the dictionary (the # is a key):\n>>> mydict = {'language':'python', '#':2}\n>>> '%(language)s has %(#)03d quote types.' % mydict\n'python has 002 quote types.'\n>>> \n\n",
"the %8.2f means allow 8 character spaces to hold the number given by the corrisponding variable holding a float, and then have decimal precision of 2.\n"
] |
[
7,
4,
1,
1,
1,
0
] |
[] |
[] |
[
"python",
"syntax"
] |
stackoverflow_0001044889_python_syntax.txt
|
Q:
How to do cleanup reliably in python?
I have some ctypes bindings, and for each body.New I should call body.Free. The library I'm binding doesn't have its allocation routines insulated from the rest of the code (they can be called just about anywhere), and to use a couple of useful features I need to make cyclic references.
I think it'd be solved if I could find a reliable way to hook a destructor to an object. (Weakrefs would help if they'd give me the callback just before the data is dropped.)
So obviously this code megafails when I put in velocity_func:
class Body(object):
def __init__(self, mass, inertia):
self._body = body.New(mass, inertia)
def __del__(self):
print '__del__ %r' % self
if body:
body.Free(self._body)
...
def set_velocity_func(self, func):
self._body.contents.velocity_func = ctypes_wrapping(func)
I also tried to solve it through weakrefs; with those, things just seem to get worse, only largely more unpredictable.
Even if I don't put in the velocity_func, cycles will appear at least when I do this:
class Toy(object):
def __init__(self, body):
self.body.owner = self
...
def collision(a, b, contacts):
whatever(a.body.owner)
So how to make sure Structures will get garbage collected, even if they are allocated/freed by the shared library?
There's repository if you are interested about more details: http://bitbucket.org/cheery/ctypes-chipmunk/
A:
What you want to do, that is create an object that allocates things and then deallocates automatically when the object is no longer in use, is almost impossible in Python, unfortunately. The __del__ method is not guaranteed to be called, so you can't rely on that.
The standard way in Python is simply:
try:
allocate()
dostuff()
finally:
cleanup()
Or since 2.5 you can also create context-managers and use the with statement, which is a neater way of doing that.
But both of these are primarily for when you allocate/lock in the beginning of a code snippet. If you want to have things allocated for the whole run of the program, you need to allocate the resource at startup, before the main code of the program runs, and deallocate afterwards. There is one situation which isn't covered here, and that is when you want to allocate and deallocate many resources dynamically and use them in many places in the code. For example if you want a pool of memory buffers or similar. But most of those cases are for memory, which Python will handle for you, so you don't have to bother about those. There are of course cases where you want to have dynamic pool allocation of things that are NOT memory, and then you would want the type of deallocation you try in your example, and that is tricky to do with Python.
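For the simpler allocate-use-free case, the with statement mentioned above can wrap the question's pair directly (a minimal sketch, assuming the body.New/body.Free pair from the question):
from contextlib import contextmanager

@contextmanager
def managed_body(mass, inertia):
    b = body.New(mass, inertia)   # allocate in the foreign library
    try:
        yield b                   # hand the pointer to the with-block
    finally:
        body.Free(b)              # freed even if the block raises

# usage:
# with managed_body(1.0, 0.5) as b:
#     do_stuff_with(b)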
A:
If weakrefs aren't broken, I guess this may work:
from weakref import ref
pointers = set()
class Pointer(object):
def __init__(self, cfun, ptr):
pointers.add(self)
self.ref = ref(ptr, self.cleanup)
        self.data = cast(ptr, c_void_p).value # python's cast is smart, but it can't be smarter than this.
self.cfun = cfun
def cleanup(self, obj):
print 'cleanup 0x%x' % self.data
self.cfun(self.data)
pointers.remove(self)
def cleanup(cfun, ptr):
Pointer(cfun, ptr)
I have yet to try it. The important piece is that the Pointer doesn't have any strong references to the foreign pointer, except an integer. This should work if ctypes doesn't free memory that I should free with the bindings. Yeah, it's basically a hack, but I think it may work better than the earlier things I've been trying.
Edit: Tried it, and it seems to work after some small fine-tuning of my code. A surprising thing is that even after I took __del__ out of all of my structures, it still seems to fail. Interesting but frustrating.
Neither works; by some weird chance I've been able to drop the cyclic references in places, but things stay broken.
Edit: Well... weakrefs WERE broken after all! So there's likely no solution for reliable cleanup in Python, except forcing it to be explicit.
|
How to do cleanup reliably in python?
|
I have some ctypes bindings, and for each body.New I should call body.Free. The library I'm binding doesn't have its allocation routines insulated from the rest of the code (they can be called just about anywhere), and to use a couple of useful features I need to make cyclic references.
I think it'd be solved if I could find a reliable way to hook a destructor to an object. (Weakrefs would help if they'd give me the callback just before the data is dropped.)
So obviously this code megafails when I put in velocity_func:
class Body(object):
def __init__(self, mass, inertia):
self._body = body.New(mass, inertia)
def __del__(self):
print '__del__ %r' % self
if body:
body.Free(self._body)
...
def set_velocity_func(self, func):
self._body.contents.velocity_func = ctypes_wrapping(func)
I also tried to solve it through weakrefs; with those, things just seem to get worse, only largely more unpredictable.
Even if I don't put in the velocity_func, cycles will appear at least when I do this:
class Toy(object):
def __init__(self, body):
self.body.owner = self
...
def collision(a, b, contacts):
whatever(a.body.owner)
So how to make sure Structures will get garbage collected, even if they are allocated/freed by the shared library?
There's repository if you are interested about more details: http://bitbucket.org/cheery/ctypes-chipmunk/
|
[
"What you want to do, that is create an object that allocates things and then deallocates automatically when the object is no longer in use, is almost impossible in Python, unfortunately. The del statement is not guaranteed to be called, so you can't rely on that. \nThe standard way in Python is simply:\ntry:\n allocate()\n dostuff()\nfinally:\n cleanup()\n\nOr since 2.5 you can also create context-managers and use the with statement, which is a neater way of doing that.\nBut both of these are primarily for when you allocate/lock in the beginning of a code snippet. If you want to have things allocated for the whole run of the program, you need to allocate the resource at startup, before the main code of the program runs, and deallocate afterwards. There is one situation which isn't covered here, and that is when you want to allocate and deallocate many resources dynamically and use them in many places in the code. For example of you want a pool of memory buffers or similar. But most of those cases are for memory, which Python will handle for you, so you don't have to bother about those. There are of course cases where you want to have dynamic pool allocation of things that are NOT memory, and then you would want the type of deallocation you try in your example, and that is tricky to do with Python.\n",
"If weakrefs aren't broken, I guess this may work:\nfrom weakref import ref\n\npointers = set()\n\nclass Pointer(object):\n def __init__(self, cfun, ptr):\n pointers.add(self)\n self.ref = ref(ptr, self.cleanup)\n self.data = cast(ptr, c_void_p).value # python cast it so smart, but it can't be smarter than this.\n self.cfun = cfun\n\n def cleanup(self, obj):\n print 'cleanup 0x%x' % self.data\n self.cfun(self.data)\n pointers.remove(self)\n\ndef cleanup(cfun, ptr):\n Pointer(cfun, ptr)\n\nI yet try it. The important piece is that the Pointer doesn't have any strong references to the foreign pointer, except an integer. This should work if ctypes doesn't free memory that I should free with the bindings. Yeah, it's basicly a hack, but I think it may work better than the earlier things I've been trying.\nEdit: Tried it, and it seem to work after small finetuning my code. A surprising thing is that even if I got del out from all of my structures, it seem to still fail. Interesting but frustrating.\nNeither works, from some weird chance I've been able to drop away cyclic references in places, but things stay broke.\nEdit: Well.. weakrefs WERE broken after all! so there's likely no solution for reliable cleanup in python, except than forcing it being explicit.\n"
] |
[
3,
0
] |
[
"In CPython, __del__ is a reliable destructor of an object, because it will always be called when the reference count reaches zero (note: there may be cases - like circular references of items with __del__ method defined - where the reference count will never reaches zero, but that is another issue).\nUpdate\nFrom the comments, I understand the problem is related to the order of destruction of objects: body is a global object, and it is being destroyed before all other objects, thus it is no longer available to them.\nActually, using global objects is not good; not only because of issues like this one, but also because of maintenance.\nI would then change your class with something like this\nclass Body(object):\n def __init__(self, mass, inertia):\n self._bodyref = body\n self._body = body.New(mass, inertia)\n\n def __del__(self):\n print '__del__ %r' % self\n if body:\n body.Free(self._body)\n\n... \n\ndef set_velocity_func(self, func):\n self._body.contents.velocity_func = ctypes_wrapping(func)\n\nA couple of notes:\n\nThe change is only adding a reference to the global body object, that thus will live at least as much as all the objects derived from that class.\nStill, using a global object is not good because of unit testing and maintenance; better would be to have a factory for the object, that will set the correct \"body\" to the class, and in case of unit test will easily put a mock object. But that's really up to you and how much effort you think makes sense in this project.\n\n"
] |
[
-1
] |
[
"ctypes",
"cyclic_reference",
"python"
] |
stackoverflow_0001044073_ctypes_cyclic_reference_python.txt
|
Q:
Custom Managers and "through"
I have a many-to-many relationship in my django application where I use the "add" method of the manager pretty heavily (ie album.photos.add() ).
I find myself needing to store some data about the many-to-many relationship now, but I don't want to lose the add method. Can I just set a default value for all the additional fields on the "through" model and somehow re-implement the add method?
I don't really know much about custom managers but I suspect that might be the right place to look.
Update:
Been reading up on custom managers. Maybe I can just keep the add/remove/etc from being disabled when I add the "through" argument to my Many-to-many field?
Does anyone know how to do that?
A:
Simplest way is to just add a method to Album (i.e. album.add_photo()) which handles the metadata and manually creates a properly-linked Photo instance.
If you want to get all funky, you can write a custom manager for Photos, make it the default (i.e. first assigned manager), set use_for_related_fields = True on it, and give it an add() method that is able to properly set the default metadata for the relationship.
Aside: seems like it wouldn't be too hard for Django to make this generic; instead of removing the add() method when there's a through table, just make add() accept arbitrary kwargs and treat them as data for the through table.
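A rough sketch of the first suggestion; the through model AlbumPhoto and its field names are assumptions for illustration, not actual Django API:
class Album(models.Model):
    # ... existing fields ...
    def add_photo(self, photo, **through_fields):
        # create the intermediate row by hand, supplying the metadata
        return AlbumPhoto.objects.create(album=self, photo=photo,
                                         **through_fields)

# usage: album.add_photo(photo, caption="summer")  # caption is hypothetical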
|
Custom Managers and "through"
|
I have a many-to-many relationship in my django application where I use the "add" method of the manager pretty heavily (ie album.photos.add() ).
I find myself needing to store some data about the many-to-many relationship now, but I don't want to lose the add method. Can I just set a default value for all the additional fields on the "through" model and somehow re-implement the add method?
I don't really know much about custom managers but I suspect that might be the right place to look.
Update:
Been reading up on custom managers. Maybe I can just keep the add/remove/etc from being disabled when I add the "through" argument to my Many-to-many field?
Does anyone know how to do that?
|
[
"Simplest way is to just add a method to Album (i.e. album.add_photo()) which handles the metadata and manually creates a properly-linked Photo instance.\nIf you want to get all funky, you can write a custom manager for Photos, make it the default (i.e. first assigned manager), set use_for_related_fields = True on it, and give it an add() method that is able to properly set the default metadata for the relationship.\nAside: seems like it wouldn't be too hard for Django to make this generic; instead of removing the add() method when there's a through table, just make add() accept arbitrary kwargs and treat them as data for the through table.\n"
] |
[
2
] |
[] |
[] |
[
"django",
"django_models",
"many_to_many",
"python"
] |
stackoverflow_0001038542_django_django_models_many_to_many_python.txt
|
Q:
Django: Model name clash
I am trying to use different open source apps in my project. Problem is that there is a same Model name used by two different apps with their own model definition.
I tried using:
class Meta:
db_table = "db_name"
but it didn't work. I am still getting field name clash error at syncdb. Any suggestions.
Update
I am actually trying to integrate Satchmo with Pinax. And the error is:
Error: One or more models did not validate:
contact.contact: Accessor for field 'user' clashes with related m2m field 'User.contact_set'. Add a related_name argument to the definition for 'user'.
friends.contact: Accessor for m2m field 'users' clashes with related field User.contact_set'. Add a related_name argument to the definition for 'users'.
You are right, table names are already unique. I analyzed the models, and the model 'Contact' is defined in two different apps. When I comment out one of these models, it works fine.
Maybe the error is there because both apps are on the PYTHONPATH, and the clash occurs when the other app defines its model with the same name.
A:
The problem is that both Satchmo and Pinax have a Contact model with a ForeignKey to User. Django tries to add a "contact_set" reverse relationship attribute to User for each of those ForeignKeys, so there is a clash.
The solution is to add something like related_name="pinax_contact_set" as an argument to the ForeignKey in Pinax's Contact model, or similarly in the Satchmo Contact model. That will require editing the source directly for one or the other. You might be able to find a way to do it via monkeypatching, but I'd expect that to be tricky.
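In code, the fix is just the extra argument on the clashing field (illustrative; only this one line changes, the rest of the Contact model stays as it is):
class Contact(models.Model):
    user = models.ForeignKey(User, related_name="pinax_contact_set")
    # ... rest of the model unchanged ...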
|
Django: Model name clash
|
I am trying to use different open source apps in my project. Problem is that there is a same Model name used by two different apps with their own model definition.
I tried using:
class Meta:
db_table = "db_name"
but it didn't work. I am still getting field name clash error at syncdb. Any suggestions.
Update
I am actually trying to integrate Satchmo with Pinax. And the error is:
Error: One or more models did not validate:
contact.contact: Accessor for field 'user' clashes with related m2m field 'User.contact_set'. Add a related_name argument to the definition for 'user'.
friends.contact: Accessor for m2m field 'users' clashes with related field User.contact_set'. Add a related_name argument to the definition for 'users'.
You are right, table names are already unique. I analyzed the models, and the model 'Contact' is defined in two different apps. When I comment out one of these models, it works fine.
Maybe the error is there because both apps are on the PYTHONPATH, and the clash occurs when the other app defines its model with the same name.
|
[
"The problem is that both Satchmo and Pinax have a Contact model with a ForeignKey to User. Django tries to add a \"contact_set\" reverse relationship attribute to User for each of those ForeignKeys, so there is a clash.\nThe solution is to add something like related_name=\"pinax_contact_set\" as an argument to the ForeignKey in Pinax's Contact model, or similarly in the Satchmo Contact model. That will require editing the source directly for one or the other. You might be able to find a way to do it via monkeypatching, but I'd expect that to be tricky.\n"
] |
[
6
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0001036506_django_django_models_python.txt
|
Q:
Formatting output when writing a list to textfile
I have a list of lists that looks like this:
dupe = [['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'apa.txt'], ['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt'], ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'apa2.txt'], ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'jude.txt']]
I write it to a file using a very basic function():
try:
file_name = open("dupe.txt", "w")
except IOError:
pass
for a in range (len(dupe)):
file_name.write(dupe[a][0] + " " + dupe[a][1] + " " + dupe[a][2] + "\n");
file_name.close()
With the output in the file looking like this:
95d1543adea47e88923c3d4ad56e9f65c2b40c76 ron\c apa.txt
95d1543adea47e88923c3d4ad56e9f65c2b40c76 ron\c knark.txt
b5cc17d3a35877ca8b76f0b2e07497039c250696 ron\a apa2.txt
b5cc17d3a35877ca8b76f0b2e07497039c250696 ron\a jude.txt
However, how can i make the output in the dupe.txt file to look like this:
95d1543adea47e88923c3d4ad56e9f65c2b40c76 ron\c apa.txt, knark.txt
b5cc17d3a35877ca8b76f0b2e07497039c250696 ron\a apa2.txt, jude.txt
A:
First, group the lines by the "key" (the first two elements of each array):
dupedict = {}
for a, b, c in dupe:
dupedict.setdefault((a,b),[]).append(c)
Then print it out:
for key, values in dupedict.iteritems():
print ' '.join(key), ', '.join(values)
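Writing that straight into the file in the question's desired format is then one more loop (a small sketch reusing dupedict from above):
with open("dupe.txt", "w") as f:
    for (sha, folder), files in dupedict.iteritems():
        f.write("%s %s %s\n" % (sha, folder, ", ".join(files)))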
A:
i take it your last question didn't solve your problem?
instead of putting each list with repeating IDs and directories in separate lists, why not make the file element of the list another sublist which contains all the files that have the same ID and directory.
so dupe would look like this:
dupe = [['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', ['apa.txt','knark.txt']],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', ['apa2.txt','jude.txt']]
then your print loop could be similar to:
for i in dupe:
    print i[0], i[1],
    for j in i[2]:
        print j,
    print
A:
from collections import defaultdict
dupe = [
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'apa.txt'],
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt'],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'apa2.txt'],
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'jude.txt'],
]
with open("dupe.txt", "w") as f:
data = defaultdict(list)
for hash, dir, fn in dupe:
data[(hash, dir)].append(fn)
for hash_dir, fns in data.items():
f.write("{0[0]} {0[1]} {1}\n".format(hash_dir, ', '.join(fns)))
A:
If this is your actual answer, you can:
Output one line per every two elements in dupe. This is easier. Or,
If your data isn't as structured, you can make a dictionary where your long hash is the key and the tail end of the string is your output. Make sense?
In idea one, that means you can do something like this:
tmp_string = ""
for a in range(len(dupe)):
    if a % 2 == 0:  # first row of a pair: start a new output line
        tmp_string = dupe[a][0] + " " + dupe[a][1] + " " + dupe[a][2]
    else:           # second row of a pair: append the file name and write
        tmp_string += ", " + dupe[a][2]
        file_name.write(tmp_string + "\n")
In idea two, you may have something like this:
x = dict()
for a in range(len(dupe)):
    # if the hash already exists in x, append the file name to its line
    if dupe[a][0] in x:
        x[dupe[a][0]] += ", " + dupe[a][2]
    else:
        x[dupe[a][0]] = dupe[a][0] + " " + dupe[a][1] + " " + dupe[a][2]
for b in x:  # for every key in dictionary x
    file_name.write(x[b] + "\n")
A:
Use a dict to group them:
data = [['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'apa.txt'], \
['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt'], \
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'apa2.txt'], \
['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'jude.txt']]
dupes = {}
for row in data:
if dupes.has_key(row[0]):
dupes[row[0]].append(row)
else:
dupes[row[0]] = [row]
for dupe in dupes.itervalues():
print "%s\t%s\t%s" % (dupe[0][0], dupe[0][1], ",".join([x[2] for x in dupe]))
|
Formatting output when writing a list to textfile
|
I have a list of lists that looks like this:
dupe = [['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'apa.txt'], ['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\c', 'knark.txt'], ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'apa2.txt'], ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\a', 'jude.txt']]
I write it to a file using a very basic function():
try:
file_name = open("dupe.txt", "w")
except IOError:
pass
for a in range (len(dupe)):
file_name.write(dupe[a][0] + " " + dupe[a][1] + " " + dupe[a][2] + "\n");
file_name.close()
With the output in the file looking like this:
95d1543adea47e88923c3d4ad56e9f65c2b40c76 ron\c apa.txt
95d1543adea47e88923c3d4ad56e9f65c2b40c76 ron\c knark.txt
b5cc17d3a35877ca8b76f0b2e07497039c250696 ron\a apa2.txt
b5cc17d3a35877ca8b76f0b2e07497039c250696 ron\a jude.txt
However, how can i make the output in the dupe.txt file to look like this:
95d1543adea47e88923c3d4ad56e9f65c2b40c76 ron\c apa.txt, knark.txt
b5cc17d3a35877ca8b76f0b2e07497039c250696 ron\a apa2.txt, jude.txt
|
[
"First, group the lines by the \"key\" (the first two elements of each array):\ndupedict = {}\nfor a, b, c in dupe:\n dupedict.setdefault((a,b),[]).append(c)\n\nThen print it out:\nfor key, values in dupedict.iteritems():\n print ' '.join(key), ', '.join(values)\n\n",
"i take it your last question didn't solve your problem?\ninstead of putting each list with repeating ID's and directories in seperate lists, why not make the file element of the list another sub list which contains all the files which have the same id and directory.\nso dupe would look like this:\ndupe = [['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', ['apa.txt','knark.txt']],\n['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\a', ['apa2.txt','jude.txt']]\n\nthen your print loop could be similar to:\nfor i in dupe:\n print i[0], i[1],\n for j in i[2]\n print j,\n print\n\n",
"from collections import defaultdict\n\ndupe = [\n ['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', 'apa.txt'],\n ['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', 'knark.txt'],\n ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\a', 'apa2.txt'],\n ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\a', 'jude.txt'],\n]\nwith open(\"dupe.txt\", \"w\") as f:\n data = defaultdict(list)\n for hash, dir, fn in dupe:\n data[(hash, dir)].append(fn)\n for hash_dir, fns in data.items():\n f.write(\"{0[0]} {0[1]} {1}\\n\".format(hash_dir, ', '.join(fns)))\n\n",
"If this is your actual answer, you can:\n\nOutput one line per every two elements in dupe. This is easier. Or, \nIf your data isn't as structured (so you may you can make a dictionary where your long hash is the key, and the tail end of the string is your output. Make sense?\n\nIn idea one, mean that you can something like this:\ntmp_string = \"\" \nfor a in range (len(dupe)):\nif isOdd(a):\n tmp_string = dupe[a][0] + \" \" + dupe[a][1] + \" \" + dupe[a][2]\nelse:\n tmp_string += \", \" + dupe[a][2]\n file_name.write(dupe[a][0] + \" \" + dupe[a][1] + \" \" + dupe[a][2] + \"\\n\");\n\nIn idea two, you may have something like this:\nx=dict()\nfor a in range(len(dupe)):\n # check if the hash exists in x; bad syntax - I dunno \"exists?\" syntax\n if (exists(x[dupe[a][0]])): \n x[a] += \",\" + dupe[a][2]\n else:\n x[a] = dupe[a][0] + \" \" + dupe[a][1] + \" \" + dupe[a][2]\nfor b in x: # bad syntax: basically, for every key in dictionary x\n file_name.write(x[b]);\n\n",
"Use a dict to group them:\ndata = [['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', 'apa.txt'], \\\n ['95d1543adea47e88923c3d4ad56e9f65c2b40c76', 'ron\\\\c', 'knark.txt'], \\\n ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\a', 'apa2.txt'], \\\n ['b5cc17d3a35877ca8b76f0b2e07497039c250696', 'ron\\\\a', 'jude.txt']]\n\ndupes = {}\nfor row in data:\n if dupes.has_key(row[0]):\n dupes[row[0]].append(row)\n else:\n dupes[row[0]] = [row]\n\nfor dupe in dupes.itervalues():\n print \"%s\\t%s\\t%s\" % (dupe[0][0], dupe[0][1], \",\".join([x[2] for x in dupe]))\n\n"
] |
[
2,
1,
1,
0,
0
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0001045699_list_python.txt
|
Q:
Is there any good reason to convert an app written in python to c#?
I have written several Python tools for my company. Is there any good reason to convert them from Python to C# now that their usefulness has been proven? Would I be better off just leaving them in Python?
A:
Quote: "If it doesn't break, don't fix it."
Unless your company is moving towards .NET and/or there are no more qualified Python developer available anymore, then don't.
A:
There's IronPython, a Python implementation for .NET. You could port your tools to that if you really need to get away from the "standard" Python VM and onto the .NET platform.
A:
Leave them as Python unless you hear a very good business reason to convert.
A:
Changing languages just for the sake of changing languages is rarely a good idea. If the app is doing its job then let it do its job. If you've got a corporate mandate to only use C#, that may be a different story (estimate the work involved, give the estimate to management, let them decide to pursue it or write up an exception). Even if there isn't a strong (or any) knowledge of Python across the organization, developers are rather proficient at picking up new languages {it's a survival thing}, so that tends to be less of a concern.
Moral of the story, if an app is to be rewritten, there should really be more of a justification to do the rewrite than just to change languages. If you were to add features that would be significantly easier to implement and maintain using another languages library/framework ... good justification. If maintaining the environment/framework for one language is causing a significant operational expense that could be saved by a re-write, cool. "Because our other code is in c#" ... not cool.
A:
If you're the only Python developer in a C# shop, then it makes plenty of sense to convert. If you quit tomorrow, no one will be able to maintain these systems.
A:
I would not convert unless you are converting as part of an enterprise-wide switch from one language to another. Even then I would avoid converting until absolutely necessary. If it works, why change it?
A:
As long as the application is running fine there is no reason to switch to C#.
A:
If you are looking for reasons to convert them, I can think of a few. These don't necessarily mean you should; these are just possible reasons in the "recode" corner.
Maintainability
If you have a dev shop that is primarily C# focused, then having Python applications around may not be useful for maintainability reasons. It would mean that they need to keep Python staffers around (assuming it's a complicated app) in order to maintain it. This probably isn't a restriction they want, especially if they don't intend to write anything in Python from here on out.
Consistency
This sort of falls under maintainability, but it is of a different flavour. If they wanted to integrate part of this (Python) application into a C# application, but not the whole thing, it's possible to write some boilerplate code, but again, that's messy to maintain. Ultimately, you would want the code of P_App to integrate seamlessly into C#_App, and not have to run them separately.
On the other side of the coin, it is fair to point out that you are throwing time and money at converting something which already works.
A:
Short and to the point answer: No, there is no reason.
A:
The only thing I can think of is the performance boost C# will give you.
A:
Some reasons that come to mind:
Performance
Easier to find developers
Huge huge huge developer and tools
ecosystem
But I second what the others have stated: if it ain't broke, don't fix it. Unless your boss is saying, "hey, we're moving all our crap to .NET", or you're presented with some other business reason for migrating, just stick with what you've got.
|
Is there any good reason to convert an app written in python to c#?
|
I have written several Python tools for my company. Is there any good reason to convert them from Python to C# now that their usefulness has been proven? Would I be better off just leaving them in Python?
|
[
"Quote: \"If it doesn't break, don't fix it.\"\nUnless your company is moving towards .NET and/or there are no more qualified Python developer available anymore, then don't.\n",
"There's IronPython , a python implementation for .NET. You could port it to that if you really need to get away from the \"standard\" python vm and onto the .NET platform\n",
"Leave them as Python unless you hear a very good business reason to convert.\n",
"Changing languages just for the sake of changing languages is rarely a good idea. If the app is doing its obj then let it do its job. If you've got a corporate mandate to only use C#, that may be a different story (estimate the work involved, give the estimate to management, let them decide to pursue it or write up an exception). Even if there isn't a strong (or any) knowledge of Python across the organization, developers are rather proficient at picking up new languages {it's a survival thing}, so that tends to be less of a concern.\nMoral of the story, if an app is to be rewritten, there should really be more of a justification to do the rewrite than just to change languages. If you were to add features that would be significantly easier to implement and maintain using another languages library/framework ... good justification. If maintaining the environment/framework for one language is causing a significant operational expense that could be saved by a re-write, cool. \"Because our other code is in c#\" ... not cool.\n",
"If you're the only Python developer in a C# shop, then it makes plenty of sense to convert. If you quit tomorrow, no one will be able to maintain these systems.\n",
"I would not convert unless you are converting as part of an enterprise-wide switch from one language to another. Even then I would avoid converting until absolutely necessary. If it works, why change it? \n",
"As long as the application is running fine there is no reason to switch to C#.\n",
"IF you are looking for reasons to convert them, I can think of a few. These don't necessarily mean you should, these are just possible reasons in the \"recode\" corner.\n\nMaintainability\n\nIf you have a dev-shop that is primarily C# focused, then have python applications around may not be useful for maintainability reasons. It would mean that they need to keep python staffers around (assuming it's a complicated app) in order to maintain it. This probably isn't a restriction they want, especially if they don't intend to write anything in python from here on out.\n\nConsistency\n\nThis sort of falls under maintainability, but it is of a different flavour. If they wanted to integrate part of this (python) application into a C# application, but not the whole thing, it's possible to write some boilerplate code, but again, that's messy to maintain. Ultimately, you would want to code of P_App to be able to be seamlessly integrated into C#_App, and not have to run them separately. \n\nOn the other side of the coin, it is fair to point out that you are throwing time and money at converting something which already works. \n",
"Short and to the point answer: No, there is no reason.\n",
"The only thing I can think of is the performance boost C# will give you.\n",
"Some reasons that come to mind:\n\nPerformance \nEasier to find developers\nHuge huge huge developer and tools\necosystem\n\nBut I second what the others have stated: if it ain't broke, don't fix it. Unless your boss is saying, \"hey, we're moving all our crap to .NET\", or you're presented with some other business reason for migrating, just stick with what you've got.\n"
] |
[
13,
8,
6,
4,
3,
1,
1,
1,
1,
0,
0
] |
[
"i will convert it from language a to language b for 1 million dollars. <--- this would be the only business reason I would consider legit.\n"
] |
[
-1
] |
[
".net",
"c#",
"python"
] |
stackoverflow_0001045334_.net_c#_python.txt
|
Q:
python observer pattern
I'm new to python but I've run into a hitch when trying to implement a variation of the observer pattern.
class X(models.Model):
    a = models.ForeignKey(Voter)
    b = models.CharField(max_length=200)
    # Register
    Y.register(X)
This doesn't seem to work because it says X is not defined. A couple of things are possible:
A) There is a way to refer to the current class (not the instance, but the class object).
B) You can't even run code outside a method. (I thought this may work almost like a static constructor - it would just get run once).
A:
In python, code defined in a class block is executed and only then, depending on various things---like what has been defined in this block---a class is created. So if you want to relate one class with another, you'd write:
class X(models.Model):
    a = models.ForeignKey(Voter)
    b = models.CharField(max_length=200)

# Register
Y.register(X)
And this behaviour is not related to django.
A:
There is nothing wrong with running (limited) code in the class definition:
class X(object):
    print("Loading X")
However, you can not refer to X because it is not yet fully defined.
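As an editorial aside (not from the answers above): since Python 2.6 a class decorator can perform the registration once the class object fully exists. This sketch assumes Y exposes a register() method, as in the question:
def register_with(registry):
    def decorator(cls):
        registry.register(cls)   # cls is fully created by the time this runs
        return cls
    return decorator

@register_with(Y)
class X(models.Model):
    a = models.ForeignKey(Voter)
    b = models.CharField(max_length=200)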
|
python observer pattern
|
I'm new to python but I've run into a hitch when trying to implement a variation of the observer pattern.
class X(models.Model):
    a = models.ForeignKey(Voter)
    b = models.CharField(max_length=200)
    # Register
    Y.register(X)
This doesn't seem to work because it says X is not defined. A couple of things are possible:
A) There is a way to refer to the current class (not the instance, but the class object).
B) You can't even run code outside a method. (I thought this may work almost like a static constructor - it would just get run once).
|
[
"In python, code defined in a class block is executed and only then, depending on various things---like what has been defined in this block---a class is created. So if you want to relate one class with another, you'd write:\nclass X(models.Model):\n a = models.ForeignKey(Voter)\n b = models.CharField(max_length=200)\n\n# Register \nY.register(X)\n\nAnd this behaviour is not related to django.\n",
"There is nothing wrong with running (limited) code in the class definition:\nclass X(object):\n print(\"Loading X\")\n\nHowever, you can not refer to X because it is not yet fully defined.\n"
] |
[
5,
4
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001046190_django_python.txt
|
Q:
Unable to put a variable in Python's print
My code:
year=[51-52,53,55,56,58,59,60,61]
photo=[{70,72,73},{64,65,68},{79,80,81,82},{74,77,78},{60,61,62},{84,85,87},{57,58,59},{53,54,55,56}]
for i in range(7):
    print "<img src=\"http://files.getdropbox.com/u/100000/Akuja/",year,"/P10104",photo,".JPG\">"
I run it and I get
File "/tmp/aku.py", line 2
photo=[{70,72,73},{64,65,68},{79,80,81,82},{74,77,78},{60,61,62},{84,85,87},{57,58,59},{53,54,55,56}]
^
SyntaxError: invalid syntax
I want the following output
<img src="http://files.getdropbox.com/u/100000/Akuja/51-52/P1010470.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/51-52/P1010472.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/51-52/P1010473.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/53/P1010464.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/53/P1010465.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/53/P1010468.JPG">
...
I also ran the same code with the list syntax, unsuccessfully:
photo={[70,72,73],...}
How can you make the Python code work?
A:
Braces are used to indicate a dictionary (associative array). You want to use square brackets, which indicates a list.
Also you probably don't want 51-52 in that first line, as that will evaluate to -1. You should put "51-52" to ensure that it is a string.
Then to get the indexing that you seem to want, you need to do year[i] instead of just year, which prints out the whole list.
A:
So here is your simple solution to your simple problem
year=['51-52', '53', '55' , '56' , '58', '59', '60', '61']
photo=[[70,72,73], [64,65,68],[79,80,81,82],[74,77,78],[60,61,62],[84,85,87],[57,58,59],[53,54,55,56]]
for i in range(len(year)):
    for j in range(len(photo[i])):
        print '<img src="http://files.getdropbox.com/u/100000/Akuja/%s/P10104%s.JPG">' % (year[i], photo[i][j])
A:
Since you are trying to iterate over years and print photos for that year I would do it this way:
year=["51-52","53","55","56","58","59","60","61"]
photo=[(70,72,73),(64,65,68),(79,80,81,82),(74,77,78),(60,61,62),
(84,85,87),(57,58,59),(53,54,55,56)]
# this dictionary will be generated with the code below
#photos = {
# "51-52": (70,72,73),
# "53": (64,65,68),
# "55": (79,80,81,82),
# "56": (74,77,78),
# "58": (60,61,62),
# "59": (84,85,87),
# "60": (57,58,59),
# "61": (53,54,55,56)
#}
photos = {} # create photos dictionary

for y in xrange(len(year)):
    photos[year[y]] = photo[y]

years = photos.keys() # sort the years
years.sort()

for year in years:
    for photo in photos[year]:
        print "<img src=\"http://files.getdropbox.com/u/100000/Akuja/"+year+"/P10104"+str(photo)+".JPG\">"
You get:
<img src="http://files.getdropbox.com/u/100000/Akuja/51-52/P1010470.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/51-52/P1010472.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/51-52/P1010473.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/53/P1010464.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/53/P1010465.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/53/P1010468.JPG">
I would store photos and years in dictionary as shown above but if you have years and photos in separate lists (as in your question) you can create the photos like this, note years are in quotes "":
photos = {}

for y in xrange(len(year)):
    photos[year[y]] = photo[y]
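As an editorial aside, the same nested loop can be written more compactly with zip (my sketch, not from the original answers; it assumes the corrected string and list literals shown above):
year = ['51-52', '53', '55', '56', '58', '59', '60', '61']
photo = [[70, 72, 73], [64, 65, 68], [79, 80, 81, 82], [74, 77, 78],
         [60, 61, 62], [84, 85, 87], [57, 58, 59], [53, 54, 55, 56]]

for y, ps in zip(year, photo):
    for p in ps:
        print('<img src="http://files.getdropbox.com/u/100000/Akuja/%s/P10104%s.JPG">' % (y, p))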
|
Unable to put a variable in Python's print
|
My code:
year=[51-52,53,55,56,58,59,60,61]
photo=[{70,72,73},{64,65,68},{79,80,81,82},{74,77,78},{60,61,62},{84,85,87},{57,58,59},{53,54,55,56}]
for i in range(7):
    print "<img src=\"http://files.getdropbox.com/u/100000/Akuja/",year,"/P10104",photo,".JPG\">"
I run it and I get
File "/tmp/aku.py", line 2
photo=[{70,72,73},{64,65,68},{79,80,81,82},{74,77,78},{60,61,62},{84,85,87},{57,58,59},{53,54,55,56}]
^
SyntaxError: invalid syntax
I want the following output
<img src="http://files.getdropbox.com/u/100000/Akuja/51-52/P1010470.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/51-52/P1010472.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/51-52/P1010473.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/53/P1010464.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/53/P1010465.JPG">
<img src="http://files.getdropbox.com/u/100000/Akuja/53/P1010468.JPG">
...
I also ran the same code with the list syntax, unsuccessfully:
photo={[70,72,73],...}
How can you make the Python code work?
|
[
"Braces are used to indicate a dictionary (associative array). You want to use square brackets, which indicates a list.\nAlso you probably don't want 51-52 in that first line, as that will evaluate to -1. You should put \"51-52\" to ensure that it is a string.\nThen to get the indexing that you seem to want, you need to do year**[i]** instead of just year, which prints out the whole list.\n",
"So here is your simple solution to your simple problem\nyear=['51-52', '53', '55' , '56' , '58', '59', '60', '61']\nphoto=[[70,72,73], [64,65,68],[79,80,81,82],[74,77,78],[60,61,62],[84,85,87],[57,58,59],[53,54,55,56]]\n\nfor i in range(len(year)):\n for j in range(len(photo[i])):\n print '<img src=\\\"http://files.getdropbox.com/u/100000/Akuja/%s/P10104%s.JPG>' % (year[i], photo[i][j])\n\n",
"Since you are trying to iterate over years and print photos for that year I would do it this way:\nyear=[\"51-52\",\"53\",\"55\",\"56\",\"58\",\"59\",\"60\",\"61\"]\nphoto=[(70,72,73),(64,65,68),(79,80,81,82),(74,77,78),(60,61,62),\n (84,85,87),(57,58,59),(53,54,55,56)]\n\n# this dictionary will be generated with the code below\n#photos = {\n# \"51-52\": (70,72,73),\n# \"53\": (64,65,68),\n# \"55\": (79,80,81,82),\n# \"56\": (74,77,78),\n# \"58\": (60,61,62),\n# \"59\": (84,85,87),\n# \"60\": (57,58,59),\n# \"61\": (53,54,55,56)\n#}\n\n\nphotos = {} # create photos dictionary\n\nfor y in xrange(len(year)):\n photos[year[y]] = photo[y]\n\nyears = photos.keys() # sort the years\nyears.sort()\n\nfor year in years:\n for photo in photos[year]:\n print \"<img src=\\\"http://files.getdropbox.com/u/100000/Akuja/\"+year+\"/P10104\"+str(photo)+\".JPG\\\">\"\n\nYou get:\n<img src=\"http://files.getdropbox.com/u/100000/Akuja/51-52/P1010470.JPG\">\n<img src=\"http://files.getdropbox.com/u/100000/Akuja/51-52/P1010472.JPG\">\n<img src=\"http://files.getdropbox.com/u/100000/Akuja/51-52/P1010473.JPG\">\n<img src=\"http://files.getdropbox.com/u/100000/Akuja/53/P1010464.JPG\">\n<img src=\"http://files.getdropbox.com/u/100000/Akuja/53/P1010465.JPG\">\n<img src=\"http://files.getdropbox.com/u/100000/Akuja/53/P1010468.JPG\">\n\nI would store photos and years in dictionary as shown above but if you have years and photos in separate lists (as in your question) you can create the photos like this, note years are in quotes \"\":\nphotos = {}\n\nfor y in xrange(len(year)):\n photos[year[y]] = photo[y]\n\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001046584_python.txt
|
Q:
Would one have to know the machine architecture to write code?
Let's say I'm programming in Java or Python or C++ for a simple problem; it could be building a TCP/UDP echo server or computing factorials. Do I have to bother about the architecture details, i.e., whether it is 32- or 64-bit?
IMHO, unless I'm programming something to do with fairly low-level stuff, I don't have to bother whether it's 32- or 64-bit. Where am I going wrong? Or am I correct?
A:
correct for most circumstances
The runtime/language/compiler will abstract those details unless you are dealing directly with word sizes or binary at a low level.
Even byteorder is abstracted by the NIC/Network stack in the kernel. It is translated for you. When programming sockets in C, you do sometimes have to deal with byte ordering for the network when sending data ... but that doesn't concern 32 or 64 bit differences.
When dealing with blobs of binary data, mapping them from one architecture to another (as an overlay to a C struct for example) can cause problems as others have mentioned, but this is why we develop architecture independent protocols based on characters and so on.
In fact, things like Java run in a virtual machine that abstracts the machine another step!
Knowing a bit about the instruction set of the architecture, and how the syntax is compiled to that can help you understand the platform and write cleaner, tighter code. I know I grimace at some old C code after studying compilers!
A:
Knowing how things work, be it how the virtual machine works, and how it works on your platform, or how certain C++ constructs are transformed into assembly will always make you a better programmer, because you will understand why things should be done the way they are.
You need to understand things like memory to know what cache-misses are and why those might affect your program. You should know how certain things are implemented, even though you might only use an interface or high-level way to get to it, knowing how it works will make sure you're doing it in the best way.
For packet work, you need to understand how data is stored on platforms and how sending that across the network to a different platform might change how the data is read (endian-ness).
Your compiler will make best use of the platform you're compiling on, so as long as you stick to standards and code well, you can ignore most things and assume the compiler will whip out what's best.
So in short, no. You don't need to know the low level stuff, but it never hurts to know.
A:
The last time I looked at the Java language spec, it contained a ridiculous gotcha in the section on integer boxing.
Integer a = 100;
Integer b = 100;
System.out.println(a == b);
That is guaranteed to print true.
Integer a = 300;
Integer b = 300;
System.out.println(a == b);
That is not guaranteed to print true. It depends on the runtime. The spec left it completely open. It's because boxing an int between -128 and 127 returns "interned" objects (analogous to the way string literals are interned), but the implementer of the language runtime is encouraged to raise that limit if they wish.
I personally regard that as an insane decision, and I hope they've fixed it since (write once, run anywhere?)
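As an editorial aside (not in the original answer), CPython has an analogous small-integer cache, so identity tests on ints show the same kind of implementation-dependent behaviour; int("...") is used here to defeat constant folding:
a = int("100")
b = int("100")
print(a is b)    # True on CPython: small ints (-5..256) are cached

a = int("300")
b = int("300")
print(a is b)    # False on CPython; identity of larger ints is not guaranteed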
A:
You sometimes must bother.
You can be surprised when these low-level details suddenly jump out and bite you. For example, Java standardized double to be 64 bits. However, the Linux JVM uses the "extended precision" mode, in which the double is 80 bits as long as it's in the CPU register. This means that the following code may fail:
double x = fun1();
double y = x;
System.out.println(fun2(x));
assert( y == x );
Simply because y is forced out of the register into memory and truncated from 80 to 64 bits.
A:
In Java and Python, architecture details are abstracted away, so that it is in fact more or less impossible to write architecture-dependent code.
With C++, this is an entirely different matter - you can certainly write code that does not depend on architecture details, but you have to be careful to avoid pitfalls, specifically concerning basic data types that are architecture-dependent, such as int.
A:
As long as you do things correctly, you almost never need to know for most languages. On many, you never need to know, as the language behavior doesn't vary (Java, for example, specifies the runtime behavior precisely).
In C++ and C, doing things correctly includes not making assumptions about int. Don't put pointers in int, and when you're doing anything with memory sizes or addresses use size_t and ptrdiff_t. Don't count on the size of data types: int must be at least 16 bits, almost always is 32, and may be 64 on some architectures. Don't assume that floating-point arithmetic will be done in exactly the same way on different machines (the IEEE standards have some leeway in them).
Pretty much all OSes that support networking will give you some way to deal with possible endianness problems. Use them. Use language facilities like isalpha() to classify characters, rather than arithmetic operations on characters (which might be something weird like EBCDIC). (Of course, it's now more usual to use wchar_t as character type, and use Unicode internally.)
A:
If you are programming in Python or in Java, the interpreter and the virtual machine respectively abstract this layer of the architecture. You then need not worry whether it's running on a 32- or 64-bit architecture.
The same cannot be said for C++, in which you'll sometimes have to ask yourself if you are running on a 32- or 64-bit machine.
A:
You will need to care about "endian-ness" only if you send and receive raw C structs over the wire, like:
ret = send(socket, &myStruct, sizeof(myStruct));
However, this is not a recommended practice. It's recommended that you define a protocol between the parties such that the parties' machine architectures don't matter.
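In Python, the struct module makes the same concern explicit. A minimal sketch (my illustration, not part of the original answer): the '!' prefix in the format string forces network byte order, so both ends agree regardless of their architectures:
import struct

# pack a 32-bit and a 16-bit unsigned int in network byte order ('!')
payload = struct.pack('!IH', 1234567, 80)

# the receiver unpacks with the same format string, on any architecture
a, b = struct.unpack('!IH', payload)
print((a, b))    # (1234567, 80)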
A:
In C++, you have to be very careful if you want to write code that works the same on 32-bit and 64-bit systems.
Many people wrongly assume that int can store a pointer, for example.
A:
With Java and .NET you don't really have to bother with it unless you are doing very low-level stuff like twiddling bits. If you are using C, C++, or Fortran you might get by, but I would actually recommend using things like "stdint.h", with definitive declarations like uint64_t and uint32_t, so as to be explicit. Also, you will need to build with particular libraries depending on how you are linking; for example, a 64-bit system might use gcc in a default 64-bit compile mode.
A:
A 32 bit machine will allow you to have a maximum of 4 GB of addressable virtual memory. (In practice, it's even less than that, usually 2 GB or 3 GB depending on the OS and various linker options.) On a 64 bit machine, you can have a HUGE virtual address space (in any practical sense, limited only by disk) and a pretty damn big RAM.
So if you are expecting 6GB data sets for some computation (let's say something that needs incoherent access and can't just be streamed a bit at a time), on a 64 bit architecture you could just read it into RAM and do your stuff, whereas on a 32 bit architecture you need a fundamentally different way to approach it, since you simply do not have the option of keeping the entire data set resident.
|
Would one have to know the machine architecture to write code?
|
Let's say I'm programming in Java or Python or C++ for a simple problem; it could be building a TCP/UDP echo server or computing factorials. Do I have to bother about the architecture details, i.e., whether it is 32- or 64-bit?
IMHO, unless I'm programming something to do with fairly low-level stuff, I don't have to bother whether it's 32- or 64-bit. Where am I going wrong? Or am I correct?
|
[
"correct for most circumstances\nThe runtime/language/compiler will abstract those details unless you are dealing directly with word sizes or binary at a low level.\nEven byteorder is abstracted by the NIC/Network stack in the kernel. It is translated for you. When programming sockets in C, you do sometimes have to deal with byte ordering for the network when sending data ... but that doesn't concern 32 or 64 bit differences.\nWhen dealing with blobs of binary data, mapping them from one architecture to another (as an overlay to a C struct for example) can cause problems as others have mentioned, but this is why we develop architecture independent protocols based on characters and so on.\nIn-fact things like Java run in a virtual machine that abstracts the machine another step!\nKnowing a bit about the instruction set of the architecture, and how the syntax is compiled to that can help you understand the platform and write cleaner, tighter code. I know I grimace at some old C code after studying compilers!\n",
"Knowing how things work, be it how the virtual machine works, and how it works on your platform, or how certain C++ constructs are transformed into assembly will always make you a better programmer, because you will understand why things should be done the way they are.\nYou need to understand things like memory to know what cache-misses are and why those might affect your program. You should know how certain things are implemented, even though you might only use an interface or high-level way to get to it, knowing how it works will make sure you're doing it in the best way.\nFor packet work, you need to understand how data is stored on platforms and how sending that across the network to a different platform might change how the data is read (endian-ness).\nYour compiler will make best use of the platform you're compiling on, so as long as you stick to standards and code well, you can ignore most things and assume the compiler will whip out what's best.\nSo in short, no. You don't need to know the low level stuff, but it never hurts to know.\n",
"The last time I looked at the Java language spec, it contained a ridiculous gotcha in the section on integer boxing.\nInteger a = 100;\nInteger b = 100;\n\nSystem.out.println(a == b);\n\nThat is guaranteed to print true.\nInteger a = 300;\nInteger b = 300;\n\nSystem.out.println(a == b);\n\nThat is not guaranteed to print true. It depends on the runtime. The spec left it completely open. It's because boxing an int between -128 and 127 returns \"interned\" objects (analogous to the way string literals are interned), but the implementer of the language runtime is encouraged to raise that limit if they wish.\nI personally regard that as an insane decision, and I hope they've fixed it since (write once, run anywhere?)\n",
"You sometimes must bother.\nYou can be surprised when these low-level details suddenly jump out and bite you. For example, Java standardized double to be 64 bit. However, Linux JVM uses the \"extended precision\" mode, when the double is 80 bit as long as it's in the CPU register. This means that the following code may fail:\ndouble x = fun1();\ndouble y = x;\n\nSystem.out.println(fun2(x));\n\nassert( y == x );\n\nSimply because y is forced out of the register into memory and truncated from 80 to 64 bits.\n",
"In Java and Python, architecture details are abstracted away so that it is in fact more or less impossible to write architecture-dependant code.\nWith C++, this is an entirely different matter - you can certainly write code that does not depend on architecture details, but you have be careful to avoid pitfalls, specifically concerning basic data types that are are architecture-dependant, such as int.\n",
"As long as you do things correctly, you almost never need to know for most languages. On many, you never need to know, as the language behavior doesn't vary (Java, for example, specifies the runtime behavior precisely).\nIn C++ and C, doing things correctly includes not making assumptions about int. Don't put pointers in int, and when you're doing anything with memory sizes or addresses use size_t and ptrdiff_t. Don't count on the size of data types: int must be at least 16 bits, almost always is 32, and may be 64 on some architectures. Don't assume that floating-point arithmetic will be done in exactly the same way on different machines (the IEEE standards have some leeway in them).\nPretty much all OSes that support networking will give you some way to deal with possible endianness problems. Use them. Use language facilities like isalpha() to classify characters, rather than arithmetic operations on characters (which might be something weird like EBCDIC). (Of course, it's now more usual to use wchar_t as character type, and use Unicode internally.)\n",
"If you are programming in Python or in Java, the interpreter and the virtual machine respectively abstract this layer of the architecture. You then need not to worry if it's running on a 32 or 64 bits architecture.\nThe same cannot be said for C++, in which you'll have to ask yourself sometimes if you are running on a 32 or 64 bits machine\n",
"You will need to care about \"endian-ness\" only if you send and receive raw C structs \nover the wire like\n\nret = send(socket, &myStruct, sizeof(myStruct));\n\nHowever this is not a recommended practice.\nIt's recommended that you define a protocol between the parties such it doesn't matter\nthe parties' machine architectures.\n",
"In C++, you have to be very careful if you want to write code that works indifferently on 32 or 64 bits.\nMany people wrongly assume that int can store a pointer, for example.\n",
"With java and .net you don't really have to bother with it unless you are doing very low level stuff like twiddling bits. If you are using c, c++, fortran you might get by but I would actually recommend using things like \"stdint.h\" where you use definitive declares like uint64_t and uint32_t so as to be explicit. Also, you will need to build with particularly libraries depending on how you are linking, for example a 64 bit system might use gcc in a default 64 bit compile mode.\n",
"A 32 bit machine will allow you to have a maximum of 4 GB of addressable virtual memory. (In practice, it's even less than that, usually 2 GB or 3 GB depending on the OS and various linker options.) On a 64 bit machine, you can have a HUGE virtual address space (in any practical sense, limited only by disk) and a pretty damn big RAM.\nSo if you are expecting 6GB data sets for some computation (let's say something that needs incoherent access and can't just be streamed a bit at a time), on a 64 bit architecture you could just read it into RAM and do your stuff, whereas on a 32 bit architecture you need a fundamentally different way to approach it, since you simply do not have the option of keeping the entire data set resident.\n"
] |
[
16,
16,
8,
6,
3,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"32_bit",
"64_bit",
"c++",
"java",
"python"
] |
stackoverflow_0001046068_32_bit_64_bit_c++_java_python.txt
|
Q:
What is the most secure python "password" encryption
I am making a little webgame that has tasks and solutions; the solutions are solved by entering a code given to the user after completion of a task. To have some security (against cheating) I don't want to store the codes generated by the game in plain text. But since I need to be able to give a player the code when he has accomplished the task, I can't hash it, since then I can't retrieve it.
So what is the most secure way to encrypt/decrypt something using python?
A:
The most secure encryption is no encryption. Passwords should be reduced to a hash. This is a one-way transformation, making the password (almost) unrecoverable.
When giving someone a code, you can do the following to be actually secure.
(1) generate some random string.
(2) give them the string.
(3) save the hash of the string you generated.
Once.
If they "forget" the code, you have to (1) be sure they're authorized to be given the code, then (2) do the process again (generate a new code, give it to them, save the hash.)
A:
If your script can decode the passwords, so can someone breaking in to your server. Encryption is only really useful when someone enters a password to unlock it - if it remains unlocked (or the script has the password to unlock it), the encryption is pointless
This is why hashing is more useful, since it is a one way process - even if someone knows your password hash, they don't know the plain-text they must enter to generate it (without lots of brute-force)
I wouldn't worry about keeping the game passwords as plain-text. If you are concerned about securing them, fix up possible SQL injections etc., make sure your web-server and other software is up to date and configured correctly, and so on.
Perhaps think of a way to make it less appealing to steal the passwords than actually play the game? For example, there was a game (I don't recall what it was) which if you used the level skip cheat, you went to the next level but it didn't mark it as "complete", or you could skip the level but didn't get any points. Or look at Project Euler, you can do any level, but you only get points if you enter the answer (and working out the answer is the whole point of the game, so cheating defeats the playing of the game)
If you are really paranoid, you could possibly use asymmetric crypto, where you basically encrypt something with key A, and you can only read it with key B..
I came up with a similar concept for using GPG encryption (a popular asymmetric crypto system, mainly used for email encryption or signing) to secure website data. I'm not quite sure how this would apply to securing game level passwords, and as I said, you'd need to be really paranoid to even consider this..
In short, I'd say store the passwords in plain-text, and concentrate your security-concerns elsewhere (the web applications code itself)
A:
If it's a web game, can't you store the codes server side and send them to the client when he completed a task? What's the architecture of your game?
As for encryption, maybe try something like pyDes?
A:
Your question isn't quite clear. Where do you want the decryption to occur? One way or another, the plaintext has to surface since you need players to eventually know it.
Pick a cipher and be done with it.
|
What is the most secure python "password" encryption
|
I am making a little webgame that has tasks and solutions; the solutions are solved by entering a code given to the user after completion of a task. To have some security (against cheating) I don't want to store the codes generated by the game in plain text. But since I need to be able to give a player the code when he has accomplished the task, I can't hash it, since then I can't retrieve it.
So what is the most secure way to encrypt/decrypt something using python?
|
[
"The most secure encryption is no encryption. Passwords should be reduced to a hash. This is a one-way transformation, making the password (almost) unrecoverable.\nWhen giving someone a code, you can do the following to be actually secure. \n(1) generate some random string.\n(2) give them the string.\n(3) save the hash of the string you generated.\nOnce.\nIf they \"forget\" the code, you have to (1) be sure they're authorized to be given the code, then (2) do the process again (generate a new code, give it to them, save the hash.)\n",
"If your script can decode the passwords, so can someone breaking in to your server. Encryption is only really useful when someone enters a password to unlock it - if it remains unlocked (or the script has the password to unlock it), the encryption is pointless\nThis is why hashing is more useful, since it is a one way process - even if someone knows your password hash, they don't know the plain-text they must enter to generate it (without lots of brute-force)\nI wouldn't worry about keeping the game passwords as plain-text. If you are concerned about securing them, fix up possibly SQL injections/etc, make sure your web-server and other software is up to date and configured correctly and so on.\nPerhaps think of a way to make it less appealing to steal the passwords than actually play the game? For example, there was a game (I don't recall what it was) which if you used the level skip cheat, you went to the next level but it didn't mark it as \"complete\", or you could skip the level but didn't get any points. Or look at Project Euler, you can do any level, but you only get points if you enter the answer (and working out the answer is the whole point of the game, so cheating defeats the playing of the game)\nIf you are really paranoid, you could possibly use asymmetric crypto, where you basically encrypt something with key A, and you can only read it with key B..\nI came up with an similar concept for using GPG encryption (popular asymmetric crypto system, mainly used for email encryption or signing) to secure website data. I'm not quite sure how this would apply to securing game level passwords, and as I said, you'd need to be really paranoid to even consider this..\nIn short, I'd say store the passwords in plain-text, and concentrate your security-concerns elsewhere (the web applications code itself)\n",
"If it's a web game, can't you store the codes server side and send them to the client when he completed a task? What's the architecture of your game?\nAs for encryption, maybe try something like pyDes?\n",
"Your question isn't quite clear. Where do you want the decryption to occur? One way or another, the plaintext has to surface since you need players to eventually know it.\nPick a cipher and be done with it.\n"
] |
[
5,
5,
2,
0
] |
[] |
[] |
[
"encryption",
"python",
"security"
] |
stackoverflow_0001043735_encryption_python_security.txt
|
Q:
Can a slow network cause a Python app to use *more* CPU?
Let's say we have a system like this:
______
{ application instances ---network--- (______)
{ application instances ---network--- | |
requests ---> load balancer { application instances ---network--- | data |
{ application instances ---network--- | base |
{ application instances ---network--- \______/
A request comes in, a load balancer sends it to an application server instance, and the app server instances talk to a database (elsewhere on the LAN). The application instances can either be separate processes or separate threads. Just to cover all the bases, let's say there are several identical processes, each with a pool of identical application service threads.
If the database is performing slowly, or the network gets bogged down, clearly the throughput of request servicing is going to get worse.
Now, in all my pre-Python experience, this would be accompanied by a corresponding drop in CPU usage by the application instances -- they'd be spending more time blocking on I/O and less time doing CPU-intensive things.
However, I'm being told that with Python, this is not the case -- under certain Python circumstances, this situation can cause Python's CPU usage to go up, perhaps all the way to 100%. Something about the Global Interpreter Lock and the multiple threads supposedly causes Python to spend all its time switching between threads, checking to see if any of them have an answer yet from the database. "Hence the rise in single-process event-driven libraries of late."
Is that correct? Do Python application service threads actually use more CPU when their I/O latency increases?
A:
In theory, no; in practice, it's possible. It depends on what you're doing.
There's a full hour-long video and pdf about it, but essentially it boils down to some unforeseen consequences of the GIL with CPU vs IO bound threads with multicores. Basically, a thread waiting on IO needs to wake up, so Python begins "pre-empting" other threads every Python "tick" (instead of every 100 ticks). The IO thread then has trouble taking the GIL from the CPU thread, causing the cycle to repeat.
That's grossly oversimplified, but that's the gist of it. The video and slides have more information. It manifests itself as a larger problem on multi-core machines. It could also occur if the process received signals from the OS (since that triggers the thread-switching code, too).
Of course, as other posters have said, this goes away if each has its own process.
Coincidentally, the slides and video explain why you can't CTRL+C in Python sometimes.
A:
The key is to launch the application instances in separate processes. Otherwise multi-threading issues seem likely to follow.
A:
No, this is not the case. Stop spreading the FUD.
If your Python app is blocked on a C API call, e.g. blocking sockets or a file read, it probably released the GIL.
A:
Something about the Global Interpreter Lock and the multiple threads supposedly causes Python to spend all its time switching between threads, checking to see if any of them have an answer yet from the database.
That is completely baseless. If all threads are blocked on I/O, Python should use 0% CPU. If there is one unblocked thread, it will be able to run without GIL contention; it will periodically release and reacquire the GIL, but it doesn't do any work "checking up" on the other threads.
However, it is possible on multicore systems for a thread to have to wait a while to reacquire the GIL if there is a CPU-bound thread, and for response times to drop (see this presentation for details). This shouldn't be an issue for most servers, though.
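To see that blocking I/O releases the GIL, here is a small sketch (my illustration; time.sleep stands in for a blocked socket read). The CPU-bound function keeps running at full speed while the other thread is blocked:
import threading
import time

def io_bound():
    time.sleep(2)               # the GIL is released while blocked, like a socket recv

counter = [0]
def cpu_bound():
    end = time.time() + 2
    while time.time() < end:
        counter[0] += 1         # needs the GIL on every iteration

t = threading.Thread(target=io_bound)
t.start()
cpu_bound()
t.join()
print(counter[0])               # a large count: the blocked thread did not burn CPU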
|
Can a slow network cause a Python app to use *more* CPU?
|
Let's say we have a system like this:
______
{ application instances ---network--- (______)
{ application instances ---network--- | |
requests ---> load balancer { application instances ---network--- | data |
{ application instances ---network--- | base |
{ application instances ---network--- \______/
A request comes in, a load balancer sends it to an application server instance, and the app server instances talk to a database (elsewhere on the LAN). The application instances can either be separate processes or separate threads. Just to cover all the bases, let's say there are several identical processes, each with a pool of identical application service threads.
If the database is performing slowly, or the network gets bogged down, clearly the throughput of request servicing is going to get worse.
Now, in all my pre-Python experience, this would be accompanied by a corresponding drop in CPU usage by the application instances -- they'd be spending more time blocking on I/O and less time doing CPU-intensive things.
However, I'm being told that with Python, this is not the case -- under certain Python circumstances, this situation can cause Python's CPU usage to go up, perhaps all the way to 100%. Something about the Global Interpreter Lock and the multiple threads supposedly causes Python to spend all its time switching between threads, checking to see if any of them have an answer yet from the database. "Hence the rise in single-process event-driven libraries of late."
Is that correct? Do Python application service threads actually use more CPU when their I/O latency increases?
|
[
"In theory, no, in practice, its possible; it depends on what you're doing.\nThere's a full hour-long video and pdf about it, but essentially it boils down to some unforeseen consequences of the GIL with CPU vs IO bound threads with multicores. Basically, a thread waiting on IO needs to wake up, so Python begins \"pre-empting\" other threads every Python \"tick\" (instead of every 100 ticks). The IO thread then has trouble taking the GIL from the CPU thread, causing the cycle to repeat.\nThats grossly oversimplified, but thats the gist of it. The video and slides has more information. It manifests itself and a larger problem on multi-core machines. It could also occur if the process received signals from the os (since that triggers the thread switching code, too).\nOf course, as other posters have said, this goes away if each has its own process.\nCoincidentally, the slides and video explain why you can't CTRL+C in Python sometimes.\n",
"The key is to launch the application instances in separate processes. Otherwise multi-threading issues seem to be likely to follow.\n",
"No this is not the case. Stop spreading the FUD.\nIf your python app is blocked on a C API call ex. blocking sockets or file read, it probably released the GIL.\n",
"\nSomething about the Global Interpreter Lock and the multiple threads supposedly causes Python to spend all its time switching between threads, checking to see if any of them have an answer yet from the database. \n\nThat is completely baseless. If all threads are blocked on I/O, Python should use 0% CPU. If there is one unblocked thread, it will be able to run without GIL contention; it will periodically release and reacquire the GIL, but it doesn't do any work \"checking up\" on the other threads.\nHowever, it is possible on multicore systems for a thread to have to wait a while to reacquire the GIL if there is a CPU-bound thread, and for response times to drop (see this presentation for details). This shouldn't be an issue for most servers, though.\n"
] |
[
6,
1,
1,
1
] |
[] |
[] |
[
"multithreading",
"python"
] |
stackoverflow_0001046873_multithreading_python.txt
|
Q:
Does Python's heapify() not play well with list comprehension and slicing?
I found an interesting bug in a program that I implemented somewhat lazily, and wondered if I'm comprehending it correctly. The short version is that Python's heapq implementation doesn't actually order a list, it merely groks the list in a heap-centric way. Specifically, I was expecting heapify() to result in an ordered list that facilitated list comprehension in an ordered fashion.
Using a priority queue example, as in the Python documentation:
from heapq import heapify, heappush, heappop
from random import sample

class Item(object):
    def __init__(self, name):
        self.name = name

lst = []
# iterate over a pseudo-random list of unique numbers
for i in sample(range(100), 15):
    it = Item("Some name for %i" % i)
    heappush(lst, (i, it))

print([i[0] for i in lst])
Results in
>>> [2, 22, 7, 69, 32, 40, 10, 97, 89, 33, 45, 51, 94, 27, 67]
This, we note, is not the original ordering of the list, but apparently some heap-centric ordering as described here. I was lazily expecting this to be fully ordered.
As a test, running the list through heapify() results in no change (as the list is already heap-ishly ordered):
heapify(lst)
print([i[0] for i in lst])
>>> [2, 22, 7, 69, 32, 40, 10, 97, 89, 33, 45, 51, 94, 27, 67]
Whereas iterating through the list with the heappop() function results in ordering as expected:
lst2 = []
while lst: lst2.append(heappop(lst))
print([i[0] for i in lst2])
>>> [2, 7, 10, 22, 27, 32, 33, 40, 45, 51, 67, 69, 89, 94, 97]
So, it would seem that heapq does not order a list (at least in the human sense of the word), but rather the heappush() and heappop() functions are able to grok the heap-ishly ordered list.
The result: Any slicing and list comprehension operations on a heapified list will yield non-ordered results.
Is this true, and is this always true?
(BTW: Python 3.0.1 on a WinXP system)
A:
A heap is not a sorted list (it's a representation of a partially sorted binary tree).
So yes, you're right, if you expect a heapified list to behave like a sorted list, you'll be disappointed. The only sorting assumption you can make about a heap is that heap[0] is always its smallest element.
(It's difficult to add much to what you've already written - your question is an excellent writeup of How Things Are. 8-)
A:
The result: Any slicing and list comprehension operations on a heapified list will yield non-ordered results.
Is this true, and is this always true?
If you just want to get a one-time sorted list, use:
myList.sort()
Priority queues/heaps can be used to implement a sort, or they can be used to keep a queue in priority form. Insertions into a heap are O(lg n), gets are O(1), and removals are O(lg n), which is a lot better than just resorting the entire list over and over again.
A:
"""I was expecting heapify() to result in an ordered list that facilitated list comprehension in an ordered fashion.""": If this expectation was based on a reading of the manual, you should raise a docs bug report.
""" The result: Any slicing and list comprehension operations on a heapified list will yield non-ordered results. Is this true, and is this always true?""": Just like e.g. random.shuffle(), the mentioned activity is not defined to produce "ordered" results. It may produce "ordered" results occasionally, but this is coincidental and not to be relied on and not worth asking (IMHO).
A:
" The result: Any slicing and list comprehension operations on a heapified list will yield non-ordered results. Is this true, and is this always true?" No, it is not always true. Although it will be non-ordered most of the time, it is possible for it to be ordered. heapify() produces a list that satisfies the "heap invariant". In this case, it is a min-heap. It turns out that a sorted list also satisfies the heap invariant (see heapq paragraph 4: "heap.sort() maintains the heap invariant"). So in theory it is possible that a heapified list will also happen to be sorted.
|
Does Python's heapify() not play well with list comprehension and slicing?
|
I found an interesting bug in a program that I implemented somewhat lazily, and wondered if I'm comprehending it correctly. The short version is that Python's heapq implementation doesn't actually order a list, it merely groks the list in a heap-centric way. Specifically, I was expecting heapify() to result in an ordered list that facilitated list comprehension in an ordered fashion.
Using a priority queue example, as in the Python documentation:
from heapq import heapify, heappush, heappop
from random import sample

class Item(object):
    def __init__(self, name):
        self.name = name

lst = []
# iterate over a pseudo-random list of unique numbers
for i in sample(range(100), 15):
    it = Item("Some name for %i" % i)
    heappush(lst, (i, it))

print([i[0] for i in lst])
Results in
>>> [2, 22, 7, 69, 32, 40, 10, 97, 89, 33, 45, 51, 94, 27, 67]
This, we note, is not the original ordering of the list, but apparently some heap-centric ordering as described here. I was lazily expecting this to be fully ordered.
As a test, running the list through heapify() results in no change (as the list is already heap-ishly ordered):
heapify(lst)
print([i[0] for i in lst])
>>> [2, 22, 7, 69, 32, 40, 10, 97, 89, 33, 45, 51, 94, 27, 67]
Whereas iterating through the list with the heappop() function results in ordering as expected:
lst2 = []
while lst: lst2.append(heappop(lst))
print([i[0] for i in lst2])
>>> [2, 7, 10, 22, 27, 32, 33, 40, 45, 51, 67, 69, 89, 94, 97]
So, it would seem that heapq does not order a list (at least in the human sense of the word), but rather the heappush() and heappop() functions are able to grok the heap-ishly ordered list.
The result: Any slicing and list comprehension operations on a heapified list will yield non-ordered results.
Is this true, and is this always true?
(BTW: Python 3.0.1 on a WinXP system)
|
[
"A heap is not a sorted list (it's a representation of a partially sorted binary tree).\nSo yes, you're right, if you expect a heapified list to behave like a sorted list, you'll be disappointed. The only sorting assumption you can make about a heap is that heap[0] is always its smallest element.\n(It's difficult to add much to what you've already written - your question is an excellent writeup of How Things Are. 8-)\n",
"\nThe result: Any slicing and list\n comprehension operations on a\n heapified list will yield non-ordered\n results.\nIs this true, and is this always true?\n\nIf you just want to get a one-time sorted list, use:\nmyList.sort()\n\nPriority queues/heaps can be used to implement a sort, or they can be used to keep a queue in priority form. Insertions into a heap are O(lg n), gets are O(1), and removals are O(lg n), which is a lot better than just resorting the entire list over and over again.\n",
"\"\"\"I was expecting heapify() to result in an ordered list that facilitated list comprehension in an ordered fashion.\"\"\": If this expectation was based on a reading of the manual, you should raise a docs bug report.\n\"\"\" The result: Any slicing and list comprehension operations on a heapified list will yield non-ordered results. Is this true, and is this always true?\"\"\": Just like e.g. random.shuffle(), the mentioned activity is not defined to produce \"ordered\" results. It may produce \"ordered\" results occasionally, but this is coincidental and not to be relied on and not worth asking (IMHO).\n",
"\" The result: Any slicing and list comprehension operations on a heapified list will yield non-ordered results. Is this true, and is this always true?\" No, it is not always true. Although it will be non-ordered most of the time, it is possible for it to be ordered. heapify() produces a list that satisfies the \"heap invariant\". In this case, it is a min-heap. It turns out that a sorted list also satisfies the heap invariant (see heapq paragraph 4: \"heap.sort() maintains the heap invariant\"). So in theory it is possible that a heapified list will also happen to be sorted.\n"
] |
[
9,
0,
0,
0
] |
[] |
[] |
[
"heap",
"list_comprehension",
"python"
] |
stackoverflow_0001046683_heap_list_comprehension_python.txt
|
Q:
orbited comment server issue
I tried installing Orbited on Vista, but I get the following error when I try to run the Orbited server. When I type orbited at the Twisted command prompt, I get the following output.
C:\>orbited
Traceback (most recent call last):
File "C:\Python26\scripts\orbited-script.py", line 8, in <module>
load_entry_point('orbited==0.7.9', 'console_scripts', 'orbited')()
File "C:\Python26\lib\site-packages\orbited-0.7.9-py2.6.egg\orbited\start.py",
line 75, in main
logging.setup(config.map)
File "C:\Python26\lib\site-packages\orbited-0.7.9-py2.6.egg\orbited\logging.py
", line 33, in setup
defaults[logtype][-1].open()
File "C:\Python26\lib\site-packages\orbited-0.7.9-py2.6.egg\orbited\logging.py
", line 195, in open
self.f = open(self.filename, 'a')
IOError: [Errno 13] Permission denied: 'debug.log'
A:
Do you have write permission on the file debug.log (and the directory it's to be placed in, which I think is the current directory)? If not, you could try tweaking the config.map being used to setup the logging subsystem (about midway through this stack trace).
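You can check this from Python itself; a quick sketch (my illustration) using the standard os.access:
import os

print(os.access('debug.log', os.W_OK))    # is an existing debug.log writable?
print(os.access(os.getcwd(), os.W_OK))    # can new files be created in the cwd?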
|
orbited comment server issue
|
I tried installing Orbited on Vista, but I get the following error when I try to run the Orbited server. When I type orbited at the Twisted command prompt, I get the following output.
C:\>orbited
Traceback (most recent call last):
File "C:\Python26\scripts\orbited-script.py", line 8, in <module>
load_entry_point('orbited==0.7.9', 'console_scripts', 'orbited')()
File "C:\Python26\lib\site-packages\orbited-0.7.9-py2.6.egg\orbited\start.py",
line 75, in main
logging.setup(config.map)
File "C:\Python26\lib\site-packages\orbited-0.7.9-py2.6.egg\orbited\logging.py
", line 33, in setup
defaults[logtype][-1].open()
File "C:\Python26\lib\site-packages\orbited-0.7.9-py2.6.egg\orbited\logging.py
", line 195, in open
self.f = open(self.filename, 'a')
IOError: [Errno 13] Permission denied: 'debug.log'
|
[
"Do you have write permission on the file debug.log (and the directory it's to be placed in, which I think is the current directory)? If not, you could try tweaking the config.map being used to setup the logging subsystem (about midway through this stack trace).\n"
] |
[
1
] |
[] |
[] |
[
"comet",
"orbited",
"python",
"twisted"
] |
stackoverflow_0001047349_comet_orbited_python_twisted.txt
|
Q:
Easiest way to persist a data structure to a file in python?
Let's say I have something like this:
d = { "abc" : [1, 2, 3], "qwerty" : [4,5,6] }
What's the easiest way to programmatically get that into a file that I can load from Python later?
Can I somehow save it as python source (from within a python script, not manually!), then import it later?
Or should I use JSON or something?
A:
Use the pickle module.
import pickle
d = { "abc" : [1, 2, 3], "qwerty" : [4,5,6] }
afile = open(r'C:\d.pkl', 'wb')
pickle.dump(d, afile)
afile.close()
#reload object from file
file2 = open(r'C:\d.pkl', 'rb')
new_d = pickle.load(file2)
file2.close()
#print dictionary object loaded from file
print new_d
A:
Take your pick: Python Standard Library - Data Persistance. Which one is most appropriate can vary by what your specific needs are.
pickle is probably the simplest and most capable as far as "write an arbitrary object to a file and recover it" goes—it can automatically handle custom classes and circular references.
For the best pickling performance (speed and space), use cPickle at HIGHEST_PROTOCOL.
A:
Try the shelve module which will give you persistent dictionary, for example:
import shelve
d = { "abc" : [1, 2, 3], "qwerty" : [4,5,6] }
shelf = shelve.open('shelf_file')
for key in d:
    shelf[key] = d[key]
shelf.close()
....
# reopen the shelf
shelf = shelve.open('shelf_file')
print(shelf) # => {'qwerty': [4, 5, 6], 'abc': [1, 2, 3]}
A:
JSON has faults, but when it meets your needs, it is also:
simple to use
included in the standard library as the json module
interface somewhat similar to pickle, which can handle more complex situations
human-editable text for debugging, sharing, and version control
valid Python code
well-established on the web (if your program touches any of that domain)
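For example, a minimal round-trip with the standard json module (my sketch; json entered the standard library in Python 2.6):
import json

d = { "abc" : [1, 2, 3], "qwerty" : [4,5,6] }

f = open('d.json', 'w')
json.dump(d, f)
f.close()

f = open('d.json')
new_d = json.load(f)
f.close()

print(new_d)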
A:
You also might want to take a look at Zope's Object Database as your needs get more complex :-) Probably overkill for what you have, but it scales well and is not too hard to use.
A:
Just to add to the previous suggestions, if you want the file format to be easily readable and modifiable, you can also use YAML. It works extremely well for nested dicts and lists, but scales for more complex data structures (i.e. ones involving custom objects) as well, and its big plus is that the format is readable.
A:
If you want to save it in an easy-to-read, JSON-like format, use repr to serialize the object and eval to deserialize it. (Be aware that eval executes arbitrary code, so only use it on data you trust.)
repr(object) -> string
Return the canonical string representation of the object.
For most object types, eval(repr(object)) == object.
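A minimal round-trip sketch of this approach (my illustration; again, eval should only see trusted data):
d = { "abc" : [1, 2, 3], "qwerty" : [4,5,6] }
s = repr(d)        # "{'abc': [1, 2, 3], 'qwerty': [4, 5, 6]}"
d2 = eval(s)       # only do this with data you trust
print(d2 == d)     # True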
|
Easiest way to persist a data structure to a file in python?
|
Let's say I have something like this:
d = { "abc" : [1, 2, 3], "qwerty" : [4,5,6] }
What's the easiest way to programmatically get that into a file that I can load from Python later?
Can I somehow save it as python source (from within a python script, not manually!), then import it later?
Or should I use JSON or something?
|
[
"Use the pickle module.\nimport pickle\nd = { \"abc\" : [1, 2, 3], \"qwerty\" : [4,5,6] }\nafile = open(r'C:\\d.pkl', 'wb')\npickle.dump(d, afile)\nafile.close()\n\n#reload object from file\nfile2 = open(r'C:\\d.pkl', 'rb')\nnew_d = pickle.load(file2)\nfile2.close()\n\n#print dictionary object loaded from file\nprint new_d\n\n",
"Take your pick: Python Standard Library - Data Persistance. Which one is most appropriate can vary by what your specific needs are.\npickle is probably the simplest and most capable as far as \"write an arbitrary object to a file and recover it\" goes—it can automatically handle custom classes and circular references.\nFor the best pickling performance (speed and space), use cPickle at HIGHEST_PROTOCOL.\n",
"Try the shelve module which will give you persistent dictionary, for example:\nimport shelve\nd = { \"abc\" : [1, 2, 3], \"qwerty\" : [4,5,6] }\n\nshelf = shelve.open('shelf_file')\nfor key in d:\n shelf[key] = d[key]\n\nshelf.close()\n\n....\n\n# reopen the shelf\nshelf = shelve.open('shelf_file')\nprint(shelf) # => {'qwerty': [4, 5, 6], 'abc': [1, 2, 3]}\n\n",
"JSON has faults, but when it meets your needs, it is also:\n\nsimple to use\nincluded in the standard library as the json module\ninterface somewhat similar to pickle, which can handle more complex situations\nhuman-editable text for debugging, sharing, and version control\nvalid Python code\nwell-established on the web (if your program touches any of that domain)\n\n",
"You also might want to take a look at Zope's Object Database the more complex you get:-) Probably overkill for what you have, but it scales well and is not too hard to use.\n",
"Just to add to the previous suggestions, if you want the file format to be easily readable and modifiable, you can also use YAML. It works extremely well for nested dicts and lists, but scales for more complex data structures (i.e. ones involving custom objects) as well, and its big plus is that the format is readable.\n",
"If you want to save it in an easy to read JSON-like format, use repr to serialize the object and eval to deserialize it.\n\nrepr(object) -> string \nReturn the canonical string representation of the object.\n For most object types, eval(repr(object)) == object.\n \n\n"
] |
[
69,
15,
7,
5,
5,
3,
1
] |
[] |
[] |
[
"file",
"persistence",
"python"
] |
stackoverflow_0001047318_file_persistence_python.txt
|
Q:
Retrieving a tuple from a collection of tuples based on a contained value
I have a data structure which is a collection of tuples like this:
things = ( (123, 1, "Floogle"), (154, 33, "Blurgle"), (156, 55, "Blarg") )
The first and third elements are each unique to the collection.
What I want to do is retrieve a specific tuple by referring to the third value, eg:
>>> my_thing = things.get( value(3) == "Blurgle" )
(154, 33, "Blurgle")
There must be a better way than writing a loop to check each value one by one!
A:
A loop (or something 100% equivalent like a list comprehension or genexp) is really the only approach if your outer-level structure is a tuple, as you indicate -- tuples are, by deliberate design, an extremely light-weight container, with hardly any methods in fact (just the few special methods needed to implement indexing, looping and the like;-).
Lightning-fast retrieval is a characteristic of dictionaries, not tuples. Can't you have a dictionary (as the main structure, or as a side auxiliary one) mapping "value of third element" to the subtuple you seek (or its index in the main tuple, maybe)? That could be built with a single loop and then deliver as many fast searches as you care to have!
If you choose to loop, a genexp, as per Brian's comment and my reply to it, is both more readable and on average maybe twice as fast as a listcomp (as it only does half the looping):
my_thing = next(item for item in things if item[2] == "Blurgle")
which reads smoothly as "the next item in things whose [2] sub-item equals Blurgle" (as you're starting from the beginning, the "next" item you find will be the "first" -- and, in your case, only -- suitable one).
If you need to cover the case in which no item meets the predicate, you can pass next a second argument (which it will return if needed), otherwise (with no second argument, as in my snippet) you'll get a StopIteration exception if no item meets the predicate -- either behavior may be what you desire (as you say the case should never arise, an exception looks suitable for your particular application, since the occurrence in question would be an unexpected error).
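A minimal sketch of the dictionary approach described above (my illustration):
things = ( (123, 1, "Floogle"), (154, 33, "Blurgle"), (156, 55, "Blarg") )

# build the index once, keyed on the unique third element
by_name = dict((t[2], t) for t in things)

print(by_name["Blurgle"])    # (154, 33, 'Blurgle')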
A:
If things is a list, and you know that the third element is unique, what about a list comprehension?
>>> my_thing = [x for x in things if x[2]=="Blurgle"][0]
Although under the hood, I assume that goes through all the values and checks them individually. If you don't like that, what about changing the my_things structure so that it's a dict and using either the first or the third value as the key?
A:
If you have to do this type of search multiple times, why not convert things to a things_dict once? Later searches will then be easy and fast:
things = ( (123, 1, "Floogle"), (154, 33, "Blurgle"), (156, 55, "Blarg") )
things_dict = {}
for t in things:
things_dict[t[2]] = t
print things_dict['Blarg']
|
Retrieving a tuple from a collection of tuples based on a contained value
|
I have a data structure which is a collection of tuples like this:
things = ( (123, 1, "Floogle"), (154, 33, "Blurgle"), (156, 55, "Blarg") )
The first and third elements are each unique to the collection.
What I want to do is retrieve a specific tuple by referring to the third value, eg:
>>> my_thing = things.get( value(3) == "Blurgle" )
(154, 33, "Blurgle")
There must be a better way than writing a loop to check each value one by one!
|
[
"A loop (or something 100% equivalent like a list comprehension or genexp) is really the only approach if your outer-level structure is a tuple, as you indicate -- tuples are, by deliberate design, an extremely light-weight container, with hardly any methods in fact (just the few special methods needed to implement indexing, looping and the like;-).\nLightning-fast retrieval is a characteristic of dictionaries, not tuples. Can't you have a dictionary (as the main structure, or as a side auxiliary one) mapping \"value of third element\" to the subtuple you seek (or its index in the main tuple, maybe)? That could be built with a single loop and then deliver as many fast searches as you care to have!\nIf you choose to loop, a genexp as per Brian's comment an my reply to it is both more readable and on average maybe twice as fast than a listcomp (as it only does half the looping):\nmy_thing = next(item for item in things if item[2] == \"Blurgle\")\n\nwhich reads smoothly as \"the next item in things whose [2] sub-item equale Blurgle\" (as you're starting from the beginning the \"next\" item you find will be the \"first\" -- and, in your case, only -- suitable one).\nIf you need to cover the case in which no item meets the predicate, you can pass next a second argument (which it will return if needed), otherwise (with no second argument, as in my snippet) you'll get a StopIteration exception if no item meets the predicate -- either behavior may be what you desire (as you say the case should never arise, an exception looks suitable for your particular application, since the occurrence in question would be an unexpected error).\n",
"If things is a list, and you know that the third element is uniqe, what about a list comprehension?\n>> my_thing = [x for x in things if x[2]==\"Blurgle\"][0]\n\nAlthough under the hood, I assume that goes through all the values and checks them individually. If you don't like that, what about changing the my_things structure so that it's a dict and using either the first or the third value as the key?\n",
"if you have to do this type of search multiple times, why don't you convert things to things_dict one time , then it will be easy and faster to search later on\nthings = ( (123, 1, \"Floogle\"), (154, 33, \"Blurgle\"), (156, 55, \"Blarg\") )\n\nthings_dict = {}\nfor t in things:\n things_dict[t[2]] = t\n\nprint things_dict['Blarg']\n\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"python",
"tuples"
] |
stackoverflow_0001047403_python_tuples.txt
|
Q:
wxpython: How can I redraw something when a window is restored?
In my wx.Frame based wxpython application, I draw some lines on a panel when some events occur by creating wx.ClientDC instances when needed. The only problem is, if the window is minimized and then restored, the lines disappear! Is there some kind of method that I should override or event to bind to that will allow me to call the drawing method I created when the window is restored?
Thanks!
A:
The only place you should be drawing is in the wx.EVT_PAINT handler, so bind to that event in the panel's __init__, e.g.
self.Bind(wx.EVT_PAINT, self._onPaint)
In _onPaint, use wx.PaintDC to draw, e.g.
dc = wx.PaintDC(self)
dc.DrawLine(0,0,100,100)
A:
When the window is restored it is (on some platforms) repainted using the EVT_PAINT handler.
The solution is e.g. to draw the same lines in OnPaint(). Or buffer what you draw. See the wxBufferedDC class.
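A minimal sketch of the pattern both answers describe: remember what was drawn and replay it in the paint handler (the class and method names are illustrative):
import wx

class LinePanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.lines = []  # remembered (x1, y1, x2, y2) tuples
        self.Bind(wx.EVT_PAINT, self._onPaint)

    def add_line(self, line):
        self.lines.append(line)
        self.Refresh()  # queues a repaint, which fires EVT_PAINT

    def _onPaint(self, event):
        dc = wx.PaintDC(self)
        for x1, y1, x2, y2 in self.lines:
            dc.DrawLine(x1, y1, x2, y2)
This way minimizing and restoring the window simply triggers another paint event, and the lines are redrawn from the stored list.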
|
wxpython: How can I redraw something when a window is restored?
|
In my wx.Frame based wxpython application, I draw some lines on a panel when some events occur by creating wx.ClientDC instances when needed. The only problem is, if the window is minimized and then restored, the lines disappear! Is there some kind of method that I should override or event to bind to that will allow me to call the drawing method I created when the window is restored?
Thanks!
|
[
"only place you must be drawing is on wx.EVT_PAINT, so bind to that event in init of panel e.g.\nself.Bind(wx.EVT_PAINT, self._onPaint)\n\nin _onPaint, use wx.PaintDC to to draw e.g.\ndc = wx.PaintDC(self)\ndc.DrawLine(0,0,100,100)\n\n",
"When the window is restored it is (on some platforms) repainted using EVT_PAINT handler.\nThe solution is e.g. to draw the same lines in OnPaint(). Or buffer what you draw. See the wxBufferedDC class.\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"wxpython"
] |
stackoverflow_0001046157_python_wxpython.txt
|
Q:
How to parse for tags with '+' in python
I'm getting a "nothing to repeat" error when I try to compile this:
search = re.compile(r'([^a-zA-Z0-9])(%s)([^a-zA-Z0-9])' % '+test', re.I)
The problem is the '+' sign. How should I handle that?
A:
re.compile(r'([^a-zA-Z0-9])(%s)([^a-zA-Z0-9])' % '\+test', re.I)
The "+" is the "repeat at least once" quantifier in regular expressions. It must follow something that is repeatable, or it must be escaped if you want to match a literal "+".
Better is this, if you want to build your regex dynamically.
re.compile(r'([^a-zA-Z0-9])(%s)([^a-zA-Z0-9])' % re.escape('+test'), re.I)
A:
Escape the plus:
r'\+test'
The plus has a special meaning in regexes (meaning "match the previous once or several times"). Since in your regex it appears after an open paren, there is no "previous" to match repeatedly.
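A quick sanity check of the re.escape variant from the first answer (the sample string is made up; expected output shown as a comment):
import re

pattern = re.compile(r'([^a-zA-Z0-9])(%s)([^a-zA-Z0-9])' % re.escape('+test'), re.I)
print pattern.search('a +test b').group(2)  # prints: +test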
|
How to parse for tags with '+' in python
|
I'm getting a "nothing to repeat" error when I try to compile this:
search = re.compile(r'([^a-zA-Z0-9])(%s)([^a-zA-Z0-9])' % '+test', re.I)
The problem is the '+' sign. How should I handle that?
|
[
"re.compile(r'([^a-zA-Z0-9])(%s)([^a-zA-Z0-9])' % '\\+test', re.I)\n\nThe \"+\" is the \"repeat at least once\" quantifier in regular expressions. It must follow something that is repeatable, or it must be escaped if you want to match a literal \"+\".\nBetter is this, if you want to build your regex dynamically.\nre.compile(r'([^a-zA-Z0-9])(%s)([^a-zA-Z0-9])' % re.escape('+test'), re.I)\n\n",
"Escape the plus:\nr'\\+test'\n\nThe plus has a special meaning in regexes (meaning \"match the previous once or several times\"). Since in your regex it appears after an open paren, there is no \"previous\" to match repeatedly.\n"
] |
[
9,
8
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001048541_python_regex.txt
|
Q:
Python and PGP/encryption
I want to make a function in Python to encrypt a password with a public key.
On the user end I need to install PGP software which will generate the key pair. I want to use the public key only for encryption and the private key for decryption.
The problem is with the encryption function (how to use the key for encryption) and also with installing PGP.
Can somebody tell me the correct way to do this?
Thanks
A:
Did you check out PyCrypto?
A:
Here is an open source project for using pgp with python. I think this is what you're looking for.
You actually don't have to invent the algorithms yourself, they're already there.
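As one hedged illustration, the third-party python-gnupg wrapper drives an installed GnuPG binary; assuming GnuPG is installed and a key pair already exists, the round trip looks roughly like this (the keyring path, recipient, and passphrase are made up):
import gnupg

gpg = gnupg.GPG(gnupghome='/path/to/keyring')
# encrypt with the recipient's public key
encrypted = gpg.encrypt('secret password', 'recipient@example.com')
# decrypt with the matching private key
decrypted = gpg.decrypt(str(encrypted), passphrase='my passphrase')
print decrypted.ok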
|
Python and PGP/encryption
|
I want to make a function in Python to encrypt a password with a public key.
On the user end I need to install PGP software which will generate the key pair. I want to use the public key only for encryption and the private key for decryption.
The problem is with the encryption function (how to use the key for encryption) and also with installing PGP.
Can somebody tell me the correct way to do this?
Thanks
|
[
"Did you check out PyCrypto?\n",
"Here is an open source project for using pgp with python. I think this is what you're looking for.\nYou actually don't have to invent the algorithms yourself, they're already there.\n"
] |
[
4,
4
] |
[] |
[] |
[
"encryption",
"pgp",
"python"
] |
stackoverflow_0001048722_encryption_pgp_python.txt
|
Q:
Resize images in directory
I have a directory full of images that I would like to resize to around 60% of their original size.
How would I go about doing this? Can be in either Python or Perl
Cheers
Eef
A:
If you want to do it programmatically, which I assume is the case, use PIL to resize, e.g.
newIm = im.resize((newW, newH))
then save it to the same file or a new location.
Go through the folder recursively and apply the resize function to all images.
I have come up with a sample script which I think will work for you. You can improve on it: maybe make it graphical, add more options (e.g. keep the same extension, or make everything png; linear/bilinear resize sampling), etc.
import os
import sys
from PIL import Image
def resize(folder, fileName, factor):
filePath = os.path.join(folder, fileName)
im = Image.open(filePath)
w, h = im.size
newIm = im.resize((int(w*factor), int(h*factor)))
# saving a copy; you could overwrite the original, or save to another folder
newIm.save(filePath+"copy.png")
def bulkResize(imageFolder, factor):
imgExts = ["png", "bmp", "jpg"]
for path, dirs, files in os.walk(imageFolder):
for fileName in files:
ext = fileName[-3:].lower()
if ext not in imgExts:
continue
resize(path, fileName, factor)
if __name__ == "__main__":
imageFolder=sys.argv[1] # first arg is path to image folder
resizeFactor=float(sys.argv[2])/100.0# 2nd is resize in %
bulkResize(imageFolder, resizeFactor)
A:
How about using mogrify, part of ImageMagick? If you really need to control this from Perl, then you could use Image::Magick, Image::Resize or Imager.
A:
Can it be in shell?
mkdir resized
for a in *.jpg; do convert "$a" -resize 60% resized/"$a"; done
If you have > 1 core, you can do it like this:
find . -maxdepth 1 -type f -name '*.jpg' -print0 | xargs -0 -P3 -I XXX convert XXX -resize 60% resized/XXX
-P3 means that you want to resize up to 3 images at the same time (parallelization).
If you don't need to keep originals you can use mogrify, but I prefer to use convert, and then rm ...; mv ... - just to be on safe side if resizing would (for whatever reason) fail.
A:
Use PerlMagick, it's an interface to the popular ImageMagick suite of command line tools to do just this kind of stuff. PythonMagick is available as well.
A:
I use Python with PIL (Python Imaging Library). Of course there are specialized programs to do this.
Many people use PIL for such things. Look at: Quick image resizing with python
PIL is very powerful and recently I have found this recipe:
Putting watermark to images in batch
A:
Do you need to just resize it, or do you want to resize programmatically?
If you just need to resize, use PixResizer. http://bluefive.pair.com/pixresizer.htm
|
Resize images in directory
|
I have a directory full of images that I would like to resize to around 60% of their original size.
How would I go about doing this? Can be in either Python or Perl
Cheers
Eef
|
[
"If you want to do it programatically, which I assume is the case, use PIL to resize e.g.\nnewIm = im.resize((newW, newH)\n\nthen save it to same file or a new location.\nGo through the folder recursively and apply resize function to all images.\nI have come up with a sample script which I think will work for you. You can improve on it: Maybe make it graphical, add more options e.g. same extension or may be all png, resize sampling linear/bilinear etc\nimport os\nimport sys\nfrom PIL import Image\n\ndef resize(folder, fileName, factor):\n filePath = os.path.join(folder, fileName)\n im = Image.open(filePath)\n w, h = im.size\n newIm = im.resize((int(w*factor), int(h*factor)))\n # i am saving a copy, you can overrider orginal, or save to other folder\n newIm.save(filePath+\"copy.png\")\n\ndef bulkResize(imageFolder, factor):\n imgExts = [\"png\", \"bmp\", \"jpg\"]\n for path, dirs, files in os.walk(imageFolder):\n for fileName in files:\n ext = fileName[-3:].lower()\n if ext not in imgExts:\n continue\n\n resize(path, fileName, factor)\n\nif __name__ == \"__main__\":\n imageFolder=sys.argv[1] # first arg is path to image folder\n resizeFactor=float(sys.argv[2])/100.0# 2nd is resize in %\n bulkResize(imageFolder, resizeFactor)\n\n",
"How about using mogrify, part of ImageMagick? If you really need to control this from Perl, then you could use Image::Magick, Image::Resize or Imager.\n",
"Can it be in shell?\nmkdir resized\nfor a in *.jpg; do convert \"$a\" -resize 60% resized/\"$a\"; done\n\nIf you have > 1 core, you can do it like this:\nfind . -maxdepth 1 -type f -name '*.jpg' -print0 | xargs -0 -P3 -I XXX convert XXX -resize 60% resized/XXX\n\n-P3 means that you want to resize up to 3 images at the same time (parallelization).\nIf you don't need to keep originals you can use mogrify, but I prefer to use convert, and then rm ...; mv ... - just to be on safe side if resizing would (for whatever reason) fail.\n",
"Use PerlMagick, it's an interface to the popular ImageMagick suite of command line tools to do just this kind of stuff. PythonMagic is available as well.\n",
"I use Python with PIL (Python Image Library). Of course there are specialized programs to do this.\nMany people use PIL to such things. Look at: Quick image resizing with python\nPIL is very powerful and recently I have found this recipe:\nPutting watermark to images in batch\n",
"do you need to just resize it or you want to resize programmatically?\nIf just resize use PixResizer. http://bluefive.pair.com/pixresizer.htm\n"
] |
[
17,
11,
10,
2,
1,
0
] |
[] |
[] |
[
"image",
"image_scaling",
"perl",
"python",
"resize"
] |
stackoverflow_0001048658_image_image_scaling_perl_python_resize.txt
|
Q:
Display row count from another table in Django
I have the following classes in my models file
class HardwareNode(models.Model):
ip_address = models.CharField(max_length=15)
port = models.IntegerField()
location = models.CharField(max_length=50)
hostname = models.CharField(max_length=30)
def __unicode__(self):
return self.hostname
class Subscription(models.Model):
customer = models.ForeignKey(Customer)
package = models.ForeignKey(Package)
location = models.ForeignKey(HardwareNode)
renewal_date = models.DateTimeField('renewal date')
def __unicode__(self):
x = '%s %s' % (self.customer.hostname, str(self.package))
return x
I'd like to do a count on the number of Subscriptions on a particular HardwareNode and display that on the admin section for the HardwareNode class e.g. 10 subscriptions hosted on node 2.
I'm still learning Django and I'm not sure where I would accomplish this. Can/should I do it in the models.py or in the HTML?
Thanks,
-seth
A:
When creating a ForeignKey, the other model gets a manager that returns all instances of the first model (see navigating backward)
In your case, it would be named "subscription_set".
In addition, Django allows for virtual fields in models, called "Model Methods", that are not connected to database data, but are implemented as methods of the model (see model methods)
Putting all together, you can have something like this:
class HardwareNode(models.Model):
ip_address = models.CharField(max_length=15)
port = models.IntegerField()
location = models.CharField(max_length=50)
hostname = models.CharField(max_length=30)
subscription_count = lambda self: self.subscription_set.count()
And then, include subscription_count in the list of fields to be listed in the admin panel.
Note: as usual, I did not check this code, and it may even not run as it is, but it should give some idea of how to work on your problem; moreover, I have used a lambda just for brevity, but usually I think it would be a better option (style, maintainability, etc.) to use a named method.
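To surface the count as a column in the admin change list, something like the following should work; this is a sketch assuming Django's standard ModelAdmin API (list_display may name any model method or attribute), and the admin class name is made up:
from django.contrib import admin

class HardwareNodeAdmin(admin.ModelAdmin):
    # subscription_count is the computed attribute defined on the model above
    list_display = ('hostname', 'ip_address', 'subscription_count')

admin.site.register(HardwareNode, HardwareNodeAdmin)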
A:
In your HardwareNode class keep a list of Subscriptions, and then either create a function that returns the length of that list or just access the variable's length through the HTML. This would be better than going through all of your subscriptions and counting the number of HardwareNodes, especially since Django makes it easy to have a bi-directional relationship in the database.
|
Display row count from another table in Django
|
I have the following classes in my models file
class HardwareNode(models.Model):
ip_address = models.CharField(max_length=15)
port = models.IntegerField()
location = models.CharField(max_length=50)
hostname = models.CharField(max_length=30)
def __unicode__(self):
return self.hostname
class Subscription(models.Model):
customer = models.ForeignKey(Customer)
package = models.ForeignKey(Package)
location = models.ForeignKey(HardwareNode)
renewal_date = models.DateTimeField('renewal date')
def __unicode__(self):
x = '%s %s' % (self.customer.hostname, str(self.package))
return x
I'd like to do a count on the number of Subscriptions on a particular HardwareNode and display that on the admin section for the HardwareNode class e.g. 10 subscriptions hosted on node 2.
I'm still learning Django and I'm not sure where I would accomplish this. Can/should I do it in the models.py or in the HTML?
Thanks,
-seth
|
[
"When creating a foreign_key, the other model gets a manager that returns all instances of the first model (see navigating backward)\nIn your case, it would be named \"subscription_set\".\nIn addition, Django allows for virtual fields in models, called \"Model Methods\", that are not connected to database data, but are implemented as methods of the model (see model methods)\nPutting all together, you can have something like this:\nclass HardwareNode(models.Model):\n ip_address = models.CharField(max_length=15)\n port = models.IntegerField()\n location = models.CharField(max_length=50)\n hostname = models.CharField(max_length=30)\n subscription_count = lambda(self: self.subscription_set.count())\n\nAnd then, include subscription_count in the list of fields to be listed in the admin panel.\nNote: as usual, I did not check this code, and it may even not run as it is, but it should give some idea on how to work on your problem; moreover, I have used a lambda just for brevity but usually I think it would be a better option (style, maintenability, etc.) to use a named one.\n",
"In your HardwareNode class keep a list of Subscriptions, and then either create a function that returns the length of that list or just access the variable's length through the HTML. This would be better than going through all of your subscriptions and counting the number of HardwareNodes, especially since Django makes it easy to have a bi-directional relationship in the database.\n"
] |
[
7,
0
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0001048782_django_django_models_python.txt
|
Q:
How to sort on number of visits in Django app?
In Django (1.0.2), I have 2 models: Lesson and StatLesson.
class Lesson(models.Model):
contents = models.TextField()
def get_visits(self):
return self.statlesson_set.all().count()
class StatLesson(models.Model):
lesson = models.ForeignKey(Lesson)
datetime = models.DateTimeField(default=datetime.datetime.now())
Each StatLesson registers 1 visit of a certain Lesson. I can use lesson.get_visits() to get the number of visits for that lesson.
How do I get a queryset of lessons, that's sorted by the number of visits? I'm looking for something like this: Lesson.objects.all().order_by('statlesson__count') (but this obviously doesn't work)
A:
Django 1.1 will have aggregate support.
On Django 1.0.x you can count automatically with an extra field:
class Lesson(models.Model):
contents = models.TextField()
visit_count = models.IntegerField(default=0)
class StatLesson(models.Model):
lesson = models.ForeignKey(Lesson)
datetime = models.DateTimeField(default=datetime.datetime.now)  # pass the callable, not now(), so each row gets a fresh timestamp
def save(self, *args, **kwargs):
if self.pk is None:
self.lesson.visit_count += 1
self.lesson.save()
super(StatLesson, self).save(*args, **kwargs)
Then you can query like this:
Lesson.objects.all().order_by('visit_count')
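For reference, a sketch of the aggregation approach once Django 1.1 is available (assumes django.db.models.Count; 'statlesson' is the default reverse name for the foreign key):
from django.db.models import Count

lessons = Lesson.objects.annotate(visit_count=Count('statlesson')).order_by('-visit_count')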
A:
You'll need to do some native SQL stuff using extra.
e.g. (very roughly)
Lesson.objects.extra(select={'visit_count': "SELECT COUNT(*) FROM statlesson WHERE statlesson.lesson_id=lesson.id"}).order_by('visit_count')
You'll need to make sure that SELECT COUNT(*) FROM statlesson WHERE statlesson.lesson_id=lesson.id has the right db table names for your project (as statlesson and lesson won't be right).
|
How to sort on number of visits in Django app?
|
In Django (1.0.2), I have 2 models: Lesson and StatLesson.
class Lesson(models.Model):
contents = models.TextField()
def get_visits(self):
return self.statlesson_set.all().count()
class StatLesson(models.Model):
lesson = models.ForeignKey(Lesson)
datetime = models.DateTimeField(default=datetime.datetime.now())
Each StatLesson registers 1 visit of a certain Lesson. I can use lesson.get_visits() to get the number of visits for that lesson.
How do I get a queryset of lessons, that's sorted by the number of visits? I'm looking for something like this: Lesson.objects.all().order_by('statlesson__count') (but this obviously doesn't work)
|
[
"Django 1.1 will have aggregate support.\nOn Django 1.0.x you can count automatically with an extra field:\nclass Lesson(models.Model):\n contents = models.TextField()\n visit_count = models.IntegerField(default=0)\n\nclass StatLesson(models.Model):\n lesson = models.ForeignKey(Lesson)\n datetime = models.DateTimeField(default=datetime.datetime.now())\n\n def save(self, *args, **kwargs):\n if self.pk is None:\n self.lesson.visit_count += 1\n self.lesson.save()\n super(StatLesson, self).save(*args, **kwargs)\n\nThen you can query like this:\nLesson.objects.all().order_by('visit_count')\n\n",
"You'll need to do some native SQL stuff using extra.\ne.g. (very roughly)\n\n\nLesson.objects.extra(select={'visit_count': \"SELECT COUNT(*) FROM statlesson WHERE statlesson.lesson_id=lesson.id\"}).order_by('visit_count')\n\n\nYou'll need to make sure that SELECT COUNT(*) FROM statlesson WHERE statlesson.lesson_id=lesson.id has the write db table names for your project (as statlesson and lesson won't be right).\n"
] |
[
2,
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001048265_django_python.txt
|
Q:
wxPython SplitterWindow does not expand within a Panel
I'm trying a simple layout and the panel divided by a SplitterWindow doesn't expand to fill the whole area, what I want is this:
[button] <= (fixed size)
---------
TEXT AREA }
~~~~~~~~~ <= (this is the splitter) } this is a panel
TEXT AREA }
The actual code is:
import wx
app = wx.App()
frame = wx.Frame(None, wx.ID_ANY, "Register Translator")
parseButton = wx.Button(frame, label="Parse")
panel = wx.Panel(frame)
panel.SetBackgroundColour("BLUE")
splitter = wx.SplitterWindow(panel)
inputArea = wx.TextCtrl(splitter, style=wx.TE_MULTILINE)
outputArea = wx.TextCtrl(splitter, style=wx.TE_MULTILINE)
splitter.SplitHorizontally(inputArea, outputArea)
sizer=wx.BoxSizer(wx.VERTICAL)
sizer.Add(parseButton, 0, wx.ALIGN_CENTER)
sizer.Add(panel, 1, wx.EXPAND | wx.ALL)
frame.SetSizerAndFit(sizer)
frame.SetAutoLayout(1)
frame.Show(True)
app.MainLoop()
I set the panel color different, and it's actually using the whole area, so the problem is just the SplitterWindow within the Panel, not within the BoxSizer.
Any ideas about why it isn't working? Thanks!
A:
The Panel is probably expanding but the SplitterWindow within the Panel is not, because you aren't using a sizer for the panel, only the frame.
You could also try just having the SplitterWindow be a child of the frame, without the panel.
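A minimal sketch of the first suggestion, reusing the variable names from the question: give the panel its own sizer so the splitter is stretched to fill it.
panelSizer = wx.BoxSizer(wx.VERTICAL)
panelSizer.Add(splitter, 1, wx.EXPAND)
panel.SetSizer(panelSizer)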
|
wxPython SplitterWindow does not expand within a Panel
|
I'm trying a simple layout and the panel divided by a SplitterWindow doesn't expand to fill the whole area, what I want is this:
[button] <= (fixed size)
---------
TEXT AREA }
~~~~~~~~~ <= (this is the splitter) } this is a panel
TEXT AREA }
The actual code is:
import wx
app = wx.App()
frame = wx.Frame(None, wx.ID_ANY, "Register Translator")
parseButton = wx.Button(frame, label="Parse")
panel = wx.Panel(frame)
panel.SetBackgroundColour("BLUE")
splitter = wx.SplitterWindow(panel)
inputArea = wx.TextCtrl(splitter, style=wx.TE_MULTILINE)
outputArea = wx.TextCtrl(splitter, style=wx.TE_MULTILINE)
splitter.SplitHorizontally(inputArea, outputArea)
sizer=wx.BoxSizer(wx.VERTICAL)
sizer.Add(parseButton, 0, wx.ALIGN_CENTER)
sizer.Add(panel, 1, wx.EXPAND | wx.ALL)
frame.SetSizerAndFit(sizer)
frame.SetAutoLayout(1)
frame.Show(True)
app.MainLoop()
I set the panel color different, and it's actually using the whole area, so the problem is just the SplitterWindow within the Panel, not within the BoxSizer.
Any ideas about why it isn't working? Thanks!
|
[
"The Panel is probably expanding but the ScrolledWindow within the Panel is not, because you aren't using a sizer for the panel, only the frame.\nYou could also try just having the SplitterWindow be a child of the frame, without the panel.\n"
] |
[
4
] |
[] |
[] |
[
"panel",
"python",
"user_interface",
"wxpython"
] |
stackoverflow_0001049070_panel_python_user_interface_wxpython.txt
|
Q:
What to do after starting simple_server?
For some quick background, I'm an XHTML/CSS guy with some basic PHP knowledge. I'm trying to dip my feet into the Python pool, and so far understand how to start simple_server and access a simple Hello World return in the same .py file. This is the extent of what I understand though, heh.
How do I integrate the simple_server and your basic XHTML/CSS files? I want to start the server and automagically call, for instance, index.py (does it need to be .py?). Obviously within the index file I would have my markup and stylesheet and I would operate it like a normal site at that point.
My eventual goal is to get a basic message board going (post, edit, delete, user sessions). I realize I'll need access to a database, and I know my way around MySQL enough to not have to worry about those portions.
Thanks for the help.
EDIT: Allow me to clarify my goal, as I have been told Python does a LOT more than PHP. My goal is to begin building simple web applications into my pre-existing static XHTML pages. Obviously with PHP, you simply make sure it's installed on your server and you start writing the code. I'd like to know how different Python is in that sense, and what I have to do to, say, write a basic message board in Python.
A:
I would recommend Django.
A:
The other answers give good recommendations for what you probably want to do towards your "eventual goal", but, if you first want to persist with wsgiref.simple_server for an instructive while, you can do that too. WSGI is the crucial "glue" between web servers (not just the simple one in wsgiref of course -- real ones, too, such as Apache or Nginx [both with respective modules called mod_wsgi] as well as, for example, Google App Engine -- that one offers WSGI, too, as its fundamental API) and web applications (and frameworks that make it easier to write such applications).
Everybody's recommending various frameworks to you, but understanding WSGI can't hurt (since it will underlie whatever framework you eventually choose). And for the purpose of such understanding wsgiref.simple_server will serve you for a while longer, if you wish.
Essentially, what you want to do is write a WSGI app -- a function or class that takes two parameters (an "enviroment" dictionary, and a "start response" callable that it must call back with status and headers before returning the response's body). Your "WSGI app" can open your index.py or whatever else it wants to prep the status, headers and body it returns.
There's much more to WSGI (the middleware concept is particularly powerful), though of course you don't have to understand it very deeply -- only as deeply as you care to! See wsgi.org for tutorials &c. Gardner's two-part article, I think, is especially interesting.
Once (and if that's your choice) you understand WSGI, you can better decide whether you want it all hidden in a higher level framework such as Django (so you can focus on application-level issues instead) or use a very light and modular toolbox of WSGI utilities such as Werkzeug -- or anything in-between!-)
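As a concrete illustration of the above, a minimal WSGI app served by wsgiref.simple_server (the port and the greeting are arbitrary):
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # environ carries the request data; start_response takes status and headers
    start_response('200 OK', [('Content-Type', 'text/html')])
    return ['<h1>Hello from a WSGI app</h1>']

httpd = make_server('', 8000, app)
httpd.serve_forever()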
A:
Take a look at CherryPy. It's a nice http framework.
A:
"Obviously with PHP, you simply make sure its installed on your server and you start writing the code."
Not true with Python. Python is just a language, not an Apache plug-in like PHP.
Generally, you can use something like mod_wsgi to create a Python plug-in for Apache. What you find is that web page processing involves a lot of steps, none of which are part of the Python language.
You must use either extension libraries or a framework to process web requests in Python. [At this point, some PHP folks ask why Python is so popular. And the reason is because you have choices of which library or framework to use.]
PHP parses the request and allows you to embed code in the resulting page.
Python frameworks -- generally -- do not work this way. Most Python frameworks break the operation down into several steps.
Parsing the URL and locating an appropriate piece of code.
Running the code to get a result data objects.
Interpolating the resulting data objects into HTML templates.
"My goal is to begin building simple web applications into my pre-existing static XHTML pages."
Let's look at how you'd do this in Django.
Create a Django project.
Create a Django app.
Transform your XHTML pages into Django templates. Pull out the dynamic content and put in {{ somevariable }} markers. Depending on what the dynamic content is, this can be simple or rather complex.
Define URL to View function mappings in your urls.py file.
Define view functions in your views.py file. These view functions create the dynamic content that goes in the template, and which template to render.
At that point, you should be able to start the server, start a browser, pick a URL and see your template rendered.
"write a basic message board in Python."
Let's look at how you'd do this in Django.
Create a Django project.
Create a Django app.
Define your data model in models.py
Write unit tests in tests.py. Test your model's methods to be sure they all work properly.
Play with the built-in admin pages.
Create Django templates.
Define URL to View function mappings in your urls.py file.
Define view functions in your views.py file. These view functions create the dynamic content that goes in the template, and which template to render.
A:
It depends on what you want to achieve:
a) If you just want to write a web application, without worrying too much about what goes on in the background -- how requests are handled, or how templates are rendered -- then go for a good web framework. There are many choices; simple_server is NOT one of them. E.g. use Django, TurboGears, web.py, CherryPy, Pylons, etc.
See http://wiki.python.org/moin/WebFrameworks for the full list.
b) If you want to develop a simple web framework from scratch, so that you understand the internals and improve your knowledge of Python, then I suggest using simple_server. In that case, look at:

how you can create a URL scheme so that URLs are dispatched to the correct Python function;
how you can render an HTML template, e.g. one containing placeholder variables such as $title, which you can convert to a string using string.Template (see the sketch below).

b) would be a difficult but interesting exercise; a) will get you started, and you may be writing web apps in a couple of days.
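A tiny illustration of the string.Template idea from point b) (the template text itself is made up):
from string import Template

page = Template("<html><head><title>$title</title></head>"
                "<body>$body</body></html>")
print page.substitute(title="Hello", body="Rendered with string.Template")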
|
What to do after starting simple_server?
|
For some quick background, I'm an XHTML/CSS guy with some basic PHP knowledge. I'm trying to dip my feet into the Python pool, and so far understand how to start simple_server and access a simple Hello World return in the same .py file. This is the extent of what I understand though, heh.
How do I integrate the simple_server and your basic XHTML/CSS files? I want to start the server and automagically call, for instance, index.py (does it need to be .py?). Obviously within the index file I would have my markup and stylesheet and I would operate it like a normal site at that point.
My eventual goal is to get a basic message board going (post, edit, delete, user sessions). I realize I'll need access to a database, and I know my way around MySQL enough to not have to worry about those portions.
Thanks for the help.
EDIT: Allow me to clarify my goal, as I have been told Python does a LOT more than PHP. My goal is to begin building simple web applications into my pre-existing static XHTML pages. Obviously with PHP, you simply make sure it's installed on your server and you start writing the code. I'd like to know how different Python is in that sense, and what I have to do to, say, write a basic message board in Python.
|
[
"I would recommend Django.\n",
"The other answers give good recommendations for what you probably want to do towards your \"eventual goal\", but, if you first want to persist with wsgiref.simple_server for an instructive while, you can do that too. WSGI is the crucial \"glue\" between web servers (not just the simple one in wsgiref of course -- real ones, too, such as Apache or Nginx [both with respective modules called mod_wsgi] as well as, for example, Google App Engine -- that one offers WSGI, too, as its fundamental API) and web applications (and frameworks that make it easier to write such applications).\nEverybody's recommending various frameworks to you, but understanding WSGI can't hurt (since it will underlie whatever framework you eventually choose). And for the purpose of such understanding wsgiref.simple_server will serve you for a while longer, if you wish.\nEssentially, what you want to do is write a WSGI app -- a function or class that takes two parameters (an \"enviroment\" dictionary, and a \"start response\" callable that it must call back with status and headers before returning the response's body). Your \"WSGI app\" can open your index.py or whatever else it wants to prep the status, headers and body it returns.\nThere's much more to WSGI (the middleware concept is particularly powerful), though of course you don't have to understand it very deeply -- only as deeply as you care to! See wsgi.org for tutorials &c. Gardner's two-part article, I think, is especially interesting.\nOnce (and if that's your choice) you understand WSGI, you can better decide whether you want it all hidden in a higher level framework such as Django (so you can focus on application-level issues instead) or use a very light and modular toolbox of WSGI utilities such as Werkzeug -- or anything in-between!-)\n",
"Take a look at CherryPy. It's a nice http framework.\n",
"\"Obviously with PHP, you simply make sure its installed on your server and you start writing the code.\"\nNot true with Python. Python is just a language, not an Apache plug-in like PHP.\nGenerally, you can use something like mod_wsgi to create a Python plug-in for Apache. What you find is that web page processing involves a lot of steps, none of which are part of the Python language.\nYou must use either extension libraries or a framework to process web requests in Python. [At this point, some PHP folks ask why Python is so popular. And the reason is because you have choices of which library or framework to use.]\nPHP parses the request and allows you to embed code in the resulting page.\nPython frameworks -- generally -- do not work this way. Most Python frameworks break the operation down into several steps.\n\nParsing the URL and locating an appropriate piece of code. \nRunning the code to get a result data objects.\nInterpolating the resulting data objects into HTML templates.\n\n\"My goal is to begin building simple web applications into my pre-existing static XHTML pages.\"\nLet's look at how you'd do this in Django.\n\nCreate a Django project.\nCreate a Django app.\nTransform your XTHML pages into Django templates. Pull out the dynamic content and put in {{ somevariable }} markers. Depending on what the dynamic content is, this can be simple or rather complex.\nDefine URL to View function mappings in your urls.py file.\nDefine view functions in your views.py file. These view functions create the dynamic content that goes in the template, and which template to render.\n\nAt that point, you should be able to start the server, start a browser, pick a URL and see your template rendered.\n\"write a basic message board in Python.\"\nLet's look at how you'd do this in Django.\n\nCreate a Django project.\nCreate a Django app.\nDefine your data model in models.py\nWrite unit tests in tests.py. Test your model's methods to be sure they all work properly.\nPlay with the built-in admin pages.\nCreate Django templates. \nDefine URL to View function mappings in your urls.py file.\nDefine view functions in your views.py file. These view functions create the dynamic content that goes in the template, and which template to render.\n\n",
"It depends on what you want to achieve, \na) do you want to just write a web application without worrying too much abt what goes in the background, how request are being handled, or templates being rendered than go for a goo webframework, there are many choices simple http server is NOT one of them. e.g. use django, turbogears, webpy, cheerpy, pylons etc etc\nsee http://wiki.python.org/moin/WebFrameworks for full list\nb) if you want to develope a simple web framework from start so that you understand internals and improve you knowledge of python, then I will suggest use simple http server\nsee \n\nhow can you create a URL scheme so that URLs are dispatched to correct python function,\nsee how can you render a html\ntemplate e.g. containing place\nholder variables $title etc which\nyou can convert to string using\nstring.Template\n\nb) would be difficult but interesting exercise to do, a) will get you started and you may be writing web apps in couple of days\n"
] |
[
3,
3,
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001046980_python.txt
|
Q:
Importing database data into Joomla
How to import data from a database to Joomla CMS?
I have a database with lots of data I want to use in my new website. An ideal solution for me would be a Python/Perl/PHP API that would know how to do Joomla's basic routines:
adding/removing a section/category/material/menu/module;
changing properties of existing entities
A:
You could try the following extensions:
Bulk Import
CSV Import
If that doesn't work for you, maybe take a look at the Joomla API
|
Importing database data into Joomla
|
How to import data from a database to Joomla CMS?
I have a database with lots of data I want to use in my new website. An ideal solution for me would be a Python/Perl/PHP API that would know how to do Joomla's basic routines:
adding/removing a section/category/material/menu/module;
changing properties of existing entities
|
[
"You could try the following extensions:\n\nBulk Import\nCSV Import\n\nIf that doesn't work for you, maybe take a look at the Joomla API\n"
] |
[
1
] |
[] |
[] |
[
"api",
"content_management_system",
"database",
"joomla",
"python"
] |
stackoverflow_0001049320_api_content_management_system_database_joomla_python.txt
|
Q:
Dictionaries with volatile values in Python unit tests?
I need to write a unit test for a function that returns a dictionary. One of the values in this dictionary is datetime.datetime.now() which of course changes with every test run.
I want to ignore that key completely in my assert. Right now I have a dictionary comparison function but I really want to use assertEqual like this:
def my_func(self):
return {'monkey_head_count': 3, 'monkey_creation': datetime.datetime.now()}
... unit tests
class MonkeyTester(unittest.TestCase):
def test_myfunc(self):
self.assertEqual(my_func(), {'monkey_head_count': 3}) # I want to ignore the timestamp!
Is there any best practices or elegant solutions for doing this? I am aware of assertAlmostEqual(), but that's only useful for floats iirc.
A:
Just delete the timestamp from the dict before doing the comparison:
class MonkeyTester(unittest.TestCase):
def test_myfunc(self):
without_timestamp = my_func()
del without_timestamp["monkey_creation"]
self.assertEqual(without_timestamp, {'monkey_head_count': 3})
If you find yourself doing a lot of time-related tests that involve datetime.now() then you can monkeypatch the datetime class for your unit tests. Consider this
import datetime
constant_now = datetime.datetime(2009,8,7,6,5,4)
old_datetime_class = datetime.datetime
class new_datetime(datetime.datetime):
@staticmethod
def now():
return constant_now
datetime.datetime = new_datetime
Now whenever you call datetime.datetime.now() in your unit tests, it'll always return the constant_now timestamp. And if you want/need to switch back to the original datetime.datetime.now() then you can simply say
datetime.datetime = old_datetime_class
and things will be back to normal. This sort of thing can be useful, though in the simple example you gave, I'd recommend just deleting the timestamp from the dict before comparing.
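With the patch in place the timestamp is deterministic, so the full dict can be asserted directly, e.g.:
self.assertEqual(my_func(), {'monkey_head_count': 3,
                             'monkey_creation': constant_now})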
|
Dictionaries with volatile values in Python unit tests?
|
I need to write a unit test for a function that returns a dictionary. One of the values in this dictionary is datetime.datetime.now() which of course changes with every test run.
I want to ignore that key completely in my assert. Right now I have a dictionary comparison function but I really want to use assertEqual like this:
def my_func(self):
return {'monkey_head_count': 3, 'monkey_creation': datetime.datetime.now()}
... unit tests
class MonkeyTester(unittest.TestCase):
def test_myfunc(self):
self.assertEqual(my_func(), {'monkey_head_count': 3}) # I want to ignore the timestamp!
Is there any best practices or elegant solutions for doing this? I am aware of assertAlmostEqual(), but that's only useful for floats iirc.
|
[
"Just delete the timestamp from the dict before doing the comparison:\nclass MonkeyTester(unittest.TestCase):\n def test_myfunc(self):\n without_timestamp = my_func()\n del without_timestamp[\"monkey_creation\"]\n self.assertEqual(without_timestamp, {'monkey_head_count': 3})\n\nIf you find yourself doing a lot of time-related tests that involve datetime.now() then you can monkeypatch the datetime class for your unit tests. Consider this\nimport datetime\nconstant_now = datetime.datetime(2009,8,7,6,5,4)\nold_datetime_class = datetime.datetime\nclass new_datetime(datetime.datetime):\n @staticmethod\n def now():\n return constant_now\n\ndatetime.datetime = new_datetime\n\nNow whenever you call datetime.datetime.now() in your unit tests, it'll always return the constant_now timestamp. And if you want/need to switch back to the original datetime.datetime.now() then you can simple say\ndatetime.datetime = old_datetime_class\n\nand things will be back to normal. This sort of thing can be useful, though in the simple example you gave, I'd recommend just deleting the timestamp from the dict before comparing.\n"
] |
[
9
] |
[] |
[] |
[
"python",
"unit_testing"
] |
stackoverflow_0001049551_python_unit_testing.txt
|
Q:
How do I link a combo box and a command button?
This is my combo box code:
self.lblname = wx.StaticText(self, -1,"Timeslot" ,wx.Point(20,150))
self.sampleList = ['09.00-10.00','10.00-11.00','11.00-12.00']
self.edithear=wx.ComboBox(self, 30, "", wx.Point(150,150 ), wx.Size(95, -1),
self.sampleList, wx.CB_DROPDOWN)
and this is my command button code:
def OnClick(self,event):
self.logger.AppendText(" %d\n" %event.GetId())
I need to send the contents of the combo box to a flat file after clicking the command button. How should I link them?
A:
If the OnClick method is in the same class, you can reach your combo box via self.edithear.
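For instance, a sketch of appending the selected timeslot to a flat file when the button is clicked (the filename is made up):
def OnClick(self, event):
    # self.edithear is the combo box created in __init__
    f = open('timeslots.txt', 'a')
    f.write(self.edithear.GetValue() + '\n')
    f.close()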
|
How do I link a combo box and a command button?
|
This is my combo box code:
self.lblname = wx.StaticText(self, -1,"Timeslot" ,wx.Point(20,150))
self.sampleList = ['09.00-10.00','10.00-11.00','11.00-12.00']
self.edithear=wx.ComboBox(self, 30, "", wx.Point(150,150 ), wx.Size(95, -1),
self.sampleList, wx.CB_DROPDOWN)
and this is my command button code:
def OnClick(self,event):
self.logger.AppendText(" %d\n" %event.GetId())
I need to send the contents of the combo box to a flat file after clicking the command button. How should I link them?
|
[
"If the Onclick method is in the same class you can reach your combo via self.edithear \n"
] |
[
0
] |
[] |
[] |
[
"python",
"wxpython"
] |
stackoverflow_0001048831_python_wxpython.txt
|
Q:
subprocess module: using the call method with tempfile objects
I have created temporary named files, with the tempfile library's NamedTemporaryFile method.
I have written to them and flushed the buffers, and I have not closed them (or else they might go away).
I am trying to use the subprocess module to call some shell commands using these generated files.
subprocess.call('cat %s' % f.name) always fails saying that the named temporary file does not exist.
os.path.exists(f.name) always returns true.
I can run the cat command on the file directly from the shell.
Is there some reason the subprocess module will not work with temporary files?
Is there any way to make it work?
Thanks in advance.
A:
Why don't you make your NamedTemporaryFiles with the optional parameter delete=False? That way you can safely close them knowing they won't disappear, use them normally afterwards, and explicitly unlink them when you're done. This way everything will work cross-platform, too.
A:
Are you using the shell=True option with subprocess?
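That question points at the likely culprit: without shell=True, subprocess treats the whole string 'cat /tmp/...' as a single program name. Two sketches of the fix:
subprocess.call(['cat', f.name])                # pass the program and its arguments as a list
subprocess.call('cat %s' % f.name, shell=True)  # or keep the string, but go through the shell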
|
subprocess module: using the call method with tempfile objects
|
I have created temporary named files, with the tempfile library's NamedTemporaryFile method.
I have written to them and flushed the buffers, and I have not closed them (or else they might go away).
I am trying to use the subprocess module to call some shell commands using these generated files.
subprocess.call('cat %s' % f.name) always fails saying that the named temporary file does not exist.
os.path.exists(f.name) always returns true.
I can run the cat command on the file directly from the shell.
Is there some reason the subprocess module will not work with temporary files?
Is there any way to make it work?
Thanks in advance.
|
[
"Why don't you make your NamedTemporaryFiles with the optional parameter delete=False? That way you can safely close them knowing they won't disappear, use them normally afterwards, and explicitly unlink them when you're done. This way everything will work cross-platform, too.\n",
"Are you using shell=True option for subprocess?\n"
] |
[
3,
1
] |
[] |
[] |
[
"python",
"subprocess"
] |
stackoverflow_0001049648_python_subprocess.txt
|
Q:
Get remote text file, process, and update database - approach and scripting language to use?
I've been having to do some basic feed processing. So, get a file via ftp, process it (i.e. get the fields I care about), and then update the local database. And similarly the other direction: get data from db, create file, and upload by ftp. The scripts will be called by cron.
I think the idea would be for each type of feed, define the ftp connection/file information. Then there should be a translation of how data fields in the file relate to data fields that the application can work with (and of course process this translation). Additionally write separate scripts that do the common inserting functions for the different objects that may be used in different feeds.
As an e-commerce example, let's say I work with different suppliers who provide feeds to me. The feeds can be different (object) types: product, category, or order information. For each type of feed I obviously work with different fields and call different update or insert scripts.
What is the best language to implement this in? I can work with PHP but am looking for a project to start learning Perl or Python so this could be good for me as well.
If Perl or Python, can you briefly give high level implementation. So how to separate the different scripts, object oriented approach?, how to make it easy to implement new feeds or processing functions in the future, etc.
[full disclosure: There were already classes written in PHP which I used to create a new feed recently. I already did my job, but it was super messy and difficult to do. So this question is not 'Please help me do my job' but rather a 'best approach' type of question for my own development.]
Thanks!
A:
Kind of depends on the format of the files you're ftp'ing. If it's a crazy proprietary format, you might be stuck with whatever language already has a library managing it. If it's CSV or XML, then any language might do.
FTP: Net::FTP
Parse: Text::CSV_XS (for CSV or tab-separated) or XML::Twig (for XML)
Insert: DBI with your appropriate db driver, though there are higher level wrappers, too, such as DBIx::Class.
Just as examples. It seems pretty straight-forward, but I do perl nearly every day ;-)
A:
"Best" language is pretty subjective. Python is generally considered to be easy to learn and easy to read, whereas Perl is often jokingly referred to as a "write-only" language. On the other hand, Perl is use extensively for network management. Python tends to be used more for system management or programming in the large. Both have areas of excellence, and areas where they don't work as well.
Either language will allow you to solve your problem fairly easily. They both have all the necessary modules as either bundled libraries, or easily available.
If I were using Python I would use the ConfigParser
http://docs.python.org/library/configparser.html#module-ConfigParser
to store the settings for each project, ftplib:
http://docs.python.org/library/ftplib.html
to talk to the ftp server, and one of the many database libraries. For example, assuming that you are using postgres:
http://www.pygresql.org/
Finally for command line options I would use the excellent option parser module that comes with Python:
http://docs.python.org/library/optparse.html#module-optparse
From a code standpoint I would have the following objects:
# Reads in a config file, decides which feed to use, and passes
# the commands in to one of the classes below for import and export
class FeedManager
# Get data from db into a canonical format
class DbImport
# Put data into db from a canonical format
class DbExport
# Get data from ftp into a canonical format
class FtpImport
# Put data into ftp from canonical format
class FtpExport
each class translates to/from a canonical format that can be handed to one of the other complementary classes.
The config file might look like this:
[GetVitalStats]
SourceUrl=ftp.myhost.com
SourceType=FTP
Destination=Host=mydbserver; Database=somedb
SourceType=Postgres
And finally, you would call it like this:
process_feed.py --feed=GetVitalStats
A:
Most modern scripting languages allow you to do all of these things. Because of that, I think your choice of language should be based on what you and the people who read your code know.
In Perl I'd make use of the following modules:
Net::FTP to access the ftp sites.
DBI to insert data into your database.
Modules like that are nice reusable pieces of code that you don't have to write, and interaction with ftp sites and databases are so common that every modern scripting language should have similar modules.
I don't think that PHP is a great language so I'd avoid it if possible, but it might make sense for you if you have a lot of experience in it.
A:
Python.
1st. What format are these FTP'd files? I'll assume they're CSV.
2nd. How do you know when to run the FTP get? Fixed schedule? Event? I'll assume it's a fixed schedule. You'll use cron to control this.
You have three issues: FTP get, data extract, DB load.
ftp_get_load.py
import ftplib
import csv
import someDatabaseAPI as sql
class GetFile( object ):
... general case solution using ftplib ...
class ExtractData( object ):
... general case solution using csv ...
class LoadDB( object ):
... general case solution using sql ...
some_load.py
import ftp_get_load
class UniqueExtractor( ftp_get_load.ExtractData ):
... overrides ...
get = ftp_get_load.GetFile( url, filename, etc. )
extract = UniqueExtractor( filenamein, filenameout, etc. )
load = ftp_get_load.LoadDB( filename, etc. )
if __name__ == "__main__":
get.execute()
extract.execute()
load.execute()
|
Get remote text file, process, and update database - approach and scripting language to use?
|
I've been having to do some basic feed processing. So, get a file via ftp, process it (i.e. get the fields I care about), and then update the local database. And similarly the other direction: get data from db, create file, and upload by ftp. The scripts will be called by cron.
I think the idea would be for each type of feed, define the ftp connection/file information. Then there should be a translation of how data fields in the file relate to data fields that the application can work with (and of course process this translation). Additionally write separate scripts that do the common inserting functions for the different objects that may be used in different feeds.
As an e-commerce example, let's say I work with different suppliers who provide feeds to me. The feeds can be different (object) types: product, category, or order information. For each type of feed I obviously work with different fields and call different update or insert scripts.
What is the best language to implement this in? I can work with PHP but am looking for a project to start learning Perl or Python so this could be good for me as well.
If Perl or Python, can you briefly give high level implementation. So how to separate the different scripts, object oriented approach?, how to make it easy to implement new feeds or processing functions in the future, etc.
[full disclosure: There were already classes written in PHP which I used to create a new feed recently. I already did my job, but it was super messy and difficult to do. So this question is not 'Please help me do my job' but rather a 'best approach' type of question for my own development.]
Thanks!
|
[
"Kind of depends on the format of the files you're ftp'ing. If it's a crazy proprietary format, you might be stuck with whatever language already has a library managing it. If it's CSV or XML, then any language might do.\n\nFTP: Net::FTP\nParse: Text::CSV_XS (for CSV or tab-separated) or XML::Twig (for XML)\nInsert: DBI with your appropriate db driver, though there are higher level wrappers, too, such as DBIx::Class.\n\nJust as examples. It seems pretty straight-forward, but I do perl nearly every day ;-)\n",
"\"Best\" language is pretty subjective. Python is generally considered to be easy to learn and easy to read, whereas Perl is often jokingly referred to as a \"write-only\" language. On the other hand, Perl is use extensively for network management. Python tends to be used more for system management or programming in the large. Both have areas of excellence, and areas where they don't work as well.\nEither language will allow you to solve your problem fairly easily. They both have all the necessary modules as either bundled libraries, or easily available.\nIf I were using Python I would use the ConfigParser\nhttp://docs.python.org/library/configparser.html#module-ConfigParser\nto store the settings for each project, ftplib:\nhttp://docs.python.org/library/ftplib.html\nto talk to the ftp server, and one of the many database libraries. For example, assuming that you are using postgres:\nhttp://www.pygresql.org/\nFinally for command line options I would use the excellent option parser module that comes with Python:\nhttp://docs.python.org/library/optparse.html#module-optparse\nFrom a code standpoint I would have the following objects:\n# Reads in a config file, decides which feed to use, and passes\n# the commands in to one of the classes below for import and export\nclass FeedManager\n\n# Get data from db into a canonical format\nclass DbImport\n\n# Put data into db from a canonical format\nclass DbExport\n\n# Get data from ftp into a canonical format\nclass FtpImport\n\n# Put data into ftp from canonical format\nclass FtpExport\n\neach class translates to/from a canonical format that can be handed to one of the other complementary classes.\nThe config file might look like this:\n[GetVitalStats]\nSourceUrl=ftp.myhost.com\nSourceType=FTP\n\nDestination=Host=mydbserver; Database=somedb\nSourceType=Postgres\n\nAnd finally, you would call it like this:\nprocess_feed.py --feed=GetVitalStats\n\n",
"Most modern languages scripting languages allow you to do all of these things. Because of that, I think your choice of language should be based on what you and the people who read your code know. \nIn Perl I'd make use of the following modules:\nNet::FTP to access the ftp sites.\nDBI to insert data into your database. \nModules like that are nice reusable pieces of code that you don't have to write, and interaction with ftp sites and databases are so common that every modern scripting language should have similar modules. \nI don't think that PHP is a great language so I'd avoid it if possible, but it might make sense for you if you have a lot of experience in it.\n",
"Python.\n1st. What format are these FTP'd files? I'll assume they're CSV.\n2nd. How do you know when to run the FTP get? Fixed schedule? Event? I'll assume it's a fixed schedule. You'll use cron to control this.\nYou have three issues: FTP get, data extract, DB load. \nftp_get_load.py\nimport ftplib\nimport csv\nimport someDatabaseAPI as sql\n\nclass GetFile( object ):\n ... general case solution using ftplib ...\n\nclass ExtractData( object ):\n ... general case solution using csv ...\n\nclass LoadDB( object ):\n ... general case solution using sql ...\n\nsome_load.py\nimport ftp_get_load\n\nclass UniqueExtractor( ftp_get_load.ExtractData ):\n ... overrides ...\n\nget = GetFile( url, filename, etc. )\nextract = UniqueExtractor( filenamein, filenameout, etc. )\nload = LoadDB( filename, etc. )\n\nif __name__ == \"__main__\":\n get.execute()\n extract.execute()\n load.execute()\n\n"
] |
[
3,
2,
1,
1
] |
[] |
[] |
[
"feed",
"parsing",
"perl",
"php",
"python"
] |
stackoverflow_0001050089_feed_parsing_perl_php_python.txt
|
Q:
Problem with SQLite executemany
I can't find my error in the following code. When it is run, a TypeError is raised for the line cur.executemany(sql % itr.next()): 'function takes exactly 2 arguments (1 given)'.
import sqlite3
con = sqlite3.connect('test.sqlite')
cur = con.cursor()
cur.execute("create table IF NOT EXISTS fred (dat)")
def newSave(className, fields, objData):
sets = []
itr = iter(objData)
if len(fields) == 1:
sets.append( ':' + fields[0])
else:
for name in fields:
sets.append( ':' + name)
if len(sets)== 1:
colNames = sets[0]
else:
colNames = ', '.join(sets)
sql = " '''insert into %s (%s) values(%%s)'''," % (className, colNames)
print itr.next()
cur.executemany(sql % itr.next())
con.commit()
if __name__=='__main__':
newSave('fred', ['dat'], [{'dat':1}, {'dat':2}, { 'dat':3}, {'dat':4}])
I would appreciate your thoughts.
A:
Like it says, executemany takes two arguments. Instead of interpolating the string values yourself with the %, you should pass both the sql and the values and let the db adapter quote them.
sql = " '''insert into %s (%s) values(%%s)'''," % (className, colNames)
cur.executemany(sql, itr.next())
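For context, a minimal runnable sketch of the two-argument form with named placeholders (reusing the question's table; this is an illustration, not the poster's exact code):
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute("create table fred (dat)")
# executemany takes the SQL plus a sequence of parameter sets;
# the adapter binds :dat from each dict, so no %-interpolation of values
cur.executemany("insert into fred (dat) values (:dat)",
                [{'dat': 1}, {'dat': 2}, {'dat': 3}, {'dat': 4}])
con.commit()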
A:
See the sqlite3 documentation. As you'll see, the Cursor.executemany method expects two parameters. Perhaps you mistook it for the Connection.executemany method which only takes one?
A:
Thank you all for your answers. After pushing and poking for several days and using your guidance the following works. I'm guilty of overthinking my problem. Didn't need an iter() conversion. The objData variable is a list and already an iterable! This was one of the reasons the code didn't work.
import sqlite3
con = sqlite3.connect('test.sqlite')
cur = con.cursor()
cur.execute("create table IF NOT EXISTS fred (dat, tad)")
def newSave(className, fields, objData):
colSets = []
valSets = []
    if len(fields) == 1:
colSets.append( fields[0])
valSets.append(':' + fields[0])
else:
for name in fields:
colSets.append( name)
valSets.append(':' + name)
if len(colSets)== 1:
colNames = colSets[0]
vals = valSets[0]
else:
colNames = ', '.join(colSets)
vals = ', '.join(valSets)
sql = "insert into %s (%s) values(%s)" % (className, colNames, vals)
    cur.executemany(sql, objData)
con.commit()
if __name__=='__main__':
newSave('fred', ['dat', 'tad'], [{'dat': 100, 'tad' : 42}, {'dat': 200 , 'tad' : 43}, {'dat': 3 , 'tad' : 44}, {'dat': 4 , 'tad' : 45} ])
A:
Perhaps you meant:
cur.executemany(sql, itr)
also note that the print statement consumes one item from the iterator.
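A tiny illustration of that pitfall (reusing cur from the question's setup; the values are made up):
it = iter([{'dat': 1}, {'dat': 2}])
print it.next()  # consumes {'dat': 1}
cur.executemany("insert into fred (dat) values (:dat)", it)  # only {'dat': 2} is inserted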
|
Problem with SQLite executemany
|
I can't find my error in the following code. When it is run, a TypeError is raised for the line cur.executemany(sql % itr.next()): 'function takes exactly 2 arguments (1 given)'.
import sqlite3
con = sqlite3.connect('test.sqlite')
cur = con.cursor()
cur.execute("create table IF NOT EXISTS fred (dat)")
def newSave(className, fields, objData):
sets = []
itr = iter(objData)
if len(fields) == 1:
sets.append( ':' + fields[0])
else:
for name in fields:
sets.append( ':' + name)
if len(sets)== 1:
colNames = sets[0]
else:
colNames = ', '.join(sets)
sql = " '''insert into %s (%s) values(%%s)'''," % (className, colNames)
print itr.next()
cur.executemany(sql % itr.next())
con.commit()
if __name__=='__main__':
newSave('fred', ['dat'], [{'dat':1}, {'dat':2}, { 'dat':3}, {'dat':4}])
I would appreciate your thoughts.
|
[
"Like it says, executemany takes two arguments. Instead of interpolating the string values yourself with the %, you should pass both the sql and the values and let the db adapter quote them.\nsql = \" '''insert into %s (%s) values(%%s)''',\" % (className, colNames)\ncur.executemany(sql, itr.next())\n\n",
"See the sqlite3 documentation. As you'll see, the Cursor.executemany method expects two parameters. Perhaps you mistook it for the Connection.executemany method which only takes one?\n",
"Thank you all for your answers. After pushing and poking for several days and using your guidance the following works. I'm guilty of overthinking my problem. Didn't need an iter() conversion. The objData variable is a list and already an iterable! This was one of the reasons the code didn't work.\nimport sqlite3\ncon = sqlite3.connect('test.sqlite')\ncur = con.cursor()\ncur.execute(\"create table IF NOT EXISTS fred (dat, tad)\")\n\ndef newSave(className, fields, objData):\n colSets = []\n valSets = []\n If len(fields) == 1:\n colSets.append( fields[0])\n valSets.append(':' + fields[0])\n else:\n for name in fields:\n colSets.append( name)\n valSets.append(':' + name)\n if len(colSets)== 1:\n colNames = colSets[0]\n vals = valSets[0]\n else:\n colNames = ', '.join(colSets)\n vals = ', '.join(valSets)\n sql = \"insert into %s (%s) values(%s)\" % (className, colNames, vals)\n cur.executemany(sql , objDat)\n con.commit()\n\nif __name__=='__main__':\n newSave('fred', ['dat', 'tad'], [{'dat': 100, 'tad' : 42}, {'dat': 200 , 'tad' : 43}, {'dat': 3 , 'tad' : 44}, {'dat': 4 , 'tad' : 45} ])\n\n",
"Perhaps you meant:\n\ncur.executemany(sql, itr)\n\nalso note that the print statement consumes one item from the iterator.\n"
] |
[
3,
2,
2,
0
] |
[] |
[] |
[
"pysqlite",
"python"
] |
stackoverflow_0001030941_pysqlite_python.txt
|
Q:
Parsing an unknown data structure in python
I have a file containing lots of data put in a form similar to this:
Group1 {
Entry1 {
Title1 [{Data1:Member1, Data2:Member2}]
Title2 [{Data3:Member3, Data4:Member4}]
}
Entry2 {
...
}
}
Group2 {
DifferentEntry1 {
DiffTitle1 {
...
}
}
}
Thing is, I don't know how many layers of parentheses there are, or how the data is structured. I need to modify the data, and delete entire 'Entry's depending on conditions involving data members, before writing everything to a new file. What's the best way of reading in a file like this? Thanks!
A:
The data structure basically seems to be a dict where the keys are strings and the value is either a string or another dict of the same type, so I'd recommend maybe pulling it into that sort of python structure,
eg:
{'group1': {'Entry2': {}, 'Entry1': {'Title1':{'Data4': 'Member4',
'Data1': 'Member1','Data3': 'Member3', 'Data2': 'Member2'},
'Title2': {}}}}
At the top level of the file you would create a blank dict, and then for each line you read, you use the identifier as a key, and then when you see a { you create the value for that key as a dict. When you see Key:Value, then instead of creating that key as a dict, you just insert the value normally. When you see a } you have to 'go back up' to the previous dict you were working on and go back to filling that in.
I'd think this whole parser to put the file into a python structure like this could be done in one fairly short recursive function that just called itself to fill in each sub-dict when it saw a { and then returned to its caller upon seeing }
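To illustrate, a minimal sketch of such a recursive parser; it treats bracketed lines like Title1 [{...}] as raw string values, which a real version would parse further:
import re

def parse_block(lines, i=0):
    # Minimal sketch: 'Name {' opens a nested dict, '}' closes it,
    # and anything else becomes a key -> raw-string value.
    block = {}
    while i < len(lines):
        line = lines[i].strip()
        i += 1
        if not line:
            continue
        if line == '}':
            break
        m = re.match(r'(\S+)\s*\{$', line)
        if m:
            block[m.group(1)], i = parse_block(lines, i)
        else:
            name, _, rest = line.partition(' ')
            block[name] = rest
    return block, i

data, _ = parse_block(open('datafile.txt').read().splitlines())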
A:
Here is a grammar.
dict_content : NAME ':' NAME [ ',' dict_content ]?
| NAME '{' [ dict_content ]? '}' [ dict_content ]?
| NAME '[' [ list_content ]? ']' [ dict_content ]?
;
list_content : NAME [ ',' list_content ]?
| '{' [ dict_content ]? '}' [ ',' list_content ]?
| '[' [ list_content ]? ']' [ ',' list_content ]?
;
Top level is dict_content.
I'm a little unsure about the comma after dicts and lists embedded in a list, as you didn't provide any example of that.
A:
If you have the grammar for the structure of your data file, or you can create it yourself, you could use a parser generator for Python, like YAPPS.
A:
That depends on how the data is structured, and what kind of changes you need to make.
One option might be to parse that into a Python data structure; it looks similar, except that you don't have quotes around the strings. That makes complex manipulation easy.
On the other hand, if all you need to do is make changes that modify some entries to other entries, you can do it with search and replace.
So you need to understand the issue better before you can know what the best way is.
A:
This is a pretty similar problem to XML processing, and there's a lot of Python code to do that. So if you could somehow convert the file to XML, you could just run it through a parser from the standard library. An XML version of your example would be something like this:
<group id="Group1">
<entry id="Entry1">
<title id="Title1"><data id="Data1">Member1</data> <data id="Data2">Member2</data></title>
<title id="Title2"><data id="Data3">Member3</data> <data id="Data4">Member4</data></title>
</entry>
<entry id="Entry2">
...
</entry>
</group>
Of course, converting to XML probably isn't the most straightforward thing to do. But your job is pretty similar to what's already been done with the XML parsers, you just have a different syntax to deal with. So you could take a look at some XML parsing code and write a little Python parser for your data file based on that. (Depending on how the XML parser is implemented, you might even be able to copy the code, just change a few regular expressions, and run it for your file)
A:
I have something similar but written in Java. It parses a file with the same basic structure but slightly different syntax (no '{' and '}', only indentation like in Python). It is a very simple script language.
Basically it works like this: it uses a stack to keep track of the innermost block of instructions (or in your case data) and appends every new instruction to the block on the top. If it parses an instruction which expects a new block, it is pushed to the stack. If a block ends, it pops one element from the stack.
I do not want to post the entire source because it is big and it is available on Google Code (lizzard-entertainment, revision 405). There are a few things you need to know.
Instruction is an abstract class and it has a block_expected method to indicate whether the concrete instruction needs a block (like loops, etc.). In your case this is unnecessary; you only need to check for '{'.
Block extends Instruction. It contains a list of instructions and has an add method to add more.
indent_level returns how many spaces precede the instruction text. This is also unnecessary with '{}' signs.
BufferedReader input = null;
try {
input = new BufferedReader(new FileReader(inputFileName));
// Stack of instruction blocks
Stack<Block> stack = new Stack<Block>();
// Push the root block
stack.push(this.topLevelBlock);
String line = null;
Instruction prev = new Noop();
while ((line = input.readLine()) != null) {
// Difference between the indentation of the previous and this line
// You do not need this you will be using {} to specify block boundaries
int level = indent_level(line) - stack.size();
// Parse the line (returns an instruction object)
Instruction inst = Instruction.parse(line.trim().split(" +"));
// If the previous instruction expects a block (for example repeat)
if (prev.block_expected()) {
if (level != 1) {
// TODO handle error
continue;
}
// Push the previous instruction and add the current instruction
stack.push((Block)(prev));
stack.peek().add(inst);
} else {
if (level > 0) {
// TODO handle error
continue;
} else if (level < 0) {
// Pop the stack at the end of blocks
for (int i = 0; i < -level; ++i)
stack.pop();
}
stack.peek().add(inst);
}
prev = inst;
}
} finally {
if (input != null)
input.close();
}
|
Parsing an unknown data structure in python
|
I have a file containing lots of data put in a form similar to this:
Group1 {
Entry1 {
Title1 [{Data1:Member1, Data2:Member2}]
Title2 [{Data3:Member3, Data4:Member4}]
}
Entry2 {
...
}
}
Group2 {
DifferentEntry1 {
DiffTitle1 {
...
}
}
}
Thing is, I don't know how many layers of parentheses there are, or how the data is structured. I need to modify the data, and delete entire 'Entry's depending on conditions involving data members, before writing everything to a new file. What's the best way of reading in a file like this? Thanks!
|
[
"The data structure basically seems to be a dict where they keys are strings and the value is either a string or another dict of the same type, so I'd recommend maybe pulling it into that sort of python structure,\neg:\n{'group1': {'Entry2': {}, 'Entry1': {'Title1':{'Data4': 'Member4',\n'Data1': 'Member1','Data3': 'Member3', 'Data2': 'Member2'}, \n'Title2': {}}}\n\nAt the top level of the file you would create a blank dict, and then for each line you read, you use the identifier as a key, and then when you see a { you create the value for that key as a dict. When you see Key:Value, then instead of creating that key as a dict, you just insert the value normally. When you see a } you have to 'go back up' to the previous dict you were working on and go back to filling that in.\nI'd think this whole parser to put the file into a python structure like this could be done in one fairly short recursive function that just called itself to fill in each sub-dict when it saw a { and then returned to its caller upon seeing }\n",
"Here is a grammar.\ndict_content : NAME ':' NAME [ ',' dict_content ]?\n | NAME '{' [ dict_content ]? '}' [ dict_content ]?\n | NAME '[' [ list_content ]? ']' [ dict_content ]?\n ;\n\nlist_content : NAME [ ',' list_content ]?\n | '{' [ dict_content ]? '}' [ ',' list_content ]?\n | '[' [ list_content ]? ']' [ ',' list_content ]?\n ;\n\nTop level is dict_content.\nI'm a little unsure about the comma after dicts and lists embedded in a list, as you didn't provide any example of that.\n",
"If you have the grammar for the structure of your data file, or you can create it yourself, you could use a parser generator for Python, like YAPPS: link text.\n",
"That depends on how the data is structured, and what kind of changes you need to do.\nOne option might be to parse that into a Python data structure, it seems similar, except that you don't have quotes around the strings. That makes complex manipulation easy.\nOn the other hand, if all you need to do is make changes that modify some entries to other entries, you can do it with search and replace. \nSo you need to understand the issue better before you can know what the best way is.\n",
"This is a pretty similar problem to XML processing, and there's a lot of Python code to do that. So if you could somehow convert the file to XML, you could just run it through a parser from the standard library. An XML version of your example would be something like this:\n<group id=\"Group1\"> \n <entry id=\"Entry1\">\n <title id=\"Title1\"><data id=\"Data1\">Member1</data> <data id=\"Data2\">Member2</data></title>\n <title id=\"Title2\"><data id=\"Data3\">Member3</data> <data id=\"Data4\">Member4</data></title>\n </entry> \n <entry id=\"Entry2\"> \n ...\n </entry>\n</group>\n\nOf course, converting to XML probably isn't the most straightforward thing to do. But your job is pretty similar to what's already been done with the XML parsers, you just have a different syntax to deal with. So you could take a look at some XML parsing code and write a little Python parser for your data file based on that. (Depending on how the XML parser is implemented, you might even be able to copy the code, just change a few regular expressions, and run it for your file)\n",
"I have something similar but written in java. It parses a file with the same basic structure with a little different syntax (no '{' and '}' only indentation like in python). It is a very simple script language.\nBasically it works like this: It uses a stack to keep track of the inner most block of instructions (or in your case data) and appends every new instruction to the block on the top. If it parses an instruction which expects a new block it is pushed to the stack. If a block ends it pops one element from the stack.\nI do not want to post the entire source because it is big and it is available on google code (lizzard-entertainment, revision 405). There is a few things you need to know.\n\nInstruction is an abstract class and it has a block_expected method to indicate wether the concrete instruction needs a block (like loops, etc) In your case this is unnecessary you only need to check for '{'.\nBlock extends Instruction. It contains a list of instructions and has an add method to add more.\nindent_level return how many spaces are preceding the instruction text. This is also unneccessary with '{}' singns.\n\nplaceholder\nBufferedReader input = null;\ntry {\n input = new BufferedReader(new FileReader(inputFileName));\n // Stack of instruction blocks\n Stack<Block> stack = new Stack<Block>();\n // Push the root block\n stack.push(this.topLevelBlock);\n String line = null;\n Instruction prev = new Noop();\n while ((line = input.readLine()) != null) {\n // Difference between the indentation of the previous and this line\n // You do not need this you will be using {} to specify block boundaries\n int level = indent_level(line) - stack.size();\n // Parse the line (returns an instruction object)\n Instruction inst = Instruction.parse(line.trim().split(\" +\"));\n // If the previous instruction expects a block (for example repeat)\n if (prev.block_expected()) {\n if (level != 1) {\n // TODO handle error\n continue;\n }\n // Push the previous instruction and add the current instruction\n stack.push((Block)(prev));\n stack.peek().add(inst);\n } else {\n if (level > 0) {\n // TODO handle error\n continue;\n } else if (level < 0) {\n // Pop the stack at the end of blocks\n for (int i = 0; i < -level; ++i)\n stack.pop();\n }\n stack.peek().add(inst);\n }\n prev = inst;\n }\n} finally {\n if (input != null)\n input.close();\n}\n\n"
] |
[
3,
3,
2,
1,
1,
1
] |
[] |
[] |
[
"data_structures",
"parsing",
"python"
] |
stackoverflow_0001050773_data_structures_parsing_python.txt
|
Q:
Including a dynamic image in a web page using POST?
I have written a CGI script that creates an image dynamically using GET data. To include this image in my webpage, I am using the following code:
<img src="image.py?text=xxxxxxxxxxxxxx">
The problem is that I expect in the future the "text" field will get very long and the URL will become too large. From Googling around there doesn't seem to be a fixed limit on URL length (ie. depends on the browser, server, proxy, etc.) Is there a better way to do this?
If it matters, I am working with Django and Python and I cannot use any client-side scripting (ie. JavaScript).
Cheers,
Ben
A:
Store the text somewhere (e.g. a database) and then pass through the primary key.
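A rough sketch of that idea with a hypothetical Django model and view (render_png stands in for the existing image.py logic; all names here are illustrative):
# models.py (hypothetical)
from django.db import models

class ImageText(models.Model):
    text = models.TextField()

# views.py (hypothetical); the template then uses
# <img src="/image/{{ obj.pk }}/"> so only the short pk travels in the URL
from django.http import HttpResponse
from django.shortcuts import get_object_or_404

def image(request, pk):
    obj = get_object_or_404(ImageText, pk=pk)
    png = render_png(obj.text)  # render_png: your existing image-generation code (hypothetical)
    return HttpResponse(png, mimetype='image/png')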
A:
This will get you an Image as the result of a POST -- you may not like it
Put an iFrame where you want the image and size it and remove scrollbars
Set the src to a form with hidden inputs set to your post parameters and the action set to the URL that will generate the image
submit the form automatically with JavaScript in the body.onload of the iFrame's HTML
Then, either:
Serve back an content-type set to an image and stream the image bytes
or:
store the post parameters somewhere and generate a small id
serve back HTML with an img tag using the id in the url -- on the server look up the post parameters
or:
generate a page with an image tag with an embedded image
http://danielmclaren.net/2008/03/embedding-base64-image-data-into-a-webpage
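For the embedded-image variant, a small sketch of building a data: URI in Python (assuming png_bytes already holds the generated image bytes):
import base64

def data_uri_img(png_bytes):
    # Inline the image in the page itself: no second request,
    # so URL length stops being an issue (IE7 and older excepted)
    return '<img src="data:image/png;base64,%s">' % base64.b64encode(png_bytes)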
A:
Putting together what has already been said, how about creating two pages. First page sends a POST request when the form is submitted (lets say to create_img.py) with a text=xxxxxxx... parameter. Then create_img.py takes the text parameter and creates an image with it and inserts it (or a filesystem reference) into the db, then when rendering the second page, generate img tags like <img src="render_img.py?row_id=0122">. At this point, render_img.py simply queries the db for the given image. Before creating the image you can check to see if its already in the database therefore reusing/recycling previous images with the same text parameter.
A:
img elements use GET. You'll have to come up with another mechanism. How about calling the same functionality in image.py and saving the file as a temp file which you reference in the img tag? Or how about saving the value of text in a db row during the rendering of this img tag and using the row_id as what you pass into the image.py script?
A:
You may be able to mitigate the problem by compressing the text in the GET parameter.
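For example, zlib plus URL-safe base64 can shrink the parameter noticeably; a sketch (how much it helps depends on how repetitive the text is):
import zlib, base64

def pack(text):
    # compress, then encode with the URL-safe base64 alphabet
    return base64.urlsafe_b64encode(zlib.compress(text))

def unpack(blob):
    return zlib.decompress(base64.urlsafe_b64decode(blob))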
A:
From the link below it looks like you'll be fine for a while ;)
http://www.boutell.com/newfaq/misc/urllength.html
A:
If you're using django, maybe you can do this via a template tag instead?
Something like:
<img src="{% create_image "This is the text that will be displayed" %}">
The create_image function would create the image with a dummy/random/generated filename, and return the path.
This avoids having to GET or POST to the script, and the images will have manageable filenames.
I can see some potential issues with this approach, I'm just tossing the idea out there ;)
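A rough sketch of what that tag might look like (make_image is a hypothetical helper, to be wired to the existing image-generation code):
# templatetags/image_tags.py (hypothetical module name)
from django import template

register = template.Library()

def make_image(text):
    # hypothetical: render `text` to a file under MEDIA_ROOT
    # and return the public URL of the generated file
    raise NotImplementedError

@register.simple_tag
def create_image(text):
    return make_image(text)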
A:
OK, I'm a bit late to the party, but you could use a mix of MHTML (for IE7 and below) and the data URI scheme (for all other modern browsers). It does require a bit of work on both client and server but you can ultimately end up with
newimg.src = 'blah';
The write-up on how to do this is at http://gingerbbm.com/?p=127.
|
Including a dynamic image in a web page using POST?
|
I have written a CGI script that creates an image dynamically using GET data. To include this image in my webpage, I am using the following code:
<img src="image.py?text=xxxxxxxxxxxxxx">
The problem is that I expect in the future the "text" field will get very long and the URL will become too large. From Googling around there doesn't seem to be a fixed limit on URL length (ie. depends on the browser, server, proxy, etc.) Is there a better way to do this?
If it matters, I am working with Django and Python and I cannot use any client-side scripting (ie. JavaScript).
Cheers,
Ben
|
[
"Store the text somewhere (e.g. a database) and then pass through the primary key.\n",
"This will get you an Image as the result of a POST -- you may not like it\n\nPut an iFrame where you want the image and size it and remove scrollbars\nSet the src to a form with hidden inputs set to your post parameters and the action set to the URL that will generate the image\nsubmit the form automatically with JavaScript in the body.onload of the iFrame's HTML\nThen, either:\nServe back an content-type set to an image and stream the image bytes\nor:\nstore the post parameters somewhere and generate a small id\nserve back HTML with an img tag using the id in the url -- on the server look up the post parameters\nor:\ngenerate a page with an image tag with an embedded image\nhttp://danielmclaren.net/2008/03/embedding-base64-image-data-into-a-webpage\n\n",
"Putting together what has already been said, how about creating two pages. First page sends a POST request when the form is submitted (lets say to create_img.py) with a text=xxxxxxx... parameter. Then create_img.py takes the text parameter and creates an image with it and inserts it (or a filesystem reference) into the db, then when rendering the second page, generate img tags like <img src=\"render_img.py?row_id=0122\">. At this point, render_img.py simply queries the db for the given image. Before creating the image you can check to see if its already in the database therefore reusing/recycling previous images with the same text parameter.\n",
"img's use GET. You'll have to come up with another mechanism. How about calling the same functionality in image.py and saving the file as a temp file which you ref in the img tag? Or how about saving the value of text in a db row during the rendering of this img tag and using the row_id as what you pass into the image.py script?\n",
"You may be able to mitigate the problem by compressing the text in the get parameter.\n",
"From the link below it looks like you'll be fine for a while ;)\nhttp://www.boutell.com/newfaq/misc/urllength.html\n",
"If you're using django, maybe you can do this via a template tag instead?\nSomething like:\n<img src=\"{% create_image \"This is the text that will be displayed\" %}\">\n\nThe create_image function would create the image with a dummy/random/generated filename, and return the path.\nThis avoids having to GET or POST to the script, and the images will have manageable filenames.\nI can see some potential issues with this approach, I'm just tossing the idea out there ;)\n",
"OK, I'm a bit late to the party, but you could use a mix of MHTML (for IE7 and below) and the data URI scheme (for all other modern browsers). It does require a bit of work on both client and server but you can ultimately end up with\nnewimg.src = 'blah';\n\nThe write-up on how to do this is at http://gingerbbm.com/?p=127.\n"
] |
[
5,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"cgi",
"django",
"html",
"image",
"python"
] |
stackoverflow_0000243375_cgi_django_html_image_python.txt
|
Q:
Anyone successfully adopted JaikuEngine?
Are there real world adaptations of JaikuEngine on Google App Engine?
(Question from my boss, who wants to use it instead of writing our own system)
A:
This is jaikuengine running on AppEngine - https://jaiku.appspot.com/
If you wish to have your own version of Jaiku, it's pretty straightforward. Check this out: http://code.google.com/p/jaikuengine/
A:
Whilst I love the App-Engine. Does your solution need to be hosted on the AppEngine? If not I would check out identi.ca which is based on the Open Source Laconi.ca
A:
Does http://www.jaiku.com/ count?
|
Anyone successfully adopted JaikuEngine?
|
Are there real world adaptations of JaikuEngine on Google App Engine?
(Question from my boss, who wants to use it instead of writing our own system)
|
[
"This is jaikuengine running on AppEngine - https://jaiku.appspot.com/\nIf you wish to have your own version of jaiku, its pretty straightforward. Check this out- http://code.google.com/p/jaikuengine/\n",
"Whilst I love the App-Engine. Does your solution need to be hosted on the AppEngine? If not I would check out identi.ca which is based on the Open Source Laconi.ca\n",
"Does http://www.jaiku.com/ count?\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000852268_google_app_engine_python.txt
|
Q:
How can I write my own aggregate functions with sqlalchemy?
How can I write my own aggregate functions with SQLAlchemy? As an easy example I would like to use numpy to calculate the variance. With sqlite it would look like this:
import sqlite3 as sqlite
import numpy as np
class self_written_SQLvar(object):
def __init__(self):
import numpy as np
self.values = []
def step(self, value):
self.values.append(value)
def finalize(self):
return np.array(self.values).var()
cxn = sqlite.connect(':memory:')
cur = cxn.cursor()
cxn.create_aggregate("self_written_SQLvar", 1, self_written_SQLvar)
# Now - how to use it:
cur.execute("CREATE TABLE 'mytable' ('numbers' INTEGER)")
cur.execute("INSERT INTO 'mytable' VALUES (1)")
cur.execute("INSERT INTO 'mytable' VALUES (2)")
cur.execute("INSERT INTO 'mytable' VALUES (3)")
cur.execute("INSERT INTO 'mytable' VALUES (4)")
a = cur.execute("SELECT avg(numbers), self_written_SQLvar(numbers) FROM mytable")
print a.fetchall()
>>> [(2.5, 1.25)]
A:
The creation of new aggregate functions is backend-dependent, and must be done
directly with the API of the underlying connection. SQLAlchemy offers no
facility for creating those.
However, once created, you can just use them in SQLAlchemy normally.
Example:
import sqlalchemy
from sqlalchemy import Column, Table, create_engine, MetaData, Integer
from sqlalchemy import func, select
from sqlalchemy.pool import StaticPool
from random import randrange
import numpy
import sqlite3
class NumpyVarAggregate(object):
def __init__(self):
self.values = []
def step(self, value):
self.values.append(value)
def finalize(self):
return numpy.array(self.values).var()
def sqlite_memory_engine_creator():
con = sqlite3.connect(':memory:')
con.create_aggregate("np_var", 1, NumpyVarAggregate)
return con
e = create_engine('sqlite://', echo=True, poolclass=StaticPool,
creator=sqlite_memory_engine_creator)
m = MetaData(bind=e)
t = Table('mytable', m,
Column('id', Integer, primary_key=True),
Column('number', Integer)
)
m.create_all()
Now for the testing:
# insert 30 random-valued rows
t.insert().execute([{'number': randrange(100)} for x in xrange(30)])
for row in select([func.avg(t.c.number), func.np_var(t.c.number)]).execute():
print 'RESULT ROW: ', row
That prints (with SQLAlchemy statement echo turned on):
2009-06-15 14:55:34,171 INFO sqlalchemy.engine.base.Engine.0x...d20c PRAGMA
table_info("mytable")
2009-06-15 14:55:34,174 INFO sqlalchemy.engine.base.Engine.0x...d20c ()
2009-06-15 14:55:34,175 INFO sqlalchemy.engine.base.Engine.0x...d20c
CREATE TABLE mytable (
id INTEGER NOT NULL,
number INTEGER,
PRIMARY KEY (id)
)
2009-06-15 14:55:34,175 INFO sqlalchemy.engine.base.Engine.0x...d20c ()
2009-06-15 14:55:34,176 INFO sqlalchemy.engine.base.Engine.0x...d20c COMMIT
2009-06-15 14:55:34,177 INFO sqlalchemy.engine.base.Engine.0x...d20c INSERT
INTO mytable (number) VALUES (?)
2009-06-15 14:55:34,177 INFO sqlalchemy.engine.base.Engine.0x...d20c [[98],
[94], [7], [1], [79], [77], [51], [28], [85], [26], [34], [68], [15], [43],
[52], [97], [64], [82], [11], [71], [27], [75], [60], [85], [42], [40],
[76], [12], [81], [69]]
2009-06-15 14:55:34,178 INFO sqlalchemy.engine.base.Engine.0x...d20c COMMIT
2009-06-15 14:55:34,180 INFO sqlalchemy.engine.base.Engine.0x...d20c SELECT
avg(mytable.number) AS avg_1, np_var(mytable.number) AS np_var_1 FROM mytable
2009-06-15 14:55:34,180 INFO sqlalchemy.engine.base.Engine.0x...d20c []
RESULT ROW: (55.0, 831.0)
Note that I didn't use SQLAlchemy's ORM (just the sql expression part of SQLAlchemy was used) but you could use ORM just as well.
|
How can I write my own aggregate functions with sqlalchemy?
|
How can I write my own aggregate functions with SQLAlchemy? As an easy example I would like to use numpy to calculate the variance. With sqlite it would look like this:
import sqlite3 as sqlite
import numpy as np
class self_written_SQLvar(object):
def __init__(self):
import numpy as np
self.values = []
def step(self, value):
self.values.append(value)
def finalize(self):
return np.array(self.values).var()
cxn = sqlite.connect(':memory:')
cur = cxn.cursor()
cxn.create_aggregate("self_written_SQLvar", 1, self_written_SQLvar)
# Now - how to use it:
cur.execute("CREATE TABLE 'mytable' ('numbers' INTEGER)")
cur.execute("INSERT INTO 'mytable' VALUES (1)")
cur.execute("INSERT INTO 'mytable' VALUES (2)")
cur.execute("INSERT INTO 'mytable' VALUES (3)")
cur.execute("INSERT INTO 'mytable' VALUES (4)")
a = cur.execute("SELECT avg(numbers), self_written_SQLvar(numbers) FROM mytable")
print a.fetchall()
>>> [(2.5, 1.25)]
|
[
"The creation of new aggregate functions is backend-dependant, and must be done \ndirectly with the API of the underlining connection. SQLAlchemy offers no \nfacility for creating those.\nHowever after created you can just use them in SQLAlchemy normally.\nExample:\nimport sqlalchemy\nfrom sqlalchemy import Column, Table, create_engine, MetaData, Integer\nfrom sqlalchemy import func, select\nfrom sqlalchemy.pool import StaticPool\nfrom random import randrange\nimport numpy\nimport sqlite3\n\nclass NumpyVarAggregate(object):\n def __init__(self):\n self.values = []\n def step(self, value):\n self.values.append(value)\n def finalize(self):\n return numpy.array(self.values).var()\n\ndef sqlite_memory_engine_creator():\n con = sqlite3.connect(':memory:')\n con.create_aggregate(\"np_var\", 1, NumpyVarAggregate)\n return con\n\ne = create_engine('sqlite://', echo=True, poolclass=StaticPool,\n creator=sqlite_memory_engine_creator)\nm = MetaData(bind=e)\nt = Table('mytable', m, \n Column('id', Integer, primary_key=True),\n Column('number', Integer)\n )\nm.create_all()\n\nNow for the testing:\n# insert 30 random-valued rows\nt.insert().execute([{'number': randrange(100)} for x in xrange(30)])\n\nfor row in select([func.avg(t.c.number), func.np_var(t.c.number)]).execute():\n print 'RESULT ROW: ', row\n\nThat prints (with SQLAlchemy statement echo turned on):\n2009-06-15 14:55:34,171 INFO sqlalchemy.engine.base.Engine.0x...d20c PRAGMA \ntable_info(\"mytable\")\n2009-06-15 14:55:34,174 INFO sqlalchemy.engine.base.Engine.0x...d20c ()\n2009-06-15 14:55:34,175 INFO sqlalchemy.engine.base.Engine.0x...d20c \nCREATE TABLE mytable (\n id INTEGER NOT NULL, \n number INTEGER, \n PRIMARY KEY (id)\n)\n2009-06-15 14:55:34,175 INFO sqlalchemy.engine.base.Engine.0x...d20c ()\n2009-06-15 14:55:34,176 INFO sqlalchemy.engine.base.Engine.0x...d20c COMMIT\n2009-06-15 14:55:34,177 INFO sqlalchemy.engine.base.Engine.0x...d20c INSERT\nINTO mytable (number) VALUES (?)\n2009-06-15 14:55:34,177 INFO sqlalchemy.engine.base.Engine.0x...d20c [[98], \n[94], [7], [1], [79], [77], [51], [28], [85], [26], [34], [68], [15], [43], \n[52], [97], [64], [82], [11], [71], [27], [75], [60], [85], [42], [40], \n[76], [12], [81], [69]]\n2009-06-15 14:55:34,178 INFO sqlalchemy.engine.base.Engine.0x...d20c COMMIT\n2009-06-15 14:55:34,180 INFO sqlalchemy.engine.base.Engine.0x...d20c SELECT\navg(mytable.number) AS avg_1, np_var(mytable.number) AS np_var_1 FROM mytable\n2009-06-15 14:55:34,180 INFO sqlalchemy.engine.base.Engine.0x...d20c []\nRESULT ROW: (55.0, 831.0)\n\nNote that I didn't use SQLAlchemy's ORM (just the sql expression part of SQLAlchemy was used) but you could use ORM just as well.\n"
] |
[
13
] |
[
"at first you have to import func from sqlalchemy\nyou can write \nfunc.avg('fieldname')\nor func.avg('fieldname').label('user_deined') \nor you can go thru for mre information \nhttp://www.sqlalchemy.org/docs/05/ormtutorial.html#using-subqueries\n"
] |
[
-1
] |
[
"aggregate_functions",
"python",
"sqlalchemy",
"sqlite"
] |
stackoverflow_0000996922_aggregate_functions_python_sqlalchemy_sqlite.txt
|
Q:
Create SQL query using SqlAlchemy select and join functions
I have two tables "tags" and "deal_tag", and table definition follows,
Table('tags', metadata,
Column('id', types.Integer(), Sequence('tag_uid_seq'),
primary_key=True),
Column('name', types.String()),
)
Table('deal_tag', metadata,
Column('dealid', types.Integer(), ForeignKey('deals.id')),
Column('tagid', types.Integer(), ForeignKey
('tags.id')),
)
I want to select tag id, tag name and deal count (number of deals per
tag). Sample query is
SELECT tags.Name,tags.id,COUNT(deal_tag.dealid) FROM tags INNER JOIN
deal_tag ON tags.id = deal_tag.tagid GROUP BY deal_tag.tagid;
How do I create the above query using SqlAlchemy select & join functions?
A:
Give this a try...
s = select([tags.c.name, tags.c.id, func.count(deal_tag.c.dealid)],
           tags.c.id == deal_tag.c.tagid).group_by(tags.c.name, tags.c.id)
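Equivalently, the join can be spelled out explicitly; an untested sketch against the tables defined above:
from sqlalchemy import select, func

j = tags.join(deal_tag, tags.c.id == deal_tag.c.tagid)
s = select([tags.c.name, tags.c.id, func.count(deal_tag.c.dealid)]) \
        .select_from(j) \
        .group_by(tags.c.name, tags.c.id)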
A:
You can also join tables at mapping time, in orm.mapper().
For more information you can go through the docs:
www.sqlalchemy.org/docs/
|
Create SQL query using SqlAlchemy select and join functions
|
I have two tables "tags" and "deal_tag", and table definition follows,
Table('tags', metadata,
Column('id', types.Integer(), Sequence('tag_uid_seq'),
primary_key=True),
Column('name', types.String()),
)
Table('deal_tag', metadata,
Column('dealid', types.Integer(), ForeignKey('deals.id')),
Column('tagid', types.Integer(), ForeignKey
('tags.id')),
)
I want to select tag id, tag name and deal count (number of deals per
tag). Sample query is
SELECT tags.Name,tags.id,COUNT(deal_tag.dealid) FROM tags INNER JOIN
deal_tag ON tags.id = deal_tag.tagid GROUP BY deal_tag.tagid;
How do I create the above query using SqlAlchemy select & join functions?
|
[
"Give this a try...\ns = select([tags.c.Name, tags.c.id, func.count(deal_tag.dealid)], \n tags.c.id == deal_tag.c.tagid).group_by(tags.c.Name, tags.c.id)\n\n",
"you can join table in the time of mapping table\nin the orm.mapper()\nfor more information you can go thru the link\nwww.sqlalchemy.org/docs/\n"
] |
[
2,
0
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0000950910_python_sqlalchemy.txt
|
Q:
How can I parse the output of /proc/net/dev into key:value pairs per interface using Python?
The output of /proc/net/dev on Linux looks like this:
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
lo:18748525 129811 0 0 0 0 0 0 18748525 129811 0 0 0 0 0 0
eth0:1699369069 226296437 0 0 0 0 0 3555 4118745424 194001149 0 0 0 0 0 0
eth1: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sit0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
How can I use Python to parse this output into key:value pairs for each interface? I have found this forum topic for achieving it using shell scripting and there is a Perl extension but I need to use Python.
A:
this is pretty formatted input and you can easily get columns and data list by splitting each line, and then create a dict of of it.
here is a simple script without regex
lines = open("/proc/net/dev", "r").readlines()
columnLine = lines[1]
_, receiveCols , transmitCols = columnLine.split("|")
receiveCols = map(lambda a:"recv_"+a, receiveCols.split())
transmitCols = map(lambda a:"trans_"+a, transmitCols.split())
cols = receiveCols+transmitCols
faces = {}
for line in lines[2:]:
if line.find(":") < 0: continue
face, data = line.split(":")
faceData = dict(zip(cols, data.split()))
faces[face] = faceData
import pprint
pprint.pprint(faces)
it outputs
{' lo': {'recv_bytes': '7056295',
'recv_compressed': '0',
'recv_drop': '0',
'recv_errs': '0',
'recv_fifo': '0',
'recv_frame': '0',
'recv_multicast': '0',
'recv_packets': '12148',
'trans_bytes': '7056295',
'trans_carrier': '0',
'trans_colls': '0',
'trans_compressed': '0',
'trans_drop': '0',
'trans_errs': '0',
'trans_fifo': '0',
'trans_packets': '12148'},
' eth0': {'recv_bytes': '34084530',
'recv_compressed': '0',
'recv_drop': '0',
'recv_errs': '0',
'recv_fifo': '0',
'recv_frame': '0',
'recv_multicast': '0',
'recv_packets': '30599',
'trans_bytes': '6170441',
'trans_carrier': '0',
'trans_colls': '0',
'trans_compressed': '0',
'trans_drop': '0',
'trans_errs': '0',
'trans_fifo': '0',
'trans_packets': '32377'}}
A:
Does this help?
dev = open("/proc/net/dev", "r").readlines()
header_line = dev[1]
header_names = header_line[header_line.index("|")+1:].replace("|", " ").split()
values={}
for line in dev[2:]:
intf = line[:line.index(":")].strip()
values[intf] = [int(value) for value in line[line.index(":")+1:].split()]
print intf,values[intf]
Output:
lo [803814, 16319, 0, 0, 0, 0, 0, 0, 803814, 16319, 0, 0, 0, 0, 0, 0]
eth0 [123605646, 102196, 0, 0, 0, 0, 0, 0, 9029534, 91901, 0, 0, 0, 0, 0, 0]
wmaster0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
eth1 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
vboxnet0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
pan0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
You could, of course, use the header names in header_names to construct a dict of dicts.
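Note that the receive and transmit halves repeat the same column names, so a dict built directly from header_names would silently drop the receive counters; one way around that, building on the variables above:
keys = ['rx_' + h if i < 8 else 'tx_' + h
        for i, h in enumerate(header_names)]
stats = dict((intf, dict(zip(keys, nums)))
             for intf, nums in values.items())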
A:
#!/usr/bin/env python
from __future__ import with_statement
import re
import pprint
ifaces = {}
with open('/proc/net/dev') as fd:
lines = map(lambda x: x.strip(), fd.readlines())
lines = lines[1:]
lines[0] = lines[0].replace('|', ':', 1)
lines[0] = lines[0].replace('|', ' ', 1)
lines[0] = lines[0].split(':')[1]
keys = re.split('\s+', lines[0])
keys = map(lambda x: 'rx' + x[1] if x[0] < 8 else 'tx' + x[1], enumerate(keys))
for line in lines[1:]:
interface, values = line.split(':')
values = re.split('\s+', values)
if values[0] == '':
values = values[1:]
values = map(int, values)
ifaces[interface] = dict(zip(keys, values))
pprint.pprint(ifaces)
|
How can I parse the output of /proc/net/dev into key:value pairs per interface using Python?
|
The output of /proc/net/dev on Linux looks like this:
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
lo:18748525 129811 0 0 0 0 0 0 18748525 129811 0 0 0 0 0 0
eth0:1699369069 226296437 0 0 0 0 0 3555 4118745424 194001149 0 0 0 0 0 0
eth1: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sit0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
How can I use Python to parse this output into key:value pairs for each interface? I have found this forum topic for achieving it using shell scripting and there is a Perl extension but I need to use Python.
|
[
"this is pretty formatted input and you can easily get columns and data list by splitting each line, and then create a dict of of it.\nhere is a simple script without regex\nlines = open(\"/proc/net/dev\", \"r\").readlines()\n\ncolumnLine = lines[1]\n_, receiveCols , transmitCols = columnLine.split(\"|\")\nreceiveCols = map(lambda a:\"recv_\"+a, receiveCols.split())\ntransmitCols = map(lambda a:\"trans_\"+a, transmitCols.split())\n\ncols = receiveCols+transmitCols\n\nfaces = {}\nfor line in lines[2:]:\n if line.find(\":\") < 0: continue\n face, data = line.split(\":\")\n faceData = dict(zip(cols, data.split()))\n faces[face] = faceData\n\nimport pprint\npprint.pprint(faces)\n\nit outputs\n{' lo': {'recv_bytes': '7056295',\n 'recv_compressed': '0',\n 'recv_drop': '0',\n 'recv_errs': '0',\n 'recv_fifo': '0',\n 'recv_frame': '0',\n 'recv_multicast': '0',\n 'recv_packets': '12148',\n 'trans_bytes': '7056295',\n 'trans_carrier': '0',\n 'trans_colls': '0',\n 'trans_compressed': '0',\n 'trans_drop': '0',\n 'trans_errs': '0',\n 'trans_fifo': '0',\n 'trans_packets': '12148'},\n ' eth0': {'recv_bytes': '34084530',\n 'recv_compressed': '0',\n 'recv_drop': '0',\n 'recv_errs': '0',\n 'recv_fifo': '0',\n 'recv_frame': '0',\n 'recv_multicast': '0',\n 'recv_packets': '30599',\n 'trans_bytes': '6170441',\n 'trans_carrier': '0',\n 'trans_colls': '0',\n 'trans_compressed': '0',\n 'trans_drop': '0',\n 'trans_errs': '0',\n 'trans_fifo': '0',\n 'trans_packets': '32377'}}\n\n",
"Does this help?\ndev = open(\"/proc/net/dev\", \"r\").readlines()\nheader_line = dev[1]\nheader_names = header_line[header_line.index(\"|\")+1:].replace(\"|\", \" \").split()\n\nvalues={}\nfor line in dev[2:]:\n intf = line[:line.index(\":\")].strip()\n values[intf] = [int(value) for value in line[line.index(\":\")+1:].split()]\n\n print intf,values[intf]\n\nOutput:\nlo [803814, 16319, 0, 0, 0, 0, 0, 0, 803814, 16319, 0, 0, 0, 0, 0, 0]\neth0 [123605646, 102196, 0, 0, 0, 0, 0, 0, 9029534, 91901, 0, 0, 0, 0, 0, 0]\nwmaster0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\neth1 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\nvboxnet0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\npan0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n\nYou could, of course, use the header names in header_names to construct a dict of dicts.\n",
"#!/usr/bin/env python\nfrom __future__ import with_statement\nimport re\nimport pprint\n\n\nifaces = {}\n\n\nwith open('/proc/net/dev') as fd:\n lines = map(lambda x: x.strip(), fd.readlines())\n\n\nlines = lines[1:]\n\n\nlines[0] = lines[0].replace('|', ':', 1)\nlines[0] = lines[0].replace('|', ' ', 1)\nlines[0] = lines[0].split(':')[1]\n\n\nkeys = re.split('\\s+', lines[0])\nkeys = map(lambda x: 'rx' + x[1] if x[0] < 8 else 'tx' + x[1], enumerate(keys))\n\n\nfor line in lines[1:]:\n interface, values = line.split(':')\n values = re.split('\\s+', values)\n\n if values[0] == '':\n values = values[1:]\n\n values = map(int, values)\n\n ifaces[interface] = dict(zip(keys, values))\n\n\npprint.pprint(ifaces)\n\n"
] |
[
15,
1,
1
] |
[] |
[] |
[
"linux",
"parsing",
"python"
] |
stackoverflow_0001052589_linux_parsing_python.txt
|
Q:
Creating an interactive shell for .NET apps and embed scripting languages like python/iron python into it
I was learning Python using the tutorial that comes with the standard Python installation. One of the benefits that the author states about Python is "maybe you’ve written a program that could use an extension language, and you don’t want to design and implement a whole new language for your application". My question is: how would I go about designing a program (using C#) that can be extended using Python interactively? (For this to be possible, I would imagine that I would need to create some sort of a "shell" or "interactive" mode for the .NET program.)
Are there any pointers on how to design .NET programs that have an interactive shell? I would then like to use Python script in the shell to "extend" or interact with the program.
EDIT: This question partly stems from the demo given by Miguel de Icaza during PDC 2008 where he showed the interactive C# command prompt. C# 4.0, I think, also has this "compiler as a service" feature. I looked at that and thought how cool it would be to design a Windows or web program in .NET that had an interactive shell... and a scripting language like Python could be used to extend the features provided by the program.
Also, I started thinking about this kind of functionality after reading one of Steve Yegge's essays where he talks about systems that live forever.
A:
This sounds like a great use of IronPython.
It's fairly easy to set up a simple scripting host from C# to allow calls into IronPython scripts, as well as allowing IronPython to call into your C# code. There are samples and examples on the CodePlex site that show how to do this very thing.
Another good site for examples and samples is ironpython.info
And here is a page dedicated to an example answering your very question, albeit in a generic DLR-centric way -- this would allow you to host IronPython, IronRuby, or whatever DLR languages you want to support.
I've used these examples in the past to create an IronPython environment inside a private installation of ScrewTurn Wiki - it allowed me to create very expressive Wiki templates and proved to be very useful in general.
A:
I was looking for a solution to the same problem, and found IronTextBox: http://www.codeproject.com/KB/edit/irontextbox2.aspx
It needs a little tuning for current versions, but seems to be everything I needed. First I made it compile, and then added the variables I wanted to access from the shell to the scope.
A:
Python as an extension language is called "Embedding Python".
You can call a Python module from C++ by basically calling the Python interpreter and having it execute the Python source. This is called embedding.
It works from C and C++, and will probably work just as well from C#.
And no, you do not need any kind of "shell". While Python can be interactive, that's not a requirement at all.
A:
Here is a link to a blog post about adding IronRuby to script a C# application.
http://blog.jimmy.schementi.com/2008/11/adding-scripting-to-c-silverlight-app.html
The principles would also work well for using IronPython.
A:
If your goal is to avoid learning a new language you can use CSScript.Net and embed scripts written in C# or VB into your application. With CSScript you get full access to the CLR. Three different models of script execution are supported, so you can execute scripts that refer to objects in your current app domain, execute using remoting, or execute as a shell.
Currently I am using CSScript as "glue" code for configuring application objects, somewhat similar to using Boo.
This link takes you to a CodeProject article that provides a good overview.
A:
I don't know what you mean by
"extend" or interact with the program
so I can't answer your question. Can you give an example?
There is an open source interactive C# shell in mono: http://www.mono-project.com/CsharpRepl
If you like Python, .NET and language extension, you will probably like Boo over IronPython. Boo comes with an open source interactive shell too.
I disagree with
"you don’t want to design and
implement a whole new language for
your application"
It's not as hard as it used to be to create a simple DSL. It won't take you days to implement, just hours. It might be an interesting option.
|
Creating an interactive shell for .NET apps and embed scripting languages like python/iron python into it
|
I was learning Python using the tutorial that comes with the standard Python installation. One of the benefits that the author states about Python is "maybe you’ve written a program that could use an extension language, and you don’t want to design and implement a whole new language for your application". My question is: how would I go about designing a program (using C#) that can be extended using Python interactively? (For this to be possible, I would imagine that I would need to create some sort of a "shell" or "interactive" mode for the .NET program.)
Are there any pointers on how to design .NET programs that have an interactive shell? I would then like to use Python script in the shell to "extend" or interact with the program.
EDIT: This question partly stems from the demo given by Miguel de Icaza during PDC 2008 where he showed the interactive C# command prompt. C# 4.0, I think, also has this "compiler as a service" feature. I looked at that and thought how cool it would be to design a Windows or web program in .NET that had an interactive shell... and a scripting language like Python could be used to extend the features provided by the program.
Also, I started thinking about this kind of functionality after reading one of Steve Yegge's essays where he talks about systems that live forever.
|
[
"This sounds like a great use of IronPython.\nIt's fairly easy to set up a simple scripting host from C# to allow calls into IronPython scripts, as well as allowing IronPython to call into your C# code. There are samples and examples on the CodePlex site that show how to do this very thing.\nAnother good site for examples and samples is ironpython.info\nAnd here is a page dedicated to an example answering your very question, albeit in a generic DLR-centric way -- this would allow you to host IronPython, IronRuby, or whatever DLR languages you want to support.\nI've used these examples in the past to create an IronPython environment inside a private installation of ScrewTurn Wiki - it allowed me to create very expressive Wiki templates and proved to be very useful in general.\n",
"I was looking solution for the same problem, and found IronTextBox: http://www.codeproject.com/KB/edit/irontextbox2.aspx\nIt needs a little tuning for current versions, but seems to be everything I needed. First made it compile, and then added variables I wanted to access from shell to the scope.\n",
"Python as an extension language is called \"Embedding Python\".\nyou can call a python module from c++ by bascially calling the python intepreter and have it execute the python source. This is called embedding.\nIt works from C and C++, and will probably work just as well from C#.\nAnd no, you do not need any kind of \"shell\". While Python can be interactive, that's not a requirement at all.\n",
"Here is a link to a blog post about adding IronRuby to script a C# application.\nhttp://blog.jimmy.schementi.com/2008/11/adding-scripting-to-c-silverlight-app.html\nThe principles would also work well for using IronPython.\n",
"If your goal is to avoid learning a new language you can use CSScript.Net and embedded scripts written in C# or VB into you application. With CSScript you get full access to the CLR. Three different models of script execution are supported so that you can execute script that refers to objects in your current app domain, execute using remoting, or execute as a shell.\nCurrently I am using CCScript as \"glue\" code for configuring application objects somewhat similar to using Boo.\nThis link tasks you to a code project article that provides a good overview. \n",
"I don't know what you mean with\n\n\"extend\" or interact with the program\n\nso I can't answer your question. Can you give an example?\nThere is an open source interactive C# shell in mono: http://www.mono-project.com/CsharpRepl\nWhen you like python, .Net and language extension, you will probably like Boo over Iron python. Boo comes with an open source interactive shell too.\nI disagree with\n\n\"you don’t want to design and\nimplement a whole new language for\nyour application\"\n\nIt's not as hard as it used to be to create a simple DSL. It won't take you days to implement, just hours. It might be an interesting option.\n"
] |
[
12,
3,
1,
1,
1,
0
] |
[] |
[] |
[
".net",
"c#",
"ironpython",
"python",
"python.net"
] |
stackoverflow_0000808692_.net_c#_ironpython_python_python.net.txt
|
Q:
How can I convert a Perl regex with named groups to Python?
I am trying to convert the following Perl regex I found in the Video::Filename Perl module to a Python 2.5.4 regex to parse a filename
# Perl > v5.10
re => '^(?:(?<name>.*?)[\/\s._-]*)?(?<openb>\[)?(?<season>\d{1,2})[x\/](?<episode>\d{1,2})(?:-(?:\k<season>x)?(?<endep>\d{1,2}))?(?(<openb>)\])(?:[\s._-]*(?<epname>[^\/]+?))?$',
I would like to use named groups too, and I know in Python the regex extension for named groups is different, but I am not 100% sure on the syntax.
This is what I tried:
# Python (not working)
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:\kP<season>x)?(?P<endep>\d{1,2}))?(?(P<openb>)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$')
The error I get:
raise error, v # invalid expression
sre_constants.error: bad character in group name
For example, this one I managed to convert and it works. But the one above I can't seem to get right. I get a compilation error in Python.
# Perl:
re => '^(?:(?<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?<part>\d+))?(?<subep>[a-z])?(?:[\/\s._-]*(?<epname>[^\/]+?))?$',
# Python (working):
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?P<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?P<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?P<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?P<part>\d+))?(?P<subep>[a-z])?(?:[\/\s._-]*(?P<epname>[^\/]+?))?$')
I am not sure where to start looking.
A:
There are 2 problems with your translation. First of all, the second mention of openb has extra parentheses around it, making it a conditional expression, not a named expression.
Next is that you didn't translate the \k<season> backreference; Python uses (?P=season) to match the same. The following compiles for me:
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:(?P=season)x)?(?P<endep>\d{1,2}))?(?(openb)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$')
If I were you, I'd use re.VERBOSE to split this expression over multiple lines and add copious documentation so you can keep understanding the expression in the future if this is something that needs to remain maintainable though.
(edited after realising the second openb reference was a conditional expression, and to properly translate the backreference).
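As a quick sanity check, the compiled pattern can be exercised with groupdict() (the filename below is made up):
m = r.match('Show.Name.1x02.Some.Episode.Title')
if m:
    print m.groupdict()
    # roughly: name 'Show.Name', season '1', episode '02',
    # epname 'Some.Episode.Title'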
A:
I found the offending part but can't figure out what exactly is wrong without wrapping my mind around the whole thing.
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:\kP<season>x)?(?P<endep>\d{1,2}))?
(?(P<openb>)\]) // this part here causes the error message
(?:[\s._-]*(?P<epname>[^\/]+?))?$')
The problem seems to be with the fact that group names in python must be valid python identifiers (check documentation). The parentheses seem to be the problem. Removing them gives
(?(P<openb>)\]) //with parentheses
(?P<openb>\]) //without parentheses
redefinition of group name 'openb' as group 6; was group 2
A:
Those regexps are the product of a sick and twisted mind... :-)
Anyway, (?()) is a conditional in both Python and Perl, and the Perl syntax above looks like it should behave the same as the Python one, i.e., it evaluates as true if the named group matched.
Where to start looking? The documentation for the modules are here:
http://docs.python.org/library/re.html
http://www.perl.com/doc/manual/html/pod/perlre.html
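To see the conditional in action in Python, a tiny hedged example (my own, not from either manual):
import re

p = re.compile(r'^(?P<openb>\[)?\d+(?(openb)\])$')
print bool(p.match('[12]'))   # True:  bracketed and balanced
print bool(p.match('12'))     # True:  no bracket was opened
print bool(p.match('[12'))    # False: opened but never closed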
A:
I may be wrong, but you tried to write the backreference as:
(?:\k<season>x)
Isn't the syntax \g<name> in Python?
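For what it's worth, \g<name> only works in replacement strings; inside a pattern the backreference is spelled (?P=name). A quick hedged illustration:
import re

# \g<name> belongs in the replacement text of re.sub:
print re.sub(r'(?P<word>\w+)', r'<\g<word>>', 'hi')        # <hi>
# inside the pattern itself, use (?P=name):
print bool(re.match(r'^(?P<d>\d{1,2})x(?P=d)$', '03x03'))  # True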
|
How can I convert a Perl regex with named groups to Python?
|
I am trying to convert the following Perl regex I found in the Video::Filename Perl module to a Python 2.5.4 regex to parse a filename
# Perl > v5.10
re => '^(?:(?<name>.*?)[\/\s._-]*)?(?<openb>\[)?(?<season>\d{1,2})[x\/](?<episode>\d{1,2})(?:-(?:\k<season>x)?(?<endep>\d{1,2}))?(?(<openb>)\])(?:[\s._-]*(?<epname>[^\/]+?))?$',
I would like to use named groups too, and I know in Python the regex extension for named groups is different, but I am not 100% sure on the syntax.
This is what I tried:
# Python (not working)
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:\kP<season>x)?(?P<endep>\d{1,2}))?(?(P<openb>)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$')
The error I get:
raise error, v # invalid expression
sre_constants.error: bad character in group name
For example, this one I managed to convert and it works. But the one above I can't seem to get right. I get a compilation error in Python.
# Perl:
re => '^(?:(?<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?<part>\d+))?(?<subep>[a-z])?(?:[\/\s._-]*(?<epname>[^\/]+?))?$',
# Python (working):
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?P<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?P<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?P<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?P<part>\d+))?(?P<subep>[a-z])?(?:[\/\s._-]*(?P<epname>[^\/]+?))?$')
I am not sure where to start looking.
|
[
"There are 2 problems with your translation. First of all, the second mention of openb has extra parenthesis around it making it a conditional expression, not a named expression.\nNext is that you didn't translate the \\k<season> backreference, Python uses (P=season) to match the same. The following compiles for me:\nr = re.compile(r'^(?:(?P<name>.*?)[\\/\\s._-]*)?(?P<openb>\\[)?(?P<season>\\d{1,2})[x\\/](?P<episode>\\d{1,2})(?:-(?:(?P=season)x)?(?P<endep>\\d{1,2}))?(?(openb)\\])(?:[\\s._-]*(?P<epname>[^\\/]+?))?$')\n\nIf I were you, I'd use re.VERBOSE to split this expression over multiple lines and add copious documentation so you can keep understanding the expression in the future if this is something that needs to remain maintainable though.\n(edited after realising the second openb reference was a conditional expression, and to properly translate the backreference).\n",
"I found the offending part but can't figure out what exactly is wrong without wrapping my mind around the whole thing.\nr = re.compile(r'^(?:(?P<name>.*?)[\\/\\s._-]*)?(?P<openb>\\[)?(?P<season>\\d{1,2})[x\\/](?P<episode>\\d{1,2})(?:-(?:\\kP<season>x)?(?P<endep>\\d{1,2}))?\n\n(?(P<openb>)\\]) // this part here causes the error message\n\n(?:[\\s._-]*(?P<epname>[^\\/]+?))?$')\n\nThe problem seems to be with the fact that group names in python must be valid python identifiers (check documentation). The parentheses seem to be the problem. Removing them gives\n(?(P<openb>)\\]) //with parentheses\n(?P<openb>\\]) //without parentheses\n\nredefinition of group name 'openb' as group 6; was group 2\n\n",
"Those regexps are the product of a sick an twisted mind... :-)\nAnyway, (?()) are conditions in both Python and Perl, and the perl syntax above looks like it should be the same as the Python syntax, i.e., it evaluates as true of the group named exists.\nWhere to start looking? The documentation for the modules are here:\nhttp://docs.python.org/library/re.html\nhttp://www.perl.com/doc/manual/html/pod/perlre.html\n",
"I may be wrong but you tried to get the backreference using :\n(?:\\k<season>x)\n\nIsn't the syntax \\g<name> in Python ?\n"
] |
[
6,
2,
0,
0
] |
[] |
[] |
[
"perl",
"python",
"regex"
] |
stackoverflow_0001052930_perl_python_regex.txt
|
Q:
Testing for ImportErrors in Python
We're having a real problem with people checking in code that doesn't work because something's been refactored. Admittedly, this is partly because our developers don't really have any good tools for finding these kinds of mistakes easily.
Are there any tools to help find ImportErrors in Python? Of course, the correct answer here is "you should use your unit tests for that." But, I'm in legacy code land (at least by Michael Feathers's definition), so our unit tests are something we're working on heavily.
In the meantime, it would be nice to have some sort of tool that will walk through each directory and import each file within it just to find any scripts that have ImportErrors (like say if a file or class has been renamed recently). I suppose this wouldn't be terribly difficult to write myself, but are there any programs that are already written to do this?
A:
Pychecker is for you. It imports the modules and will find these errors.
http://pychecker.sourceforge.net/
Oh, and "pylint <modulename>" will import the module, but I guess you would have to call it once for every module you want, whereas pychecker at least supports *.py. (Pylint also supports *.py, but won't import the modules in that situation.)
A:
Something like this?
import os

for x in os.listdir("some/path"):
    if x.endswith(".py"):
        execfile(os.path.join("some/path", x))

Is that enough, or would you need more?
Note that any module that lacks an if __name__ == "__main__": guard will -- obviously -- take off and start doing stuff.
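As a slightly more targeted sketch (my own; it assumes a flat layout of .py files under some/path, and importing still executes top-level code just like execfile does), importing each module catches exactly the ImportErrors:
import os
import sys
import traceback

def check_imports(root):
    sys.path.insert(0, root)
    failures = []
    for fn in os.listdir(root):
        if not fn.endswith(".py"):
            continue
        modname = os.path.splitext(fn)[0]
        try:
            __import__(modname)
        except ImportError:
            failures.append((fn, traceback.format_exc()))
    return failures

for fn, tb in check_imports("some/path"):
    print fn
    print tb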
|
Testing for ImportErrors in Python
|
We're having a real problem with people checking in code that doesn't work because something's been refactored. Admittedly, this is partly because our developers don't really have any good tools for finding these kinds of mistakes easily.
Are there any tools to help find ImportErrors in Python? Of course, the correct answer here is "you should use your unit tests for that." But, I'm in legacy code land (at least by Michael Feathers's definition), so our unit tests are something we're working on heavily.
In the meantime, it would be nice to have some sort of tool that will walk through each directory and import each file within it just to find any scripts that have ImportErrors (like say if a file or class has been renamed recently). I suppose this wouldn't be terribly difficult to write myself, but are there any programs that are already written to do this?
|
[
"Pychecker is for you. It imports the modules and will find these errors.\nhttp://pychecker.sourceforge.net/\nOh, and \"pylint <modulename>\" will import the module, but I guess you would have to call it once for every module you want, where pychecker at least supports *.py. (Pylint also support *.py but won't import the modules in that situation).\n",
"Something like this?\nimport os\nfor x in os.listdir(\"some/path\"):\n execfile( x )\n\nIs that enough, or would you need more?\nNote that any module that lacks if __name__ == \"__main__\": will -- obviously -- take off and start doing stuff.\n"
] |
[
3,
1
] |
[] |
[] |
[
"continuous_integration",
"importerror",
"python",
"refactoring",
"unit_testing"
] |
stackoverflow_0001052931_continuous_integration_importerror_python_refactoring_unit_testing.txt
|
Q:
python-scapy: how to translate port numbers to service names?
A TCP layer in Scapy contains source port:
>>> a[TCP].sport
80
Is there a simple way to convert port number to service name? I've seen Scapy has TCP_SERVICES and UDP_SERVICES to translate port number, but
print TCP_SERVICES[80] # fails
print TCP_SERVICES['80'] # fails
print TCP_SERVICES.__getitem__(80) # fails
print TCP_SERVICES['www'] # works, but it's not what i need
80
Does anyone know how I can map ports to services?
Thank you in advance
A:
Python's socket module will do that:
>>> import socket
>>> socket.getservbyport(80)
'http'
>>> socket.getservbyport(21)
'ftp'
>>> socket.getservbyport(53, 'udp')
'domain'
A:
If this is something you need to do frequently, you can create a reverse mapping of TCP_SERVICES:
>>> TCP_REVERSE = dict((TCP_SERVICES[k], k) for k in TCP_SERVICES.keys())
>>> TCP_REVERSE[80]
'www'
A:
This may work for you (filtering the dictionary based on the value):
>>> [k for k, v in TCP_SERVICES.iteritems() if v == 80][0]
'www'
A:
If you are using unix or linux there is a file /etc/services which contains this mapping.
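If you want to read that file from Python yourself, here is a hedged sketch (it assumes the standard /etc/services layout; socket.getservbyport is the easier route):
services = {}
for line in open('/etc/services'):
    line = line.split('#')[0].strip()   # drop comments and blank lines
    parts = line.split()
    if len(parts) < 2:
        continue
    port, proto = parts[1].split('/')
    services[(int(port), proto)] = parts[0]

print services.get((80, 'tcp'))   # 'http' on most systems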
A:
I've found a good solution: filling another dict, self.MYTCP_SERVICES
for p in scapy.data.TCP_SERVICES.keys():
self.MYTCP_SERVICES[scapy.data.TCP_SERVICES[p]] = p
|
python-scapy: how to translate port numbers to service names?
|
A TCP layer in Scapy contains source port:
>>> a[TCP].sport
80
Is there a simple way to convert port number to service name? I've seen Scapy has TCP_SERVICES and UDP_SERVICES to translate port number, but
print TCP_SERVICES[80] # fails
print TCP_SERVICES['80'] # fails
print TCP_SERVICES.__getitem__(80) # fails
print TCP_SERVICES['www'] # works, but it's not what i need
80
Does anyone know how I can map ports to services?
Thank you in advance
|
[
"Python's socket module will do that:\n>>> import socket\n>>> socket.getservbyport(80)\n'http'\n>>> socket.getservbyport(21)\n'ftp'\n>>> socket.getservbyport(53, 'udp')\n'domain'\n\n",
"If this is something you need to do frequently, you can create a reverse mapping of TCP_SERVICES:\n>>> TCP_REVERSE = dict((TCP_SERVICES[k], k) for k in TCP_SERVICES.keys())\n>>> TCP_REVERSE[80]\n'www'\n\n",
"This may work for you (filtering the dictionary based on the value):\n>>> [k for k, v in TCP_SERVICES.iteritems() if v == 80][0]\n'www'\n\n",
"If you are using unix or linux there is a file /etc/services which contains this mapping.\n",
"I've found a good solution filling another dict self.MYTCP_SERVICES\nfor p in scapy.data.TCP_SERVICES.keys():\n self.MYTCP_SERVICES[scapy.data.TCP_SERVICES[p]] = p \n\n"
] |
[
20,
5,
1,
0,
0
] |
[] |
[] |
[
"python",
"scapy",
"tcp"
] |
stackoverflow_0000976599_python_scapy_tcp.txt
|
Q:
How to "keep-alive" with cookielib and httplib in python?
In Python, I'm using httplib because it keeps the HTTP connection alive ("keep-alive"), as opposed to urllib(2). Now I want to use cookielib with httplib, but they seem to hate each other!! (There is no way to interface them together.)
Does anyone know of a solution to that problem?
A:
HTTP handler for urllib2 that supports keep-alive
A:
You should consider using the Requests library instead at the earliest chance you have to refactor your code. In the meantime:
HACK ALERT! :)
I'd go the other suggested way, but I've done a hack (for different reasons, though) which creates an interface between httplib and cookielib.
What I did was create a fake HTTPRequest with the minimal required set of methods, so that CookieJar would recognize it and process cookies as needed. I used that fake request object, setting all the data cookielib needs.
Here is the code of the class:
class HTTPRequest( object ):
"""
Data container for HTTP request (used for cookie processing).
"""
def __init__( self, host, url, headers={}, secure=False ):
self._host = host
self._url = url
self._secure = secure
self._headers = {}
for key, value in headers.items():
self.add_header(key, value)
def has_header( self, name ):
return name in self._headers
def add_header( self, key, val ):
self._headers[key.capitalize()] = val
def add_unredirected_header(self, key, val):
self._headers[key.capitalize()] = val
def is_unverifiable( self ):
return True
def get_type( self ):
return 'https' if self._secure else 'http'
def get_full_url( self ):
        port_str = ""
        port = self._host[1]   # keep the port numeric so the comparisons below work
        if self._secure:
            if port != 443:
                port_str = ":" + str(port)
        else:
            if port != 80:
                port_str = ":" + str(port)
return self.get_type() + '://' + self._host[0] + port_str + self._url
def get_header( self, header_name, default=None ):
return self._headers.get( header_name, default )
def get_host( self ):
return self._host[0]
get_origin_req_host = get_host
def get_headers( self ):
return self._headers
Please note that the class supports the HTTPS protocol only (that was all I needed at the moment).
The code that used this class was as follows (note another hack to make the response compatible with cookielib):
cookies = CookieJar()
headers = {
# headers that you wish to set
}
# construct fake request
fake_request = HTTPRequest( host, request_url, headers )
# add cookies to fake request
cookies.add_cookie_header(fake_request)
# issue an httplib.HTTPConnection based request using cookies and headers from the fake request
http_connection.request(type, request_url, body, fake_request.get_headers())
response = http_connection.getresponse()
if response.status == httplib.OK:
# HACK: pretend we're urllib2 response
response.info = lambda : response.msg
# read and store cookies from response
cookies.extract_cookies(response, fake_request)
# process response...
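For context, here is a hedged sketch of the setup the snippet above assumes; host, request_url, type, body, and http_connection were left undefined in the original, so these values are illustrative:
import httplib
from cookielib import CookieJar

host = ('example.com', 443)        # (hostname, port) tuple, as used by HTTPRequest
request_url = '/'
type, body = 'GET', None           # HTTP method and request body
http_connection = httplib.HTTPSConnection(host[0], host[1])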
|
How to "keep-alive" with cookielib and httplib in python?
|
In Python, I'm using httplib because it keeps the HTTP connection alive ("keep-alive"), as opposed to urllib(2). Now I want to use cookielib with httplib, but they seem to hate each other!! (There is no way to interface them together.)
Does anyone know of a solution to that problem?
|
[
"HTTP handler for urllib2 that supports keep-alive\n",
"You should consider using the Requests library instead at the earliest chance you have to refactor your code. In the mean time;\nHACK ALERT! :)\nI'd go other suggested way, but I've done a hack (done for different reasons though), which does create an interface between httplib and cookielib.\nWhat I did was creating a fake HTTPRequest with minimal required set of methods, so that CookieJar would recognize it and process cookies as needed. I've used that fake request object, setting all the data needed for cookielib.\nHere is the code of the class:\nclass HTTPRequest( object ):\n\"\"\"\nData container for HTTP request (used for cookie processing).\n\"\"\"\n\n def __init__( self, host, url, headers={}, secure=False ):\n self._host = host\n self._url = url\n self._secure = secure\n self._headers = {}\n for key, value in headers.items():\n self.add_header(key, value)\n\n def has_header( self, name ):\n return name in self._headers\n\n def add_header( self, key, val ):\n self._headers[key.capitalize()] = val\n\n def add_unredirected_header(self, key, val):\n self._headers[key.capitalize()] = val\n\n def is_unverifiable( self ):\n return True\n\n def get_type( self ):\n return 'https' if self._secure else 'http'\n\n def get_full_url( self ):\n port_str = \"\"\n port = str(self._host[1])\n if self._secure:\n if port != 443:\n port_str = \":\"+port\n else:\n if port != 80:\n port_str = \":\"+port\n return self.get_type() + '://' + self._host[0] + port_str + self._url\n\n def get_header( self, header_name, default=None ):\n return self._headers.get( header_name, default )\n\n def get_host( self ):\n return self._host[0]\n\n get_origin_req_host = get_host\n\n def get_headers( self ):\n return self._headers\n\nPlease note, the class has support for HTTPS protocol only (all I needed at the moment).\nThe code, which used this class was (please note another hack to make response compatible with cookielib):\ncookies = CookieJar()\n\nheaders = {\n # headers that you wish to set\n}\n\n# construct fake request\nfake_request = HTTPRequest( host, request_url, headers )\n\n# add cookies to fake request\ncookies.add_cookie_header(fake_request)\n\n# issue an httplib.HTTPConnection based request using cookies and headers from the fake request\nhttp_connection.request(type, request_url, body, fake_request.get_headers())\n\nresponse = http_connection.getresponse()\n\nif response.status == httplib.OK:\n # HACK: pretend we're urllib2 response\n response.info = lambda : response.msg\n\n # read and store cookies from response\n cookies.extract_cookies(response, fake_request)\n\n # process response...\n\n"
] |
[
2,
2
] |
[] |
[] |
[
"cookies",
"httplib",
"python",
"urllib2"
] |
stackoverflow_0001016765_cookies_httplib_python_urllib2.txt
|
Q:
How do I convert a Python float to a hexadecimal string in python 2.5? Nonworking solution attached
What I really need to do is to export a floating point number to C with no precision loss.
I did this in python:
import math
import struct
x = math.sqrt(2)
print struct.unpack('ii', struct.pack('d', x))
# prints (1719614413, 1073127582)
And in C I try this:
#include <math.h>
#include <stdio.h>
int main(void)
{
unsigned long long x[2] = {1719614413, 1073127582};
long long lx;
double xf;
lx = (x[0] << 32) | x[1];
xf = (double)lx;
printf("%lf\n", xf);
return 0;
}
But in C I get:
7385687666638364672.000000 and not sqrt(2).
What am I missing?
Thanks.
A:
The Python code appears to work. The problem is in the C code: you have the long long filled out right, but then you convert the integer value directly into floating point, rather than reinterpreting the bytes as a double. If you throw some pointers/addressing at it, it works:
jkugelman$ cat float.c
#include <stdio.h>
int main(void)
{
unsigned long x[2] = {1719614413, 1073127582};
double d = *(double *) x;
printf("%f\n", d);
return 0;
}
jkugelman$ gcc -o float float.c
jkugelman$ ./float
1.414214
Notice also that the printf format specifier for double (and for float, which is promoted) is %f; long double takes %Lf. (C99 also accepts %lf for double, but %f is the classic spelling.)
A:
If you're targeting a little-endian architecture,
>>> s = struct.pack('<d', x)
>>> ''.join('%.2x' % ord(c) for c in s)
'cd3b7f669ea0f63f'
if big-endian, use '>d' instead of '<d'. In either case, this gives you a hex string as asked for in the question's title, and of course C code can interpret it; I'm not sure what those two ints have to do with a "hex string".
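To check the round trip, a small hedged sketch (my own) that reverses the packing above; the hex string is the one just shown:
import struct

s = 'cd3b7f669ea0f63f'
packed = ''.join(chr(int(s[i:i+2], 16)) for i in range(0, len(s), 2))
x, = struct.unpack('<d', packed)
print repr(x)   # 1.4142135623730951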
A:
repr() is your friend.
C:\junk\es2>type es2.c
#include <stdio.h>
#include <math.h>
#include <assert.h>
int main(int argc, char** argv) {
double expected, actual;
int nconv;
expected = sqrt(2.0);
printf("expected: %20.17g\n", expected);
actual = -666.666;
nconv = scanf("%lf", &actual);
assert(nconv == 1);
printf("actual: %20.17g\n", actual);
assert(actual == expected);
return 0;
}
C:\junk\es2>gcc es2.c
C:\junk\es2>\python26\python -c "import math; print repr(math.sqrt(2.0))" | a
expected: 1.4142135623730951
actual: 1.4142135623730951
C:\junk\es2>
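The reason this works is that repr() emits enough decimal digits to round-trip a double exactly; a quick hedged check (my own, assuming IEEE doubles and a correctly rounded float()):
import math

x = math.sqrt(2)
print float(repr(x)) == x   # True: no precision is lost in the text round trip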
|
How do I convert a Python float to a hexadecimal string in python 2.5? Nonworking solution attached
|
What I really need to do is to export a floating point number to C with no precision loss.
I did this in python:
import math
import struct
x = math.sqrt(2)
print struct.unpack('ii', struct.pack('d', x))
# prints (1719614413, 1073127582)
And in C I try this:
#include <math.h>
#include <stdio.h>
int main(void)
{
unsigned long long x[2] = {1719614413, 1073127582};
long long lx;
double xf;
lx = (x[0] << 32) | x[1];
xf = (double)lx;
printf("%lf\n", xf);
return 0;
}
But in C I get:
7385687666638364672.000000 and not sqrt(2).
What am I missing?
Thanks.
|
[
"The Python code appears to work. The problem is in the C code: you have the long long filled out right, but then you convert the integer value directly into floating point, rather than reinterpreting the bytes as a double. If you throw some pointers/addressing at it it works:\njkugelman$ cat float.c\n#include <stdio.h>\n\nint main(void)\n{\n unsigned long x[2] = {1719614413, 1073127582};\n double d = *(double *) x;\n\n printf(\"%f\\n\", d);\n return 0;\n}\njkugelman$ gcc -o float float.c \njkugelman$ ./float \n1.414214\n\nNotice also that the format specifier for double (and for float) is %f, not %lf. %lf is for long double.\n",
"If you're targeting a little-endian architecture,\n>>> s = struct.pack('<d', x)\n>>> ''.join('%.2x' % ord(c) for c in s)\n'cd3b7f669ea0f63f'\n\nif big-endian, use '>d' instead of <d. In either case, this gives you a hex string as you're asking for in the question title's, and of course C code can interpret it; I'm not sure what those two ints have to do with a \"hex string\".\n",
"repr() is your friend.\nC:\\junk\\es2>type es2.c\n#include <stdio.h>\n#include <math.h>\n#include <assert.h>\n\nint main(int argc, char** argv) {\n double expected, actual;\n int nconv;\n expected = sqrt(2.0);\n printf(\"expected: %20.17g\\n\", expected);\n actual = -666.666;\n nconv = scanf(\"%lf\", &actual);\n assert(nconv == 1);\n printf(\"actual: %20.17g\\n\", actual);\n assert(actual == expected);\n return 0;\n }\n\n\nC:\\junk\\es2>gcc es2.c\n\nC:\\junk\\es2>\\python26\\python -c \"import math; print repr(math.sqrt(2.0))\" | a\nexpected: 1.4142135623730951\nactual: 1.4142135623730951\n\nC:\\junk\\es2>\n\n"
] |
[
6,
3,
1
] |
[] |
[] |
[
"double",
"python"
] |
stackoverflow_0001053121_double_python.txt
|